#### Sample records for algorithmic information theory

1. Algorithmic information theory: mathematics of digital information processing

CERN Document Server

Seibt, Peter

2007-01-01

Treats the mathematics of many important areas in digital information processing. This book covers, in a unified presentation, five topics: Data Compression, Cryptography, Sampling (Signal Theory), Error Control Codes, Data Reduction. It is useful for teachers, students and practitioners in Electronic Engineering, Computer Science and Mathematics.

2. Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory

Science.gov (United States)

Kitazono, Jun; Kanai, Ryota; Oizumi, Masafumi

2018-03-01

The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information ($\Phi$) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has previously been shown that if a measure of $\Phi$ satisfies a mathematical property, submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of $\Phi$ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of $\Phi$ by evaluating the accuracy of the algorithm on simulated data and real neural data. We find that the algorithm identifies the MIP almost perfectly even for the non-submodular measures. Our results show that the algorithm allows us to measure $\Phi$ in large systems within a practical amount of time.
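The exhaustive MIP search that the abstract says becomes intractable can be illustrated with a toy sketch. The `toy_phi` below (total connection weight severed by a cut) is a hypothetical stand-in, not any of the actual $\Phi$ measures from IIT; the point is only the exponential enumeration of bipartitions that the submodularity-based algorithm avoids.

```python
from itertools import combinations

def toy_phi(weights, part_a, part_b):
    # Hypothetical stand-in for integrated information: the total connection
    # weight severed by the cut (NOT any of the actual Phi measures from IIT).
    return sum(weights[i][j] + weights[j][i] for i in part_a for j in part_b)

def minimum_information_partition(weights):
    """Exhaustive MIP search; the number of bipartitions grows exponentially."""
    n = len(weights)
    best = None
    # Fix node 0 in part A so each bipartition is enumerated exactly once.
    for size in range(1, n):
        for rest in combinations(range(1, n), size - 1):
            part_a = {0, *rest}
            part_b = set(range(n)) - part_a
            phi = toy_phi(weights, part_a, part_b)
            if best is None or phi < best[0]:
                best = (phi, part_a, part_b)
    return best

# Two tightly coupled pairs, weakly linked: the MIP should split the pairs apart.
w = [[0, 5, 0, 0],
     [5, 0, 1, 0],
     [0, 1, 0, 4],
     [0, 0, 4, 0]]
phi, part_a, part_b = minimum_information_partition(w)
print(phi, sorted(part_a), sorted(part_b))  # -> 2 [0, 1] [2, 3]
```

For n nodes this loop visits 2^(n-1) - 1 bipartitions, which is exactly the cost the paper's polynomial-time algorithm is designed to sidestep.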

3. [Prediction of regional soil quality based on mutual information theory integrated with decision tree algorithm].

Science.gov (United States)

Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu

2012-02-01

In this paper, to precisely characterize the spatial distribution of regional soil quality, the main factors that affect it, such as soil type, land use pattern, lithology type, topography, road, and industry type, were considered: mutual information theory was adopted to select the main environmental factors, and the See5.0 decision tree algorithm was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model built on the variables selected by mutual information was markedly higher than that of the model using all variables, and for the former model the prediction accuracy, whether of the decision tree or of the decision rules, exceeded 80%. Applicable to both continuous and categorical data, the method of mutual information theory integrated with a decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
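The feature-selection step described above can be sketched for discrete variables: rank each candidate factor by its mutual information I(X;Y) = Σ p(x,y) log₂[p(x,y)/(p(x)p(y))] with the target grade. The data below are hypothetical toy values, not the paper's dataset.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length sequences of discrete values."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical toy data: a soil-quality grade and two candidate factors.
grade    = ['good', 'good', 'poor', 'poor', 'good', 'good', 'poor', 'poor']
landuse  = ['farm', 'farm', 'road', 'road', 'farm', 'farm', 'road', 'road']
altitude = ['low',  'high', 'low',  'high', 'high', 'low',  'high', 'low']

scores = {name: mutual_information(col, grade)
          for name, col in [('landuse', landuse), ('altitude', altitude)]}
ranked = sorted(scores, key=scores.get, reverse=True)
# landuse determines grade exactly (1 bit); altitude is independent of it (0 bits)
print(ranked, scores)
```

Factors scoring near zero would be dropped before training the decision tree, which is how the selection shrinks the tree's input set.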

4. Understanding how replication processes can maintain systems away from equilibrium using Algorithmic Information Theory.

Science.gov (United States)

Devine, Sean D

2016-02-01

Replication can be envisaged as a computational process that is able to generate and maintain order far from equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact a system. The capability of replicated structures to access high-quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements to maintain a system far from equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside intervention. Both diversity in replicated structures and the coupling of different replicated systems increase the ability of the system (or systems) to self-regulate in a changing environment, as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.

5. Algorithms in invariant theory

CERN Document Server

Sturmfels, Bernd

2008-01-01

J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

6. Algorithm Theory - SWAT 2006

DEFF Research Database (Denmark)

This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

7. A statistical mechanical interpretation of algorithmic information theory: Total statistical mechanical interpretation based on physical argument

International Nuclear Information System (INIS)

2010-01-01

The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as the partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in this interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T in (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument at the same level of mathematical rigour as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which realizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
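Tadaki's partition function has the form Z(T) = Σ_p 2^(−|p|/T), summed over the domain of an optimal prefix-free machine; at T = 1 it reduces to the Kraft sum, the analogue of Chaitin's Ω. As a purely illustrative sketch, the snippet below evaluates this sum over a small hypothetical prefix-free code rather than over actual programs.

```python
# Toy evaluation of Tadaki's partition function Z(T) = sum_p 2^(-|p|/T).
# The real sum ranges over the domain of an optimal prefix-free machine; here
# a small hypothetical prefix-free code stands in for the program set.
program_lengths = [1, 2, 3, 3]        # codewords 0, 10, 110, 111

def Z(T):
    return sum(2 ** (-length / T) for length in program_lengths)

# At T = 1 this is the Kraft sum, which equals exactly 1 for a complete
# prefix-free code; at lower "temperature" longer programs are suppressed.
print(f"Z(0.5) = {Z(0.5):.5f}, Z(1.0) = {Z(1.0):.5f}")
```

The monotone growth of Z with T mirrors the thermodynamic picture in the abstract: raising T weights long (incompressible) programs more heavily.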

8. Algorithms in combinatorial design theory

CERN Document Server

Colbourn, CJ

1985-01-01

The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

9. Recognition algorithms in knot theory

International Nuclear Information System (INIS)

Dynnikov, I A

2003-01-01

In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory.

10. Imperishable Networks: Complexity Theory and Communication Networking-Bridging the Gap Between Algorithmic Information Theory and Communication Networking

National Research Council Canada - National Science Library

Bush, Stephen

2003-01-01

... other. Our goal has been to reduce the requirement and dependence upon detailed a priori information about known attacks and detect novel attacks by computing vulnerability and detecting anomalous behavior...

11. Planar graphs: theory and algorithms

CERN Document Server

Nishizeki, T

1988-01-01

Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independent set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

12. Optimisation combinatoire : théorie et algorithmes

CERN Document Server

Korte, Bernhard; Fonlupt, Jean

2010-01-01

This book is the French translation of the fourth and latest edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient, exact algorithms for solving problems, and in this it differs from the simpler, often heuristic approaches described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students...

13. Machine vision: theory, algorithms, practicalities

CERN Document Server

Davies, E R

2005-01-01

In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl...

14. Scheduling: theory, algorithms, and systems

CERN Document Server

Pinedo, Michael L

2016-01-01

This new edition of the well-established text Scheduling: Theory, Algorithms, and Systems provides an up-to-date coverage of important theoretical models in the scheduling literature as well as important scheduling problems that appear in the real world. The accompanying website includes supplementary material in the form of slide-shows from industry as well as movies that show actual implementations of scheduling systems. The main structure of the book, as per previous editions, consists of three parts. The first part focuses on deterministic scheduling and the related combinatorial problems. The second part covers probabilistic scheduling models; in this part it is assumed that processing times and other problem data are random and not known in advance. The third part deals with scheduling in practice; it covers heuristics that are popular with practitioners and discusses system design and implementation issues. All three parts of this new edition have been revamped, streamlined, and extended. The reference...

15. Galois theory and algorithms for linear differential equations

NARCIS (Netherlands)

Put, Marius van der

2005-01-01

This paper is an informal introduction to differential Galois theory. It surveys recent work on differential Galois groups, related algorithms and some applications. (c) 2005 Elsevier Ltd. All rights reserved.

16. The theory of hybrid stochastic algorithms

International Nuclear Information System (INIS)

Duane, S.; Kogut, J.B.

1986-01-01

The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
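A minimal sketch of the hybrid idea, for a single variable with a Gaussian target rather than a lattice field with dynamical fermions: molecular-dynamics (leapfrog) trajectories with stochastically refreshed momenta. The finite leapfrog step `eps` is the source of the systematic error the abstract analyses; the accept/reject step included here is the later Hybrid Monte Carlo refinement that removes it, not part of the original hybrid algorithm.

```python
import math
import random

def leapfrog(q, p, eps, steps, grad):
    # Leapfrog integration of Hamilton's equations; the finite step size eps
    # introduces the O(eps^2) systematic discretization error.
    p -= 0.5 * eps * grad(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad(q)
    q += eps * p
    p -= 0.5 * eps * grad(q)
    return q, p

def hybrid_monte_carlo(n_samples, eps=0.2, steps=10, seed=0):
    """Sample a standard normal target: molecular-dynamics proposals plus a
    Metropolis accept/reject step (the Hybrid Monte Carlo refinement)."""
    rng = random.Random(seed)
    grad = lambda q: q                      # gradient of the potential U(q) = q^2/2
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)             # stochastic momentum refreshment
        h_old = 0.5 * q * q + 0.5 * p * p   # H = U(q) + kinetic energy
        q_new, p_new = leapfrog(q, p, eps, steps, grad)
        h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new                        # accept; otherwise keep the old q
        samples.append(q)
    return samples

xs = hybrid_monte_carlo(5000)
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs)
print(f"mean {mean:.2f}, variance {var:.2f}")  # near 0 and 1 for the target N(0,1)
```

Because the accept/reject test corrects the discretization error exactly, the sampled mean and variance match the target distribution for any stable `eps`; without it, one would observe the step-size-dependent bias the paper computes.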

17. Science and information theory

CERN Document Server

Brillouin, Léon

1962-01-01

A classic source for exploring the connections between information theory and physics, this text is geared toward upper-level undergraduates and graduate students. The author, a giant of 20th-century mathematics, applies the principles of information theory to a variety of issues, including Maxwell's demon, thermodynamics, and measurement problems. 1962 edition.

18. Quantum algorithms and learning theory

NARCIS (Netherlands)

Arunachalam, S.

2018-01-01

This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) Consider a search space of N elements. One of these elements is "marked" and our goal is to find it. We describe a quantum algorithm to solve this problem...

19. Generalized phase retrieval algorithm based on information measures

OpenAIRE

Shioya, Hiroyuki; Gohara, Kazutoshi

2006-01-01

An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
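For orientation, here is a minimal sketch of the conventional error-reduction iteration that the paper generalizes (the MEM-based variant replaces the object-domain update used here). The signal, support, and sizes are hypothetical, and a naive O(n²) DFT keeps the sketch dependency-free.

```python
import cmath

def dft(x, inverse=False):
    # Naive O(n^2) discrete Fourier transform; adequate for this toy size.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def error_reduction(magnitudes, support, iters=200):
    """Conventional error-reduction iteration: alternate between imposing the
    measured Fourier magnitudes and the object-domain constraints."""
    g = [complex(1.0) if s else 0j for s in support]   # crude initial guess
    for _ in range(iters):
        G = dft(g)
        # Fourier domain: keep the current phases, impose measured magnitudes.
        G = [m * v / abs(v) if abs(v) > 1e-12 else complex(m)
             for m, v in zip(magnitudes, G)]
        g = dft(G, inverse=True)
        # Object domain: enforce the support and non-negativity constraints.
        g = [complex(max(v.real, 0.0)) if s else 0j for v, s in zip(g, support)]
    return g

# Hypothetical ground truth: a non-negative signal on a known support.
truth = [0.0, 2.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0]
support = [True, True, True, True, False, False, False, False]
mags = [abs(v) for v in dft(truth)]

g0 = [complex(1.0) if s else 0j for s in support]
res0 = sum((abs(v) - m) ** 2 for v, m in zip(dft(g0), mags))
g = error_reduction(mags, support)
res = sum((abs(v) - m) ** 2 for v, m in zip(dft(g), mags))
print(f"Fourier-magnitude residual: {res0:.3f} -> {res:.3f}")
```

Both steps are projections, so the Fourier-magnitude residual is non-increasing; the abstract's point is that swapping the object-domain projection for an MEM update yields a broader class of such iterations.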

20. Information Design Theories

Science.gov (United States)

2014-01-01

Information design has practical and theoretical components. As an academic discipline we may view information design as a combined discipline, a practical theory, or as a theoretical practice. So far information design has incorporated facts, influences, methods, practices, principles, processes, strategies, and tools from a large number of…

1. Quantum biological information theory

CERN Document Server

Djordjevic, Ivan B

2016-01-01

This book is a self-contained, tutorial-based introduction to quantum information theory and quantum biology. It serves as a single-source reference to the topic for researchers in bioengineering, communications engineering, electrical engineering, applied mathematics, biology, computer science, and physics. The book provides all the essential principles of the quantum biological information theory required to describe the quantum information transfer from DNA to proteins, the sources of genetic noise and genetic errors as well as their effects. Integrates quantum information and quantum biology concepts; Assumes only knowledge of basic concepts of vector algebra at undergraduate level; Provides a thorough introduction to basic concepts of quantum information processing, quantum information theory, and quantum biology; Includes in-depth discussion of the quantum biological channel modelling, quantum biological channel capacity calculation, quantum models of aging, quantum models of evolution, quantum models o...

2. Theories of information behavior

CERN Document Server

Erdelez, Sandra; McKechnie, Lynne

2005-01-01

This unique book presents authoritative overviews of more than 70 conceptual frameworks for understanding how people seek, manage, share, and use information in different contexts. A practical and readable reference to both well-established and newly proposed theories of information behavior, the book includes contributions from 85 scholars from 10 countries. Each theory description covers origins, propositions, methodological implications, usage, links to related conceptual frameworks, and listings of authoritative primary and secondary references. The introductory chapters explain key concepts, theory–method connections, and the process of theory development.

3. Combinatorial optimization: theory and algorithms

CERN Document Server

Korte, Bernhard

2018-01-01

This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It is based on numerous courses on combinatorial optimization and specialized topics, mostly at graduate level. This book reviews the fundamentals, covers the classical topics (paths, flows, matching, matroids, NP-completeness, approximation algorithms) in detail, and proceeds to advanced and recent topics, some of which have not appeared in a textbook before. Throughout, it contains complete but concise proofs, and also provides numerous exercises and references. This sixth edition has again been updated, revised, and significantly extended. Among other additions, there are new sections on shallow-light trees, submodular function maximization, smoothed analysis of the knapsack problem, the (ln 4+ɛ)-approximation for Steiner trees, and the VPN theorem. Thus, this book continues to represent the state of the art of combinatorial opti...

4. Learning theory of distributed spectral algorithms

International Nuclear Information System (INIS)

Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

2017-01-01

Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms. (paper)
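A stripped-down illustration of the divide-and-conquer idea, with plain one-dimensional ridge regression standing in for a kernel-based spectral algorithm and entirely hypothetical data: each machine fits a local estimator on its share of the data, and the results are averaged.

```python
def local_ridge(xs, ys, lam):
    # Closed-form one-dimensional ridge regression: w = <x, y> / (<x, x> + lam).
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def distributed_ridge(xs, ys, m, lam):
    """Divide-and-conquer: fit on m disjoint subsets, average the local estimators."""
    chunk = len(xs) // m
    estimates = [local_ridge(xs[i * chunk:(i + 1) * chunk],
                             ys[i * chunk:(i + 1) * chunk], lam)
                 for i in range(m)]
    return sum(estimates) / m

# Hypothetical data from y = 2x plus a small deterministic perturbation.
xs = [i / 100 for i in range(1, 401)]
ys = [2 * x + 0.01 * ((-1) ** i) for i, x in enumerate(xs)]
w = distributed_ridge(xs, ys, m=8, lam=1e-3)
print(round(w, 3))  # close to the true slope 2
```

The paper's contribution is the learning theory for the kernelized version of exactly this scheme: error bounds and minimax rates showing how large m may grow before averaging degrades accuracy.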

5. The theory of hybrid stochastic algorithms

International Nuclear Information System (INIS)

Kennedy, A.D.

1989-01-01

These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs

6. Quantum Information Theory - an Invitation

Science.gov (United States)

Werner, Reinhard F.

Quantum information and quantum computers have received a lot of public attention recently. Quantum computers have been advertised as a kind of warp drive for computing, and indeed the promise of the algorithms of Shor and Grover is to perform computations which are extremely hard or even provably impossible on any merely "classical" computer. In this article I give an account of the basic concepts of quantum information theory, staying as much as possible in the area of general agreement. The article is divided into two parts. The first (up to the end of Sect. 2.5) is mostly in plain English, centered around the exploration of what can or cannot be done with quantum systems as information carriers. The second part, Sect. 2.6, then gives a description of the mathematical structures and of some of the tools needed to develop the theory.

7. Dynamic statistical information theory

Institute of Scientific and Technical Information of China (English)

2006-01-01

In recent years we extended Shannon static statistical information theory to dynamic processes and established a Shannon dynamic statistical information theory, whose core is the evolution law of dynamic entropy and dynamic information. We also proposed a corresponding Boltzmann dynamic statistical information theory. Based on the fact that the state variable evolution equations of the respective dynamic systems, i.e. the Fokker-Planck equation and the Liouville diffusion equation, can be regarded as their information symbol evolution equations, we derived the nonlinear evolution equations of Shannon dynamic entropy density and dynamic information density and the nonlinear evolution equations of Boltzmann dynamic entropy density and dynamic information density, which describe respectively the evolution law of dynamic entropy and dynamic information. The evolution equations of these two kinds of dynamic entropy and dynamic information show in unison that the time rate of change of the dynamic entropy densities is caused by their drift, diffusion and production in state variable space inside the systems and in coordinate space in the transmission processes, and that the time rate of change of the dynamic information densities originates from their drift, diffusion and dissipation in state variable space inside the systems and in coordinate space in the transmission processes. Entropy and information are thus combined with the state of the systems and its law of motion. Furthermore we presented the formulas of the two kinds of entropy production rates and information dissipation rates, and the expressions of the two kinds of drift information flows and diffusion information flows. We proved that the two kinds of information dissipation rates (or the decrease rates of the total information) are equal to their corresponding entropy production rates (or the increase rates of the total entropy) in the same dynamic system. We obtained the formulas of two kinds of dynamic mutual information and dynamic channel

8. Constructor theory of information

Science.gov (United States)

Deutsch, David; Marletto, Chiara

2015-01-01

We propose a theory of information expressed solely in terms of which transformations of physical systems are possible and which are impossible—i.e. in constructor-theoretic terms. It includes conjectured, exact laws of physics expressing the regularities that allow information to be physically instantiated. Although these laws are directly about information, independently of the details of particular physical instantiations, information is not regarded as an a priori mathematical or logical concept, but as something whose nature and properties are determined by the laws of physics alone. This theory solves a problem at the foundations of existing information theory, namely that information and distinguishability are each defined in terms of the other. It also explains the relationship between classical and quantum information, and reveals the single, constructor-theoretic property underlying the most distinctive phenomena associated with the latter, including the lack of in-principle distinguishability of some states, the impossibility of cloning, the existence of pairs of variables that cannot simultaneously have sharp values, the fact that measurement processes can be both deterministic and unpredictable, the irreducible perturbation caused by measurement, and locally inaccessible information (as in entangled systems). PMID:25663803

9. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

Directory of Open Access Journals (Sweden)

Huimin Lu

2013-01-01

This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of the algorithm, information sense and perception, memory storage, divergent thinking, convergent thinking, and the evaluation system, are simulated and modeled. The algorithm fully exploits the innovative-thinking capabilities of knowledge in information fusion and attempts to convert the abstract conceptions of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter of the algorithm on its performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information.

10. Wave theory of information

CERN Document Server

Franceschetti, Massimo

2017-01-01

Understand the relationship between information theory and the physics of wave propagation with this expert guide. Balancing fundamental theory with engineering applications, it describes the mechanism and limits for the representation and communication of information using electromagnetic waves. Information-theoretic laws relating functional approximation and quantum uncertainty principles to entropy, capacity, mutual information, rate distortion, and degrees of freedom of band-limited radiation are derived and explained. Both stochastic and deterministic approaches are explored, and applications for sensing and signal reconstruction, wireless communication, and networks of multiple transmitters and receivers are reviewed. With end-of-chapter exercises and suggestions for further reading enabling in-depth understanding of key concepts, it is the ideal resource for researchers and graduate students in electrical engineering, physics and applied mathematics looking for a fresh perspective on classical informat...

11. Quantum information theory

CERN Document Server

Wilde, Mark M

2017-01-01

Developing many of the major, exciting, pre- and post-millennium developments from the ground up, this book is an ideal entry point for graduate students into quantum information theory. Significant attention is given to quantum mechanics for quantum information theory, and careful studies of the important protocols of teleportation, superdense coding, and entanglement distribution are presented. In this new edition, readers can expect to find over 100 pages of new material, including detailed discussions of Bell's theorem, the CHSH game, Tsirelson's theorem, the axiomatic approach to quantum channels, the definition of the diamond norm and its interpretation, and a proof of the Choi–Kraus theorem. Discussion of the importance of the quantum dynamic capacity formula has been completely revised, and many new exercises and references have been added. This new edition will be welcomed by the upcoming generation of quantum information theorists and the already established community of classical information theo...

12. Nonequilibrium molecular dynamics: theory, algorithms and applications

CERN Document Server

Todd, Billy D

2017-01-01

Written by two specialists with over twenty-five years of experience in the field, this valuable text presents a wide range of topics within the growing field of nonequilibrium molecular dynamics (NEMD). It introduces theories which are fundamental to the field - namely, nonequilibrium statistical mechanics and nonequilibrium thermodynamics - and provides state-of-the-art algorithms and advice for designing reliable NEMD code, as well as examining applications for both atomic and molecular fluids. It discusses homogenous and inhomogenous flows and pays considerable attention to highly confined fluids, such as nanofluidics. In addition to statistical mechanics and thermodynamics, the book covers the themes of temperature and thermodynamic fluxes and their computation, the theory and algorithms for homogenous shear and elongational flows, response theory and its applications, heat and mass transport algorithms, applications in molecular rheology, highly confined fluids (nanofluidics), the phenomenon of slip and...

13. Interior point algorithms: theory and analysis

CERN Document Server

Ye, Yinyu

2011-01-01

The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject.

14. Monte Carlo algorithms for lattice gauge theory

International Nuclear Information System (INIS)

Creutz, M.

1987-05-01

Various techniques are reviewed which have been used in numerical simulations of lattice gauge theories. After formulating the problem, the Metropolis et al. algorithm and some interesting variations are discussed. The numerous proposed schemes for including fermionic fields in the simulations are summarized. Langevin, microcanonical, and hybrid approaches to simulating field theories via differential evolution in a fictitious time coordinate are treated. Some speculations are made on new approaches to fermionic simulations.
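
The Metropolis accept/reject step that the review starts from is easiest to see on a toy model. The sketch below applies it to a one-dimensional Ising chain rather than a gauge theory (lattice size, inverse temperature, and sweep count are illustrative choices, not from the report); for lattice gauge theory the same update is applied to group-valued link variables with a plaquette action.

```python
import math
import random

def metropolis_ising(n=20, beta=1.0, sweeps=200, seed=0):
    """Toy Metropolis sampler for a 1-D Ising chain with periodic
    boundaries; an illustration of the update rule, not a lattice
    gauge simulation."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            # Energy change from flipping spin i (nearest neighbours only).
            delta_e = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
            # Metropolis rule: always accept a downhill move, otherwise
            # accept with probability exp(-beta * delta_e).
            if delta_e <= 0 or rng.random() < math.exp(-beta * delta_e):
                spins[i] = -spins[i]
    return spins

def energy(spins):
    """Ising chain energy H = -sum_i s_i s_{i+1} (periodic)."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
```

The fictitious-time (Langevin, microcanonical, hybrid) schemes mentioned in the abstract replace this local accept/reject step with a global deterministic or stochastic evolution of all fields at once.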

15. Linear programming mathematics, theory and algorithms

CERN Document Server

1996-01-01

Linear Programming provides an in-depth look at simplex based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This then is followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.

16. Computer and machine vision theory, algorithms, practicalities

CERN Document Server

Davies, E R

2012-01-01

Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

17. Accuracy verification methods theory and algorithms

CERN Document Server

Mali, Olli; Repin, Sergey

2014-01-01

The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control. The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from the theory; (3) discuss their advantages and drawbacks and areas of applicability, and give recommendations and examples.

18. An introduction to information theory

CERN Document Server

Reza, Fazlollah M

1994-01-01

Graduate-level study for engineering students presents elements of modern probability theory, information theory, coding theory, more. Emphasis on sample space, random variables, capacity, etc. Many reference tables and extensive bibliography. 1961 edition.

19. Data clustering theory, algorithms, and applications

CERN Document Server

Gan, Guojun; Wu, Jianhong

2007-01-01

Cluster analysis is an unsupervised process that divides a set of objects into homogeneous groups. This book starts with basic information on cluster analysis, including the classification of data and the corresponding similarity measures, followed by the presentation of over 50 clustering algorithms in groups according to some specific baseline methodologies such as hierarchical, center-based, and search-based methods. As a result, readers and users can easily identify an appropriate algorithm for their applications and compare novel ideas with existing results. The book also provides examples of clustering applications to illustrate the advantages and shortcomings of different clustering architectures and algorithms. Application areas include pattern recognition, artificial intelligence, information technology, image processing, biology, psychology, and marketing. Readers also learn how to perform cluster analysis with the C/C++ and MATLAB® programming languages.

20. Advanced algorithms for information science

Energy Technology Data Exchange (ETDEWEB)

Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

1998-12-31

This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

1. Advanced algorithms for information science

International Nuclear Information System (INIS)

Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

1998-01-01

This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

2. Information theory of molecular systems

CERN Document Server

Nalewajski, Roman F

2006-01-01

As well as providing a unified outlook on physics, Information Theory (IT) has numerous applications in chemistry and biology owing to its ability to provide a measure of the entropy/information contained within probability distributions and criteria of their information "distance" (similarity) and independence. Information Theory of Molecular Systems applies standard IT to classical problems in the theory of electronic structure and chemical reactivity. The book starts by introducing the basic concepts of modern electronic structure/reactivity theory based upon the Density Functional Theory.

3. Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.

Science.gov (United States)

Petry, Frederick E.; And Others

1993-01-01

Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…
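
The query-reweighting idea can be sketched as a small genetic algorithm. Everything below (population size, truncation selection, one-point crossover, the mutation rate, and the caller-supplied fitness function, which in the article's setting would come from relevance-feedback precision) is an illustrative assumption, not the authors' implementation.

```python
import random

def evolve_weights(fitness, n_terms=4, pop_size=20, generations=40, seed=1):
    """Minimal genetic-algorithm sketch for tuning the term weights of a
    weighted Boolean query; `fitness` maps a weight vector to a score
    (here a hypothetical stand-in for retrieval effectiveness)."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_terms)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_terms)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # occasional mutation
                child[rng.randrange(n_terms)] = rng.random()
            children.append(child)
        pop = parents + children                   # elitist: best survives
    return max(pop, key=fitness)
```

Because the top half of each generation is carried over unchanged, the best fitness found never decreases across generations.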

4. Information systems theory

CERN Document Server

Dwivedi, Yogesh K; Schneberger, Scott L

2011-01-01

The overall mission of this book is to provide a comprehensive understanding and coverage of the various theories and models used in IS research. Specifically, it aims to focus on the following key objectives: To describe the various theories and models applicable to studying IS/IT management issues. To outline and describe, for each of the various theories and models, independent and dependent constructs, reference discipline/originating area, originating author(s), seminal articles, level of analysis (i.e. firm, individual, industry) and links with other theories. To provide a critical review...

5. Information theory in analytical chemistry

National Research Council Canada - National Science Library

Eckschlager, Karel; Danzer, Klaus

1994-01-01

Contents: The aim of analytical chemistry - Basic concepts of information theory - Identification of components - Qualitative analysis - Quantitative analysis - Multicomponent analysis - Optimum analytical...

6. System parameter identification information criteria and algorithms

CERN Document Server

Chen, Badong; Hu, Jinchun; Principe, Jose C

2013-01-01

Recently, criterion functions based on information theoretic measures (entropy, mutual information, information divergence) have attracted attention and become an emerging area of study in the signal processing and system identification domains. This book presents a systematic framework for system identification and information processing, investigating system identification from an information theory point of view. The book is divided into six chapters, which cover the information needed to understand the theory and application of system parameter identification. The authors' research pr...

7. Informed Grounded Theory

Science.gov (United States)

Thornberg, Robert

2012-01-01

There is a widespread idea that in grounded theory (GT) research, the researcher has to delay the literature review until the end of the analysis to avoid contamination--a dictum that might turn educational researchers away from GT. Nevertheless, in this article the author (a) problematizes the dictum of delaying a literature review in classic…

8. Genre theory in information studies

CERN Document Server

Andersen, Jack

2015-01-01

This book highlights the important role genre theory plays within information studies. It illustrates how modern genre studies inform and enrich the study of information, and conversely how the study of information makes its own independent contributions to the study of genre.

9. Geometric theory of information

CERN Document Server

2014-01-01

This book brings together geometric tools and their applications for information analysis. It collects current and emerging uses of information geometry in the interdisciplinary fields of advanced signal, image and video processing; complex data modeling and analysis; information ranking and retrieval; coding; cognitive systems; optimal control; statistics on manifolds; machine learning; speech and sound recognition; and natural language processing, all of which are also substantially relevant for industry.

10. Algorithms, architectures and information systems security

CERN Document Server

Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

2008-01-01

This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security covering...

11. Information theory and statistics

CERN Document Server

Kullback, Solomon

1968-01-01

Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.
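
The logarithmic measure of information at the heart of the book is the Kullback–Leibler divergence (relative entropy), which underlies its treatment of hypothesis testing. A minimal sketch for discrete distributions:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats between discrete
    distributions given as aligned probability lists. Terms with
    p_i = 0 contribute nothing by the usual 0 * log 0 = 0 convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

D(p || q) is nonnegative, zero only when p = q, and asymmetric in its arguments, which is why it is a divergence rather than a metric.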

12. Quantum information theory: technological challenge

International Nuclear Information System (INIS)

Calixto, M.

2001-01-01

The new Quantum Information Theory augurs powerful machines that obey the entangled logic of the subatomic world. Parallelism, entanglement, teleportation, no-cloning and quantum cryptography are typical peculiarities of this novel way of understanding computation. (Author) 24 refs

13. An informational theory of privacy

NARCIS (Netherlands)

Schottmuller, C.; Jann, Ole

2016-01-01

We develop a theory that explains how and when privacy can increase welfare. Without privacy, some individuals misrepresent their preferences, because they will otherwise be statistically discriminated against. This "chilling effect" hurts them individually, and impairs information aggregation. The...

14. Information theory in molecular biology

OpenAIRE

2004-01-01

This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon's theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the...

15. Bilinear Inverse Problems: Theory, Algorithms, and Applications

Science.gov (United States)

Ling, Shuyang

We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical...

16. The theory of quantum information

CERN Document Server

Watrous, John

2018-01-01

This largely self-contained book on the theory of quantum information focuses on precise mathematical formulations and proofs of fundamental facts that form the foundation of the subject. It is intended for graduate students and researchers in mathematics, computer science, and theoretical physics seeking to develop a thorough understanding of key results, proof techniques, and methodologies that are relevant to a wide range of research topics within the theory of quantum information and computation. The book is accessible to readers with an understanding of basic mathematics, including linear algebra, mathematical analysis, and probability theory. An introductory chapter summarizes these necessary mathematical prerequisites, and starting from this foundation, the book includes clear and complete proofs of all results it presents. Each subsequent chapter includes challenging exercises intended to help readers to develop their own skills for discovering proofs concerning the theory of quantum information.

17. The Quantitative Theory of Information

DEFF Research Database (Denmark)

Topsøe, Flemming; Harremoës, Peter

2008-01-01

Information Theory as developed by Shannon and followers is becoming more and more important in a number of sciences. The concepts appear to be just the right ones with intuitively appealing operational interpretations. Furthermore, the information theoretical quantities are connected by powerful...

18. A general algorithm for distributing information in a graph

OpenAIRE

Aji, Srinivas M.; McEliece, Robert J.

1997-01-01

We present a general “message-passing” algorithm for distributing information in a graph. This algorithm may help us to understand the approximate correctness of both the Gallager-Tanner-Wiberg algorithm, and the turbo-decoding algorithm.
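
A minimal concrete instance of such a message-passing algorithm is sum-product on a chain, the simplest graph on which forward and backward messages yield exact marginals. The binary-variable setting and the potentials in the test are illustrative; the paper's general algorithm covers arbitrary graphs and underlies the decoding algorithms it mentions.

```python
def chain_marginals(unary, pairwise):
    """Sum-product message passing on a chain of binary variables.
    unary[i][x] is the local factor at node i; pairwise[i][x][y]
    couples node i and node i+1. Returns normalized marginals."""
    n = len(unary)
    # Forward messages: information flowing left -> right.
    m_f = [[1.0, 1.0] for _ in range(n)]
    for i in range(1, n):
        for x in range(2):
            m_f[i][x] = sum(m_f[i - 1][y] * unary[i - 1][y] * pairwise[i - 1][y][x]
                            for y in range(2))
    # Backward messages: information flowing right -> left.
    m_b = [[1.0, 1.0] for _ in range(n)]
    for i in range(n - 2, -1, -1):
        for x in range(2):
            m_b[i][x] = sum(m_b[i + 1][y] * unary[i + 1][y] * pairwise[i][x][y]
                            for y in range(2))
    # Combine local evidence with both message streams and normalize.
    marginals = []
    for i in range(n):
        b = [m_f[i][x] * unary[i][x] * m_b[i][x] for x in range(2)]
        z = sum(b)
        marginals.append([v / z for v in b])
    return marginals
```

On a tree the same two-sweep schedule remains exact; on graphs with cycles, iterating the updates gives the approximate ("loopy") regime in which turbo decoding operates.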

19. Quantum information and relativity theory

International Nuclear Information System (INIS)

Peres, Asher; Terno, Daniel R.

2004-01-01

This article discusses the intimate relationship between quantum mechanics, information theory, and relativity theory. Taken together these are the foundations of present-day theoretical physics, and their interrelationship is an essential part of the theory. The acquisition of information from a quantum system by an observer occurs at the interface of classical and quantum physics. The authors review the essential tools needed to describe this interface, i.e., Kraus matrices and positive-operator-valued measures. They then discuss how special relativity imposes severe restrictions on the transfer of information between distant systems and the implications of the fact that quantum entropy is not a Lorentz-covariant concept. This leads to a discussion of how it comes about that Lorentz transformations of reduced density matrices for entangled systems may not be completely positive maps. Quantum field theory is, of course, necessary for a consistent description of interactions. Its structure implies a fundamental tradeoff between detector reliability and localizability. Moreover, general relativity produces new and counterintuitive effects, particularly when black holes (or, more generally, event horizons) are involved. In this more general context the authors discuss how most of the current concepts in quantum information theory may require a reassessment.

20. Optimized Bayesian dynamic advising theory and algorithms

CERN Document Server

Karny, Miroslav

2006-01-01

Written by one of the world's leading groups in the area of Bayesian identification, control, and decision making, this book provides the theoretical and algorithmic basis of optimized probabilistic advising. Starting from abstract ideas and formulations, and culminating in detailed algorithms, the book comprises a unified treatment of an important problem of the design of advisory systems supporting supervisors of complex processes. It introduces the theoretical and algorithmic basis of developed advising, relying on novel and powerful combination black-box modelling by dynamic mixture models

1. Epistemology as Information Theory: From Leibniz to Omega

OpenAIRE

Chaitin, G. J.

2005-01-01

In 1686 in his Discours de Metaphysique, Leibniz points out that if an arbitrarily complex theory is permitted then the notion of "theory" becomes vacuous because there is always a theory. This idea is developed in the modern theory of algorithmic information, which deals with the size of computer programs and provides a new view of Gödel's work on incompleteness and Turing's work on uncomputability. Of particular interest is the halting probability Omega, whose bits are irreducible, i.e., ma...

2. Glowworm swarm optimization theory, algorithms, and applications

CERN Document Server

Kaipa, Krishnanand N

2017-01-01

This book provides a comprehensive account of the glowworm swarm optimization (GSO) algorithm, including details of the underlying ideas, theoretical foundations, algorithm development, various applications, and MATLAB programs for the basic GSO algorithm. It also discusses several research problems at different levels of sophistication that can be attempted by interested researchers. The generality of the GSO algorithm is evident in its application to diverse problems ranging from optimization to robotics. Examples include computation of multiple optima, annual crop planning, cooperative exploration, distributed search, multiple source localization, contaminant boundary mapping, wireless sensor networks, clustering, knapsack, numerical integration, solving fixed point equations, solving systems of nonlinear equations, and engineering design optimization. The book is a valuable resource for researchers as well as graduate and undergraduate students in the area of swarm intelligence and computational intellige...

3. Autonomous intelligent vehicles theory, algorithms, and implementation

CERN Document Server

Cheng, Hong

2011-01-01

Here is the latest on intelligent vehicles, covering object and obstacle detection and recognition and vehicle motion control. Includes a navigation approach using global views; introduces algorithms for lateral and longitudinal motion control and more.

4. Information Theory and Plasma Turbulence

International Nuclear Information System (INIS)

Dendy, R. O.

2009-01-01

Information theory, applied directly to measured signals, yields new perspectives on, and quantitative knowledge of, the physics of strongly nonlinear and turbulent phenomena in plasmas. It represents a new and productive element of the topical research programmes that use modern techniques to characterise strongly nonlinear signals from plasmas, and that address global plasma behaviour from a complex systems perspective. We here review some pioneering studies of mutual information in solar wind and magnetospheric plasmas, using techniques tested on standard complex systems.
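
For discretized signals, the mutual-information analyses reviewed here reduce to the plug-in estimator sketched below. Real plasma time series would first be binned into discrete symbols; the sketch makes no claim about the binning or lag choices used in the cited solar wind and magnetospheric studies.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples:
    I = sum_{x,y} p(x,y) * log[ p(x,y) / (p(x) p(y)) ]."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts
    py = Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

Identical signals give I(X;Y) = H(X), while independent signals give zero, which is what makes mutual information a useful nonlinear dependence measure for turbulent data.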

5. Theory and Algorithms for Global/Local Design Optimization

National Research Council Canada - National Science Library

Watson, Layne T; Guerdal, Zafer; Haftka, Raphael T

2005-01-01

The motivating application for this research is the global/local optimal design of composite aircraft structures such as wings and fuselages, but the theory and algorithms are more widely applicable...

6. On Representation in Information Theory

Directory of Open Access Journals (Sweden)

Joseph E. Brenner

2011-09-01

Semiotics is widely applied in theories of information. Following Peirce's original triadic characterization of reality, the linguistic processes involved in information (production, transmission, reception, and understanding) would all appear to be interpretable in terms of signs and their relations to their objects. Perhaps the most important of these relations is that of representation: one entity standing for, or representing, some other. For example, an index (one of the three major kinds of signs) is said to represent something by being directly related to its object. My position, however, is that the concept of symbolic representations having such roles in information, as intermediaries, is fraught with the same difficulties as in representational theories of mind. I have proposed an extension of logic to complex real phenomena, including mind and information (Logic in Reality, LIR), most recently at the 4th International Conference on the Foundations of Information Science (Beijing, August 2010). LIR provides explanations for the evolution of complex processes, including information, that do not require any entities other than the processes themselves. In this paper, I discuss the limitations of the standard relation of representation. I argue that more realistic pictures of informational systems can be provided by reference to information as an energetic process, following the categorial ontology of LIR. This approach enables naïve, anti-realist conceptions of anti-representationalism to be avoided, and enables an approach to both information and meaning in the same novel logical framework.

7. Fundamentals of information theory and coding design

CERN Document Server

Togneri, Roberto

2003-01-01

In a clear, concise, and modular format, this book introduces the fundamental concepts and mathematics of information and coding theory. The authors emphasize how a code is designed and discuss the main properties and characteristics of different coding algorithms along with strategies for selecting the appropriate codes to meet specific requirements. They provide comprehensive coverage of source and channel coding, address arithmetic, BCH, and Reed-Solomon codes and explore some more advanced topics such as PPM compression and turbo codes. Worked examples and sets of basic and advanced exercises in each chapter reinforce the text's clear explanations of all concepts and methodologies.
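
Source coding of the kind the book covers can be illustrated with the classic Huffman construction; this is a minimal sketch of the standard algorithm, not code from the book.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code from the symbol frequencies in `text`.
    Returns a dict mapping each symbol to a prefix-free bit string."""
    freq = Counter(text)
    if len(freq) == 1:                     # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, unique tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two lightest subtrees, prefixing their codes.
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]
```

Frequent symbols receive short codewords, and no codeword is a prefix of another, so the bit stream decodes unambiguously; the arithmetic, BCH, and Reed-Solomon codes the book treats refine this basic source/channel picture.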

8. New MPPT algorithm based on hybrid dynamical theory

KAUST Repository

Elmetennani, Shahrazed

2014-11-01

This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and it has been validated by simulation tests under different working conditions. © 2014 IEEE.

9. New MPPT algorithm based on hybrid dynamical theory

KAUST Repository

Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Benmansour, K.; Boucherit, M. S.; Tadjine, M.

2014-01-01

This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and it has been validated by simulation tests under different working conditions. © 2014 IEEE.

10. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

Science.gov (United States)

Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

2016-03-03

This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by centralized clustering algorithms.
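
One building block of such a scheme, consensus averaging over a connected sensor graph, can be sketched as follows. The topology, step size, and round count here are illustrative assumptions; in the paper's setting, nodes would run consensus on per-cluster sums and counts between k-means assignment steps so that every node converges to the same global centroids without a fusion center.

```python
def consensus_average(values, neighbors, rounds=200, eps=0.2):
    """Iterative consensus averaging: each node repeatedly mixes its
    value with its neighbours' values. On a connected undirected graph
    with a small enough step size eps, all nodes converge to the
    global average while the network-wide sum is preserved."""
    x = list(values)
    for _ in range(rounds):
        # Synchronous update: x_i <- x_i + eps * sum_j (x_j - x_i).
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x
```

Because each node only communicates with its neighbours, this primitive fits the WSN constraint that no node sees the whole data set.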

11. Quantum information theory mathematical foundation

CERN Document Server

Hayashi, Masahito

2017-01-01

This graduate textbook provides a unified view of quantum information theory. Clearly explaining the necessary mathematical basis, it merges key topics from both information-theoretic and quantum-mechanical viewpoints and provides lucid explanations of the basic results. Thanks to this unified approach, it makes accessible such advanced topics in quantum communication as quantum teleportation, superdense coding, quantum state transmission (quantum error-correction) and quantum encryption. Since the publication of the preceding book Quantum Information: An Introduction, there have been tremendous strides in the field of quantum information. In particular, the following topics – all of which are addressed here – have seen major advances: quantum state discrimination, quantum channel capacity, bipartite and multipartite entanglement, security analysis on quantum communication, reverse Shannon theorem and uncertainty relation. With regard to the analysis of quantum security, the present book employs an improved...

12. Astrophysical data analysis with information field theory

International Nuclear Information System (INIS)

Enßlin, Torsten

2014-01-01

Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.

13. Astrophysical data analysis with information field theory

Science.gov (United States)

Enßlin, Torsten

2014-12-01

Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.

14. Astrophysical data analysis with information field theory

Energy Technology Data Exchange (ETDEWEB)

Enßlin, Torsten, E-mail: ensslin@mpa-garching.mpg.de [Max Planck Institut für Astrophysik, Karl-Schwarzschild-Straße 1, D-85748 Garching, Germany and Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, D-80539 München (Germany)

2014-12-05

Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular, as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.

15. Support vector machines optimization based theory, algorithms, and extensions

CERN Document Server

Deng, Naiyang; Zhang, Chunhua

2013-01-01

Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs): classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs, since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twi

16. Digital and discrete geometry theory and algorithms

CERN Document Server

Chen, Li

2014-01-01

This book provides comprehensive coverage of the modern methods for geometric problems in the computing sciences. It also covers concurrent topics in data sciences including geometric processing, manifold learning, Google search, cloud data, and R-tree for wireless networks and BigData. The author investigates digital geometry and its related constructive methods in discrete geometry, offering detailed methods and algorithms. The book is divided into five sections: basic geometry; digital curves, surfaces and manifolds; discretely represented objects; geometric computation and processing; and a

17. An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems

KAUST Repository

Zenil, Hector

2017-09-08

We introduce a conceptual framework and an interventional calculus to steer and manipulate systems based on their intrinsic algorithmic probability, using the universal principles of the theory of computability and algorithmic information. By applying sequences of controlled interventions to systems and networks, we estimate how changes in their algorithmic information content are reflected in positive/negative shifts towards and away from randomness. The strong connection between approximations to algorithmic complexity (the size of the shortest generating mechanism) and causality induces a sequence of perturbations that ranks the network elements by their steering capabilities. This new dimension unmasks a separation between causal and non-causal components, providing a suite of powerful parameter-free algorithms of wide applicability, ranging from optimal dimension reduction and maximal randomness analysis to system control. We introduce methods for reprogramming systems that do not require full knowledge of, or access to, the system's actual kinetic equations or any probability distributions. A causal interventional analysis of synthetic and regulatory biological networks reveals how algorithmic reprogramming qualitatively reshapes the system's dynamic landscape. For example, during cellular differentiation we find a decrease in the number of elements, corresponding to a transition away from randomness, and a combination of the system's intrinsic properties and its capability to be algorithmically reprogrammed can reconstruct an epigenetic landscape. The interventional calculus is broadly applicable to predictive causal inference in systems such as networks, and is of relevance to a variety of machine and causal learning techniques driving model-based approaches to better understanding and manipulating complex systems.

18. Nonlinear model predictive control theory and algorithms

CERN Document Server

Grüne, Lars

2017-01-01

This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

19. Higher arithmetic an algorithmic introduction to number theory

CERN Document Server

Edwards, Harold M

2008-01-01

Although number theorists have sometimes shunned and even disparaged computation in the past, today's applications of number theory to cryptography and computer security demand vast arithmetical computations. These demands have shifted the focus of studies in number theory and have changed attitudes toward computation itself. The important new applications have attracted a great many students to number theory, but the best reason for studying the subject remains what it was when Gauss published his classic Disquisitiones Arithmeticae in 1801: Number theory is the equal of Euclidean geometry--some would say it is superior to Euclidean geometry--as a model of pure, logical, deductive thinking. An arithmetical computation, after all, is the purest form of deductive argument. Higher Arithmetic explains number theory in a way that gives deductive reasoning, including algorithms and computations, the central role. Hands-on experience with the application of algorithms to computational examples enables students to m...

20. Recoverability in quantum information theory

Science.gov (United States)

Wilde, Mark

The fact that the quantum relative entropy is non-increasing with respect to quantum physical evolutions lies at the core of many optimality theorems in quantum information theory and has applications in other areas of physics. In this work, we establish improvements of this entropy inequality in the form of physically meaningful remainder terms. One of the main results can be summarized informally as follows: if the decrease in quantum relative entropy between two quantum states after a quantum physical evolution is relatively small, then it is possible to perform a recovery operation, such that one can perfectly recover one state while approximately recovering the other. This can be interpreted as quantifying how well one can reverse a quantum physical evolution. Our proof method is elementary, relying on the method of complex interpolation, basic linear algebra, and the recently introduced Renyi generalization of a relative entropy difference. The theorem has a number of applications in quantum information theory, which have to do with providing physically meaningful improvements to many known entropy inequalities. This is based on arXiv:1505.04661, now accepted for publication in Proceedings of the Royal Society A. I acknowledge support from startup funds from the Department of Physics and Astronomy at LSU, the NSF under Award No. CCF-1350397, and the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019.

1. New Theory and Algorithms for Compressive Sensing

National Research Council Canada - National Science Library

Baraniuk, Richard G

2009-01-01

.... We first demonstrated the information scalability of CS. We applied CS principles to analog-to-digital conversion, showing ADC can be accomplished on structured high rate signals with sub-Nyquist sampling...

2. AN EDUCATIONAL THEORY MODEL--(SIGGS), AN INTEGRATION OF SET THEORY, INFORMATION THEORY, AND GRAPH THEORY WITH GENERAL SYSTEMS THEORY.

Science.gov (United States)

MACCIA, ELIZABETH S.; AND OTHERS

AN ANNOTATED BIBLIOGRAPHY OF 20 ITEMS AND A DISCUSSION OF ITS SIGNIFICANCE WAS PRESENTED TO DESCRIBE CURRENT UTILIZATION OF SUBJECT THEORIES IN THE CONSTRUCTION OF AN EDUCATIONAL THEORY. ALSO, A THEORY MODEL WAS USED TO DEMONSTRATE CONSTRUCTION OF A SCIENTIFIC EDUCATIONAL THEORY. THE THEORY MODEL INCORPORATED SET THEORY (S), INFORMATION THEORY…

3. Nonsmooth Optimization Algorithms, System Theory, and Software Tools

Science.gov (United States)

1993-04-13

"Nonsmooth Optimization Algorithms, System Theory, and Software Tools," AFOSR-90-0068. Elijah Polak, Professor and Principal Investigator.

4. Neighborhood Hypergraph Based Classification Algorithm for Incomplete Information System

Directory of Open Access Journals (Sweden)

Feng Hu

2015-01-01

Full Text Available The problem of classification in incomplete information systems is a hot issue in intelligent information processing. The hypergraph is a new intelligent method for machine learning. However, it is hard to process an incomplete information system with the traditional hypergraph, for two reasons: (1) the hyperedges are generated randomly in the traditional hypergraph model; (2) the existing methods are unsuitable for incomplete information systems because of their missing values. In this paper, we propose a novel classification algorithm for incomplete information systems based on the hypergraph model and rough set theory. First, we initialize the hypergraph. Second, we classify the training set by the neighborhood hypergraph. Third, under the guidance of rough set theory, we replace the poor hyperedges. After that, we obtain a good classifier. The proposed approach is tested on 15 data sets from the UCI machine learning repository and compared with existing methods such as C4.5, SVM, Naive Bayes, and KNN. The experimental results show that the proposed algorithm has better performance in terms of precision, recall, AUC, and F-measure.

5. Information theory and coding solved problems

CERN Document Server

Ivaniš, Predrag

2017-01-01

This book offers a comprehensive overview of information theory and error control coding, using a different approach than the existing literature. The chapters are organized according to the Shannon system model, where one block affects the others. A relatively brief theoretical introduction is provided at the beginning of every chapter, including a few additional examples and explanations but without any proofs, and a short overview of some aspects of abstract algebra is given at the end of the corresponding chapters. Characteristic complex examples, with many illustrations and tables, are chosen to provide detailed insight into the nature of the problem. Some limiting cases are presented to illustrate the connections with the theoretical bounds. The numerical values are carefully selected to provide in-depth explanations of the described algorithms. Although the examples in the different chapters can be considered separately, they are mutually connected and the conclusions for one considered proble...

6. Chemical Thermodynamics and Information Theory with Applications

CERN Document Server

Graham, Daniel J

2011-01-01

Thermodynamics and information theory touch every facet of chemistry. However, the physical chemistry curriculum digested by students worldwide is still heavily skewed toward heat/work principles established more than a century ago. Rectifying this situation, Chemical Thermodynamics and Information Theory with Applications explores applications drawn from the intersection of thermodynamics and information theory--two mature and far-reaching fields. In an approach that intertwines information science and chemistry, this book covers: The informational aspects of thermodynamic state equations The

7. Introduction to coding and information theory

CERN Document Server

Roman, Steven

1997-01-01

This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
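The entropy of an information source mentioned above can be made concrete in a few lines; this is a generic illustration of Shannon entropy, not code from the book:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a finite source,
    in bits per symbol; zero-probability outcomes contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is the most unpredictable binary source: 1 bit per symbol.
print(entropy([0.5, 0.5]))   # 1.0
# A biased coin carries less information per symbol.
print(entropy([0.9, 0.1]))
```

Shannon's Noiseless Coding Theorem, discussed in the book, states that this value lower-bounds the average codeword length of any uniquely decodable code for the source.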

8. Inference algorithms and learning theory for Bayesian sparse factor analysis

International Nuclear Information System (INIS)

Rattray, Magnus; Sharp, Kevin; Stegle, Oliver; Winn, John

2009-01-01

Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

9. Inference algorithms and learning theory for Bayesian sparse factor analysis

Energy Technology Data Exchange (ETDEWEB)

Rattray, Magnus; Sharp, Kevin [School of Computer Science, University of Manchester, Manchester M13 9PL (United Kingdom); Stegle, Oliver [Max-Planck-Institute for Biological Cybernetics, Tuebingen (Germany); Winn, John, E-mail: magnus.rattray@manchester.ac.u [Microsoft Research Cambridge, Roger Needham Building, Cambridge, CB3 0FB (United Kingdom)

2009-12-01

Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

10. Parallelization of a spherical Sn transport theory algorithm

International Nuclear Information System (INIS)

Haghighat, A.

1989-01-01

The work described in this paper derives a parallel algorithm for an R-dependent spherical S_N transport theory algorithm and studies its performance by testing different sample problems. The S_N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of S_N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S_N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S_N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins).

11. Novel information theory techniques for phonon spectroscopy

International Nuclear Information System (INIS)

Hague, J P

2007-01-01

The maximum entropy method (MEM) and spectral reverse Monte Carlo (SRMC) techniques are applied to the determination of the phonon density of states (PDOS) from heat-capacity data. The approach presented here takes advantage of the standard integral transform relating the PDOS with the specific heat at constant volume. MEM and SRMC are highly successful numerical approaches for inverting integral transforms. The formalism and algorithms necessary to carry out the inversion of specific heat curves are introduced, and where possible, I have concentrated on algorithms and experimental details for practical usage. Simulated data are used to demonstrate the accuracy of the approach. The main strength of the techniques presented here is that the resulting spectra are always physical: Computed PDOS is always positive and properly applied information theory techniques only show statistically significant detail. The treatment set out here provides a simple, cost-effective and reliable method to determine phonon properties of new materials. In particular, the new technique is expected to be very useful for establishing where interesting phonon modes and properties can be found, before spending time at large scale facilities
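The "standard integral transform relating the PDOS with the specific heat" can be illustrated with a toy forward model. The sketch below uses reduced units (kB = ħ = 1) and a hypothetical Gaussian PDOS; the function names are illustrative and this is not the paper's code. MEM and SRMC solve the inverse problem: given measured C_v(T), recover g(ω).

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule integral (written out to avoid NumPy-version-specific names)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def heat_capacity(T, omega, g):
    """Forward model C_v(T) = integral g(w) * k(w, T) dw with the standard
    harmonic-oscillator kernel k = x^2 e^x / (e^x - 1)^2, x = w / T,
    in reduced units. Inverting this transform is what MEM/SRMC do."""
    x = omega / T
    kernel = x**2 * np.exp(x) / (np.exp(x) - 1.0) ** 2
    return trapezoid(g * kernel, omega)

# Einstein-like PDOS: a narrow Gaussian peak at omega = 1, one mode total.
omega = np.linspace(0.01, 3.0, 2000)
g = np.exp(-((omega - 1.0) ** 2) / (2 * 0.05**2))
g = g / trapezoid(g, omega)

print(heat_capacity(10.0, omega, g))  # high T: approaches 1 (equipartition)
print(heat_capacity(0.1, omega, g))   # low T: modes freeze out, C_v -> 0
```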

12. Immersive Algorithms: Better Visualization with Less Information

DEFF Research Database (Denmark)

Bille, Philip; Gørtz, Inge Li

2017-01-01

Visualizing algorithms, through drawings, slideshow presentations, animations, videos, and software tools, is a key concept to enhance and support student learning. A typical visualization of an algorithm shows the data and then performs computation on the data. For instance, a standard visualization...

13. Algorithmic and experimental methods in algebra, geometry, and number theory

CERN Document Server

Decker, Wolfram; Malle, Gunter

2017-01-01

This book presents state-of-the-art research and survey articles that highlight work done within the Priority Program SPP 1489 “Algorithmic and Experimental Methods in Algebra, Geometry and Number Theory”, which was established and generously supported by the German Research Foundation (DFG) from 2010 to 2016. The goal of the program was to substantially advance algorithmic and experimental methods in the aforementioned disciplines, to combine the different methods where necessary, and to apply them to central questions in theory and practice. Of particular concern was the further development of freely available open source computer algebra systems and their interaction in order to create powerful new computational tools that transcend the boundaries of the individual disciplines involved.  The book covers a broad range of topics addressing the design and theoretical foundations, implementation and the successful application of algebraic algorithms in order to solve mathematical research problems. It off...

14. Optimization and Control of Bilinear Systems Theory, Algorithms, and Applications

CERN Document Server

Pardalos, Panos M

2008-01-01

Covers developments in bilinear systems theory. Focuses on the control of open physical processes functioning in a non-equilibrium mode. Emphasis is on three primary disciplines: modern differential geometry, control of dynamical systems, and optimization theory. Includes applications to the fields of quantum and molecular computing, control of physical processes, biophysics, superconducting magnetism, and physical information science.

15. Information filtering via weighted heat conduction algorithm

Science.gov (United States)

Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

2011-06-01

In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity can reach 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting recommendation performance.
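The heat-conduction step that the WHC algorithm modifies can be sketched on a toy bipartite network. The sketch below implements plain (unweighted) HC with hypothetical names; the authors' weighted variant would replace the 0/1 adjacency entries with edge weights before the two averaging steps:

```python
import numpy as np

def heat_conduction(A, target_user):
    """Plain heat-conduction recommendation on a user-object bipartite
    adjacency matrix A (users x objects, 0/1 entries). Heat starts on the
    target user's collected objects, is averaged onto users, then averaged
    back onto objects; collected objects are excluded from the ranking."""
    f0 = A[target_user].astype(float)                 # initial heat on collected objects
    ku = np.maximum(A.sum(axis=1), 1)                 # user degrees
    ko = np.maximum(A.sum(axis=0), 1)                 # object degrees
    user_heat = (A @ f0) / ku                         # average over each user's objects
    obj_heat = (A.T @ user_heat) / ko                 # average over each object's users
    obj_heat[f0 > 0] = -np.inf                        # never re-recommend collected items
    return obj_heat

# 3 users x 4 objects toy network.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
scores = heat_conduction(A, target_user=0)
print(np.argsort(scores)[::-1][:2])  # top-2 ranked unseen objects for user 0
```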

16. Information theoretic methods for image processing algorithm optimization

Science.gov (United States)

Prokushkin, Sergey F.; Galil, Erez

2015-01-01

Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results are barely achievable by manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

17. Foundations of digital signal processing theory, algorithms and hardware design

CERN Document Server

Gaydecki, Patrick

2005-01-01

An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

18. Search algorithms, hidden labour and information control

Directory of Open Access Journals (Sweden)

Paško Bilić

2016-06-01

Full Text Available The paper examines some of the processes behind the closely knit relationship between Google's ideologies of neutrality and objectivity and its global market dominance. Neutrality construction comprises an important element sustaining the company's economic position and is reflected in constant updates, estimates and changes to the utility and relevance of search results. Providing a purely technical solution to these issues proves to be increasingly difficult without a human hand in steering algorithmic solutions. Search relevance fluctuates and shifts through continuous tinkering and tweaking of the search algorithm. The company also uses third parties to hire human raters for performing quality assessments of algorithmic updates and adaptations in linguistically and culturally diverse global markets. The adaptation process contradicts the technical foundations of the company and calculations based on the initial PageRank algorithm. Annual market reports, Google's Search Quality Rating Guidelines, and reports from media specialising in the search engine optimisation business are analysed. The Search Quality Rating Guidelines document provides a rare glimpse into the internal architecture of search algorithms and the notions of utility and relevance, which are presented and structured as neutral and objective. Intertwined layers of ideology, hidden labour of human raters, advertising revenues, market dominance and control are discussed throughout the paper.

19. Comment on Gallistel: behavior theory and information theory: some parallels.

Science.gov (United States)

Nevin, John A

2012-05-01

20. Optimal interconnection trees in the plane theory, algorithms and applications

CERN Document Server

Brazil, Marcus

2015-01-01

This book explores fundamental aspects of geometric network optimisation with applications to a variety of real world problems. It presents, for the first time in the literature, a cohesive mathematical framework within which the properties of such optimal interconnection networks can be understood across a wide range of metrics and cost functions. The book makes use of this mathematical theory to develop efficient algorithms for constructing such networks, with an emphasis on exact solutions.  Marcus Brazil and Martin Zachariasen focus principally on the geometric structure of optimal interconnection networks, also known as Steiner trees, in the plane. They show readers how an understanding of this structure can lead to practical exact algorithms for constructing such trees.  The book also details numerous breakthroughs in this area over the past 20 years, features clearly written proofs, and is supported by 135 colour and 15 black and white figures. It will help graduate students, working mathematicians, ...

1. Information theory and rate distortion theory for communications and compression

CERN Document Server

Gibson, Jerry

2013-01-01

This book is very specifically targeted to problems in communications and compression by providing the fundamental principles and results in information theory and rate distortion theory for these applications and presenting methods that have proved and will prove useful in analyzing and designing real systems. The chapters contain treatments of entropy, mutual information, lossless source coding, channel capacity, and rate distortion theory; however, it is the selection, ordering, and presentation of the topics within these broad categories that is unique to this concise book. While the cover

2. Turing’s algorithmic lens: From computability to complexity theory

Directory of Open Access Journals (Sweden)

Díaz, Josep

2013-12-01

Full Text Available The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in non-deterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing's way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is.
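The P vs. NP distinction in the abstract above rests on the fact that verifying a proposed solution can be easy even when finding one may be hard. A minimal sketch using SAT, the canonical NP-complete problem (function name and encoding are illustrative, not from the article):

```python
def verify_sat(clauses, assignment):
    """Polynomial-time certificate check for SAT: deciding satisfiability
    may take exponential time (unless P = NP), but verifying a proposed
    truth assignment is linear in the formula size. Literals are signed
    integers: 3 means x3, -3 means NOT x3."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# Formula: (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: False, 3: True}))    # True
print(verify_sat(clauses, {1: False, 2: False, 3: False}))  # False
```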

3. Financial markets theory equilibrium, efficiency and information

CERN Document Server

Barucci, Emilio

2017-01-01

This work, now in a thoroughly revised second edition, presents the economic foundations of financial markets theory from a mathematically rigorous standpoint and offers a self-contained critical discussion based on empirical results. It is the only textbook on the subject to include more than two hundred exercises, with detailed solutions to selected exercises. Financial Markets Theory covers classical asset pricing theory in great detail, including utility theory, equilibrium theory, portfolio selection, mean-variance portfolio theory, CAPM, CCAPM, APT, and the Modigliani-Miller theorem. Starting from an analysis of the empirical evidence on the theory, the authors provide a discussion of the relevant literature, pointing out the main advances in classical asset pricing theory and the new approaches designed to address asset pricing puzzles and open problems (e.g., behavioral finance). Later chapters in the book contain more advanced material, including on the role of information in financial markets, non-c...

4. Fringe pattern analysis for optical metrology theory, algorithms, and applications

CERN Document Server

2014-01-01

The main objective of this book is to present the basic theoretical principles and practical applications for the classical interferometric techniques and the most advanced methods in the field of modern fringe pattern analysis applied to optical metrology. A major novelty of this work is the presentation of a unified theoretical framework based on the Fourier description of phase shifting interferometry using the Frequency Transfer Function (FTF) along with the theory of Stochastic Process for the straightforward analysis and synthesis of phase shifting algorithms with desired properties such

5. The discrete Fourier transform theory, algorithms and applications

CERN Document Server

Sundararajan, D

2001-01-01

This authoritative book provides comprehensive coverage of practical Fourier analysis. It develops the concepts right from the basics and gradually guides the reader to the advanced topics. It presents the latest and practically efficient DFT algorithms, as well as the computation of discrete cosine and Walsh-Hadamard transforms. The large number of visual aids such as figures, flow graphs and flow charts makes the mathematical topic easy to understand. In addition, the numerous examples and the set of C-language programs (a supplement to the book) help greatly in understanding the theory and

6. Information content of ozone retrieval algorithms

Science.gov (United States)

Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.

1989-01-01

The algorithms used for production processing by the major suppliers of ozone data are characterized to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.

7. Development of morphing algorithms for Histfactory using information geometry

Energy Technology Data Exchange (ETDEWEB)

Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

2016-07-01

Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.
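The information-geometry morphing itself is not spelled out in the abstract, but the baseline it builds on, HistFactory's piecewise-linear vertical interpolation between the nominal and ±1 σ templates, can be sketched as follows (a minimal illustration under that assumption, not the authors' new algorithm):

```python
def morph_bin(nominal, up, down, alpha):
    """Piecewise-linear vertical morphing of one bin yield.

    nominal, up, down: bin contents at alpha = 0, +1, -1 for a single
    nuisance parameter; alpha: the tested value of that parameter.
    """
    if alpha >= 0:
        return nominal + alpha * (up - nominal)
    return nominal - alpha * (down - nominal)

def morph_hist(nominal, up, down, alpha):
    """Apply the single-bin morph bin by bin across a histogram."""
    return [morph_bin(n, u, d, alpha) for n, u, d in zip(nominal, up, down)]
```

At alpha = ±1 the morph reproduces the up/down templates exactly; smoother (e.g. polynomial) interpolation codes remove the kink at alpha = 0, and probing alpha beyond ±1 extrapolates linearly, which is where the large-variation behavior mentioned in the abstract becomes relevant.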

8. Econophysics: from Game Theory and Information Theory to Quantum Mechanics

Science.gov (United States)

Jimenez, Edward; Moya, Douglas

2005-03-01

Rationality is the universal invariant among human behavior, the physical laws of the universe, and ordered, complex biological systems. Econophysics is both the use of physical concepts in Finance and Economics, and the use of Information Economics in Physics. In particular, we will show that it is possible to obtain the principles of Quantum Mechanics using Information and Game Theory.

9. Information Theory for Information Science: Antecedents, Philosophy, and Applications

Science.gov (United States)

Losee, Robert M.

2017-01-01

This paper provides a historical overview of the theoretical antecedents leading to information theory, specifically those useful for understanding and teaching information science and systems. Information may be discussed in a philosophical manner and at the same time be measurable. This notion of information can thus be the subject of…

10. An information theory account of cognitive control.

Science.gov (United States)

Fan, Jin

2014-01-01

Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory.

11. An information theory account of cognitive control

Directory of Open Access Journals (Sweden)

Jin eFan

2014-09-01

Full Text Available Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory.

12. Processing Information in Quantum Decision Theory

OpenAIRE

Yukalov, V. I.; Sornette, D.

2008-01-01

A survey is given summarizing the state of the art of describing information processing in Quantum Decision Theory, which has been recently advanced as a novel variant of decision making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intended actions. The theory characterizes entangled decision making, non-commutativity of subsequent decisions, and intention int...

13. Feature extraction algorithm for space targets based on fractal theory

Science.gov (United States)

Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

2007-11-01

In order to offer a potential for extending the life of satellites and reducing launch and operating costs, satellite servicing, including conducting repairs, upgrades and refueling of spacecraft on-orbit, is becoming much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking in a space surveillance system. Machine vision has been applied to the estimation of relative pose between spacecraft, and feature extraction is the basis of relative pose estimation. This paper presents a fractal-geometry-based edge extraction algorithm that can be used to determine and track the relative pose of an observed satellite during proximity operations in a machine vision system. The method first maps the gray-level image to a fractal-dimension image using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. After this, the consecutive edge is detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is only performed in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving relative pose estimation problems for spacecraft.
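The DBC step admits a compact generic sketch; the function below estimates a fractal dimension from a square gray-level image by counting, at each box size, how many boxes cover the gray-level span of each block, then fitting the slope of log N against log(1/r). The function name and the choice of box sizes are illustrative, not taken from the paper:

```python
import math

def dbc_fractal_dimension(img, sizes=(2, 4, 8)):
    """Differential Box-Counting estimate of the fractal dimension of a
    square 2-D gray-level image (values assumed in [0, 255])."""
    M = len(img)
    G = 256
    log_inv_r, log_N = [], []
    for s in sizes:
        h = s * G / M                      # box height on the gray-level axis
        N = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                block = [img[i + a][j + b] for a in range(s) for b in range(s)]
                # boxes needed to cover the gray-level span of this block
                N += int(max(block) // h) - int(min(block) // h) + 1
        log_inv_r.append(math.log(M / s))
        log_N.append(math.log(N))
    # least-squares slope of log N vs log(1/r) is the dimension estimate
    n = len(sizes)
    mx, my = sum(log_inv_r) / n, sum(log_N) / n
    num = sum((x - mx) * (y - my) for x, y in zip(log_inv_r, log_N))
    den = sum((x - mx) ** 2 for x in log_inv_r)
    return num / den
```

A perfectly flat image behaves as a smooth surface and yields a dimension of 2; noisy or textured regions push the estimate toward 3, which is the property the paper exploits to restrain noise before edge detection.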

14. An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems

KAUST Repository

Zenil, Hector; Kiani, Narsis A.; Marabita, Francesco; Deng, Yue; Elias, Szabolcs; Schmidt, Angelika; Ball, Gordon; Tegner, Jesper

2017-01-01

By applying sequences of controlled interventions to systems and networks, we estimate how changes in their algorithmic information content are reflected in positive/negative shifts towards and away from randomness. The strong connection between approximations

15. Fixed Orientation Interconnection Problems: Theory, Algorithms and Applications

DEFF Research Database (Denmark)

Zachariasen, Martin

Interconnection problems have natural applications in the design of integrated circuits (or chips). A modern chip consists of billions of transistors that are connected by metal wires on the surface of the chip. These metal wires are routed on a (fairly small) number of layers in such a way… that electrically independent nets do not intersect each other. Traditional manufacturing technology limits the orientations of the wires to be either horizontal or vertical, and is known as Manhattan architecture. Over the last decade there has been a growing interest in general architectures, where more than two… a significant step forward, both concerning theory and algorithms, for the fixed orientation Steiner tree problem. In addition, the work maintains a close link to applications and generalizations motivated by chip design…

16. Towards a critical theory of information

Directory of Open Access Journals (Sweden)

Christian Fuchs

2009-11-01

The debate on redistribution and recognition between critical theorists Nancy Fraser and Axel Honneth gives the opportunity to renew the discussion of the relationship of base and superstructure in critical social theory. Critical information theory needs to be aware of economic, political, and cultural demands that it needs to make in struggles for ending domination and oppression, and of the unifying role that the economy and class play in these demands and struggles. Objective and subjective information concepts are based on the underlying worldview of reification. Reification endangers human existence. Information as process and relation enables political and ethical alternatives that have radical implications for society.

17. Information theory based approaches to cellular signaling.

Science.gov (United States)

Waltermann, Christian; Klipp, Edda

2011-10-01

18. Reasonable fermionic quantum information theories require relativity

International Nuclear Information System (INIS)

Friis, Nicolai

2016-01-01

We show that any quantum information theory based on anticommuting operators must be supplemented by a superselection rule deeply rooted in relativity to establish a reasonable notion of entanglement. While quantum information may be encoded in the fermionic Fock space, the unrestricted theory has a peculiar feature: the marginals of bipartite pure states need not have identical entropies, which leads to an ambiguous definition of entanglement. We solve this problem by proving that it is removed by relativity, i.e., by the parity superselection rule that arises from Lorentz invariance via the spin-statistics connection. Our results hence unveil a fundamental conceptual inseparability of quantum information and the causal structure of relativistic field theory. (paper)

19. Genetic algorithm based on virus theory of evolution for traveling salesman problem; Virus shinkaron ni motozuku identeki algorithm no junkai salesman mondai eno oyo

Energy Technology Data Exchange (ETDEWEB)

Kubota, N. [Osaka Inst. of Technology, Osaka (Japan); Fukuda, T. [Nagoya University, Nagoya (Japan)

1998-05-31

This paper deals with a virus-evolutionary genetic algorithm. Genetic algorithms (GAs) have demonstrated their effectiveness in optimization problems. In general, GAs simulate the survival of the fittest by natural selection and heredity, following Darwin's theory of evolution. However, other evolutionary hypotheses, such as the neutral theory of molecular evolution, Imanishi's evolutionary theory, the serial symbiosis theory, and the virus theory of evolution, have been proposed in addition to Darwinism. The virus theory of evolution is based on the view that virus transduction is a key mechanism for transporting segments of DNA across species. This paper proposes a genetic algorithm based on the virus theory of evolution (VE-GA), which has two types of populations: a host population and a virus population. The VE-GA is composed of genetic operators and virus operators such as reverse transcription and incorporation. The reverse transcription operator transcribes virus genes onto the chromosome of a host individual, and the incorporation operator creates a new virus genotype from a host individual. These operators performed by the virus population make it possible to transmit segments of DNA between individuals in the host population. Therefore, the VE-GA realizes not only vertical but also horizontal propagation of genetic information. Further, the VE-GA is applied to the traveling salesman problem in order to show its effectiveness. 20 refs., 10 figs., 3 tabs.
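For tour chromosomes, the two virus operators can be sketched as follows. This is a minimal illustration of the idea with hypothetical function names; the full VE-GA also includes fitness-based selection and the ordinary genetic operators:

```python
import random

def incorporate(host, length=3):
    """Incorporation: extract a contiguous city sub-tour from a host
    chromosome to form a new virus genotype."""
    i = random.randrange(len(host) - length)
    return host[i:i + length]

def reverse_transcribe(host, virus):
    """Reverse transcription: write the virus segment into a host tour,
    keeping the result a valid permutation of the cities."""
    rest = [c for c in host if c not in virus]
    i = random.randrange(len(rest) + 1)
    return rest[:i] + virus + rest[i:]
```

Because a virus taken from one host can later be transcribed into a different host, good sub-tours spread horizontally across the population, which is the mechanism the abstract contrasts with purely vertical (parent-to-child) inheritance.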

20. A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network.

Science.gov (United States)

Chen, Yuzhong; Weng, Shining; Guo, Wenzhong; Xiong, Naixue

2016-02-19

Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency.

1. A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network

Directory of Open Access Journals (Sweden)

Yuzhong Chen

2016-02-01

Full Text Available Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency.

2. Information theory, spectral geometry, and quantum gravity.

Science.gov (United States)

Kempf, Achim; Martin, Robert

2008-01-18

We show that there exists a deep link between the two disciplines of information theory and spectral geometry. This allows us to obtain new results on a well-known quantum gravity motivated natural ultraviolet cutoff which describes an upper bound on the spatial density of information. Concretely, we show that, together with an infrared cutoff, this natural ultraviolet cutoff beautifully reduces the path integral of quantum field theory on curved space to a finite number of ordinary integrations. We then show, in particular, that the subsequent removal of the infrared cutoff is safe.

3. Writing, Proofreading and Editing in Information Theory

Directory of Open Access Journals (Sweden)

J. Ricardo Arias-Gonzalez

2018-05-01

Full Text Available Information is a physical entity amenable to be described by an abstract theory. The concepts associated with the creation and post-processing of the information have not, however, been mathematically established, despite being broadly used in many fields of knowledge. Here, inspired by how information is managed in biomolecular systems, we introduce writing, entailing any bit string generation, and revision, as comprising proofreading and editing, in information chains. Our formalism expands the thermodynamic analysis of stochastic chains made up of material subunits to abstract strings of symbols. We introduce a non-Markovian treatment of operational rules over the symbols of the chain that parallels the physical interactions responsible for memory effects in material chains. Our theory underlies any communication system, ranging from human languages and computer science to gene evolution.

4. Fast half-sibling population reconstruction: theory and algorithms.

Science.gov (United States)

Dexter, Daniel; Brown, Daniel G

2013-07-12

Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full-sibling relationships, few exist for half-siblings. We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate.
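The validity question has a simple single-locus core: a half-sib group is Mendelian-consistent at a locus only if some diploid genotype for the shared parent contributes an allele to every child. A hedged sketch of that necessary check (function name illustrative; the paper's full test and integer program treat all loci jointly):

```python
from itertools import combinations_with_replacement

def valid_half_sib_locus(genotypes):
    """Necessary Mendelian condition at one locus: some allele pair
    (a candidate shared-parent genotype) intersects every child's
    genotype in the proposed half-sib group."""
    alleles = sorted({a for g in genotypes for a in g})
    return any(all(p[0] in g or p[1] in g for g in genotypes)
               for p in combinations_with_replacement(alleles, 2))
```

Groups failing this test at any locus cannot be half-siblings through one shared parent, which is the kind of constraint the 0/1 integer program enforces while minimizing the number of removed individuals.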

5. Geometrical identification of quantum and information theories

International Nuclear Information System (INIS)

Caianiello, E.R.

1983-01-01

The interrelation of quantum and information theories is investigated on the basis of the concept of cross-entropy. It is assumed that "complex information geometry" may serve as a tool for "technological transfer" from one research field to another that is not directly connected with the first. It is pointed out that the "infinitesimal distance" ds² and the "infinitesimal cross-entropy" dH_c coincide.

6. Information theory and the ethylene genetic network.

Science.gov (United States)

González-García, José S; Díaz, José

2011-10-01

The original aim of Information Theory (IT) was to solve a purely technical problem: to increase the performance of communication systems, which are constantly affected by interferences that diminish the quality of the transmitted information. That is, the theory deals only with the problem of transmitting with maximal precision the symbols constituting a message. In Shannon's theory messages are characterized only by their probabilities, regardless of their value or meaning. As for its present-day status, it is generally acknowledged that Information Theory has solid mathematical foundations and fruitful, strong links with Physics in both theoretical and experimental areas. However, many applications of Information Theory to Biology are limited to using it as a technical tool to analyze biopolymers, such as DNA, RNA or protein sequences. The main point of discussion about the applicability of IT to explain the information flow in biological systems is that in a classic communication channel the symbols that make up the coded message are transmitted one by one, independently, through a noisy communication channel, and noise can alter each of the symbols, distorting the message; in contrast, in a genetic communication channel the coded messages are not transmitted in the form of symbols but by signaling cascades. Consequently, the information flow from the emitter to the effector is due to a series of coupled physicochemical processes that must ensure the accurate transmission of the message. In this review we discuss a novel proposal to overcome this difficulty, which consists of modeling gene expression with a stochastic approach that allows Shannon entropy (H) to be directly used to measure the amount of uncertainty that the genetic machinery has in relation to the correct decoding of a message transmitted into the nucleus by a signaling pathway. From the value of H we can define a function I that measures the amount of
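The quantities involved are easy to state concretely. Below is a small sketch of Shannon entropy H and a mutual-information function I over discrete distributions; these are the generic textbook definitions, not the paper's specific gene-expression model:

```python
import math

def shannon_entropy(p):
    """H in bits: the uncertainty about which of the possible messages
    was transmitted, given their probabilities p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x][y]: how much
    observing the channel output reduces uncertainty about the input."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)
```

In the stochastic gene-expression setting sketched by the review, p would be the distribution over decoded promoter/expression states, so H quantifies the decoding uncertainty of the genetic machinery and I the information actually delivered by the signaling pathway.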

7. Structural information theory and visual form

NARCIS (Netherlands)

Leeuwenberg, E.L.J.; Kaernbach, C.; Schroeger, E.; Mueller, H.

2003-01-01

The paper attends to basic characteristics of visual form as approached by Structural information theory, or SIT, (Leeuwenberg, Van der Helm and Van Lier). The introduction provides a global survey of this approach. The main part of the paper focuses on three characteristics of SIT. Each one is made

8. Towards an Information Retrieval Theory of Everything

NARCIS (Netherlands)

Hiemstra, Djoerd; Lammerink, J.M.W.; Katoen, Joost P.; Kok, J.N.; van de Pol, Jan Cornelis; Raamsdonk, F.

2009-01-01

I present three well-known probabilistic models of information retrieval in tutorial style: The binary independence probabilistic model, the language modeling approach, and Google's page rank. Although all three models are based on probability theory, they are very different in nature. Each model

9. A THEORY OF MAXIMIZING SENSORY INFORMATION

NARCIS (Netherlands)

Hateren, J.H. van

1992-01-01

A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power

10. Quantum theory informational foundations and foils

CERN Document Server

Spekkens, Robert

2016-01-01

This book provides the first unified overview of the burgeoning research area at the interface between Quantum Foundations and Quantum Information.  Topics include: operational alternatives to quantum theory, information-theoretic reconstructions of the quantum formalism, mathematical frameworks for operational theories, and device-independent features of the set of quantum correlations. Powered by the injection of fresh ideas from the field of Quantum Information and Computation, the foundations of Quantum Mechanics are in the midst of a renaissance. The last two decades have seen an explosion of new results and research directions, attracting broad interest in the scientific community. The variety and number of different approaches, however, makes it challenging for a newcomer to obtain a big picture of the field and of its high-level goals. Here, fourteen original contributions from leading experts in the field cover some of the most promising research directions that have emerged in the new wave of quant...

11. Towards an Information Theory of Complex Networks

CERN Document Server

Dehmer, Matthias; Mehler, Alexander

2011-01-01

For over a decade, complex networks have steadily grown as an important tool across a broad array of academic disciplines, with applications ranging from physics to social media. A tightly organized collection of carefully-selected papers on the subject, Towards an Information Theory of Complex Networks: Statistical Methods and Applications presents theoretical and practical results about information-theoretic and statistical models of complex networks in the natural sciences and humanities. The book's major goal is to advocate and promote a combination of graph-theoretic, information-theoreti

12. An information integration theory of consciousness

Directory of Open Access Journals (Sweden)

Tononi Giulio

2004-11-01

Full Text Available Abstract Background Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition? Presentation of the hypothesis This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation – the availability of a very large number of conscious experiences; and integration – the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the Φ value of a complex of elements. Φ is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Φ>0 that is not part of a subset of higher Φ. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex. Testing the hypothesis The information integration theory accounts, in a principled manner, for several neurobiological observations
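The Φ of a complex is defined over the partition that loses the least integrated information, which implies a search over partitions. A generic brute-force sketch of that search over bipartitions is below; the loss function is a caller-supplied stand-in for effective information across a partition, and the exponential growth of this loop with system size is what makes measuring Φ in large systems hard:

```python
from itertools import combinations

def minimum_information_bipartition(nodes, loss):
    """Exhaustively search all bipartitions of `nodes` for the one across
    which `loss(part_a, part_b)` (a stand-in for effective information)
    is smallest; returns (min_loss, part_a, part_b)."""
    nodes = list(nodes)
    best = None
    for r in range(1, len(nodes) // 2 + 1):
        for part_a in combinations(nodes, r):
            part_b = tuple(n for n in nodes if n not in part_a)
            cur = loss(part_a, part_b)
            if best is None or cur < best[0]:
                best = (cur, part_a, part_b)
    return best
```

With a toy loss that counts edges cut in a two-community graph, the search recovers the community split; a real effective-information measure replaces that toy loss, and normalization across unbalanced partitions is an additional detail the theory specifies.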

13. Set Theory Correlation Free Algorithm for HRRR Target Tracking

National Research Council Canada - National Science Library

Blasch, Erik

1999-01-01

… Recently, a few fusionists, including Mahler [1] and Mori [2], are using a set theory approach for a unified data fusion theory, which is a correlation-free paradigm [3]. This paper uses the set theory approach…

14. Imaging for dismantlement verification: Information management and analysis algorithms

International Nuclear Information System (INIS)

Robinson, S.M.; Jarman, K.D.; Pitts, W.K.; Seifert, A.; Misner, A.C.; Woodring, M.L.; Myjak, M.J.

2012-01-01

The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.

15. Cuckoo search and firefly algorithm theory and applications

CERN Document Server

2014-01-01

Nature-inspired algorithms such as cuckoo search and firefly algorithm have become popular and widely used in recent years in many applications. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments of cuckoo search and firefly algorithm and their diverse applications. This book will review both theoretical studies and applications with detailed algorithm analysis, implementation and case studies so that readers can benefit most from this book.  Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, travelling salesman problem, neural network, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web service, shape optimization, and others.   This book can serve as an ideal reference for both graduates and researchers in computer scienc...

16. Research on electricity consumption forecast based on mutual information and random forests algorithm

Science.gov (United States)

Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu

2018-02-01

Traditional power forecasting models cannot efficiently take various factors into account, nor identify the most relevant factors. In this paper, mutual information from information theory and the artificial-intelligence random forests algorithm are introduced into medium- and long-term electricity demand prediction. Mutual information can identify highly related factors based on the value of the average mutual information between a variety of variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm is used to build a separate forecasting model for each industry according to its correlated factors. The data of electricity consumption in Jiangsu Province is taken as a practical example, and the above methods are compared with methods that take neither mutual information nor industry differences into account. The simulation results show that the above method is scientific, effective, and can provide higher prediction accuracy.
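The factor-identification step can be sketched with a minimal discrete estimator of mutual information from paired samples, used to rank candidate drivers of demand. The function names and toy data are illustrative; the paper couples this ranking with random forests for the actual forecast:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits estimated from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def rank_factors(factors, demand):
    """Order candidate factors (name -> sample list) by their mutual
    information with the demand series, highest first."""
    return sorted(factors,
                  key=lambda name: mutual_information(factors[name], demand),
                  reverse=True)
```

Continuous series such as GDP or temperature would first be discretized (or estimated with a continuous-MI method) before this ranking; the top-ranked factors per industry then become the inputs to that industry's random-forest model.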

17. Crossover Improvement for the Genetic Algorithm in Information Retrieval.

Science.gov (United States)

Vrajitoru, Dana

1998-01-01

In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system find, in a huge document collection, a good reply to a query expressed by the user. Analysis of phenomena observed during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…

18. Web multimedia information retrieval using improved Bayesian algorithm.

Science.gov (United States)

Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

2003-01-01

The main thrust of this paper is the application of a novel data-mining approach to the log of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve its accuracy. It can remove clutter and irrelevant text information and helps eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also used to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for data mining; experiments showed that the proposed algorithm is efficient.

19. Algorithms for selecting informative marker panels for population assignment.

Science.gov (United States)

Rosenberg, Noah A

2005-11-01

Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
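The greedy strategy described above can be sketched as forward selection that re-scores the whole panel at each step. The informativeness values and linkage groups below are invented for illustration; the point is that joint re-evaluation can skip a marker (here m2) that univariate ranking would include:

```python
def greedy_panel(markers, score, k):
    """Greedy forward selection: repeatedly add the marker that
    maximizes the panel score, re-evaluated jointly at each step."""
    panel = []
    remaining = list(markers)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda m: score(panel + [m]))
        panel.append(best)
        remaining.remove(best)
    return panel

# Hypothetical data: each marker has an individual informativeness, but
# markers in the same linkage group are redundant (only the best counts).
INFO  = {"m1": 0.9, "m2": 0.8, "m3": 0.5, "m4": 0.4}
GROUP = {"m1": "A", "m2": "A", "m3": "B", "m4": "C"}

def panel_score(panel):
    best_per_group = {}
    for m in panel:
        g = GROUP[m]
        best_per_group[g] = max(best_per_group.get(g, 0.0), INFO[m])
    return sum(best_per_group.values())

print(greedy_panel(list(INFO), panel_score, 3))  # ['m1', 'm3', 'm4']
```

Univariate accumulation would pick m1, m2, m3 by individual scores, yet m2 adds nothing jointly, mirroring the abstract's observation that the individually most informative loci need not form the optimal panel.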

20. Comparing cosmic web classifiers using information theory

International Nuclear Information System (INIS)

Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin; Jasche, Jens

2016-01-01

We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

1. Comparing cosmic web classifiers using information theory

Energy Technology Data Exchange (ETDEWEB)

Leclercq, Florent [Institute of Cosmology and Gravitation (ICG), University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX (United Kingdom); Lavaux, Guilhem; Wandelt, Benjamin [Institut d' Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris (France); Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: lavaux@iap.fr, E-mail: j.jasche@tum.de, E-mail: wandelt@iap.fr [Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching (Germany)

2016-08-01

We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

2. Quantum information theory and quantum statistics

International Nuclear Information System (INIS)

Petz, D.

2008-01-01

Based on lectures given by the author, this book focuses on providing reliable introductory explanations of key concepts of quantum information theory and quantum statistics - rather than on results. The mathematically rigorous presentation is supported by numerous examples and exercises and by an appendix summarizing the relevant aspects of linear analysis. Assuming that the reader is familiar with the content of standard undergraduate courses in quantum mechanics, probability theory, linear algebra and functional analysis, the book addresses graduate students of mathematics and physics as well as theoretical and mathematical physicists. Conceived as a primer to bridge the gap between statistical physics and quantum information, a field to which the author has contributed significantly himself, it emphasizes concepts and thorough discussions of the fundamental notions to prepare the reader for deeper studies, not least through the selection of well chosen exercises. (orig.)

3. The informationally-complete quantum theory

OpenAIRE

Chen, Zeng-Bing

2014-01-01

Quantum mechanics is a cornerstone of our current understanding of nature and extremely successful in describing physics covering a huge range of scales. However, its interpretation remains controversial since the early days of quantum mechanics. What does a quantum state really mean? Is there any way out of the so-called quantum measurement problem? Here we present an informationally-complete quantum theory (ICQT) and the trinary property of nature to beat the above problems. We assume that ...

4. Information theory of open fragmenting systems

International Nuclear Information System (INIS)

Gulminelli, F.; Juillet, O.; Chomaz, Ph.; Ison, M. J.; Dorso, C. O.

2007-01-01

An information theory description of finite systems explicitly evolving in time is presented. We impose a MaxEnt variational principle on the Shannon entropy at a given time while the constraints are set at a former time. The resulting density matrix contains explicit time odd components in the form of collective flows. As a specific application we consider the dynamics of the expansion in connection with heavy ion experiments. Lattice gas and classical molecular dynamics simulations are shown

5. Final Summary: Genre Theory in Information Studies

DEFF Research Database (Denmark)

Andersen, Jack

2015-01-01

Purpose This chapter offers a re-description of knowledge organization in light of genre and activity theory. Knowledge organization needs a new description in order to account for those activities and practices constituting and causing concrete knowledge organization activity. Genre and activity...... informing and shaping concrete forms of knowledge organization activity. With this, we are able to understand how knowledge organization activity also contributes to construct genre and activity systems and not only aid them....

6. Fault-tolerant search algorithms reliable computation with unreliable information

CERN Document Server

Cicalese, Ferdinando

2013-01-01

Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

7. A Moving Object Detection Algorithm Based on Color Information

International Nuclear Information System (INIS)

Fang, X H; Xiong, W; Hu, B J; Wang, L T

2006-01-01

This paper designed a new algorithm of moving object detection for the aim of quick moving object detection and orientation, which used a pixel and its neighbors as an image vector to represent that pixel and modeled different chrominance component pixel as a mixture of Gaussians, and set up different mixture model of Gauss for different YUV chrominance components. In order to make full use of the spatial information, color segmentation and background model were combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with background

8. Multimedia information retrieval theory and techniques

CERN Document Server

Raieli, Roberto

2013-01-01

Novel processing and searching tools for the management of new multimedia documents have been developed. Multimedia Information Retrieval (MMIR) is an organic system made up of Text Retrieval (TR); Visual Retrieval (VR); Video Retrieval (VDR); and Audio Retrieval (AR) systems. So that each type of digital document may be analysed and searched by the elements of language appropriate to its nature, search criteria must be extended. Such an approach is known as Content Based Information Retrieval (CBIR), and is the core of MMIR. This novel content-based concept of information handling needs to be integrated with more traditional semantics. Multimedia Information Retrieval focuses on the tools of processing and searching applicable to the content-based management of new multimedia documents. Translated from Italian by Giles Smith, the book is divided into two parts. Part one discusses MMIR and related theories, and puts forward new methodologies; part two reviews various experimental and operating MMIR systems, a...

9. Information Foraging Theory: A Framework for Intelligence Analysis

Science.gov (United States)

2014-11-01

…oceanographic information, human intelligence (HUMINT), open-source intelligence (OSINT), and information provided by other governmental departments [1][5… Acronyms: HUMINT, Human Intelligence; IFT, Information Foraging Theory; LSA, Latent Semantic Similarity; MVT, Marginal Value Theorem; OFT, Optimal Foraging Theory; OSINT, Open-Source Intelligence.

10. Development of information preserving data compression algorithm for CT images

International Nuclear Information System (INIS)

Kobayashi, Yoshio

1989-01-01

Although digital imaging techniques in radiology develop rapidly, problems arise in archival storage and communication of image data. This paper reports on a new information preserving data compression algorithm for computed tomographic (CT) images. This algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data, obtained with the above process, are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40 respectively. (author)
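Steps 2 and 5 of the pipeline, differential encoding along a matrix line and run-length coding of a bit plane, can be sketched as follows (a toy one-row example, not the author's exact code):

```python
def diff_encode(row):
    """Step 2: encode each pixel as the difference from its left neighbour."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

def diff_decode(deltas):
    """Inverse of diff_encode, confirming the scheme is lossless."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

def run_length(bits):
    """Step 5: encode a bit plane as (bit, run-length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

row = [100, 101, 101, 103, 103, 103, 102]     # hypothetical CT values
deltas = diff_encode(row)
assert diff_decode(deltas) == row             # lossless round trip
plane0 = [(d & 1) for d in deltas]            # bit plane 0 of the deltas
print(deltas)              # [100, 1, 0, 2, 0, 0, -1]
print(run_length(plane0))  # [[0, 1], [1, 1], [0, 4], [1, 1]]
```

Neighbouring CT pixels are similar, so the deltas cluster near zero and their bit planes contain long runs, which is what makes run-length coding pay off.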

11. Assessment of the information content of patterns: an algorithm

Science.gov (United States)

Daemi, M. Farhang; Beurle, R. L.

1991-12-01

A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose which confirms the results of the preliminary investigation. Use of the algorithm facilitates much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for analysis of more realistic patterns. Owing to the volume of calculation involved in these algorithms, extensive computing facilities were necessary. The major part of the work was carried out using an ICL 3900 series mainframe computer as well as other powerful workstations such as a RISC architecture MIPS machine.

12. Bellman Ford algorithm - in Routing Information Protocol (RIP)

Science.gov (United States)

Krianto Sulaiman, Oris; Mahmud Siregar, Amir; Nasution, Khairuddin; Haramaini, Tasliyah

2018-04-01

A large-scale network needs routing that can handle a large number of users, and one solution for coping with such networks is a routing protocol. There are two types of routing, static and dynamic: static routes are entered manually by the network administrator, while dynamic routes are formed automatically from the existing network. Dynamic routing is efficient for extensive networks because routes are formed automatically. Routing Information Protocol (RIP) is a dynamic routing protocol that uses the Bellman-Ford algorithm, which searches for the best path through the network by leveraging the cost of each link; with the Bellman-Ford algorithm, RIP can optimize existing networks.
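A minimal sketch of the distance-vector relaxation at the heart of RIP (toy directed links with hypothetical costs; real RIP uses hop counts with the metric capped at 16 meaning unreachable):

```python
def bellman_ford(edges, n, src):
    """Bellman-Ford: relax every edge n-1 times, so shortest-path
    estimates propagate one extra hop per pass."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Toy network: a 0->1->2->3 chain plus a costlier direct 0->3 link.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 3, 5)]
print(bellman_ford(edges, 4, 0))  # [0, 1, 2, 3]
```

The three-hop path to node 3 (cost 3) beats the direct link (cost 5), which is exactly the kind of decision RIP makes when exchanging distance vectors between routers.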

13. Quantum information theory. Mathematical foundation. 2. ed.

International Nuclear Information System (INIS)

Hayashi, Masahito

2017-01-01

This graduate textbook provides a unified view of quantum information theory. Clearly explaining the necessary mathematical basis, it merges key topics from both information-theoretic and quantum-mechanical viewpoints and provides lucid explanations of the basic results. Thanks to this unified approach, it makes accessible such advanced topics in quantum communication as quantum teleportation, superdense coding, quantum state transmission (quantum error-correction) and quantum encryption. Since the publication of the preceding book Quantum Information: An Introduction, there have been tremendous strides in the field of quantum information. In particular, the following topics - all of which are addressed here - have seen major advances: quantum state discrimination, quantum channel capacity, bipartite and multipartite entanglement, security analysis on quantum communication, reverse Shannon theorem and uncertainty relation. With regard to the analysis of quantum security, the present book employs an improved method for the evaluation of leaked information and identifies a remarkable relation between quantum security and quantum coherence. Taken together, these two improvements allow a better analysis of quantum state transmission. In addition, various types of the newly discovered uncertainty relation are explained. Presenting a wealth of new developments, the book introduces readers to the latest advances and challenges in quantum information. To aid in understanding, each chapter is accompanied by a set of exercises and solutions.

14. Quantum information theory. Mathematical foundation. 2. ed.

Energy Technology Data Exchange (ETDEWEB)

Hayashi, Masahito [Nagoya Univ. (Japan). Graduate School of Mathematics

2017-07-01

This graduate textbook provides a unified view of quantum information theory. Clearly explaining the necessary mathematical basis, it merges key topics from both information-theoretic and quantum-mechanical viewpoints and provides lucid explanations of the basic results. Thanks to this unified approach, it makes accessible such advanced topics in quantum communication as quantum teleportation, superdense coding, quantum state transmission (quantum error-correction) and quantum encryption. Since the publication of the preceding book Quantum Information: An Introduction, there have been tremendous strides in the field of quantum information. In particular, the following topics - all of which are addressed here - have seen major advances: quantum state discrimination, quantum channel capacity, bipartite and multipartite entanglement, security analysis on quantum communication, reverse Shannon theorem and uncertainty relation. With regard to the analysis of quantum security, the present book employs an improved method for the evaluation of leaked information and identifies a remarkable relation between quantum security and quantum coherence. Taken together, these two improvements allow a better analysis of quantum state transmission. In addition, various types of the newly discovered uncertainty relation are explained. Presenting a wealth of new developments, the book introduces readers to the latest advances and challenges in quantum information. To aid in understanding, each chapter is accompanied by a set of exercises and solutions.

15. Theory and Algorithms for Global/Local Design Optimization

National Research Council Canada - National Science Library

Haftka, Raphael T

2004-01-01

... the component and overall design as well as on exploration of global optimization algorithms. In the former category, heuristic decomposition was followed with proof that it solves the original problem...

16. Algorithmic support for the System Wide Information Management concept

OpenAIRE

2016-01-01

The theoretical problems of computer support for the "System Wide Information Management" concept, which was proposed by experts of the International Civil Aviation Organization, are discussed. Within the framework of its provisions, certain new requirements for all initial stages of air traffic management preceding direct aircraft control are formulated. Algorithmic instruments for ensuring the conflict-freeness of a summary plan for the use of airspace during the plan's implementation are ...

17. Theory of affine projection algorithms for adaptive filtering

CERN Document Server

Ozeki, Kazuhiko

2016-01-01

This book focuses on theoretical aspects of the affine projection algorithm (APA) for adaptive filtering. The APA is a natural generalization of the classical, normalized least-mean-squares (NLMS) algorithm. The book first explains how the APA evolved from the NLMS algorithm, where an affine projection view is emphasized. By looking at those adaptation algorithms from such a geometrical point of view, we can find many of the important properties of the APA, e.g., the improvement of the convergence rate over the NLMS algorithm especially for correlated input signals. After the birth of the APA in the mid-1980s, similar algorithms were put forward by other researchers independently from different perspectives. This book shows that they are variants of the APA, forming a family of APAs. Then it surveys research on the convergence behavior of the APA, where statistical analyses play important roles. It also reviews developments of techniques to reduce the computational complexity of the APA, which are important f...
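As an illustration of the family, here is the NLMS update, the order-1 special case of the APA, identifying a hypothetical 2-tap channel (a sketch under invented parameters, not the book's notation):

```python
import random

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One NLMS update: w, x are equal-length tap lists, d is the
    desired sample; the step is normalized by the input energy."""
    y = sum(wi * xi for wi, xi in zip(w, x))      # filter output
    e = d - y                                     # a-priori error
    norm = sum(xi * xi for xi in x) + eps
    return [wi + mu * e * xi / norm for wi, xi in zip(w, x)], e

# Identify a known 2-tap channel h from noiseless input/output pairs.
random.seed(0)
h = [0.5, -0.3]         # hypothetical unknown system
w = [0.0, 0.0]          # adaptive filter taps
buf = [0.0, 0.0]        # input delay line
for _ in range(2000):
    buf = [random.gauss(0, 1)] + buf[:1]
    d = h[0] * buf[0] + h[1] * buf[1]
    w, e = nlms_step(w, buf, d)
print([round(wi, 3) for wi in w])  # converges toward [0.5, -0.3]
```

The APA generalizes this by projecting onto the last K input vectors jointly, which (as the book discusses) speeds up convergence for correlated inputs at higher computational cost.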

18. Quantum information theory with Gaussian systems

Energy Technology Data Exchange (ETDEWEB)

Krueger, O.

2006-04-06

This thesis applies ideas and concepts from quantum information theory to systems of continuous-variables such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

19. Quantum information theory with Gaussian systems

International Nuclear Information System (INIS)

Krueger, O.

2006-01-01

This thesis applies ideas and concepts from quantum information theory to systems of continuous-variables such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

20. New Aspects of Probabilistic Forecast Verification Using Information Theory

Science.gov (United States)

Tödter, Julian; Ahrens, Bodo

2013-04-01

This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, then a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components reliability, resolution, and uncertainty for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
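For a single binary event, the Brier and Ignorance scores of an ensemble-derived probability can be computed directly; the 10-member ensemble below is invented for illustration:

```python
import math

def brier(p, o):
    """Brier score for a binary event: squared probability error."""
    return (p - o) ** 2

def ignorance(p, o, floor=1e-6):
    """Ignorance score: negative log2 of the probability assigned to
    the outcome that occurred (floored to avoid log of zero)."""
    q = p if o == 1 else 1 - p
    return -math.log2(max(q, floor))

# Hypothetical ensemble of 10 members: 7 predict rain -> probability 0.7.
members = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
p = sum(members) / len(members)
occurred = 1                              # rain was observed
print(round(brier(p, occurred), 3))       # 0.09
print(round(ignorance(p, occurred), 3))   # 0.515
```

Averaging these scores over many forecast/observation pairs gives the verification measures; the BS is a second-order approximation of the IGN, which is why the two usually rank forecast systems similarly.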

1. Feminist Praxis, Critical Theory and Informal Hierarchies

Directory of Open Access Journals (Sweden)

Eva Giraud

2015-05-01

Full Text Available This article draws on my experiences teaching across two undergraduate media modules in a UK research-intensive institution to explore tactics for combatting both institutional and informal hierarchies within university teaching contexts. Building on Sara Motta's (2012) exploration of implementing critical pedagogic principles at postgraduate level in an elite university context, I discuss additional tactics for combatting these hierarchies in undergraduate settings, which were developed by transferring insights derived from informal workshops led by the University of Nottingham's Feminism and Teaching network into the classroom. This discussion is framed in relation to the concepts of "cyborg pedagogies" and "political semiotics of articulation," derived from the work of Donna Haraway, in order to theorize how these tactics can engender productive relationships between radical pedagogies and critical theory.

2. Quantum Gravity, Information Theory and the CMB

Science.gov (United States)

Kempf, Achim

2018-04-01

We review connections between the metric of spacetime and the quantum fluctuations of fields. We start with the finding that the spacetime metric can be expressed entirely in terms of the 2-point correlator of the fluctuations of quantum fields. We then discuss the open question whether the knowledge of only the spectra of the quantum fluctuations of fields also suffices to determine the spacetime metric. This question is of interest because spectra are geometric invariants and their quantization would, therefore, have the benefit of not requiring the modding out of diffeomorphisms. Further, we discuss the fact that spacetime at the Planck scale need not necessarily be either discrete or continuous. Instead, results from information theory show that spacetime may be simultaneously discrete and continuous in the same way that information can. Finally, we review the recent finding that a covariant natural ultraviolet cutoff at the Planck scale implies a signature in the cosmic microwave background (CMB) that may become observable.

3. An information theory of image gathering

Science.gov (United States)

Fales, Carl L.; Huck, Friedrich O.

1991-01-01

Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.

4. Cognition and biology: perspectives from information theory.

Science.gov (United States)

Wallace, Rodrick

2014-02-01

The intimate relation between biology and cognition can be formally examined through statistical models constrained by the asymptotic limit theorems of communication theory, augmented by methods from statistical mechanics and nonequilibrium thermodynamics. Cognition, often involving submodules that act as information sources, is ubiquitous across the living state. Less metabolic free energy is consumed by permitting crosstalk between biological information sources than by isolating them, leading to evolutionary exaptations that assemble shifting, tunable cognitive arrays at multiple scales, and levels of organization to meet dynamic patterns of threat and opportunity. Cognition is thus necessary for life, but it is not sufficient: An organism represents a highly patterned outcome of path-dependent, blind, variation, selection, interaction, and chance extinction in the context of an adequate flow of free energy and an environment fit for development. Complex, interacting cognitive processes within an organism both record and instantiate those evolutionary and developmental trajectories.

5. Information theory perspective on network robustness

International Nuclear Information System (INIS)

Schieber, Tiago A.; Carpi, Laura; Frery, Alejandro C.; Rosso, Osvaldo A.; Pardalos, Panos M.; Ravetti, Martín G.

2016-01-01

A crucial challenge in network theory is the study of the robustness of a network facing a sequence of failures. In this work, we propose a dynamical definition of network robustness based on information theory that considers measurements of the structural changes caused by failures of the network's components. Failures are defined here as a temporal process, given as a sequence. Robustness is then evaluated by measuring dissimilarities between topologies after each time step of the sequence, providing dynamical information about the topological damage. We thoroughly analyze the efficiency of the method in capturing small perturbations by considering different probability distributions on networks. In particular, we find that distributions based on distances are more consistent in capturing network structural deviations, as they better reflect the consequences of the failures. Theoretical examples and real networks are used to study the performance of this methodology. - Highlights: • A novel methodology to measure the robustness of a network to component failure or targeted attacks is proposed. • The use of the network's distance PDF allows a precise analysis. • The method provides a dynamic robustness profile showing the response of the topology to each failure event. • The measure is capable of detecting the network's critical elements.
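A minimal sketch of the idea (not the authors' exact measure): compare the network's shortest-path-length distribution before and after a failure using the Jensen-Shannon divergence, so each failure event yields one dissimilarity value:

```python
import math
from collections import deque, Counter

def distance_pdf(adj):
    """PDF of finite shortest-path lengths, via BFS from every node."""
    counts = Counter()
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        counts.update(d for node, d in dist.items() if node != src)
    total = sum(counts.values()) or 1
    return {d: c / total for d, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (bits) between two discrete PDFs."""
    def kl(a, b):
        return sum(pa * math.log2(pa / b[k]) for k, pa in a.items() if pa > 0)
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in keys}
    pf = {k: p.get(k, 0) for k in keys}
    qf = {k: q.get(k, 0) for k in keys}
    return 0.5 * kl(pf, m) + 0.5 * kl(qf, m)

# Toy 4-cycle; failing the 0-3 edge changes the distance distribution.
ring   = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
broken = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
damage = js_divergence(distance_pdf(ring), distance_pdf(broken))
print(damage > 0)  # True: the failure registers as topological damage
```

Applying this after each failure in a sequence yields the dynamic robustness profile the abstract describes.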

6. Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss

KAUST Repository

Zenil, Hector

2018-02-16

The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology. Cellular and molecular networks in biology are among the prime examples. Hence, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet, current techniques require a predefined metric upon which to minimize the data size. Here we introduce a family of parameter-free algorithms based on (algorithmic) information theory that are designed to minimize the loss of any (enumerable computable) property contributing to the object's algorithmic content, and thus important to preserve when reducing data dimension, by forcing the algorithm to delete the least important features first. Being independent of any particular criterion, they are universal in a fundamental mathematical sense. Using suboptimal approximations of efficient (polynomial) estimations, we demonstrate how to preserve network properties, outperforming other (leading) algorithms for network dimension reduction. Our method preserves all graph-theoretic indices measured, ranging from degree distribution and clustering coefficient to edge betweenness and degree and eigenvector centralities. We conclude and demonstrate numerically that our parameter-free Minimal Information Loss Sparsification (MILS) method is robust, has the potential to maximize the preservation of all recursively enumerable features in data and networks, and achieves results equal to or significantly better than other data reduction and network sparsification methods.
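A crude sketch of the MILS idea, with zlib compressed length standing in for the proper algorithmic-complexity estimates used in the paper: delete the edge whose removal changes the estimated information content least:

```python
import zlib

def complexity(edges):
    """Stand-in for algorithmic complexity: length of the compressed
    edge-list string. The paper uses real algorithmic-complexity
    estimates; zlib is only an illustration of the principle."""
    s = ";".join(f"{u}-{v}" for u, v in sorted(edges))
    return len(zlib.compress(s.encode()))

def mils_step(edges):
    """One sparsification step: remove the edge e minimizing
    |C(G) - C(G without e)|, i.e. the least informative edge."""
    base = complexity(edges)
    return min(edges,
               key=lambda e: abs(base - complexity([x for x in edges if x != e])))

# Toy graph: a 4-cycle plus one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
victim = mils_step(edges)
sparser = [e for e in edges if e != victim]
print(len(sparser))  # 4
```

Iterating the step yields progressively sparser graphs while, by construction, each deletion is the one that perturbs the object's (estimated) algorithmic content the least.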

7. Informal Theory: The Ignored Link in Theory-to-Practice

Science.gov (United States)

Love, Patrick

2012-01-01

Applying theory to practice in student affairs is dominated by the assumption that formal theory is directly applied to practice. Among the problems with this assumption is that many practitioners believe they must choose between their lived experiences and formal theory, and that graduate students are taught that their experience "does not…

8. Genetic algorithms: Theory and applications in the safety domain

International Nuclear Information System (INIS)

Marseguerra, M.; Zio, E.

2001-01-01

This work illustrates the fundamentals underlying optimization by genetic algorithms. All the steps of the procedure are sketched in details for both the traditional breeding algorithm as well as for more sophisticated breeding procedures. The necessity of affine transforming the fitness function, object of the optimization, is discussed in detail, together with the transformation itself. Procedures for the inducement of species and niches are also presented. The theoretical aspects of the work are corroborated by a demonstration of the potential of genetic algorithm optimization procedures on three different case studies. The first case study deals with the design of the pressure stages of a natural gas pipeline system; the second one treats a reliability allocation problem in system configuration design; the last case regards the selection of maintenance and repair strategies for the logistic management of a risky plant. (author)
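The traditional breeding loop the record describes (selection on an affine-scaled fitness, crossover, mutation) can be sketched minimally. The onemax objective and all parameter values below are illustrative assumptions, not taken from the work itself.

```python
import random
random.seed(0)

N_BITS, POP_SIZE, GENERATIONS = 16, 30, 60

def fitness(ind):
    """Toy objective (onemax): count of ones; stands in for any fitness."""
    return sum(ind)

def affine_scaled(fits):
    """Affine transformation of the raw fitness, keeping selection weights
    positive and the selection pressure roughly constant across generations."""
    lo, hi = min(fits), max(fits)
    if hi == lo:
        return [1.0] * len(fits)
    return [(f - lo) / (hi - lo) + 0.1 for f in fits]

def breed(pop):
    weights = affine_scaled([fitness(ind) for ind in pop])
    nxt = []
    while len(nxt) < len(pop):
        a, b = random.choices(pop, weights=weights, k=2)   # roulette selection
        cut = random.randrange(1, N_BITS)                  # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < 0.02) for bit in child]   # mutation
        nxt.append(child)
    return nxt

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = breed(pop)
print(max(fitness(ind) for ind in pop))   # near N_BITS after evolution
```

The more sophisticated breeding procedures, species/niche inducement, and the three engineering case studies in the record go well beyond this basic loop.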

9. A Location-Based Business Information Recommendation Algorithm

Directory of Open Access Journals (Sweden)

Shudong Liu

2015-01-01

Full Text Available Recently, much research on location-based information recommendation (e.g., POIs, ads) has been done in both academia and industry. In this paper, we first construct a region-based location graph (RLG), in which region nodes connect with user nodes and business information nodes, and then we propose a location-based recommendation algorithm on the RLG, which combines users' short-range mobility formed by daily activity with their long-distance mobility formed by social network ties, and can thus recommend both local and long-distance business information to users. Moreover, it combines user-based collaborative filtering with item-based collaborative filtering, and it alleviates the cold-start problem from which traditional recommender systems often suffer. Empirical studies on large-scale real-world data from Yelp demonstrate that our method outperforms other methods in recommendation accuracy.

10. A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization

OpenAIRE

Zhang, Jiapu

2011-01-01

Numerical global optimization methods are often very time consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but very effective canonical duality theory (CDT) unified global optimization algorithm. The convergence of this algorithm is proved in this paper. More importantly, for this CDT-unified algorithm, numerous...

11. Cell Formation in Industrial Engineering : Theory, Algorithms and Experiments

NARCIS (Netherlands)

Goldengorin, B.; Krushynskyi, D.; Pardalos, P.M.

2013-01-01

This book focuses on the development of optimal, flexible, and efficient models and algorithms for cell formation in group technology. Its main aim is to provide a reliable tool that can be used by managers and engineers to design manufacturing cells based on their own preferences and constraints.

12. Bilevel programming problems theory, algorithms and applications to energy networks

CERN Document Server

Dempe, Stephan; Pérez-Valdés, Gerardo A; Kalashnykova, Nataliya; Kalashnikova, Nataliya

2015-01-01

This book describes recent theoretical findings relevant to bilevel programming in general, and in mixed-integer bilevel programming in particular. It describes recent applications in energy problems, such as the stochastic bilevel optimization approaches used in the natural gas industry. New algorithms for solving linear and mixed-integer bilevel programming problems are presented and explained.

13. Theory of Neural Information Processing Systems

International Nuclear Information System (INIS)

Galla, Tobias

2006-01-01

It is difficult not to be amazed by the ability of the human brain to process, to structure and to memorize information. Even by the toughest standards the behaviour of this network of about 10^11 neurons qualifies as complex, and both the scientific community and the public take great interest in the growing field of neuroscience. The scientific endeavour to learn more about the function of the brain as an information processing system is here a truly interdisciplinary one, with important contributions from biology, computer science, physics, engineering and mathematics as the authors quite rightly point out in the introduction of their book. The role of the theoretical disciplines here is to provide mathematical models of information processing systems and the tools to study them. These models and tools are at the centre of the material covered in the book by Coolen, Kuehn and Sollich. The book is divided into five parts, providing basic introductory material on neural network models as well as the details of advanced techniques to study them. A mathematical appendix complements the main text. The range of topics is extremely broad, still the presentation is concise and the book well arranged. To stress the breadth of the book let me just mention a few keywords here: the material ranges from the basics of perceptrons and recurrent network architectures to more advanced aspects such as Bayesian learning and support vector machines; Shannon's theory of information and the definition of entropy are discussed, and a chapter on Amari's information geometry is not missing either. Finally the statistical mechanics chapters cover Gardner theory and the replica analysis of the Hopfield model, not without being preceded by a brief introduction of the basic concepts of equilibrium statistical physics. The book also contains a part on effective theories of the macroscopic dynamics of neural networks. Many dynamical aspects of neural networks are usually hard to find in the

14. Encoding color information for visual tracking: Algorithms and benchmark.

Science.gov (United States)

Liang, Pengpeng; Blasch, Erik; Ling, Haibin

2015-12-01

While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

15. Shape reconstruction from apparent contours theory and algorithms

CERN Document Server

Bellettini, Giovanni; Paolini, Maurizio

2015-01-01

Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.

16. Information dynamics algorithm for detecting communities in networks

Science.gov (United States)

Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro

2012-11-01

The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. domain-inspired algorithms. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster (MCL) algorithm [4] by considering network nodes as agents capable of making decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and it includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity of detecting overlapping communities, the capability of identifying communities from an individual point of view, and the fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
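The MCL core this record builds on alternates expansion (matrix squaring, i.e. spreading random-walk flow) with inflation (entrywise powers that sharpen strong flows). A minimal sketch of that plain core follows; the record's memory/oblivion modification and agent-level decisions are omitted, and the two-triangle example is illustrative.

```python
def mcl(adj, inflation=2.0, iters=20):
    """Plain Markov Cluster sketch: expansion (matrix squaring) alternated
    with inflation (entrywise power, then column renormalization)."""
    n = len(adj)
    M = [[float(adj[i][j] + (i == j)) for j in range(n)] for i in range(n)]
    def normalize(mat):                       # make columns sum to one
        for j in range(n):
            s = sum(mat[i][j] for i in range(n))
            for i in range(n):
                mat[i][j] /= s
    normalize(M)
    for _ in range(iters):
        M = [[sum(M[i][k] * M[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]               # expansion: spread random-walk flow
        M = [[M[i][j] ** inflation for j in range(n)] for i in range(n)]
        normalize(M)                          # inflation: sharpen strong flows
    clusters = {}
    for j in range(n):                        # assign node j to its attractor row
        attractor = max(range(n), key=lambda i: M[i][j])
        clusters.setdefault(attractor, set()).add(j)
    return list(clusters.values())

# Two triangles joined by a single bridge edge (2-3).
A = [[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]]
print(mcl(A))   # typically recovers the two triangles as communities
```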

17. Informal Risk Perceptions and Formal Theory

International Nuclear Information System (INIS)

Cayford, Jerry

2001-01-01

Economists have argued persuasively that our goals are wider than just risk minimization, and that they include a prudent weighing of costs and benefits. This economic line of thought recognizes that our policy goals are complex. As we widen the range of goals we are willing to entertain, though, we need to check that the methods we customarily employ are appropriate for the tasks to which we customarily apply them. This paper examines some economic methods of risk assessment, in light of the question of what our policy goals are and should be. Once the question of goals is open, more complexities than just cost intrude: what the public wants and why begs to be addressed. This leads us to the controversial issue of public risk perceptions. We have now examined a number of procedures that experts use to make public policy decisions. Behind all these issues is always the question of social welfare: what actions can we take, what policies should we embrace, to make the world a better place? In many cases, the public and the experts disagree about what the right choice is. In the first section, we saw a possible defense of the experts based on democratic theory: the people's participation, and even their will, can be legitimately set aside in the pursuit of their true interests. If this defense is to work, a great deal of weight rests on the question of the people's interests and the competence and integrity of the experts' pursuit of it. But at the same time, social preferences are ill-defined, and so are not good candidates for rational actor theory. Both the prescriptive legitimacy claim and the very workings of formal theory we have seen to depend on informal, qualitative, political judgments. Unfortunately, we have also seen a steady pattern of expert reliance on technical procedures even when they were manifestly unsuited to the task. The experts seem so intent on excluding informal thought that they would prefer even a bad quantitative process to a qualitative

18. Informal Risk Perceptions and Formal Theory

Energy Technology Data Exchange (ETDEWEB)

Cayford, Jerry [Resources for the Future, Washington, DC (United States)

2001-07-01

Economists have argued persuasively that our goals are wider than just risk minimization, and that they include a prudent weighing of costs and benefits. This economic line of thought recognizes that our policy goals are complex. As we widen the range of goals we are willing to entertain, though, we need to check that the methods we customarily employ are appropriate for the tasks to which we customarily apply them. This paper examines some economic methods of risk assessment, in light of the question of what our policy goals are and should be. Once the question of goals is open, more complexities than just cost intrude: what the public wants and why begs to be addressed. This leads us to the controversial issue of public risk perceptions. We have now examined a number of procedures that experts use to make public policy decisions. Behind all these issues is always the question of social welfare: what actions can we take, what policies should we embrace, to make the world a better place? In many cases, the public and the experts disagree about what the right choice is. In the first section, we saw a possible defense of the experts based on democratic theory: the people's participation, and even their will, can be legitimately set aside in the pursuit of their true interests. If this defense is to work, a great deal of weight rests on the question of the people's interests and the competence and integrity of the experts' pursuit of it. But at the same time, social preferences are ill-defined, and so are not good candidates for rational actor theory. Both the prescriptive legitimacy claim and the very workings of formal theory we have seen to depend on informal, qualitative, political judgments. Unfortunately, we have also seen a steady pattern of expert reliance on technical procedures even when they were manifestly unsuited to the task. The experts seem so intent on excluding informal thought that they would prefer even a bad quantitative process to

19. Designing and implementing of improved cryptographic algorithm using modular arithmetic theory

Directory of Open Access Journals (Sweden)

Maryam Kamarzarrin

2015-05-01

Full Text Available Maintaining the privacy and security of people's information are two of the most important principles of an electronic health plan. One method of securing information and preserving privacy is a public-key cryptography system. In this paper, we compare two algorithms, the Common and the Fast Exponentiation algorithms, for enhancing the efficiency of public-key cryptography. We show that a system designed with the Fast Exponentiation algorithm has higher speed and performance, together with lower power consumption and occupied area, than one designed with the Common Exponentiation algorithm. Although systems designed with the Common Exponentiation algorithm are slower and less performant, this algorithm involves less complexity and easier design than the Fast Exponentiation algorithm. In this paper, we examine and compare these two methods of exponentiation and observe the performance impact of the two approaches implemented in hardware, in VHDL on an FPGA.
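In software form, the two schemes the record compares are the naive repeated-multiplication loop versus binary (square-and-multiply) exponentiation, which needs only O(log exp) multiplications. A sketch follows; the record's actual designs are VHDL hardware, not Python.

```python
def common_pow(base, exp, mod):
    """'Common' exponentiation: exp successive modular multiplications."""
    acc = 1
    for _ in range(exp):
        acc = (acc * base) % mod
    return acc

def fast_pow(base, exp, mod):
    """Fast (square-and-multiply) exponentiation: one squaring per bit of exp,
    the source of the speed advantage the abstract measures in hardware."""
    acc, b = 1, base % mod
    while exp:
        if exp & 1:            # low bit set: multiply the result in
            acc = (acc * b) % mod
        b = (b * b) % mod      # square for the next bit
        exp >>= 1
    return acc

# RSA-style toy check: both agree on a public-key-sized-looking modulus.
print(common_pow(7, 560, 561), fast_pow(7, 560, 561))  # 1 1 (561 is a Carmichael number)
```

The same trade-off carries over to hardware: the fast version needs a more complex datapath (conditional multiply plus squaring per bit) but far fewer multiplication cycles.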

20. A multilevel algorithm for flow observables in gauge theories

International Nuclear Information System (INIS)

Garcia Vera, Miguel; Humboldt-Universitaet, Berlin; Schaefer, Stefan

2016-03-01

We study the possibility of using multilevel algorithms for the computation of correlation functions of gradient flow observables. For each point in the correlation function an approximate flow is defined which depends only on links in a subset of the lattice. Together with a local action this allows for independent updates and consequently a convergence of the Monte Carlo process faster than the inverse square root of the number of measurements. We demonstrate the feasibility of this idea in the correlation functions of the topological charge and the energy density.

1. INFORMATIONAL-METHODICAL SUPPORT OF THE COURSE «MATHEMATICAL LOGIC AND THEORY OF ALGORITHMS»

Directory of Open Access Journals (Sweden)

Y. I. Sinko

2010-06-01

Full Text Available In this article, the basic principles of the technique for training future teachers of mathematics in the foundations of mathematical logic and the theory of algorithms at Kherson State University, with the use of information technologies, are examined. A general description is given of the functioning of the methodical system for learning mathematical logic with information technologies, in the variant in which those technologies are represented by «MatLog», an integrated specialized software environment for educational purposes.

2. Cyber Power Theory First, Then Information Operations

National Research Council Canada - National Science Library

Smart, Antoinette G

2001-01-01

...) seems disconcerting, at least on the surface. Think tanks, government research organizations, and learned individuals have all pointed to the need for a viable theory of IO, yet no such theory has emerged...

3. A Two-Step Resume Information Extraction Algorithm

Directory of Open Access Journals (Sweden)

Jie Chen

2018-01-01

Full Text Available With the rapid growth of Internet-based recruiting, there are a great number of personal resumes in recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font sizes, font colours, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they rely strongly on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this goal, we design a novel feature, Writing Style, to model sentence syntax information. Besides a word index and a punctuation index, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify the different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

4. Blind source separation advances in theory, algorithms and applications

CERN Document Server

Wang, Wenwu

2014-01-01

Blind Source Separation intends to report the new results of the efforts on the study of Blind Source Separation (BSS). The book collects novel research ideas and some training in BSS, independent component analysis (ICA), artificial intelligence and signal processing applications. Furthermore, the research results previously scattered in many journals and conferences worldwide are methodically edited and presented in a unified form. The book is likely to be of interest to university researchers, R&D engineers and graduate students in computer science and electronics who wish to learn the core principles, methods, algorithms, and applications of BSS. Dr. Ganesh R. Naik works at University of Technology, Sydney, Australia; Dr. Wenwu Wang works at University of Surrey, UK.

5. Hand and goods judgment algorithm based on depth information

Science.gov (United States)

Li, Mingzhu; Zhang, Jinsong; Yan, Dan; Wang, Qin; Zhang, Ruiqi; Han, Jing

2016-03-01

A tablet computer with a depth camera and a color camera is mounted on a traditional shopping cart, and the inside of the shopping cart is observed by the two cameras. In shopping cart monitoring, it is very important to determine whether a customer's hand is moving goods into or out of the shopping cart. This paper establishes a basic framework for judging whether a hand is empty. It includes a hand extraction process based on depth information, a skin color model built with WPCA (Weighted Principal Component Analysis), an algorithm for judging handheld products based on motion and skin color information, and a statistical process. The first step ensures the integrity of the hand information and effectively avoids the influence of sleeves and other debris; the second step accurately extracts skin color and eliminates similar-color interference, is little affected by lighting, and has the advantages of fast computation and high efficiency; and the third step greatly reduces noise interference and improves accuracy.

6. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

Science.gov (United States)

2016-11-28

methods is presented in the book chapter [CG16d]. 4.4 Robust Timetable Information Problems. Timetable information is the process of determining a...Princeton and Oxford, 2009. [BTN98] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769–805...Goerigk. A note on upper bounds to the robust knapsack problem with discrete scenarios. Annals of Operations Research, 223(1):461–469, 2014. [GS16] M

7. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

Science.gov (United States)

Zhang, Ru; Guo, Tao; Liu, Jianyi

2018-03-01

8. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

Energy Technology Data Exchange (ETDEWEB)

McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A., E-mail: mcewen.24@osu.edu, E-mail: fang.307@osu.edu, E-mail: hirata.10@osu.edu, E-mail: blazek@berkeley.edu [Center for Cosmology and AstroParticle Physics, Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus OH 43210 (United States)

2016-09-01

We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
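FAST-PT's core trick, describing samples on a log-spaced k-grid as an exact superposition of complex power laws k^(i·eta_m) obtained by an FFT in ln k, can be demonstrated on a toy spectrum. This sketch shows only that decomposition identity; the real code at the linked GitHub adds biasing exponents, padding, and the mode-coupling machinery, and the toy spectrum below is made up.

```python
import cmath, math

# Toy "power spectrum" sampled on a log-spaced grid in k.
N = 16
lnk = [math.log(1e-3) + i * (math.log(10.0) - math.log(1e-3)) / N for i in range(N)]
P = [math.exp(0.96 * x) / (1.0 + math.exp(2.0 * x)) for x in lnk]

def dft(seq, sign):
    """Naive DFT; sign=-1 forward, sign=+1 for the inverse superposition."""
    n = len(seq)
    return [sum(seq[t] * cmath.exp(sign * 2j * math.pi * f * t / n) for t in range(n))
            for f in range(n)]

coeffs = [v / N for v in dft(P, -1)]   # c_m: the power-law coefficients
P_back = dft(coeffs, +1)               # superposition of power laws k^(i*eta_m)
print(max(abs(P[i] - P_back[i].real) for i in range(N)))   # roundoff-small: exact
```

Because each term is a power law in k, the convolution integrals over the decomposed spectrum reduce to closed forms, which is what makes the algorithm fast enough to embed in parameter estimation.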

9. Exploring a Theory Describing the Physics of Information Systems, Characterizing the Phenomena of Complex Information Systems

National Research Council Canada - National Science Library

Harmon, Scott

2001-01-01

This project accomplished all of its objectives: document a theory of information physics, conduct a workshop on planning experiments to test this theory, and design experiments that validate this theory...

10. Cognitive Radio for Smart Grid: Theory, Algorithms, and Security

Directory of Open Access Journals (Sweden)

Raghuram Ranganathan

2011-01-01

Full Text Available Recently, cognitive radio and the smart grid are two areas which have received considerable research impetus. Cognitive radios are intelligent software-defined radios (SDRs) that efficiently utilize the unused regions of the spectrum to achieve higher data rates. The smart grid is an automated electric power system that monitors and controls grid activities. In this paper, the novel concept of incorporating a cognitive radio network as the communications infrastructure for the smart grid is presented. A brief overview of the cognitive radio, the IEEE 802.22 standard, and the smart grid is provided. Experimental results obtained by using dimensionality reduction techniques such as principal component analysis (PCA), kernel PCA, and landmark maximum variance unfolding (LMVU) on Wi-Fi signal measurements are presented in a spectrum sensing context. Furthermore, compressed sensing algorithms such as Bayesian compressed sensing and the compressed sensing Kalman filter are employed for recovering the sparse smart meter transmissions. From the power system point of view, a supervised learning method called the support vector machine (SVM) is used for the automated classification of power system disturbances. The impending problem of securing the smart grid is also addressed, in addition to the possibility of applying FPGA-based fuzzy logic intrusion detection for the smart grid.

11. Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory

KAUST Repository

Richtarik, Peter; Taká č, Martin

2017-01-01

We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, stochastic linear system, stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient conditions for the reformulation to be exact. Further, we propose and analyze three stochastic algorithms for solving the reformulated problem---basic, parallel and accelerated methods---with global linear convergence rates. The rates can be interpreted as condition numbers of a matrix which depends on the system matrix and on the reformulation parameters. This gives rise to a new phenomenon which we call stochastic preconditioning, and which refers to the problem of finding parameters (matrix and distribution) leading to a sufficiently small condition number. Our basic method can be equivalently interpreted as stochastic gradient descent, stochastic Newton method, stochastic proximal point method, stochastic fixed point method, and stochastic projection method, with fixed stepsize (relaxation parameter), applied to the reformulations.
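In its "stochastic projection" interpretation with unit stepsize and uniform row sampling, the basic method the abstract describes reduces to randomized Kaczmarz. The sketch below uses those particular (assumed) parameter choices; the paper's framework allows general matrices and distributions.

```python
import random
random.seed(1)

def randomized_kaczmarz(A, b, iters=2000):
    """Basic method in its stochastic-projection guise: project the iterate
    onto the hyperplane of one randomly sampled equation (unit stepsize)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        i = random.randrange(len(A))          # uniform row sampling
        row = A[i]
        resid = sum(row[j] * x[j] for j in range(n)) - b[i]
        norm2 = sum(v * v for v in row)
        for j in range(n):                    # orthogonal projection step
            x[j] -= resid * row[j] / norm2
    return x

A = [[2.0, 1.0], [1.0, 3.0]]   # consistent system with solution (1, 1)
b = [3.0, 4.0]
print(randomized_kaczmarz(A, b))   # converges linearly toward [1.0, 1.0]
```

The global linear convergence rate the abstract mentions is, in this special case, governed by the condition number of A, which is what the "stochastic preconditioning" idea aims to improve.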

12. Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory

KAUST Repository

Richtarik, Peter

2017-06-04

We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, stochastic linear system, stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient conditions for the reformulation to be exact. Further, we propose and analyze three stochastic algorithms for solving the reformulated problem---basic, parallel and accelerated methods---with global linear convergence rates. The rates can be interpreted as condition numbers of a matrix which depends on the system matrix and on the reformulation parameters. This gives rise to a new phenomenon which we call stochastic preconditioning, and which refers to the problem of finding parameters (matrix and distribution) leading to a sufficiently small condition number. Our basic method can be equivalently interpreted as stochastic gradient descent, stochastic Newton method, stochastic proximal point method, stochastic fixed point method, and stochastic projection method, with fixed stepsize (relaxation parameter), applied to the reformulations.

13. IMMAN: free software for information theory-based chemometric analysis.

Science.gov (United States)

Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

2015-05-01

The features and theoretical background of a new and free computational program for chemometric analysis, denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis), are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of the Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single-parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, and comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA
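One of the supervised criteria named in this record, information gain, is easy to sketch for an already-discretized feature; the equal-interval discretization step itself is omitted, and the tiny dataset is invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for a discrete(ized) feature: the kind of
    rank-based selection criterion IMMAN computes after discretization."""
    n = len(labels)
    cond = 0.0
    for value in set(feature):
        sub = [y for x, y in zip(feature, labels) if x == value]
        cond += len(sub) / n * entropy(sub)   # weighted conditional entropy
    return entropy(labels) - cond

y  = [0, 0, 1, 1]
x1 = ['a', 'a', 'b', 'b']   # perfectly predictive -> IG = H(y) = 1 bit
x2 = ['a', 'b', 'a', 'b']   # independent of y    -> IG = 0
print(information_gain(x1, y), information_gain(x2, y))  # 1.0 0.0
```

Ranking all features by this score (or by gain ratio / symmetrical uncertainty, which normalize it) gives the supervised feature ranking described in the abstract.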

14. Client-Controlled Case Information: A General System Theory Perspective

Science.gov (United States)

Fitch, Dale

2004-01-01

The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of…

15. Critical Theory and Information Studies: A Marcusean Infusion

Science.gov (United States)

Pyati, Ajit K.

2006-01-01

In the field of library and information science, also known as information studies, critical theory is often not included in debates about the discipline's theoretical foundations. This paper argues that the critical theory of Herbert Marcuse, in particular, has a significant contribution to make to the field of information studies. Marcuse's…

16. Optimized combination model and algorithm of parking guidance information configuration

Directory of Open Access Journals (Sweden)

Tian Ye

2011-01-01

Full Text Available Abstract Operators of parking guidance and information (PGI) systems often have difficulty providing the best car park availability information to drivers in periods of high demand. A new PGI configuration model based on an optimized combination method was proposed by analyzing parking choice behavior. This article first describes a parking choice behavioral model incorporating drivers' perceptions of waiting times at car parks based on PGI signs. This model was used to predict the influence of PGI signs on the overall performance of the traffic system. Relationships were then developed for estimating the arrival rates at car parks based on driver characteristics, car park attributes, and the car park availability information displayed on PGI signs. A mathematical program was formulated to determine the optimal PGI sign configuration that minimizes total travel time. A genetic algorithm was used to identify solutions that significantly reduced queue lengths and total travel time compared with existing practices. These procedures were applied to an existing PGI system operating in Deqing Town and Xiuning City. Significant reductions in the total travel time of parking vehicles were obtained with the PGI system so configured. This would reduce traffic congestion and lead to various environmental benefits.
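The genetic search over sign configurations can be illustrated with a toy sketch. The objective below is a stand-in: `total_travel_time` and `TARGET` are hypothetical, since the paper's travel-time model depends on predicted arrival rates and queue lengths; only the GA mechanics (selection, crossover, mutation) follow the general scheme.

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical optimal sign configuration

def total_travel_time(config):
    """Toy stand-in for the travel-time objective: the real model
    depends on arrival rates and queue lengths at each car park."""
    return sum((bit - t) ** 2 for bit, t in zip(config, TARGET))

def genetic_search(n_bits=8, pop_size=30, generations=60, p_mut=0.05):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_travel_time)        # selection: keep the best half
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)  # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with probability p_mut per gene
            children.append([1 - g if random.random() < p_mut else g for g in child])
        pop = parents + children
    return min(pop, key=total_travel_time)

best = genetic_search()
print(best, total_travel_time(best))
```

In the paper the chromosome would encode the availability information shown on each sign, and the fitness would come from the behavioral arrival-rate model rather than a fixed target vector.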

17. Automated image segmentation using information theory

International Nuclear Information System (INIS)

Hibbard, L.S.

2001-01-01

Full text: Our development of automated contouring of CT images for RT planning is based on maximum a posteriori (MAP) analyses of region textures, edges, and prior shapes, and assumes stationary Gaussian distributions for voxel textures and contour shapes. Since models may not accurately represent image data, it would be advantageous to compute inferences without relying on models. The relative entropy (RE) from information theory can generate inferences based solely on the similarity of probability distributions. The entropy of the distribution of a random variable X is defined as -Σ_x p(x) log_2 p(x), summed over all values x which X may assume. The RE (Kullback-Leibler divergence) of two distributions p(X), q(X) over X is Σ_x p(x) log_2 [p(x)/q(x)]. The RE is a kind of 'distance' between p and q, equaling zero when p=q and increasing as p and q become more different. Minimum-error MAP and likelihood-ratio decision rules have RE equivalents: minimum-error decisions are obtained with functions of the differences between the REs of the compared distributions. One applied result is that the contour ideally separating two regions is the one that maximizes the relative entropy of the two regions' intensities. A program was developed that automatically contours the outlines of patients in stereotactic headframes, a situation most often requiring manual drawing. The relative entropy of intensities inside the contour (patient) versus outside (background) was maximized by conjugate gradient descent over the space of parameters of a deformable contour. The computed segmentation of a patient from headframe backgrounds is shown. This program is particularly useful for preparing images for multimodal image fusion. Relative entropy and allied measures of distribution similarity provide automated contouring criteria that do not depend on statistical models of image data. This approach should have wide utility in medical image segmentation applications. Copyright (2001) Australasian College of Physical Scientists and
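The entropy and relative-entropy definitions above are easy to check numerically. The sketch below only illustrates the formulas; it is not the authors' contouring code, and `relative_entropy` is a hypothetical helper name.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p||q) in bits between two discrete
    probability distributions over the same bins (eps avoids log of 0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log2(p / q)))

# D(p||q) is zero when p == q and grows as the distributions diverge,
# which is the 'distance'-like behavior used as the contouring criterion.
p = [0.5, 0.5]
q = [0.9, 0.1]
print(relative_entropy(p, p))       # ~0
print(relative_entropy(p, q) > 0)   # True
```

In the segmentation setting, p and q would be the normalized intensity histograms inside and outside a candidate contour, and the contour parameters would be adjusted to maximize this quantity.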

18. Information theoretic resources in quantum theory

Science.gov (United States)

Meznaric, Sebastian

Resource identification and quantification is an essential element of both classical and quantum information theory. Entanglement is one of these resources, arising when quantum communication and nonlocal operations are expensive to perform. In the first part of this thesis we quantify the effective entanglement when operations are additionally restricted to account both for fundamental restrictions on operations, such as those arising from superselection rules, and for experimental errors arising from imperfections in the apparatus. For an important class of errors we find a linear relationship between the usual and effective higher-dimensional generalizations of concurrence, a measure of entanglement. Following the treatment of effective entanglement, we focus on the related concept of nonlocality in the presence of superselection rules (SSR). Here we propose a scheme that may be used to activate nongenuinely multipartite nonlocality, in that a single copy of a state is not multipartite nonlocal, while two or more copies exhibit nongenuinely multipartite nonlocality. The states used exhibit the more powerful genuinely multipartite nonlocality when SSR are not enforced, but not when they are, raising the question of what is needed for genuinely multipartite nonlocality. We show that whenever the number of particles is insufficient, the degrading of genuinely multipartite to nongenuinely multipartite nonlocality is necessary. While in the first few chapters we focus our attention on understanding the resources present in quantum states, in the final part we turn the picture around and instead treat operations themselves as a resource. We provide our observers with free access to classical operations, i.e., those that cannot detect or generate quantum coherence. We show that the operation of interest can then be used to either generate or detect quantum coherence if and only if it violates a particular commutation relation. Using the relative entropy, the

19. Automation of Algorithmic Tasks for Virtual Laboratories Based on Automata Theory

Directory of Open Access Journals (Sweden)

Evgeniy A. Efimchik

2016-03-01

Full Text Available This work describes an automata model of a standard algorithm for constructing a correct solution of algorithmic tests. The described model allows a formal determination of the variant complexity of an algorithmic test and serves as a basis for determining the complexity functions, including the collision concept: the situation of uncertainty when, in fulfilling the task, a choice must be made between alternatives with various priorities. The influence of collisions on the automata model and its inner structure is described. The model and complexity functions are applied in virtual laboratories to design algorithms for constructing variants with a predetermined complexity in real time and algorithms for estimating students' solutions with respect to collisions. The results of the work are applied to the development of virtual laboratories used in the practical part of a massive online course on graph theory.

20. Parallel/vector algorithms for the spherical S_N transport theory method

International Nuclear Information System (INIS)

Haghighat, A.; Mattis, R.E.

1990-01-01

This paper discusses vector and parallel processing of a 1-D curvilinear (i.e. spherical) S_N transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (for wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the existing potential for the development of faster processing vector and parallel algorithms for multidimensional curvilinear geometries. (author)

1. Algorithms

polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

2. Ensemble Bayesian forecasting system Part I: Theory and algorithms

Science.gov (United States)

Herr, Henry D.; Krzysztofowicz, Roman

2015-05-01

The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of

3. Elaborations of grounded theory in information research: arenas/social worlds theory, discourse and situational analysis

OpenAIRE

Vasconcelos, A.C.; Sen, B.A.; Rosa, A.; Ellis, D.

2012-01-01

This paper explores elaborations of Grounded Theory in relation to Arenas/Social Worlds Theory. The notions of arenas and social worlds were present in early applications of Grounded Theory but have not been as much used or recognised as the general Grounded Theory approach, particularly in the information studies field. The studies discussed here are therefore very unusual in information research. The empirical contexts of these studies are those of (1) the role of discourse in the organisat...

4. Applying Information Processing Theory to Supervision: An Initial Exploration

Science.gov (United States)

Tangen, Jodi L.; Borders, L. DiAnne

2017-01-01

Although clinical supervision is an educational endeavor (Borders & Brown, [Borders, L. D., 2005]), many scholars neglect theories of learning in working with supervisees. The authors describe 1 learning theory--information processing theory (Atkinson & Shiffrin, 1968, 1971; Schunk, 2016)--and the ways its associated interventions may…

5. Information theory applied to econophysics: stock market behaviors

Science.gov (United States)

Vogel, Eugenio E.; Saravia, Gonzalo

2014-08-01

The use of data compressor techniques has made it possible to recognize magnetic transitions and their associated critical temperatures [E.E. Vogel, G. Saravia, V. Cortez, Physica A 391, 1591 (2012)]. In the present paper we introduce some new concepts associated with data recognition and extend the use of these techniques to econophysics to explore the variations of stock market indicators, showing that information theory can help to recognize different regimes. Modifications and further developments of the previously introduced data compressor wlzip are introduced, yielding two measurements. Additionally, we introduce an algorithm that allows tuning the number of significant digits on which the data compression acts, complemented with an appropriate method to round off the truncation. The application is made to IPSA, the main indicator of the Chilean stock market, during the year 2010, chosen for the availability of quality data and also to consider a rare event: the earthquake of the 27th of February of that year, which is as of now the sixth strongest earthquake ever recorded by instruments (8.8 on the Richter scale) according to the United States Geological Survey. Along the year 2010 different regimes are recognized. Calm days show larger compression than agitated days, allowing for classification and recognition. The focus then turns to selected days, showing that it is possible to recognize different regimes with the data of the last hour (60 entries), allowing actions to be determined in a safer way. The "day of the week" effect is weakly present, but the "hour of the day" effect is clearly present; its causes and implications are discussed. This effect also establishes the influence of the Asian, European and American stock markets on the smaller Chilean stock market. Dynamical studies are then conducted, intended to search for a system that can help detect sudden variations of the market in real time; it is found that information theory can be really helpful in this respect.
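Since wlzip itself is not reproduced here, the compression-based regime idea can be roughly illustrated with a generic compressor such as zlib: a calm (repetitive) series compresses far better than an agitated one. The rounding to significant digits mimics the truncation step described above; the series and helper name are invented for illustration.

```python
import zlib

def compression_ratio(values, digits=3):
    """Compressed size / raw size for a series rounded to a fixed
    number of significant digits (zlib is only a stand-in for wlzip)."""
    text = ",".join(f"{v:.{digits}g}" for v in values).encode()
    return len(zlib.compress(text)) / len(text)

calm = [100.0 + 0.01 * (i % 3) for i in range(300)]            # quasi-repetitive
agitated = [100.0 + ((i * 37) % 97) / 10 for i in range(300)]  # irregular

# Calm regimes compress better (smaller ratio) than agitated ones.
print(compression_ratio(calm), compression_ratio(agitated))
```

The same comparison applied to a sliding window of the last hour of data is the kind of real-time regime indicator the abstract describes.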

6. What Density Functional Theory could do for Quantum Information

Science.gov (United States)

Mattsson, Ann

2015-03-01

The Hohenberg-Kohn theorem of Density Functional Theory (DFT), and extensions thereof, tells us that all properties of a system of electrons can be determined through their density, which uniquely determines the many-body wave-function. Given access to the appropriate, universal, functionals of the density we would, in theory, be able to determine all observables of any electronic system, without explicit reference to the wave-function. On the other hand, the wave-function is at the core of Quantum Information (QI), with the wave-function of a set of qubits being the central computational resource in a quantum computer. While there is seemingly little overlap between DFT and QI, reliance upon observables forms a key connection. Though the time-evolution of the wave-function and associated phase information is fundamental to quantum computation, the initial and final states of a quantum computer are characterized by observables of the system. While observables can be extracted directly from a system's wave-function, DFT tells us that we may be able to intuit a method for extracting them from its density. In this talk, I will review the fundamentals of DFT and how these principles connect to the world of QI. This will range from DFT's utility in the engineering of physical qubits, to the possibility of using it to efficiently (but approximately) simulate Hamiltonians at the logical level. The apparent paradox of describing algorithms based on the quantum mechanical many-body wave-function with a DFT-like theory based on observables will remain a focus throughout. The ultimate goal of this talk is to initiate a dialog about what DFT could do for QI, in theory and in practice. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

7. The application of quadtree algorithm for information integration in the high-level radioactive waste geological disposal

International Nuclear Information System (INIS)

Gao Min; Zhong Xia; Huang Shutao

2008-01-01

A multi-source database for high-level radioactive waste (HLW) geological disposal aims to promote the informatization of HLW geological disposal. Since the integration of multi-dimensional, multi-source information and applications also involves computer software and hardware, the paper gives a preliminary analysis of the data resources of the Beishan area, Gansu Province. The paper introduces the theory and methods of GIS technology and the application of the open-source library GDAL, and discusses the technical methods for applying the quadtree algorithm to information resources management, full sharing, rapid retrieval, and so on. A more detailed description of the characteristics of the existing data resources, the theory of spatially related data retrieval algorithms, and the programming design and implementation ideas is given in the paper. (authors)
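The quadtree retrieval idea can be sketched as follows. This is a generic point quadtree for rectangular range queries, not the system described in the paper (which builds on GIS tools and GDAL); class and method names are illustrative.

```python
class QuadTree:
    """Minimal point quadtree: each node holds up to `capacity` points,
    then splits its bounds into four quadrants for faster range retrieval."""

    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, x, y):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x < x1 and y0 <= y < y1):
            return False                      # point outside this node
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()
        return any(c.insert(x, y) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my, self.capacity),
                         QuadTree(mx, y0, x1, my, self.capacity),
                         QuadTree(x0, my, mx, y1, self.capacity),
                         QuadTree(mx, my, x1, y1, self.capacity)]
        for px, py in self.points:            # push points down to children
            any(c.insert(px, py) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        """Return all stored points inside the query rectangle,
        skipping whole subtrees whose bounds do not overlap it."""
        x0, y0, x1, y1 = self.bounds
        if qx1 <= x0 or qx0 >= x1 or qy1 <= y0 or qy0 >= y1:
            return []
        hits = [(x, y) for x, y in self.points if qx0 <= x < qx1 and qy0 <= y < qy1]
        if self.children:
            for c in self.children:
                hits.extend(c.query(qx0, qy0, qx1, qy1))
        return hits

tree = QuadTree(0, 0, 100, 100)
for x, y in [(10, 10), (20, 80), (55, 55), (90, 5), (52, 60)]:
    tree.insert(x, y)
print(sorted(tree.query(40, 40, 70, 70)))  # → [(52, 60), (55, 55)]
```

For real geological data the stored items would be feature records with coordinates, and the pruning in `query` is what makes rapid retrieval over a large area practical.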

8. USING INFORMATION THEORY TO DEFINE A SUSTAINABILITY INDEX

Science.gov (United States)

Information theory has many applications in Ecology and Environmental science, such as a biodiversity indicator, as a measure of evolution, a measure of distance from thermodynamic equilibrium, and as a measure of system organization. Fisher Information, in particular, provides a...

9. Client-controlled case information: a general system theory perspective.

Science.gov (United States)

Fitch, Dale

2004-07-01

The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of controller and controlled system, as well as entropy and negentropy, are applied to the information flow and autopoietic behavior as they relate to the boundary-maintaining functions of today's organizations. The author's conclusions synthesize general system theory and human services values to lay the foundation for an information-sharing framework for human services in the 21st century.

10. The logic of logistics: theory, algorithms and applications for logistics management

Directory of Open Access Journals (Sweden)

Claudio Barbieri da Cunha

2010-04-01

Full Text Available

In this text the author presents a review of the book "The logic of logistics: theory, algorithms and applications for logistics management", by Julien Bramel and David Simchi-Levi, published by Springer-Verlag in 1997.

11. Multiscale Monte Carlo algorithms in statistical mechanics and quantum field theory

Energy Technology Data Exchange (ETDEWEB)

Lauwers, P G

1990-12-01

Conventional Monte Carlo simulation algorithms for models in statistical mechanics and quantum field theory are afflicted by problems caused by their locality. They become highly inefficient if investigations of critical or nearly-critical systems, i.e., systems with important large scale phenomena, are undertaken. We present two types of multiscale approaches that alleviate problems of this kind: stochastic cluster algorithms and multigrid Monte Carlo simulation algorithms. Another formidable computational problem in simulations of phenomenologically relevant field theories with fermions is the need for frequently inverting the Dirac operator. This inversion can be accelerated considerably by means of deterministic multigrid methods, very similar to the ones used for the numerical solution of differential equations. (orig.)

12. Rudolf Ahlswede’s lectures on information theory

CERN Document Server

Althöfer, Ingo; Deppe, Christian; Tamm, Ulrich

Volume 1 : The volume “Storing and Transmitting Data” is based on Rudolf Ahlswede's introductory course on "Information Theory I" and presents an introduction to Shannon Theory. Readers, familiar or unfamiliar with the technical intricacies of Information Theory, will benefit considerably from working through the book; especially Chapter VI with its lively comments and uncensored insider views from the world of science and research offers informative and revealing insights. This is the first of several volumes that will serve as a collected research documentation of Rudolf Ahlswede’s lectures on information theory. Each volume includes comments from an invited well-known expert. Holger Boche contributed his insights in the supplement of the present volume. Classical information processing concerns the main tasks of gaining knowledge, storage, transmitting and hiding data. The first task is the prime goal of Statistics. For the next two, Shannon presented an impressive mathematical theory called Informat...

13. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

Science.gov (United States)

Zhang, Yanqin

2014-01-01

As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is given step by step, and a program implementing the algorithm is provided. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
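The dominance matrix mentioned above can be illustrated on a toy ordered information system. The data values and the exact matrix convention below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Toy ordered information system: rows are objects, columns are
# criteria where larger values are preferred (values are invented).
data = np.array([
    [3, 2, 3],
    [2, 2, 1],
    [3, 3, 3],
    [1, 1, 2],
])

# Dominance matrix: D[i, j] = 1 iff object j is at least as good as
# object i on every criterion (i.e., j dominates i).
n = len(data)
D = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        D[i, j] = int((data[j] >= data[i]).all())

print(D)
```

Reduction acquisition then works on such matrices: attributes whose removal leaves the dominance relation (and hence the induced distributions) unchanged are candidates for elimination.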

14. Quantum theory from first principles an informational approach

CERN Document Server

D'Ariano, Giacomo Mauro; Perinotti, Paolo

2017-01-01

Quantum theory is the soul of theoretical physics. It is not just a theory of specific physical systems, but rather a new framework with universal applicability. This book shows how we can reconstruct the theory from six information-theoretical principles, by rebuilding the quantum rules from the bottom up. Step by step, the reader will learn how to master the counterintuitive aspects of the quantum world, and how to efficiently reconstruct quantum information protocols from first principles. Using intuitive graphical notation to represent equations, and with shorter and more efficient derivations, the theory can be understood and assimilated with exceptional ease. Offering a radically new perspective on the field, the book contains an efficient course of quantum theory and quantum information for undergraduates. The book is aimed at researchers, professionals, and students in physics, computer science and philosophy, as well as the curious outsider seeking a deeper understanding of the theory.

15. Could information theory provide an ecological theory of sensory processing?

Science.gov (United States)

Atick, Joseph J

2011-01-01

The sensory pathways of animals are well adapted to processing a special class of signals, namely stimuli from the animal's environment. An important fact about natural stimuli is that they are typically very redundant and hence the sampled representation of these signals formed by the array of sensory cells is inefficient. One could argue for some animals and pathways, as we do in this review, that efficiency of information representation in the nervous system has several evolutionary advantages. Consequently, one might expect that much of the processing in the early levels of these sensory pathways could be dedicated towards recoding incoming signals into a more efficient form. In this review, we explore the principle of efficiency of information representation as a design principle for sensory processing. We give a preliminary discussion on how this principle could be applied in general to predict neural processing and then discuss concretely some neural systems where it recently has been shown to be successful. In particular, we examine the fly's LMC coding strategy and the mammalian retinal coding in the spatial, temporal and chromatic domains.

16. A Mathematical Theory of System Information Flow

Science.gov (United States)

2016-06-27

i.i.d. is usually quite involved. There are numerous experiments, often using photons, to test Bell's inequality recorded in the literature, but the... classical setting. Peter focused on non-locality as an alternative theory and experiments using the CHSH inequality, and devised a statistical procedure... 761 (2014). 7. BIERHORST, P., A new loophole in recent Bell test experiments, arXiv:1311.4488 (2014). 8. BIERHORST, P., A Mathematical Foundation

17. Theory of the Concealed Information Test

NARCIS (Netherlands)

Verschuere, B.; Ben-Shakhar, G.; Verschuere, B.; Ben-Shakhar, G.; Meijer, E.

2011-01-01

It is now well established that physiological measures can be validly used to detect concealed information. An important challenge is to elucidate the underlying mechanisms of concealed information detection. We review theoretical approaches that can be broadly classified in two major categories:

18. Assessment of visual communication by information theory

Science.gov (United States)

Huck, Friedrich O.; Fales, Carl L.

1994-01-01

This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

19. Public Management Information Systems: Theory and Prescription.

Science.gov (United States)

Bozeman, Barry; Bretschneider, Stuart

1986-01-01

The existing theoretical framework for research in management information systems (MIS) is criticized for its lack of attention to the external environment of organizations, and a new framework is developed which better accommodates MIS in public organizations: public management information systems. Four models of publicness that reflect external…

20. Algorithms

to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

1. Galerkin algorithm for multidimensional plasma simulation codes. Informal report

International Nuclear Information System (INIS)

Godfrey, B.B.

1979-03-01

A Galerkin finite element differencing scheme has been developed for a computer simulation of plasmas. The new difference equations identically satisfy an equation of continuity. Thus, the usual current correction procedure, involving inversion of Poisson's equation, is unnecessary. The algorithm is free of many numerical Cherenkov instabilities. This differencing scheme has been implemented in CCUBE, an already existing relativistic, electromagnetic, two-dimensional PIC code in arbitrary separable, orthogonal coordinates. The separability constraint is eliminated by the new algorithm. The new version of CCUBE exhibits good stability and accuracy with reduced computer memory and time requirements. Details of the algorithm and its implementation are presented

2. Textual and chemical information processing: different domains but similar algorithms

Directory of Open Access Journals (Sweden)

Peter Willett

2000-01-01

Full Text Available This paper discusses the extent to which algorithms developed for the processing of textual databases are also applicable to the processing of chemical structure databases, and vice versa. Applications discussed include: an algorithm for distribution sorting that has been applied to the design of screening systems for rapid chemical substructure searching; the use of measures of inter-molecular structural similarity for the analysis of hypertext graphs; a genetic algorithm for calculating term weights for relevance feedback searching and for determining whether a molecule is likely to exhibit biological activity; and the use of data fusion to combine the results of different chemical similarity searches.

3. Implications of Information Theory for Computational Modeling of Schizophrenia.

Science.gov (United States)

Silverstein, Steven M; Wibral, Michael; Phillips, William A

2017-10-01

Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory-such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio-can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.

4. Equity trees and graphs via information theory

Science.gov (United States)

Harré, M.; Bossomaier, T.

2010-01-01

We investigate the similarities and differences between two measures of the relationship between equities traded in financial markets. Our measures are the correlation coefficients and the mutual information. In the context of financial markets correlation coefficients are well established whereas mutual information has not previously been as well studied despite its theoretically appealing properties. We show that asset trees which are derived from either the correlation coefficients or the mutual information have a mixture of both similarities and differences at the individual equity level and at the macroscopic level. We then extend our consideration from trees to graphs using the "genus 0" condition recently introduced in order to study the networks of equities.
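The two relationship measures compared in this study can be contrasted on synthetic return series. The histogram-based mutual information estimator below is a common simple choice, not necessarily the estimator used in the paper, and the series are invented.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information (in bits) between
    two series; a sketch, not the paper's estimator."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)     # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)     # marginal of y
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

rng = np.random.default_rng(0)
a = rng.standard_normal(5000)
b = 0.8 * a + 0.6 * rng.standard_normal(5000)   # linearly related to a
c = rng.standard_normal(5000)                   # independent of a

print(np.corrcoef(a, b)[0, 1])                        # strong linear correlation
print(mutual_information(a, b) > mutual_information(a, c))  # True
```

Unlike the correlation coefficient, mutual information also picks up nonlinear dependence, which is the theoretically appealing property motivating its use for building asset trees and graphs.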

5. Information Theoretic Characterization of Physical Theories with Projective State Space

Science.gov (United States)

Zaopo, Marco

2015-08-01

Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over the complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information-theoretic requirements: every state of a system that is not completely mixed is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more mixed a state of the system is, the less information can be stored in the system using that state as logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of the complex numbers. If the field of numbers is trivial and contains only one element we obtain classical theory. This result tells us that the superposition principle is quite common among probabilistic theories, while its absence gives evidence of either classical theory or an implausible theory.

6. Star pattern recognition algorithm aided by inertial information

Science.gov (United States)

Liu, Bao; Wang, Ke-dong; Zhang, Chao

2011-08-01

Star pattern recognition is one of the key problems of celestial navigation. Traditional approaches, such as the triangle algorithm and the star angular distance algorithm, are all-sky matching methods whose recognition speed is slow and whose success rate is not high. As a result, the real-time performance and reliability of a CNS (Celestial Navigation System) are reduced to some extent, especially for a maneuvering spacecraft. However, if the direction of the camera optical axis can be estimated by another navigation system such as an INS (Inertial Navigation System), star pattern recognition can be carried out in the vicinity of the estimated direction of the optical axis. The benefits of the INS-aided star pattern recognition algorithm include at least improved matching speed and an improved success rate. In this paper, the direction of the camera optical axis, the local matching sky, and the projection of stars on the image plane are first estimated with the aid of the INS. Then, the local star catalog for star pattern recognition is established dynamically in real time. The star images extracted in the camera plane are matched against the local sky. Compared to the traditional all-sky algorithms, the memory required to store the star catalog is reduced significantly. Finally, the INS-aided star pattern recognition algorithm is validated by simulations. The simulation results show that the algorithm's computation time is reduced sharply and its matching success rate is improved greatly.
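The core of the local-sky idea is just an angular-distance filter around the INS-estimated boresight. A minimal sketch (not the paper's implementation; the catalog, field-of-view value and boresight are made-up illustrative data):

```python
import numpy as np

def local_catalog(catalog, boresight, fov_deg):
    # Keep only catalog stars within fov_deg of the INS-estimated optical axis.
    # catalog: (N, 3) array of unit direction vectors; boresight: unit vector.
    # A star is "local" when its dot product with the boresight exceeds cos(fov).
    cos_fov = np.cos(np.radians(fov_deg))
    return catalog[catalog @ boresight >= cos_fov]

# Hypothetical mini-catalog of random unit vectors
rng = np.random.default_rng(1)
cat = rng.normal(size=(1000, 3))
cat /= np.linalg.norm(cat, axis=1, keepdims=True)

axis = np.array([0.0, 0.0, 1.0])        # INS-estimated optical axis
local = local_catalog(cat, axis, 15.0)
print(len(local), "of", len(cat), "stars in the local sky")
```

Matching then runs against `local` instead of the full catalog, which is where the memory and speed savings described in the abstract come from.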

7. Realism and Antirealism in Informational Foundations of Quantum Theory

Directory of Open Access Journals (Sweden)

Tina Bilban

2014-08-01

Full Text Available Zeilinger-Brukner's informational foundations of quantum theory, based on Zeilinger's foundational principle that an elementary system carries one bit of information, explain seemingly unintuitive quantum behavior with a simple theoretical framework. They rest on the notion that no distinction between reality and information can be made, so the two are the same. As the critics of the informational foundations show, this antirealistic move traps the theory in tautology: information refers only to itself, while the relationships outside the information, with the help of which the nature of information could be defined, are lost, and the questions "Whose information? Information about what?" cannot be answered. The critics' solution is a return to realism, in which the observer's effects on the information are neglected. We show that the radical antirealism of the informational foundations is not necessary and that the return to realism is not the only way forward. A comprehensive approach that exceeds mere realism and antirealism is also possible: we can consider both sources of the constraints on the information, those coming from the observer and those coming from the observed system/nature/reality. The information is always the observer's information about the observed. Such a comprehensive philosophical approach can still support the theoretical framework of the informational foundations: if we take one bit to be the smallest amount of information in the form of which the observed reality can be grasped by the observer, we can say that an elementary system (grasped and defined as such by the observer) correlates to one bit of information. Our approach thus explains all the features of quantum behavior explained by the informational foundations: the wave function and its collapse, entanglement, complementarity and quantum randomness. However, it does

8. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

Science.gov (United States)

A. AL-Salhi, Yahya E.; Lu, Songfeng

2016-08-01

Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely pursued in recent years; it can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and the clustering of uniform image blocks, a concrete least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Thirdly, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing located in the Fourier domain of an image can secure the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
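The classical LSB embedding that the quantum LSQu-block scheme generalizes is easy to show concretely. This is a sketch of the classical analogue only (pixel values and the secret bits are made up), not of the NEQR-based quantum circuits:

```python
import numpy as np

def lsb_embed(cover, bits):
    # Embed one secret bit into the least significant bit of each leading pixel:
    # clear the LSB (mask 0xFE) and OR in the secret bit.
    stego = cover.copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego

def lsb_extract(stego, n):
    # Recover the first n secret bits by masking the LSBs.
    return stego[:n] & 1

cover = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=np.uint8)
secret = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = lsb_embed(cover, secret)
print(lsb_extract(stego, 4))
```

Each pixel changes by at most 1 gray level, which is why LSB concealment is visually imperceptible; the quantum algorithm applies unitary transformations to achieve the analogous bit replacement on the least significant Qu-block.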

9. Efficient multitasking of the SU(3) lattice gauge theory algorithm on the CRAY X-MP

International Nuclear Information System (INIS)

Kuba, D.W.; Moriarty, K.J.M.

1985-01-01

The Monte Carlo lattice gauge theory algorithm with the Metropolis et al. updating procedure is vectorized and multitasked on the four-processor CRAY X-MP, resulting in a code with a link-update time, in 64-bit arithmetic and with 10 hits per link, of 11.0 μs on a 16^4 lattice, the fastest link-update time achieved so far. The program calculates the Wilson loops of size up to L/2 x L/2 for an L^4 lattice for SU(3) gauge theory. (orig./HSI)
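The "hits per link" idea can be illustrated on a toy U(1) link; this is only a sketch of the Metropolis multi-hit pattern, nowhere near the paper's SU(3) matrices or CRAY vectorization. The local action S(θ) = -β cos(θ - α), with α standing in for the staple environment, is an assumption for illustration:

```python
import math
import random

rng = random.Random(0)

def metropolis_link_update(theta, beta, alpha, hits=10, eps=0.5):
    # Multi-hit Metropolis for one U(1) link with local action
    # S(theta) = -beta * cos(theta - alpha); several accept/reject
    # "hits" are applied before the sweep moves to the next link.
    for _ in range(hits):
        prop = theta + rng.uniform(-eps, eps)
        dS = -beta * (math.cos(prop - alpha) - math.cos(theta - alpha))
        if dS <= 0 or rng.random() < math.exp(-dS):
            theta = prop  # accept this hit
    return theta

# Toy check: with beta = 2 the link equilibrates around alpha
beta, alpha, theta = 2.0, 0.7, 0.0
samples = []
for _ in range(2000):
    theta = metropolis_link_update(theta, beta, alpha)
    samples.append(math.cos(theta - alpha))
mean_cos = sum(samples) / len(samples)
print(mean_cos)
```

Multiple hits per link amortize the (expensive) staple computation over several accept/reject steps, which is exactly why the hits-per-link count appears in the quoted link-update time.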

10. Affect Theory and Autoethnography in Ordinary Information Systems

DEFF Research Database (Denmark)

2016-01-01

This paper uses philosophical theories of affect as a lens for exploring autoethnographic renderings of everyday experience with information technology. Affect theories, in the paper, denote a broad trend in post-humanistic philosophy that explores sensation and feeling as emergent and relational...

11. Response to Patrick Love's "Informal Theory": A Rejoinder

Science.gov (United States)

Evans, Nancy J.; Guido, Florence M.

2012-01-01

This rejoinder to Patrick Love's article, "Informal Theory: The Ignored Link in Theory-to-Practice," which appears earlier in this issue of the "Journal of College Student Development", was written at the invitation of the Editor. In the critique, we point out the weaknesses of many of Love's arguments and propositions. We provide an alternative…

12. Generalized Net Model of the Cognitive and Neural Algorithm for Adaptive Resonance Theory 1

Directory of Open Access Journals (Sweden)

Todor Petkov

2013-12-01

Full Text Available Artificial neural networks are inspired by biological properties of human and animal brains. One type of neural network is called ART [4]. The abbreviation ART stands for Adaptive Resonance Theory, invented by Stephen Grossberg in 1976 [5]. ART represents a family of neural networks; it is a cognitive and neural theory that describes how the brain autonomously learns to categorize, recognize and predict objects and events in a changing world. In this paper we introduce a GN model that represents the ART1 neural network learning algorithm [1]. The purpose of this model is to explain when an input vector will be clustered into, or rejected by, the nodes of the network. It can also be used for explanation and optimization of the ART1 learning algorithm.
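The cluster-or-reject decision that the GN model describes can be sketched with a heavily simplified ART1-style procedure. This is a first-fit sketch under a vigilance test (real ART1 orders candidate nodes by a choice function before the vigilance check; the patterns and vigilance value here are made up):

```python
import numpy as np

def art1_cluster(patterns, vigilance=0.7):
    # Simplified ART1-style clustering of binary vectors: each committed node
    # stores a prototype; an input joins the first node whose match ratio
    # |input AND prototype| / |input| passes the vigilance test, and the
    # prototype shrinks to the intersection (fast learning). If no node
    # resonates, a new node is committed (the "reject" case).
    prototypes, labels = [], []
    for p in patterns:
        placed = False
        for j, proto in enumerate(prototypes):
            overlap = int(np.logical_and(p, proto).sum())
            if overlap / max(int(p.sum()), 1) >= vigilance:
                prototypes[j] = np.logical_and(p, proto)
                labels.append(j)
                placed = True
                break
        if not placed:
            prototypes.append(p.astype(bool))
            labels.append(len(prototypes) - 1)
    return labels, prototypes

patterns = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
labels, protos = art1_cluster(patterns)
print(labels)
```

The first two (identical) inputs resonate with the same node, while the third fails the vigilance test against the existing prototype and commits a new node.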

13. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

International Nuclear Information System (INIS)

Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

2011-01-01

The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. The algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into a linear one, exploiting properties of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and is especially suitable for high signal-to-noise systems.
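The "nonlinear to linear via trigonometry" step can be illustrated with generic three-step phase stepping (an assumption for illustration, not necessarily the authors' exact linearization): each measured intensity I_k = a + b cos(φ + ψ_k) is rewritten as a + u cos ψ_k + v sin ψ_k, which is linear in (a, u, v); attenuation relates to a, scattering to the visibility b = hypot(u, v), and refraction to the phase φ.

```python
import numpy as np

# Three phase steps psi_k = 2*pi*k/3. Each intensity is modeled as
#   I_k = a + b*cos(phi + psi_k) = a + u*cos(psi_k) + v*sin(psi_k),
# with u = b*cos(phi), v = -b*sin(phi) -- LINEAR in (a, u, v).
psi = 2 * np.pi * np.arange(3) / 3
M = np.column_stack([np.ones(3), np.cos(psi), np.sin(psi)])

def retrieve(I):
    # Solve the 3x3 linear system, then recover (attenuation-like a,
    # visibility b, refraction phase phi) from (a, u, v).
    a, u, v = np.linalg.solve(M, I)
    return float(a), float(np.hypot(u, v)), float(np.arctan2(-v, u))

# Synthetic check against known parameters
a0, b0, phi0 = 2.0, 0.5, 0.3
I = a0 + b0 * np.cos(phi0 + psi)
print(retrieve(I))
```

Three images suffice because the model has exactly three unknowns per pixel, which is the source of the dose reduction relative to many-step phase stepping.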

14. Generation Expansion Planning in pool market: A hybrid modified game theory and improved genetic algorithm

International Nuclear Information System (INIS)

Shayanfar, H.A.; Lahiji, A. Saliminia; Aghaei, J.; Rabiee, A.

2009-01-01

Unlike under the traditional policy, the Generation Expansion Planning (GEP) problem in a competitive framework is complicated. Under the new policy, each Generation Company (GENCO) decides to invest in such a way as to obtain as much profit as possible. This paper presents a new hybrid algorithm to determine GEP in a pool market. The proposed algorithm is divided into two programming levels: master and slave. At the master level a Modified Game Theory (MGT) approach is proposed to evaluate the conflict among GENCOs via the Independent System Operator (ISO). At the slave level, an Improved Genetic Algorithm (IGA) is used to find the best investment decision for each GENCO. The validity of the proposed method is examined in a case study with three GENCOs owning multiple types of power plants. The results show that the presented method is both satisfactory and consistent with expectations. (author)
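The slave level's search can be sketched with a plain genetic algorithm; this is a generic GA (tournament selection, one-point crossover, bit-flip mutation) standing in for the paper's IGA, and the "profit" function, bit encoding and all parameter values are toy assumptions:

```python
import random

rng = random.Random(3)

def profit(bits):
    # Toy concave "investment profit": peaks when exactly 8 capacity bits are set.
    x = sum(bits)
    return 16 * x - x * x

def genetic_algorithm(fitness, n_bits=16, pop_size=40, gens=60,
                      p_cx=0.9, p_mut=0.02):
    # Plain GA: tournament selection, one-point crossover, bit-flip mutation.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            child = p1
            if rng.random() < p_cx:
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            nxt.append([b ^ (rng.random() < p_mut) for b in child])
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(profit)
print(sum(best), profit(best))
```

In the paper's scheme each GENCO would run such a search over candidate expansion plans, with the master-level game-theoretic evaluation coordinating the competing solutions.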

15. Quantum computing: a technological challenge (Computacion Cuantica: un reto tecnologico)

Energy Technology Data Exchange (ETDEWEB)

Calixto, M.

2001-07-01

The new Quantum Information Theory augurs powerful machines that obey the entangled logic of the subatomic world. Parallelism, entanglement, teleportation, no-cloning and quantum cryptography are typical peculiarities of this novel way of understanding computation. (Author) 24 refs.

16. Information Processing Theories and the Education of the Gifted.

Science.gov (United States)

Rawl, Ruth K.; O'Tuel, Frances S.

1983-01-01

The basic assumptions of information processing theories in cognitive psychology are reviewed, and the application of this approach to problem solving in gifted education is considered. Specific implications are cited on problem selection and instruction giving. (CL)

17. Information theory and its application to optical communication

NARCIS (Netherlands)

Willems, F.M.J.

2017-01-01

The lecture focuses on the foundations of communication that were developed within the field of information theory. Enumerative shaping techniques and the so-called square-root transform will be discussed in detail.

18. An introduction to single-user information theory

CERN Document Server

2018-01-01

This book presents a succinct and mathematically rigorous treatment of the main pillars of Shannon’s information theory, discussing the fundamental concepts and indispensable results of Shannon’s mathematical theory of communications. It includes five meticulously written core chapters (with accompanying problems), emphasizing the key topics of information measures; lossless and lossy data compression; channel coding; and joint source-channel coding for single-user (point-to-point) communications systems. It also features two appendices covering necessary background material in real analysis and in probability theory and stochastic processes. The book is ideal for a one-semester foundational course on information theory for senior undergraduate and entry-level graduate students in mathematics, statistics, engineering, and computing and information sciences. A comprehensive instructor’s solutions manual is available.

19. Generalized information theory: aims, results, and open problems

International Nuclear Information System (INIS)

Klir, George J.

2004-01-01

The principal purpose of this paper is to present a comprehensive overview of generalized information theory (GIT): a research program whose objective is to develop a broad treatment of uncertainty-based information, not restricted to classical notions of uncertainty. After a brief overview of classical information theories, a broad framework for formalizing uncertainty and the associated uncertainty-based information of a great spectrum of conceivable types is sketched. The various theories of imprecise probabilities that have already been developed within this framework are then surveyed, focusing primarily on some important unifying principles applying to all these theories. This is followed by introducing two higher levels of the theories of imprecise probabilities: (i) the level of measuring the amount of relevant uncertainty (predictive, retrodictive, prescriptive, diagnostic, etc.) in any situation formalizable in each given theory, and (ii) the level of some methodological principles of uncertainty, which are contingent upon the capability to measure uncertainty and the associated uncertainty-based information. Various issues regarding both the measurement of uncertainty and the uncertainty principles are discussed. Again, the focus is on unifying principles applicable to all the theories. Finally, the current status of GIT is assessed and future research in the area is discussed.

20. From the social learning theory to a social learning algorithm for global optimization

OpenAIRE

Gong, Yue-Jiao; Zhang, Jun; Li, Yun

2014-01-01

Traditionally, the Evolutionary Computation (EC) paradigm is inspired by Darwinian evolution or the swarm intelligence of animals. Bandura's Social Learning Theory pointed out that the social learning behavior of humans indicates a high level of intelligence in nature. We found that such intelligence of human society can be implemented by numerical computing and be utilized in computational algorithms for solving optimization problems. In this paper, we design a novel and generic optimization...

1. Activity System Theory Approach to Healthcare Information System

OpenAIRE

Bai, Guohua

2004-01-01

A healthcare information system is a very complex system and has to be approached from systemic perspectives. This paper presents an Activity System Theory (ATS) approach that integrates systems thinking and social psychology. In the first part of the paper, activity system theory is presented; in particular, a recursive model of human activity systems is introduced. A project, 'Integrated Mobile Information System for Diabetic Healthcare (IMIS)', is then used to demonstrate a practical application of th...

2. Advancing Theory? Landscape Archaeology and Geographical Information Systems

Directory of Open Access Journals (Sweden)

Di Hu

2012-05-01

Full Text Available This paper will focus on how Geographical Information Systems (GIS) have been applied in Landscape Archaeology from the late 1980s to the present. GIS, a tool for organising and analysing spatial information, has exploded in popularity, but we still lack a systematic overview of how it has contributed to archaeological theory, specifically Landscape Archaeology. This paper will examine whether and how GIS has advanced archaeological theory through a historical review of its application in archaeology.

3. Aerosol Retrievals from Proposed Satellite Bistatic Lidar Observations: Algorithm and Information Content

Science.gov (United States)

Alexandrov, M. D.; Mishchenko, M. I.

2017-12-01

Accurate aerosol retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. We suggest addressing this ill-posedness by flying a bistatic lidar system. Such a system would consist of a formation-flying constellation: a primary satellite equipped with a conventional monostatic (backscattering) lidar and an additional platform hosting a receiver for the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar. Thus, bistatic lidar observations would be free of the deficiencies affecting both monostatic lidar measurements (caused by their highly limited information content) and passive photopolarimetric measurements (caused by vertical integration and surface reflection). We present a preliminary aerosol retrieval algorithm for a bistatic lidar system consisting of a high-spectral-resolution lidar (HSRL) and an additional receiver flown in formation with it at a scattering angle of 165 degrees. This algorithm was applied to synthetic data generated using Mie-theory computations. The model/retrieval parameters in our tests were the effective radius and variance of the aerosol size distribution, the complex refractive index of the particles, and their number concentration. Both mono- and bimodal aerosol mixtures were considered. Our algorithm allowed for a definitive evaluation of error propagation from measurements to retrievals using a Monte Carlo technique, which involves random distortion of the observations and statistical characterization of the resulting retrieval errors. Our tests demonstrated that supplementing a conventional monostatic HSRL with an additional receiver dramatically increases the information content of the measurements and allows for a sufficiently accurate characterization of tropospheric aerosols.

4. Applications of quantum information theory to quantum gravity

International Nuclear Information System (INIS)

Smolin, L.

2005-01-01

Full text: I describe work by and with Fotini Markopoulou and Olaf Dreyer on the application of quantum information theory to quantum gravity. A particular application to black hole physics is described, which treats the black hole horizon as an open system in interaction with an environment, the environment being the degrees of freedom in the bulk spacetime. This allows us to elucidate which quantum states of a general horizon contribute to the entropy of a Schwarzschild black hole. This case serves as an example of how methods from quantum information theory may help to elucidate how the classical limit emerges from a background-independent quantum theory of gravity. (author)

5. Agricultural Library Information Retrieval Based on Improved Semantic Algorithm

OpenAIRE

Meiling , Xie

2014-01-01

To support users in quickly accessing the information they need from the agricultural library's vast holdings, and to improve its unintelligent query service, a model for intelligent library information retrieval was constructed. The semantic web mode was introduced and the information retrieval framework was designed. The model structure consisted of three parts: information data integration, user interface and information retrieval matching. The key method supporting retr...

6. Entanglement dynamics in quantum information theory

Energy Technology Data Exchange (ETDEWEB)

Cubitt, T.S.

2007-03-29

This thesis contributes to the theory of entanglement dynamics, that is, the behaviour of entanglement in systems that are evolving with time. Progressively more complex multipartite systems are considered, starting with low-dimensional tripartite systems, whose entanglement dynamics can nonetheless display surprising properties, progressing through larger networks of interacting particles, and finishing with infinitely large lattice models. Firstly, what is perhaps the most basic question in entanglement dynamics is considered: what resources are necessary in order to create entanglement between distant particles? The answer is surprising: sending separable states between the parties is sufficient; entanglement can be created without it being carried by a "messenger" particle. The analogous result also holds in the continuous-time case: two particles interacting indirectly via a common ancilla particle can be entangled without the ancilla ever itself becoming entangled. The latter result appears to discount any notion of entanglement flow. However, for pure states, this intuitive idea can be recovered, and even made quantitative. A "bottleneck" inequality is derived that relates the entanglement rate of the end particles in a tripartite chain to the entanglement of the middle one. In particular, no entanglement can be created if the middle particle is not entangled. However, although this result can be applied to general interaction networks, it does not capture the full entanglement dynamics of these more complex systems. This is remedied by the derivation of entanglement rate equations, loosely analogous to the rate equations describing a chemical reaction. A complete set of rate equations for a system reflects the full structure of its interaction network, and can be used to prove a lower bound on the scaling with chain length of the time required to entangle the ends of a chain. Finally, in contrast with these more

7. Entanglement dynamics in quantum information theory

International Nuclear Information System (INIS)

Cubitt, T.S.

2007-01-01

This thesis contributes to the theory of entanglement dynamics, that is, the behaviour of entanglement in systems that are evolving with time. Progressively more complex multipartite systems are considered, starting with low-dimensional tripartite systems, whose entanglement dynamics can nonetheless display surprising properties, progressing through larger networks of interacting particles, and finishing with infinitely large lattice models. Firstly, what is perhaps the most basic question in entanglement dynamics is considered: what resources are necessary in order to create entanglement between distant particles? The answer is surprising: sending separable states between the parties is sufficient; entanglement can be created without it being carried by a "messenger" particle. The analogous result also holds in the continuous-time case: two particles interacting indirectly via a common ancilla particle can be entangled without the ancilla ever itself becoming entangled. The latter result appears to discount any notion of entanglement flow. However, for pure states, this intuitive idea can be recovered, and even made quantitative. A "bottleneck" inequality is derived that relates the entanglement rate of the end particles in a tripartite chain to the entanglement of the middle one. In particular, no entanglement can be created if the middle particle is not entangled. However, although this result can be applied to general interaction networks, it does not capture the full entanglement dynamics of these more complex systems. This is remedied by the derivation of entanglement rate equations, loosely analogous to the rate equations describing a chemical reaction. A complete set of rate equations for a system reflects the full structure of its interaction network, and can be used to prove a lower bound on the scaling with chain length of the time required to entangle the ends of a chain. Finally, in contrast with these more abstract results, the entanglement and

8. The g-theorem and quantum information theory

Energy Technology Data Exchange (ETDEWEB)

Casini, Horacio; Landea, Ignacio Salazar; Torroba, Gonzalo [Centro Atómico Bariloche and CONICET,S.C. de Bariloche, Río Negro, R8402AGP (Argentina)

2016-10-25

We study boundary renormalization group flows between boundary conformal field theories in 1+1 dimensions using methods of quantum information theory. We define an entropic g-function for theories with impurities in terms of the relative entanglement entropy, and we prove that this g-function decreases along boundary renormalization group flows. This entropic g-theorem is valid at zero temperature, and is independent of the g-theorem based on the thermal partition function. We also discuss the mutual information in boundary RG flows, and how it encodes the correlations between the impurity and bulk degrees of freedom. Our results provide a quantum-information understanding of (boundary) RG flow as an increase in distinguishability between the UV fixed point and the theory along the RG flow.

9. An information theory-based approach to modeling the information processing of NPP operators

International Nuclear Information System (INIS)

Kim, Jong Hyun; Seong, Poong Hyun

2002-01-01

This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model of the information processing of NPP operators and ii) quantifying the model. To resolve the problems of previous approaches based on information theory, i.e., the problems of single-channel approaches, we first develop a multi-stage information processing model containing information flows. Then the uncertainty of the information is quantified using Conant's model, a kind of information theory
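The basic quantity behind any such information-theoretic quantification is Shannon entropy, the average information per observation. A minimal sketch (the probability distributions are illustrative, not the paper's task data):

```python
import math

def entropy_bits(probs):
    # Shannon entropy H = -sum p * log2(p), in bits per observation
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A task stage whose four outcomes are equiprobable carries 2 bits per
# observation; a skewed outcome distribution carries less.
print(entropy_bits([0.25] * 4))
print(entropy_bits([0.7, 0.1, 0.1, 0.1]))
```

Summing such entropies (and the mutual informations between stages) over a multi-stage model is, in spirit, how the amount of information processed during a control task can be tallied.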

10. A molecular dynamics algorithm for simulation of field theories in the canonical ensemble

International Nuclear Information System (INIS)

Kogut, J.B.; Sinclair, D.K.

1986-01-01

We add a single scalar degree of freedom ("demon") to the microcanonical ensemble, which converts its molecular dynamics into a simulation method for the canonical ensemble (euclidean path integral) of the underlying field theory. This generalization of the microcanonical molecular dynamics algorithm simulates the field theory at fixed coupling with a completely deterministic procedure. We discuss the finite-size effects of the method, the equipartition theorem and ergodicity. The method is applied to the planar model in two dimensions and to SU(3) lattice gauge theory with four species of light, dynamical quarks in four dimensions. The method is much less sensitive to its discrete time step than conventional Langevin-equation simulations of the canonical ensemble. It is a straightforward generalization of a procedure introduced by S. Nosé for molecular physics. (orig.)

11. Fast filtering algorithm based on vibration systems and neural information exchange and its application to micro motion robot

International Nuclear Information System (INIS)

Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu

2014-01-01

This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are verified by comparing its filtering performance with that of various filtering methods, such as the fast wavelet transform algorithm, the particle filtering method and our previously developed single-degree-of-freedom vibration system filtering algorithm, in both simulations and practical tests. The comparisons indicate that a significant advantage of the proposed algorithm is its extremely fast filtering speed combined with good filtering performance. Further, the algorithm is applied to the navigation and positioning system of a micro motion robot, where signal preprocessing is subject to strict real-time requirements. The preprocessed data are then used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)

12. An information theory framework for dynamic functional domain connectivity.

Science.gov (United States)

Vergara, Victor M; Miller, Robyn; Calhoun, Vince

2017-06-01

Dynamic functional network connectivity (dFNC) analyzes the time evolution of coherent activity in the brain; in this technique dynamic changes are considered for the whole brain. This paper proposes an information theory framework to measure the information flowing among subsets of functional networks called functional domains. Our method aims at estimating the bits of information contained in and shared among domains. The succession of dynamic functional states is estimated at the domain level, and the information quantity is based on the probabilities of observing each dynamic state. A mutual information measurement is then obtained from probabilities across domains; we name this value the cross-domain mutual information (CDMI). Strong CDMIs were observed in relation to the subcortical domain. Domains related to sensory input, motor control and the cerebellum form another CDMI cluster. Information flow among other domains was seldom found. Other methods of dynamic connectivity focus on whole-brain dFNC matrices; in the current framework, information theory is applied to states estimated from pairs of multi-network functional domains. Identified CDMI clusters point to known information pathways in the basal ganglia and among areas of sensory input, patterns also found in static functional connectivity. In contrast, CDMI across brain areas of higher-level cognitive processing follows a different pattern that indicates scarce information sharing. These findings show that employing information theory to formally measure information flow through brain domains reveals additional features of functional connectivity.
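Once each domain's dynamics has been reduced to a sequence of discrete state labels, the CDMI is an ordinary mutual information estimated from co-occurrence probabilities. A sketch (the state sequences are toy data, and the estimator is the plain plug-in estimate, not necessarily the authors' exact pipeline):

```python
import numpy as np

def cross_domain_mutual_information(states_a, states_b):
    # Estimate I(A;B) in bits from two aligned sequences of discrete
    # dynamic-state labels, one per functional domain.
    a, b = np.asarray(states_a), np.asarray(states_b)
    joint = np.zeros((int(a.max()) + 1, int(b.max()) + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1                    # co-occurrence counts
    pxy = joint / joint.sum()               # joint state probabilities
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Two domains whose states are perfectly coupled share 1 bit;
# independent-looking sequences share ~0 bits.
print(cross_domain_mutual_information([0, 1, 0, 1], [1, 0, 1, 0]))
print(cross_domain_mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))
```

High CDMI between two domains then indicates that knowing one domain's current dynamic state strongly constrains the other's, which is the sense of "information flow" used in the abstract.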

13. Planting contemporary practice theory in the garden of information science

NARCIS (Netherlands)

Huizing, A.; Cavanagh, M.

2011-01-01

Introduction. The purpose of this paper is to introduce to information science in a coherent fashion the core premises of contemporary practice theory, and thus to engage the information research community in further debate and discussion. Method. Contemporary practice-based approaches are

14. Year 7 Students, Information Literacy, and Transfer: A Grounded Theory

Science.gov (United States)

Herring, James E.

2011-01-01

This study examined the views of year 7 students, teacher librarians, and teachers in three state secondary schools in rural New South Wales, Australia, on information literacy and transfer. The aims of the study included the development of a grounded theory in relation to information literacy and transfer in these schools. The study's perspective…

15. Information processing theory in the early design stages

DEFF Research Database (Denmark)

Cash, Philip; Kreye, Melanie

2014-01-01

...suggestions for improvements and support. One theory that may be particularly applicable to the early design stages is Information Processing Theory (IPT), as it is linked to the design process with regard to the key concepts considered. IPT states that designers search for information if they perceive uncertainty with regard to the knowledge necessary to solve a design challenge. They then process this information and compare whether the new knowledge they have gained covers the previous knowledge gap. In engineering design, uncertainty plays a key role, particularly in the early design stages. The new knowledge is shared within the design team to reduce ambiguity with regard to its meaning and to build a shared understanding, reducing perceived uncertainty. Thus, we propose that Information Processing Theory is suitable to describe designer activity in the early design stages...

16. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

Science.gov (United States)

Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

2013-01-01

The use of Geographic Information Systems has increased considerably since the eighties and nineties. One of their most demanding applications is shortest path search. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited for shortest path search in large graphs. This is why various authors have proposed modifications to Dijkstra's algorithm that use heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm that operates on reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path with the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
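
The baseline the article compares against is classic Dijkstra with a priority queue. A minimal sketch on a toy road network (this is textbook Dijkstra, not the paper's reduced-graph modification):

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra: graph maps node -> list of (neighbor, weight) edges."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road network (edge weights = distances)
g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

A* differs only in ordering the heap by `d + h(v)` for an admissible heuristic `h`, which shrinks the explored search space without changing the cost of the path returned.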

17. Algorithms

ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

18. Comparative analysis of different variants of the Uzawa algorithm in problems of the theory of elasticity for incompressible materials

Directory of Open Access Journals (Sweden)

Nikita E. Styopin

2016-09-01

Full Text Available Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
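
For readers unfamiliar with the method, the classical Uzawa iteration for the saddle-point systems arising from incompressible problems can be sketched as follows (a generic textbook form on a toy system; the step size `tau` and the test matrices are illustrative assumptions, not the paper's modification):

```python
import numpy as np

def uzawa(A, B, f, g, tau=1.0, tol=1e-10, max_iter=500):
    """Basic Uzawa iteration for the saddle system  A u + B^T p = f,  B u = g."""
    p = np.zeros(B.shape[0])
    u = np.zeros(A.shape[0])
    for _ in range(max_iter):
        u = np.linalg.solve(A, f - B.T @ p)   # displacement update (inner solve)
        r = B @ u - g                         # incompressibility residual
        if np.linalg.norm(r) < tol:
            break
        p = p + tau * r                       # pressure (multiplier) update
    return u, p

# Small symmetric positive-definite test problem with one constraint
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 1.0]])
f = np.array([1.0, 2.0])
g = np.array([0.0])
u, p = uzawa(A, B, f, g)
print(u, p)  # converges to u = [-0.2, 0.2], p = [1.6]
```

The convergence rate is governed by the Schur complement B A⁻¹ Bᵀ, which is why preconditioned or modified variants, such as the one proposed in the paper, are of practical interest for large finite-element systems.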

19. A Survey of Stemming Algorithms in Information Retrieval

Science.gov (United States)

Moral, Cristian; de Antonio, Angélica; Imbert, Ricardo; Ramírez, Jaime

2014-01-01

Background: During the last fifty years, improved information retrieval techniques have become necessary because of the huge amount of information people have available, which continues to increase rapidly due to the use of new technologies and the Internet. Stemming is one of the processes that can improve information retrieval in terms of…

20. Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics

Science.gov (United States)

Wolpert, David H.

2005-01-01

A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information-theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.
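
In maximum-entropy formulations of this kind, a bounded-rational player's mixed strategy is a Boltzmann distribution over moves given expected payoffs, with a rationality parameter interpolating between uniform play and a full best response. A minimal sketch of that single-player response step (the payoff values are illustrative; the full PD machinery is not reproduced):

```python
import numpy as np

def boltzmann_response(expected_payoffs, beta):
    """Bounded-rational mixed strategy: Boltzmann distribution over moves.
    beta -> infinity recovers full rationality (pure best response);
    beta = 0 gives uniform (zero-rationality) play."""
    z = beta * np.asarray(expected_payoffs, dtype=float)
    z -= z.max()                 # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

payoffs = [1.0, 2.0, 0.5]
print(boltzmann_response(payoffs, beta=0.0))   # uniform: beta = 0 ignores payoffs
print(boltzmann_response(payoffs, beta=10.0))  # mass concentrates on the best move
```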

1. Rational hybrid Monte Carlo algorithm for theories with unknown spectral bounds

International Nuclear Information System (INIS)

Kogut, J. B.; Sinclair, D. K.

2006-01-01

The Rational Hybrid Monte Carlo (RHMC) algorithm extends the Hybrid Monte Carlo algorithm for lattice QCD simulations to situations involving fractional powers of the determinant of the quadratic Dirac operator. This avoids the updating increment (dt) dependence of observables which plagues the Hybrid Molecular-dynamics (HMD) method. The RHMC algorithm uses rational approximations to fractional powers of the quadratic Dirac operator. Such approximations are only available when positive upper and lower bounds to the operator's spectrum are known. We apply the RHMC algorithm to simulations of two theories for which a positive lower spectral bound is unknown: lattice QCD with staggered quarks at finite isospin chemical potential and lattice QCD with massless staggered quarks and chiral 4-fermion interactions (χQCD). A choice of lower bound is made in each case, and the properties of the RHMC simulations these define are studied. Justification of our choices of lower bounds is made by comparing measurements with those from HMD simulations, and by comparing different choices of lower bounds.

2. Intercept Algorithm for Maneuvering Targets Based on Differential Geometry and Lyapunov Theory

Directory of Open Access Journals (Sweden)

Yunes Sh. ALQUDSI

2018-03-01

Full Text Available Nowadays, homing guidance is utilized in existing and under-development air defense systems (ADS) to intercept targets effectively. Targets have become smarter and capable of flying and maneuvering professionally, and the tendency to design missiles with small warheads has grown, so there is pressure to produce more precise and accurate missile guidance systems based on intelligent algorithms to ensure effective interception of highly maneuverable targets. The aim of this paper is to present an intelligent guidance algorithm that effectively and precisely intercepts maneuverable and smart targets by virtue of differential geometry (DG) concepts. The intercept geometry and engagement kinematics, in addition to the direct intercept condition, are developed and expressed in DG terms. The guidance algorithm is then developed by virtue of DG and Lyapunov theory. The study concludes with a 2D engagement simulation and illustrative examples, demonstrating that the derived DG guidance algorithm is a generalized guidance approach of which the well-known proportional navigation (PN) guidance law is a subset.

3. Fast clustering algorithm for large ECG data sets based on CS theory in combination with PCA and K-NN methods.

Science.gov (United States)

2014-01-01

Long-term recording of electrocardiogram (ECG) signals plays an important role in health care systems for the diagnosis and treatment of heart diseases. Clustering and classification of the collected data are essential for detecting the concealed information of P-QRS-T waves in long-term ECG recordings. Currently used algorithms have their share of drawbacks: 1) clustering and classification cannot be done in real time; 2) they suffer from huge energy consumption and sampling load. These drawbacks motivated us to develop a novel optimized clustering algorithm which can easily scan large ECG datasets, enabling low-power long-term ECG recording. In this paper, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory as a random sampling procedure. Two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC), followed by classification with the K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers, are then applied to the proposed algorithm. We show that our algorithm based on PCA features in combination with the K-NN classifier performs better than the other methods, outperforming existing algorithms by an 11% increase in classification accuracy. In addition, the proposed algorithm achieves classification accuracies for the K-NN and PNN classifiers, and a Receiver Operating Characteristic (ROC) area, of 99.98%, 99.83%, and 99.75%, respectively.
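
The pipeline structure described, random (CS-style) measurement, dimensionality reduction, then nearest-neighbour classification, can be sketched with synthetic stand-in waveforms (the data, dimensions, and projection sizes below are illustrative assumptions, not the paper's ECG setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "beats": two classes of noisy 64-sample waveforms (synthetic stand-in for ECG)
t = np.linspace(0, 1, 64)
make = lambda f, n: np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal((n, 64))
X = np.vstack([make(3, 40), make(5, 40)])
y = np.array([0] * 40 + [1] * 40)

# 1) CS-style random sampling: compress 64 samples to 16 random measurements
Phi = rng.standard_normal((16, 64)) / np.sqrt(16)
Xc = X @ Phi.T

# 2) PCA via SVD, keeping 4 principal components
Xm = Xc - Xc.mean(axis=0)
_, _, Vt = np.linalg.svd(Xm, full_matrices=False)
Xp = Xm @ Vt[:4].T

# 3) 1-NN classification, leave-one-out
def knn1(i):
    d = np.linalg.norm(Xp - Xp[i], axis=1)
    d[i] = np.inf  # exclude the query point itself
    return y[np.argmin(d)]

acc = np.mean([knn1(i) == y[i] for i in range(len(y))])
print(f"leave-one-out accuracy: {acc:.2f}")
```

The point of the random projection is that classification can run on far fewer measurements than raw samples, which is what enables the low-power recording the abstract targets.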

4. Modeling Routinization in Games: An Information Theory Approach

DEFF Research Database (Denmark)

Wallner, Simon; Pichlmair, Martin; Hecher, Michael

2015-01-01

Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented…
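
The modelling idea, train a discrete-time, discrete-space Markov chain on player actions and use an information-theoretic error between model and interaction, can be sketched as follows (the action sequences and the surprisal-based error measure are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def fit_markov(actions, n_states):
    """Estimate a first-order Markov transition matrix from an action sequence
    (add-one smoothing keeps unseen transitions at nonzero probability)."""
    counts = np.ones((n_states, n_states))
    for a, b in zip(actions, actions[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def prediction_error(P, actions):
    """Mean surprisal (bits) of observed transitions under the model:
    low values indicate routinized, predictable play."""
    logs = [np.log2(P[a, b]) for a, b in zip(actions, actions[1:])]
    return -np.mean(logs)

routine = [0, 1, 2, 0, 1, 2] * 20                              # routinized play
random_play = list(np.random.default_rng(1).integers(0, 3, 120))  # erratic play
P = fit_markov(routine, 3)
print(prediction_error(P, routine))      # low: the trained model predicts well
print(prediction_error(P, random_play))  # high: the model is surprised
```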

5. Mean field theory of EM algorithm for Bayesian grey scale image restoration

International Nuclear Information System (INIS)

Inoue, Jun-ichi; Tanaka, Kazuyuki

2003-01-01

The EM algorithm for the Bayesian grey scale image restoration is investigated in the framework of the mean field theory. Our model system is identical to the infinite range random field Q-Ising model. The maximum marginal likelihood method is applied to the determination of hyper-parameters. We calculate both the data-averaged mean square error between the original image and its maximizer of posterior marginal estimate, and the data-averaged marginal likelihood function exactly. After evaluating the hyper-parameter dependence of the data-averaged marginal likelihood function, we derive the EM algorithm which updates the hyper-parameters to obtain the maximum likelihood estimate analytically. The time evolutions of the hyper-parameters and so-called Q function are obtained. The relation between the speed of convergence of the hyper-parameters and the shape of the Q function is explained from the viewpoint of dynamics

6. A New Recommendation Algorithm Based on User’s Dynamic Information in Complex Social Network

Directory of Open Access Journals (Sweden)

Jiujun Cheng

2015-01-01

Full Text Available The development of recommendation systems comes with research on data sparsity, cold start, scalability, and privacy protection problems. Even though many papers have proposed improved recommendation algorithms to solve those problems, there is still plenty of room for improvement. In a complex social network, we can take full advantage of dynamic information such as a user's hobbies, social relationships, and historical logs to improve the performance of a recommendation system. In this paper, we propose a new recommendation algorithm based on social users' dynamic information to solve the cold start problem of the traditional collaborative filtering algorithm while also considering dynamic factors. The algorithm takes into account users' response information, dynamic interest, and the classic similarity measurement of the collaborative filtering algorithm. We then compare the proposed algorithm with the traditional user-based collaborative filtering algorithm and present findings from the experiments. The experimental results demonstrate that the proposed algorithm has better recommendation performance than the collaborative filtering algorithm in the cold start scenario.
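
One way to picture the approach is user-based collaborative filtering whose similarity blends rating overlap with a profile (dynamic-information) similarity, so cold-start users with few ratings can still be matched through their profiles. The blending formula and toy data below are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def predict(ratings, profiles, user, item, alpha=0.5):
    """Score an item for a user. `ratings` is users x items (0 = unrated);
    `profiles` holds per-user interest vectors (the "dynamic information").
    alpha weights rating-based vs profile-based similarity (assumed form)."""
    def cos(a, b):
        n = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / n) if n else 0.0
    score, weight = 0.0, 0.0
    for v in range(ratings.shape[0]):
        if v == user or ratings[v, item] == 0:
            continue  # skip self and users who have not rated the item
        sim = alpha * cos(ratings[user], ratings[v]) \
            + (1 - alpha) * cos(profiles[user], profiles[v])
        score += sim * ratings[v, item]
        weight += abs(sim)
    return score / weight if weight else 0.0

R = np.array([[5, 3, 0], [4, 0, 4], [1, 1, 5]], dtype=float)
P = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)  # interest profiles
print(predict(R, P, user=0, item=2))  # leans toward user 1 (similar profile)
```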

7. The use of information theory in evolutionary biology.

Science.gov (United States)

2012-05-01

Information is a key concept in evolutionary biology. Information stored in a biological organism's genome is used to generate the organism and to maintain and control it. Information is also that which evolves. When a population adapts to a local environment, information about this environment is fixed in a representative genome. However, when an environment changes, information can be lost. At the same time, information is processed by animal brains to survive in complex environments, and the capacity for information processing also evolves. Here, I review applications of information theory to the evolution of proteins and to the evolution of information processing in simulated agents that adapt to perform a complex task. © 2012 New York Academy of Sciences.
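
A standard example of applying information theory to sequence evolution is the per-site information content of an alignment column, R = log2(|alphabet|) − H: sites fixed by the environment are conserved and carry maximal information. A minimal sketch with a toy DNA alignment (the sequences are illustrative):

```python
import math
from collections import Counter

def site_information(column, alphabet_size=4):
    """Information content (bits) of one alignment column:
    R = log2(|alphabet|) - H(column), with H the Shannon entropy."""
    counts = Counter(column)
    n = sum(counts.values())
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return math.log2(alphabet_size) - H

conserved = "AAAAAAAA"  # fully conserved site: maximal information
variable = "ACGTACGT"   # uniformly variable site: no information
print(site_information(conserved))  # 2.0 bits
print(site_information(variable))   # 0.0 bits
```

For proteins the same formula applies with a 20-letter alphabet, giving a maximum of log2(20) ≈ 4.32 bits per site.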

8. Nonequilibrium thermodynamics and information theory: basic concepts and relaxing dynamics

International Nuclear Information System (INIS)

Altaner, Bernhard

2017-01-01

Thermodynamics is based on the notions of energy and entropy. While energy is the elementary quantity governing physical dynamics, entropy is the fundamental concept in information theory. In this work, starting from first principles, we give a detailed didactic account on the relations between energy and entropy and thus physics and information theory. We show that thermodynamic process inequalities, like the second law, are equivalent to the requirement that an effective description for physical dynamics is strongly relaxing. From the perspective of information theory, strongly relaxing dynamics govern the irreversible convergence of a statistical ensemble towards the maximally non-committal probability distribution that is compatible with thermodynamic equilibrium parameters. In particular, Markov processes that converge to a thermodynamic equilibrium state are strongly relaxing. Our framework generalizes previous results to arbitrary open and driven systems, yielding novel thermodynamic bounds for idealized and real processes. (paper)

9. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

Science.gov (United States)

Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

2013-02-01

Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data without the necessity to assume a noise model. These methods have been previously combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of…

10. Spacecraft TT&C and information transmission theory and technologies

CERN Document Server

Liu, Jiaxing

2015-01-01

Spacecraft TT&C and Information Transmission Theory and Technologies introduces the basic theory of spacecraft TT&C (telemetry, track and command) and information transmission. Combining TT&C and information transmission, the book presents several technologies for continuous wave radar including measurements for range, range rate and angle, analog and digital information transmissions, telecommand, telemetry, remote sensing and spread spectrum TT&C. For special problems occurring in the channels for TT&C and information transmission, the book presents radio propagation features and their impact on orbit measurement accuracy, the effects caused by rain attenuation, atmospheric attenuation and multi-path effects, and polarization composition technology. This book can benefit researchers and engineers in the field of spacecraft TT&C and communication systems. Liu Jiaxing is a professor at The 10th Institute of China Electronics Technology Group Corporation.

11. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

Science.gov (United States)

Impelluso, Thomas J

2003-06-01

An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective strain based analysis to redistribute density. General re-distribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.

12. Genetic Algorithm and Graph Theory Based Matrix Factorization Method for Online Friend Recommendation

Directory of Open Access Journals (Sweden)

Qu Li

2014-01-01

Full Text Available Online friend recommendation is a fast-developing topic in web mining. In this paper, we use SVD matrix factorization to model user and item feature vectors and stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold start problem and data sparsity, we use a KNN model to influence the user feature vectors. At the same time, we use graph theory to partition communities with fairly low time and space complexity. What is more, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
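
The SVD-style matrix factorization trained with stochastic gradient descent can be sketched as follows (the toy ratings matrix and hyperparameters are illustrative assumptions; the KNN and community-partition components of the hybrid method are not reproduced):

```python
import numpy as np

def mf_sgd(R, k=2, lr=0.01, reg=0.02, epochs=2000, seed=0):
    """Factorize R ~= U @ V.T by SGD over the observed (nonzero) entries,
    with L2 regularization on the latent factors."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    obs = [(u, i) for u in range(n_users) for i in range(n_items) if R[u, i] > 0]
    for _ in range(epochs):
        for u, i in obs:
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])  # gradient step on user factor
            V[i] += lr * (err * U[u] - reg * V[i])  # gradient step on item factor
    return U, V

R = np.array([[5, 3, 0, 1], [4, 0, 0, 1], [1, 1, 0, 5], [1, 0, 0, 4]], dtype=float)
U, V = mf_sgd(R)
rmse = np.sqrt(np.mean([(R[u, i] - U[u] @ V[i]) ** 2
                        for u in range(4) for i in range(4) if R[u, i] > 0]))
print(f"training RMSE on observed entries: {rmse:.3f}")
```

The zero entries are treated as unobserved rather than as zero ratings, which is what makes factorization usable on the sparse matrices typical of friend recommendation.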

13. Holledge gauge failure testing using concurrent information processing algorithm

International Nuclear Information System (INIS)

Weeks, G.E.; Daniel, W.E.; Edwards, R.E.; Jannarone, R.J.; Joshi, S.N.; Palakodety, S.S.; Qian, D.

1996-01-01

For several decades, computerized information processing systems and human information processing models have developed with a good deal of mutual influence. Any comprehensive psychology text in this decade uses terms that originated in the computer industry, such as "cache" and "memory", to describe human information processing. Likewise, many engineers today are using "artificial intelligence" and "artificial neural network" computing tools that originated as models of human thought to solve industrial problems. This paper concerns a recently developed human information processing model, called "concurrent information processing" (CIP), and a related set of computing tools for solving industrial problems. The problem of focus is adaptive gauge monitoring; the application is pneumatic pressure repeaters (Holledge gauges) used to measure liquid level and density in the Defense Waste Processing Facility and the Integrated DWPF Melter System.

14. Probability and information theory, with applications to radar

CERN Document Server

Woodward, P M; Higinbotham, W

1964-01-01

Electronics and Instrumentation, Second Edition, Volume 3: Probability and Information Theory with Applications to Radar provides information pertinent to the development on research carried out in electronics and applied physics. This book presents the established mathematical techniques that provide the code in which so much of the mathematical theory of electronics and radar is expressed. Organized into eight chapters, this edition begins with an overview of the geometry of probability distributions in which moments play a significant role. This text then examines the mathematical methods in…

15. Algorithms

algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

16. Algorithms

algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

17. Information Optics and Photonics Algorithms, Systems, and Applications

CERN Document Server

Javidi, Bahram

2010-01-01

This book addresses applications, recent advances, and emerging areas in fields with applications in information optics and photonics systems. The objective of this book is to illustrate and discuss novel approaches, analytical techniques, models, and technologies that enhance sensing, measurement, processing, interpretation, and visualization of information using free space optics and photonics. The material in this book concentrates on integration of diverse fields for cross-disciplinary applications including bio-photonics, digitally enhanced sensing and imaging systems, multi-dimensional optical imaging and image processing, bio-inspired imaging, 3D visualization, 3D displays, imaging on the nano-scale, quantum optics, super resolution imaging, photonics for biological applications, and holographic information systems. As a result, this book is a useful resource for researchers, engineers, and graduate students who work in the diverse fields comprising information optics and photonics.

18. e-DMDAV: A new privacy preserving algorithm for wearable enterprise information systems

Science.gov (United States)

Zhang, Zhenjiang; Wang, Xiaoni; Uden, Lorna; Zhang, Peng; Zhao, Yingsi

2018-04-01

Wearable devices have been widely used in many fields to improve the quality of people's lives. More and more data on individuals and businesses are collected by statistical organizations through those devices. Almost all of this data holds confidential information. Statistical Disclosure Control (SDC) seeks to protect statistical data in such a way that it can be released without giving away confidential information that can be linked to specific individuals or entities. The MDAV (Maximum Distance to Average Vector) algorithm is an efficient micro-aggregation algorithm belonging to SDC. However, the MDAV algorithm cannot survive homogeneity and background-knowledge attacks because it was designed for static numerical data. This paper proposes a systematic dynamic-updating anonymity algorithm based on MDAV, called the e-DMDAV algorithm. This algorithm introduces a new parameter and a table to ensure that, for the k records in each cluster, the range of distinct values is no less than e, for both numerical and non-numerical datasets. The new algorithm has been evaluated and compared with the MDAV algorithm. The simulation results show that the new algorithm outperforms MDAV in terms of minimizing distortion and disclosure risk with a similar computational cost.
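
The underlying MDAV micro-aggregation step, on which e-DMDAV builds, can be sketched as follows (standard MDAV for numerical data; the e-DMDAV parameter and dynamic-updating table are not reproduced, and the toy records are illustrative):

```python
import numpy as np

def mdav(X, k):
    """MDAV micro-aggregation: partition records into clusters of size >= k,
    then replace each record by its cluster centroid (the anonymized data)."""
    idx = list(range(len(X)))
    clusters = []
    while len(idx) >= 3 * k:
        c = X[idx].mean(axis=0)
        r = max(idx, key=lambda i: np.linalg.norm(X[i] - c))    # farthest from centroid
        s = max(idx, key=lambda i: np.linalg.norm(X[i] - X[r]))  # farthest from r
        for p in (r, s):
            group = sorted(idx, key=lambda i: np.linalg.norm(X[i] - X[p]))[:k]
            clusters.append(group)
            idx = [i for i in idx if i not in group]
    if len(idx) >= 2 * k:
        c = X[idx].mean(axis=0)
        r = max(idx, key=lambda i: np.linalg.norm(X[i] - c))
        group = sorted(idx, key=lambda i: np.linalg.norm(X[i] - X[r]))[:k]
        clusters.append(group)
        idx = [i for i in idx if i not in group]
    clusters.append(idx)  # leftover records form the last cluster
    out = X.astype(float).copy()
    for g in clusters:
        out[g] = X[g].mean(axis=0)
    return out

X = np.array([[1.0], [1.1], [1.2], [9.0], [9.1], [9.2], [5.0]])
print(mdav(X, k=3))  # each record replaced by its cluster's average
```

Because every released value is shared by at least k records, an attacker cannot single out an individual from the aggregates alone; the homogeneity attacks mentioned in the abstract exploit clusters whose k records happen to share the same sensitive value, which is what the e parameter addresses.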

19. Theory of information warfare: basic framework, methodology and conceptual apparatus

Directory of Open Access Journals (Sweden)

Олександр Васильович Курбан

2015-11-01

Full Text Available A comprehensive theoretical study is conducted to determine the basic provisions of the modern theory of information warfare in on-line social networks. Three basic blocks that systematize the theoretical and methodological basis of the topic are established: information and psychological warfare, social off-line networks, and social on-line networks. According to these three blocks, theoretical concepts are defined and a methodological substantiation of information processes within information warfare in social on-line networks is formed.

20. NIFTY - Numerical Information Field Theory. A versatile PYTHON library for signal inference

Science.gov (United States)

Selig, M.; Bell, M. R.; Junklewitz, H.; Oppermann, N.; Reinecke, M.; Greiner, M.; Pachajoa, C.; Enßlin, T. A.

2013-06-01

NIFTy (Numerical Information Field Theory) is a software package designed to enable the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTy offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. Thereby, the correct normalization of operations on fields is taken care of automatically without concerning the user. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTy permits its user to rapidly prototype algorithms in 1D, and then apply the developed code in higher-dimensional settings of real world problems. The set of spaces on which NIFTy operates comprises point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those. The functionality and diversity of the package is demonstrated by a Wiener filter code example that successfully runs without modification regardless of the space on which the inference problem is defined. NIFTy homepage http://www.mpa-garching.mpg.de/ift/nifty/; Excerpts of this paper are part of the NIFTy source code and documentation.
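
The kind of Wiener filter demonstration the NIFTy paper describes can be sketched in plain NumPy on a 1D grid (this is a generic Wiener filter, not NIFTy's grid-abstracted API; the prior, response, and noise choices below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Data model: d = R s + n, with signal prior covariance S and noise covariance N.
# The Wiener filter is the posterior mean; in data space it reads
#   m = S R^T (R S R^T + N)^{-1} d,
# equivalent to m = D j with D = (S^{-1} + R^T N^{-1} R)^{-1} and j = R^T N^{-1} d.
npix = 64
x = np.arange(npix)
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)  # smooth signal prior
R = np.eye(npix)                                           # unit response
N = 0.1 * np.eye(npix)                                     # white noise covariance

s = np.linalg.cholesky(S + 1e-8 * np.eye(npix)) @ rng.standard_normal(npix)
d = R @ s + np.sqrt(0.1) * rng.standard_normal(npix)

m = S @ R.T @ np.linalg.solve(R @ S @ R.T + N, d)          # Wiener filter estimate

print(np.mean((d - s) ** 2), np.mean((m - s) ** 2))        # filtering lowers the error
```

NIFTy's contribution is that the same inference code runs unchanged when the regular grid is swapped for, say, a sphere, because fields and operators carry their own discretization and normalization.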

1. Observational information for f(T) theories and dark torsion

Energy Technology Data Exchange (ETDEWEB)

Bengochea, Gabriel R., E-mail: gabriel@iafe.uba.a [Instituto de Astronomia y Fisica del Espacio (IAFE), CC 67, Suc. 28, 1428 Buenos Aires (Argentina)

2011-01-17

In the present work we analyze and compare the information coming from different observational data sets in the context of a class of f(T) theories. We perform a joint analysis with measurements of the most recent type Ia supernovae (SNe Ia), Baryon Acoustic Oscillations (BAO), Cosmic Microwave Background radiation (CMB), Gamma-Ray Burst data (GRBs) and Hubble parameter observations (OHD) to constrain the only new parameter these theories have. It is shown that when the new combined BAO/CMB parameter is used to put constraints, the result is different from previous works. We also show that when we include Observational Hubble Data (OHD), the simpler ΛCDM model is excluded at the one-sigma level, leading the effective equation of state of these theories to be of phantom type. Also, analyzing a tension criterion for SNe Ia and other observational sets, we obtain more consistent and better-suited data sets to work with these theories.

2. Automated Physico-Chemical Cell Model Development through Information Theory

Energy Technology Data Exchange (ETDEWEB)

Peter J. Ortoleva

2005-11-29

The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

3. Comparison of Predictive Contract Mechanisms from an Information Theory Perspective

OpenAIRE

Zhang, Xin; Ward, Tomas; McLoone, Seamus

2012-01-01

Inconsistency arises across a Distributed Virtual Environment due to network latency induced by state changes communications. Predictive Contract Mechanisms (PCMs) combat this problem through reducing the amount of messages transmitted in return for perceptually tolerable inconsistency. To date there are no methods to quantify the efficiency of PCMs in communicating this reduced state information. This article presents an approach derived from concepts in information theory for a dee...

4. Basic Knowledge for Market Principle: Approaches to the Price Coordination Mechanism by Using Optimization Theory and Algorithm

Science.gov (United States)

Aiyoshi, Eitaro; Masuda, Kazuaki

On the basis of market fundamentalism, new types of social systems with the market mechanism such as electricity trading markets and carbon dioxide (CO2) emission trading markets have been developed. However, there are few textbooks in science and technology which present the explanation that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge on optimization theory and algorithms related to economics and to utilize them for designing the mechanism of more complicated markets.
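
The point that Lagrange multipliers can be interpreted as market prices can be made concrete with a tiny resource-allocation market solved by steepest ascent on the dual: each agent best-responds to a posted price, and the price rises while demand exceeds supply. The utilities and step size below are illustrative assumptions:

```python
import numpy as np

# Allocate capacity C among agents with utilities u_i(x) = a_i * log(x).
# At price p, agent i demands x_i = a_i / p (maximizing a_i*log(x) - p*x).
# The dual gradient step adjusts the price by the excess demand, i.e. the
# Lagrange multiplier is discovered by exactly the market's price mechanism.
a = np.array([1.0, 2.0, 3.0])  # agents' utility weights
C = 6.0                        # total capacity (the shared constraint)
p = 0.5                        # initial price guess
for _ in range(200):
    demand = a / p                    # each agent's best response to the price
    p += 0.05 * (demand.sum() - C)    # price adjustment (dual gradient step)

print(round(p, 4))         # converges to the equilibrium price sum(a)/C = 1.0
print(np.round(a / p, 4))  # market-clearing allocation [1, 2, 3]
```

At convergence the price equals the Lagrange multiplier of the capacity constraint, and total demand exactly clears the supply, which is the correspondence the tutorial explains.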

5. Improved Inverse Kinematics Algorithm Using Screw Theory for a Six-DOF Robot Manipulator

Directory of Open Access Journals (Sweden)

Qingcheng Chen

2015-10-01

Full Text Available Based on screw theory, a novel improved inverse-kinematics approach for a type of six-DOF serial robot, "Qianjiang I", is proposed in this paper. The common kinematics model of the robot is based on the Denavit-Hartenberg (D-H) notation method, but its inverse kinematics involves inefficient calculation and a complicated solution, which cannot meet the demands of online real-time application. To solve this problem, this paper presents a new method to improve the efficiency of the inverse kinematics solution by introducing screw theory. Unlike other methods, the proposed method establishes only two coordinate frames, namely the inertial frame and the tool frame; the screw motion of each link is carried out based on the inertial frame, ensuring definite geometric meaning. Furthermore, we adopt a new inverse kinematics algorithm, developing an improved sub-problem method along with the Paden-Kahan sub-problems. This method has high efficiency and can be applied in real-time industrial operation. It is convenient to select the desired solutions directly from among multiple solutions by examining their clear geometric meaning. Finally, the effectiveness and reliability of the new algorithm are analysed and verified in comparative experiments carried out on the six-DOF serial robot "Qianjiang I".

6. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

International Nuclear Information System (INIS)

Chao, Kuei-Hsiang

2014-01-01

Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in tracking process are improved. • The proposed MPPT can automatically tune tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. Integrating the extension theory as well as the conventional perturb and observe method, a maximum power point (MPP) tracker is made able to automatically tune the tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach is proposed on the basis of a particle swarm optimization (PSO) algorithm for complexity reduction in the determination of the weighting values. At the end of this work, a simulated improvement in the tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller
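
The PSO component used to determine the extension-theory weighting values follows the standard particle-swarm update; a generic sketch on a test function (not the paper's MPPT objective or its actual weighting-value search space):

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each velocity blends inertia, a pull
    toward the particle's personal best, and a pull toward the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # keep particles inside the bounds
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Sphere function: global minimum 0 at the origin
g_best, f_best = pso(lambda p: float(np.sum(p ** 2)),
                     (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(f_best)  # close to 0, the global minimum
```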

7. Density functional theory and evolution algorithm calculations of elastic properties of AlON

Energy Technology Data Exchange (ETDEWEB)

Batyrev, I. G.; Taylor, D. E.; Gazonas, G. A.; McCauley, J. W. [U.S. Army Research Laboratory, Aberdeen Proving Ground, Maryland 21005 (United States)

2014-01-14

Different models for aluminum oxynitride (AlON) were calculated using density functional theory and optimized using an evolutionary algorithm. Evolutionary algorithm and density functional theory (DFT) calculations starting from several models of AlON with different Al or O vacancy locations and different positions for the N atoms relative to the vacancy were carried out. The results show that the constant anion model [McCauley et al., J. Eur. Ceram. Soc. 29(2), 223 (2009)] with a random distribution of N atoms not adjacent to the Al vacancy has the lowest energy configuration. The lowest energy structure is in reasonable agreement with experimental X-ray diffraction spectra. The optimized structure of a 55 atom unit cell was used to construct 220 and 440 atom models for simulation cells using DFT with a Gaussian basis set. Cubic elastic constant predictions were found to approach the experimentally determined AlON single crystal elastic constants as the model size increased from 55 to 440 atoms. The pressure dependence of the elastic constants found from simulated stress-strain relations was in overall agreement with experimental measurements of polycrystalline and single crystal AlON. Calculated IR intensity and Raman spectra are compared with available experimental data.

8. Consensus embedding: theory, algorithms and application to segmentation and classification of biomedical data

Directory of Open Access Journals (Sweden)

Viswanath Satish

2012-02-01

Full Text Available Abstract Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However, several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range
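The generate-and-combine idea can be illustrated with a small sketch: multiple PCA embeddings are computed on random feature subsets, the pairwise-distance matrices they induce are averaged, and classical MDS recovers a single consensus embedding. This is one plausible reading under stated assumptions (subset size, distance averaging, toy data), not the authors' exact scheme, which also uses mean-shift sub-sampling and other DR methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for a biomedical feature space: two classes, 50 features.
X = np.vstack([rng.normal(0.0, 1.0, (20, 50)),
               rng.normal(3.0, 1.0, (20, 50))])

def pca_embed(data, k=2):
    """Project data onto its top-k principal components."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Generate many "weak" embeddings from random feature subsets, then combine
# them by averaging the pairwise-distance matrices they induce.
n_runs = 25
D = np.zeros((len(X), len(X)))
for _ in range(n_runs):
    cols = rng.choice(50, size=30, replace=False)
    E = pca_embed(X[:, cols])
    D += np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1)
D /= n_runs

# Classical MDS on the consensus distance matrix yields the final embedding.
m = len(X)
J = np.eye(m) - np.ones((m, m)) / m           # centering matrix
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)                      # eigenvalues in ascending order
emb = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))
print(emb.shape)  # → (40, 2)
```

On this toy data the two classes remain well separated in the consensus embedding, which is the stability property the abstract claims for the full method.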

9. A potential theory approach to an algorithm of conceptual space partitioning

Directory of Open Access Journals (Sweden)

Roman Urban

2017-12-01

Full Text Available This paper proposes a new classification algorithm for the partitioning of a conceptual space. All the algorithms which have been used until now have mostly been based on the theory of Voronoi diagrams. This paper proposes an approach based on potential theory, with the criteria for measuring similarities between objects in the conceptual space being based on the Newtonian potential function. The notion of a fuzzy prototype, which generalizes the previous definition of a prototype, is introduced. Furthermore, the necessary conditions that a natural concept must meet are discussed. Instead of convexity, as proposed by Gärdenfors, the notion of geodesically convex sets is used. Thus, if a concept corresponds to a set which is geodesically convex, it is a natural concept. This definition applies, for example, if the conceptual space is a Euclidean space. As a by-product of the construction of the algorithm, an extension of the conceptual space to d-dimensional Riemannian manifolds is obtained.

10. Securing mobile ad hoc networks using danger theory-based artificial immune algorithm.

Science.gov (United States)

Abdelhaq, Maha; Alsaqour, Raed; Abdelhaq, Shawkat

2015-01-01

A mobile ad hoc network (MANET) is a set of mobile, decentralized, and self-organizing nodes that are used in special cases, such as in the military. MANET properties render the environment of this network vulnerable to different types of attacks, including black hole, wormhole, and flooding-based attacks. Flooding-based attacks are among the most dangerous attacks, aiming to consume all network resources and thus paralyze the functionality of the whole network. Therefore, the objective of this paper is to investigate the capability of a danger theory-based artificial immune algorithm called the mobile dendritic cell algorithm (MDCA) to detect flooding-based attacks in MANETs. The MDCA applies the dendritic cell algorithm (DCA) to secure the MANET with additional improvements. The MDCA is tested and validated using the Qualnet v7.1 simulation tool. This work also introduces a new simulation module for a flooding attack called the resource consumption attack (RCA) using Qualnet v7.1. The results highlight the high efficiency of the MDCA in detecting RCAs in MANETs.

11. Development of the algorithm for obtaining 3-dimensional information using the structured light

Energy Technology Data Exchange (ETDEWEB)

Shin, Dong Uk; Lee, Jae Hyub; Kim, Chung Soo [Korea University of Technology and Education, Cheonan (Korea)

1998-03-01

The utilization of robots in atomic power plants or nuclear-related facilities has grown rapidly. In order to perform preassigned jobs using a robot in nuclear-related facilities, advanced technology for extracting 3D information of objects is essential. We have studied an algorithm to extract 3D information of objects using laser slit light and a camera, and developed the following hardware system and algorithms. (1) We have designed and fabricated the hardware system, which consists of a laser light and two cameras. The hardware system can be easily installed on the robot. (2) In order to reduce the occlusion problem when measuring 3D information using laser slit light and a camera, we have studied a system with laser slit light and two cameras and developed an algorithm to synthesize the 3D information obtained from the two cameras. (3) For easy use of the obtained 3D information, we expressed it in a digital distance image format and developed an algorithm to interpolate the 3D information of points which were not obtained. (4) In order to simplify calibration of the camera parameters, we have also designed and fabricated an LED plate, and developed an algorithm for detecting the center position of each LED automatically. We can verify the efficiency of the developed algorithm and hardware system through the experimental results. 16 refs., 26 figs., 1 tab. (Author)

12. Information richness in construction projects: A critical social theory

NARCIS (Netherlands)

2002-01-01

Two important factors influencing the communication in construction projects are the interests of the people involved and the language spoken by the people involved. The objective of the paper is to analyse these factors by using recent insights from information richness theory. The critical

13. Information Architecture without Internal Theory: An Inductive Design Process.

Science.gov (United States)

Haverty, Marsha

2002-01-01

Suggests that information architecture design is primarily an inductive process, partly because it lacks internal theory and partly because it is an activity that supports emergent phenomena (user experiences) from basic design components. Suggests a resemblance to Constructive Induction, a design process that locates the best representational…

14. Evaluating hydrological model performance using information theory-based metrics

Science.gov (United States)

Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

15. Cooperative Localization for Multi-AUVs Based on GM-PHD Filters and Information Entropy Theory

Directory of Open Access Journals (Sweden)

Lichuan Zhang

2017-10-01

Full Text Available Cooperative localization (CL) is considered a promising method for underwater localization with respect to multiple autonomous underwater vehicles (multi-AUVs). In this paper, we propose a CL algorithm based on information entropy theory and the probability hypothesis density (PHD) filter, aiming to enhance the global localization accuracy of the follower. In the proposed framework, the follower carries lower-cost navigation systems, whereas the leaders carry better ones. Meanwhile, the leaders acquire the followers’ observations, including both measurements and clutter. Then, the PHD filters are utilized on the leaders and the results are communicated to the followers. The followers then perform weighted summation based on all received messages and obtain a final positioning result. Based on the information entropy theory and the PHD filter, the follower is able to acquire a precise knowledge of its position.

16. The use of network theory to model disparate ship design information

Directory of Open Access Journals (Sweden)

Douglas Rigterink

2014-06-01

Full Text Available This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms both separately and as a multiplex network to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.

17. The use of network theory to model disparate ship design information

Science.gov (United States)

Rigterink, Douglas; Piks, Rebecca; Singer, David J.

2014-06-01

This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms both separately and as a multiplex network to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.

18. The use of network theory to model disparate ship design information

Directory of Open Access Journals (Sweden)

Rigterink Douglas

2014-06-01

Full Text Available This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship’s distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms both separately and as a multiplex network to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.

19. The Philosophy of Information as an Underlying and Unifying Theory of Information Science

Science.gov (United States)

Tomic, Taeda

2010-01-01

Introduction: Philosophical analyses of the theoretical principles underlying these sub-domains reveal philosophy of information as an underlying meta-theory of information science. Method: Conceptual research on the knowledge sub-domains in information science and philosophy and analysis of their mutual connection. Analysis: Similarities between…

20. An information theory criteria based blind method for enumerating active users in DS-CDMA system

Science.gov (United States)

2014-11-01

In this paper, a new blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. There are two main categories of information criteria widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator and has better performance at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-compliant method, based on subspace analysis and a trained genetic algorithm, that attains the performance of both. Moreover, unlike previous methods, our method uses only a single antenna, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
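The AIC/MDL enumeration step can be illustrated with the classic eigenvalue form of these criteria (the Wax-Kailath formulation). The sketch below is a simplified single-antenna, white-noise stand-in, not the paper's multipath DS-CDMA method; the mixing matrix, dimensions, and noise level are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, N = 8, 2, 1000                 # observation dim, true active users, snapshots
A = rng.normal(size=(p, d))          # hypothetical signature/mixing matrix
S = rng.normal(size=(d, N))          # user symbols
X = A @ S + 0.1 * rng.normal(size=(p, N))   # noisy observations

# Eigenvalues of the sample covariance, sorted in descending order.
lam = np.sort(np.linalg.eigvalsh((X @ X.T) / N))[::-1]

def ic(k, penalty):
    """Wax-Kailath information criterion for model order k."""
    tail = lam[k:]                   # the p-k smallest (noise) eigenvalues
    m = p - k
    # N*m*log of (arithmetic mean / geometric mean) of the tail eigenvalues
    llh = N * m * (np.log(np.mean(tail)) - np.mean(np.log(tail)))
    return llh + penalty

aic = [ic(k, k * (2 * p - k)) for k in range(p)]
mdl = [ic(k, 0.5 * k * (2 * p - k) * np.log(N)) for k in range(p)]
print(int(np.argmin(aic)), int(np.argmin(mdl)))
```

The two criteria share the same likelihood term and differ only in the penalty, which is exactly the distinction the abstract draws; at this high SNR both recover the true order.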

1. Algorithms

will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as the auxiliary rod. • move_disk(A, C); The (N0 + 1)th disk is moved from A to C directly ...
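The recursive Tower of Hanoi decomposition sketched in the fragment can be written as a short Python function; the function name and signature below are assumptions based on the move_disk fragment, and moves are recorded rather than printed.

```python
def hanoi(n, src, dst, aux, moves):
    """Move n disks from src to dst using aux, recording each move."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)   # n-1 disks: src -> aux (dst as spare)
    moves.append((src, dst))             # the largest disk moves src -> dst directly
    hanoi(n - 1, aux, dst, src, moves)   # n-1 disks: aux -> dst (src as spare)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # → 7, i.e. 2**3 - 1
```

The recursion mirrors the fragment exactly: move the top disks to the auxiliary rod, move the bottom disk directly, then move the rest on top of it.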

2. Developing Information Power Grid Based Algorithms and Software

Science.gov (United States)

Dongarra, Jack

1998-01-01

This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes, some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

3. How to Produce a Transdisciplinary Information Concept for a Universal Theory of Information?

DEFF Research Database (Denmark)

Brier, Søren

2017-01-01

... concept of information as a difference that makes a difference and in Luhmann’s triple autopoietic communication based system theory, where information is always a part of a message. Charles Sanders Peirce’s pragmaticist semiotics differs from other paradigms in that it integrates logic and information in interpretative semiotics. I therefore suggest alternatively building information theories based on semiotics from the basic relations of embodied living systems’ meaningful cognition and communication. I agree with Peircean biosemiotics that all transdisciplinary information concepts, in order to work across the natural, technical, social and humanistic sciences, must be defined as a part of real relational meaningful sign-processes manifesting as tokens. Thus Peirce’s information theory is empirically based in a realistic worldview, which through modern biosemiotics includes all living systems.

4. A general rough-surface inversion algorithm: Theory and application to SAR data

Science.gov (United States)

1993-01-01

Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. The least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces and can be applied to any parameterized scattering process.
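The Newton-type least-squares step can be illustrated on a toy model. The sketch below fits a two-parameter exponential by plain Gauss-Newton; the forward model is a hypothetical stand-in for the SPM backscattering model, and the stochastic priors discussed in the abstract are not included.

```python
import numpy as np

# Toy forward model y = a * exp(-b * x); a and b stand in for the surface
# parameters of the scattering model (the actual SPM is not reproduced here).
def forward(theta, x):
    a, b = theta
    return a * np.exp(-b * x)

def jacobian(theta, x):
    a, b = theta
    e = np.exp(-b * x)
    return np.column_stack([e, -a * x * e])   # derivatives w.r.t. a and b

x = np.linspace(0.0, 2.0, 20)
true_theta = np.array([2.0, 1.5])
y = forward(true_theta, x)          # noise-free synthetic observations

theta = np.array([1.0, 1.0])        # initial guess
for _ in range(20):                 # Gauss-Newton iterations
    r = y - forward(theta, x)       # residual vector
    J = jacobian(theta, x)
    theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]

print(np.round(theta, 4))
```

Each iteration linearizes the forward model and solves a linear least-squares problem for the parameter update, which is the common core of the Newton-type methods the abstract compares.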

5. A fingerprint classification algorithm based on combination of local and global information

Science.gov (United States)

Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

2011-12-01

Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular point detection methods commonly consider only local information, these classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of a fingerprint. First, we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. Then, a global orientation model is adopted to measure the reliability of the singular point group. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor quality fingerprint images.

6. Information theory and stochastics for multiscale nonlinear systems

CERN Document Server

Majda, Andrew J; Grote, Marcus J

2005-01-01

This book introduces mathematicians to the fascinating emerging mathematical interplay between ideas from stochastics and information theory and important practical issues in studying complex multiscale nonlinear systems. It emphasizes the serendipity between modern applied mathematics and applications where rigorous analysis, the development of qualitative and/or asymptotic models, and numerical modeling all interact to explain complex phenomena. After a brief introduction to the emerging issues in multiscale modeling, the book has three main chapters. The first chapter is an introduction to information theory with novel applications to statistical mechanics, predictability, and Jupiter's Red Spot for geophysical flows. The second chapter discusses new mathematical issues regarding fluctuation-dissipation theorems for complex nonlinear systems including information flow, various approximations, and illustrates applications to various mathematical models. The third chapter discusses stochastic modeling of com...

7. An application of information theory to stochastic classical gravitational fields

Science.gov (United States)

Angulo, J.; Angulo, J. C.; Angulo, J. M.

2018-06-01

The objective of this study lies in incorporating concepts developed in information theory (entropy, complexity, etc.) with the aim of quantifying the variation of the uncertainty associated with a stochastic physical system resident in a spatiotemporal region. As an example of application, a relativistic classical gravitational field has been considered, with a stochastic behavior resulting from the effect induced by one or several external perturbation sources. One of the key concepts of the study is the covariance kernel between two points within the chosen region. Using this concept and the appropriate criteria, a methodology is proposed to evaluate the change of uncertainty at a given spatiotemporal point, based on available information and efficiently applying the diverse methods that information theory provides. For illustration, a stochastic version of the Einstein equation with an added Gaussian Langevin term is analyzed.

8. Entropy and information causality in general probabilistic theories

International Nuclear Information System (INIS)

Barnum, Howard; Leifer, Matthew; Spekkens, Robert; Barrett, Jonathan; Clark, Lisa Orloff; Stepanik, Nicholas; Wilce, Alex; Wilke, Robin

2010-01-01

We investigate the concept of entropy in probabilistic theories more general than quantum mechanics, with particular reference to the notion of information causality (IC) recently proposed by Pawlowski et al (2009 arXiv:0905.2292). We consider two entropic quantities, which we term measurement and mixing entropy. In the context of classical and quantum theory, these coincide, being given by the Shannon and von Neumann entropies, respectively; in general, however, they are very different. In particular, while measurement entropy is easily seen to be concave, mixing entropy need not be. In fact, as we show, mixing entropy is not concave whenever the state space is a non-simplicial polytope. Thus, the condition that measurement and mixing entropies coincide is a strong constraint on possible theories. We call theories with this property monoentropic. Measurement entropy is subadditive, but not in general strongly subadditive. Equivalently, if we define the mutual information between two systems A and B by the usual formula I(A:B) = H(A) + H(B) - H(AB), where H denotes the measurement entropy and AB is a non-signaling composite of A and B, then it can happen that I(A:BC) < I(A:B). This is relevant to IC in the sense of Pawlowski et al: we show that any monoentropic non-signaling theory in which measurement entropy is strongly subadditive, and also satisfies a version of the Holevo bound, is informationally causal, and on the other hand we observe that Popescu-Rohrlich boxes, which violate IC, also violate strong subadditivity. We also explore the interplay between measurement and mixing entropy and various natural conditions on theories that arise in quantum axiomatics.
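The quoted definition of mutual information can be made concrete in the classical (Shannon) case, where strong subadditivity guarantees I(A:BC) >= I(A:B). The toy joint distribution below is an invented illustration (a noisy XOR) in which A carries no information about B alone, yet is correlated with B and C jointly, so I(A:B) = 0 while I(A:BC) > 0.

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical joint distribution p(a, b, c) over three bits: A is a noisy
# XOR of B and C, so A is independent of B alone but not of (B, C) jointly.
p_abc = np.array([[[0.20, 0.05],
                   [0.05, 0.20]],
                  [[0.05, 0.20],
                   [0.20, 0.05]]])

p_ab = p_abc.sum(axis=2)
p_bc = p_abc.sum(axis=0)
p_a = p_ab.sum(axis=1)
p_b = p_ab.sum(axis=0)

I_ab = H(p_a) + H(p_b) - H(p_ab)       # I(A:B) = H(A) + H(B) - H(AB)
I_abc = H(p_a) + H(p_bc) - H(p_abc)    # I(A:BC)
print(round(I_ab, 3), round(I_abc, 3))  # → 0.0 0.278
```

The inequality I(A:BC) >= I(A:B) always holds for Shannon entropy; the abstract's point is that it can fail for the measurement entropy of more general probabilistic theories.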

9. Grounded theory for radiotherapy practitioners: Informing clinical practice

International Nuclear Information System (INIS)

Walsh, N.A.

2010-01-01

Radiotherapy practitioners may be best placed to undertake qualitative research within the context of cancer, due to specialist knowledge of radiation treatment and sensitivity to radiotherapy patient's needs. The grounded theory approach to data collection and analysis is a unique method of identifying a theory directly based on data collected within a clinical context. Research for radiotherapy practitioners is integral to role expansion within the government's directive for evidence-based practice. Due to the paucity of information on qualitative research undertaken by radiotherapy radiographers, this article aims to assess the potential impact of qualitative research on radiotherapy patient and service outcomes.

10. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

International Nuclear Information System (INIS)

Sundararaman, Ravishankar; Goddard, William A. III; Arias, Tomas A.

2017-01-01

First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

11. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

Science.gov (United States)

Sundararaman, Ravishankar; Goddard, William A.; Arias, Tomas A.

2017-03-01

First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

12. Characterization and visualization of RNA secondary structure Boltzmann ensemble via information theory.

Science.gov (United States)

Lin, Luan; McKerrow, Wilson H; Richards, Bryce; Phonsom, Chukiat; Lawrence, Charles E

2018-03-05

The nearest neighbor model and associated dynamic programming algorithms allow for the efficient estimation of the RNA secondary structure Boltzmann ensemble. However, because a given RNA secondary structure only contains a fraction of the possible helices that could form from a given sequence, the Boltzmann ensemble is multimodal. Several methods exist for clustering structures and finding those modes. However, less focus is given to exploring the underlying reason for this multimodality: the presence of conflicting basepairs. Information theory, or more specifically mutual information, provides a method to identify those basepairs that are key to the secondary structure. To this end, we find the most informative basepairs and visualize their effect on the secondary structure. Knowing whether a most informative basepair is present tells us not only the status of that particular pair but also provides a large amount of information about which other pairs are or are not present. We find that a few basepairs account for a large amount of the structural uncertainty. The identification of these pairs indicates small changes to sequence or stability that will have a large effect on structure. We provide a novel algorithm that uses mutual information to identify the key basepairs that lead to a multimodal Boltzmann distribution. We then visualize the effect of these pairs on the overall Boltzmann ensemble.
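The idea of scoring basepairs by the mutual information between their presence indicators can be sketched on a toy ensemble. The six structures below are invented so that one helix conflicts with another; a real ensemble would come from Boltzmann sampling of an actual sequence, and this is not the paper's algorithm in full.

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical sampled ensemble: each structure is the set of basepairs (i, j)
# it contains. The helix {(1,10),(2,9)} conflicts with the helix {(3,8),(4,7)}.
ensemble = [
    {(1, 10), (2, 9)}, {(1, 10), (2, 9)}, {(1, 10)},
    {(3, 8), (4, 7)},  {(3, 8), (4, 7)},  {(3, 8)},
]
n = len(ensemble)
pairs = sorted(set().union(*ensemble))

def mi(p, q):
    """Mutual information (bits) between the presence indicators of p and q."""
    joint = Counter((p in s, q in s) for s in ensemble)
    pm = Counter(p in s for s in ensemble)
    qm = Counter(q in s for s in ensemble)
    return sum((c / n) * math.log2((c * n) / (pm[a] * qm[b]))
               for (a, b), c in joint.items())

scores = {(p, q): mi(p, q) for p, q in combinations(pairs, 2)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))  # → ((1, 10), (3, 8)) 1.0
```

The top-scoring pair is one basepair from each conflicting helix: knowing whether (1,10) is present fully determines whether (3,8) is, which is exactly the "most informative basepair" behavior the abstract describes.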

13. Using institutional theory with sensemaking theory: a case study of information system implementation in healthcare

DEFF Research Database (Denmark)

Jensen, Tina Blegind; Kjærgaard, Annemette; Svejvig, Per

2009-01-01

Institutional theory has proven to be a central analytical perspective for investigating the role of social and historical structures in information systems (IS) implementation. However, it does not explicitly account for how organisational actors make sense of and enact technologies in their local context. We address this limitation by exploring the potential of using institutional theory with sensemaking theory to study IS implementation in organisations. We argue that each theoretical perspective has its own explanatory power and that a combination of the two facilitates a much richer interpretation of IS implementation by linking macro- and micro-levels of analysis. To illustrate this, we report from an empirical study of the implementation of an Electronic Patient Record (EPR) system in a clinical setting. Using key constructs from the two theories, our findings address the phenomenon...

14. An improved algorithm for information hiding based on features of Arabic text: A Unicode approach

Directory of Open Access Journals (Sweden)

A.A. Mohamed

2014-07-01

Full Text Available Steganography is the practice of hiding secret information in a cover medium so that other parties fail to realize its existence. Because text files lack the data redundancy of other carrier files, text steganography is a difficult problem to solve. In this paper, we propose a new and promising steganographic algorithm for Arabic text based on features of the Arabic script. The focus is on a more secure algorithm with a high carrier capacity. Our extensive experiments with the proposed algorithm show a high embedding capacity rate for the carrier media. In addition, our algorithm can resist traditional attacking methods, since it keeps the changes to the carrier text to a minimum.

15. Walking pattern classification and walking distance estimation algorithms using gait phase information.

Science.gov (United States)

Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen

2012-10-01

This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree based on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm. The walking distance estimation algorithm consists of the processes of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithm have been validated by a series of experiments. The accuracy of the proposed walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively. The accuracy of the proposed walking distance estimation algorithm was 96.42% over the test walking distance.
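
The distance-estimation stage described above (step count multiplied by step length) can be sketched with a simple threshold-crossing step counter. The threshold, minimum gap, step length, and synthetic signal below are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def count_steps(accel, threshold=1.2, min_gap=25):
    """Count steps as upward threshold crossings of the acceleration
    magnitude, enforcing a minimum sample gap between successive steps."""
    steps, last = 0, -min_gap
    for i in range(1, len(accel)):
        if accel[i] >= threshold and accel[i - 1] < threshold and i - last >= min_gap:
            steps += 1
            last = i
    return steps

def walking_distance(accel, step_length=0.7):
    """Distance = step count x (assumed constant) step length in metres."""
    return count_steps(accel) * step_length

# Synthetic accelerometer magnitude at 50 Hz: one 'step' peak per second
# for 10 seconds, riding on a 1 g baseline.
fs, seconds = 50, 10
t = np.arange(fs * seconds) / fs
accel = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)

print(count_steps(accel))        # 10 steps
print(walking_distance(accel))   # 7.0 metres with the assumed 0.7 m step
```

A production version would estimate step length per stride from gait phase durations rather than assume a constant, which is where the paper's gait phase information enters.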

16. EDITORIAL: Quantum control theory for coherence and information dynamics Quantum control theory for coherence and information dynamics

Science.gov (United States)

Viola, Lorenza; Tannor, David

2011-08-01

Precisely characterizing and controlling the dynamics of realistic open quantum systems has emerged in recent years as a key challenge across contemporary quantum sciences and technologies, with implications ranging from physics, chemistry and applied mathematics to quantum information processing (QIP) and quantum engineering. Quantum control theory aims to provide both a general dynamical-system framework and a constructive toolbox to meet this challenge. The purpose of this special issue of Journal of Physics B: Atomic, Molecular and Optical Physics is to present a state-of-the-art account of recent advances and current trends in the field, as reflected in two international meetings that were held on the subject over the last summer and which motivated in part the compilation of this volume—the Topical Group: Frontiers in Open Quantum Systems and Quantum Control Theory, held at the Institute for Theoretical Atomic, Molecular and Optical Physics (ITAMP) in Cambridge, Massachusetts (USA), from 1-14 August 2010, and the Safed Workshop on Quantum Decoherence and Thermodynamics Control, held in Safed (Israel), from 22-27 August 2010. Initial developments in quantum control theory date back to (at least) the early 1980s, and have been largely inspired by the well-established mathematical framework for classical dynamical systems. As the above-mentioned meetings made clear, and as the burgeoning body of literature on the subject testifies, quantum control has grown since then well beyond its original boundaries, and has by now evolved into a highly cross-disciplinary field which, while still fast-moving, is also entering a new phase of maturity, sophistication, and integration. Two trends deserve special attention: on the one hand, a growing emphasis on control tasks and methodologies that are specifically motivated by QIP, in addition and in parallel to applications in more traditional areas where quantum coherence is nevertheless vital (such as, for instance

17. Generating information-rich high-throughput experimental materials genomes using functional clustering via multitree genetic programming and information theory.

Science.gov (United States)

Suram, Santosh K; Haber, Joel A; Jin, Jian; Gregoire, John M

2015-04-13

High-throughput experimental methodologies are capable of synthesizing, screening and characterizing vast arrays of combinatorial material libraries at a very rapid rate. These methodologies strategically employ tiered screening wherein the number of compositions screened decreases as the complexity, and very often the scientific information obtained from a screening experiment, increases. The algorithm used for down-selection of samples from a higher-throughput screening experiment to a lower-throughput screening experiment is vital in achieving information-rich experimental materials genomes. The fundamental science of material discovery lies in the establishment of composition-structure-property relationships, motivating the development of advanced down-selection algorithms which consider the information value of the selected compositions, as opposed to simply selecting the best performing compositions from a high-throughput experiment. Identification of property fields (composition regions with distinct composition-property relationships) in high-throughput data enables down-selection algorithms to employ advanced selection strategies, such as the selection of representative compositions from each field or selection of compositions that span the composition space of the highest performing field. Such strategies would greatly enhance the generation of data-driven discoveries. We introduce an informatics-based clustering of composition-property functional relationships using a combination of information theory and multitree genetic programming concepts for identification of property fields in a composition library. We demonstrate our approach using a complex synthetic composition-property map for a 5 at. % step ternary library consisting of four distinct property fields and finally explore the application of this methodology for capturing relationships between composition and catalytic activity for the oxygen evolution reaction for 5429 catalyst compositions in a

18. A short course in quantum information theory. An approach from theoretical physics. 2. ed.

International Nuclear Information System (INIS)

Diosi, Lajos

2011-01-01

19. Intuitive theories of information: beliefs about the value of redundancy.

Science.gov (United States)

Soll, J B

1999-03-01

In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.

20. Experiments in Discourse Analysis Impact on Information Classification and Retrieval Algorithms.

Science.gov (United States)

Morato, Jorge; Llorens, J.; Genova, G.; Moreiro, J. A.

2003-01-01

Discusses the inclusion of contextual information in indexing and retrieval systems to improve results, and the ability to carry out text analysis by means of linguistic knowledge. Presents research that investigated whether discourse variables have an impact on information retrieval and classification algorithms. (Author/LRW)

1. Filtration Algorithms of Untrustworthy Analogous Information in APCS at TPP and NPP

Directory of Open Access Journals (Sweden)

V. I. Nazarov

2012-01-01

Full Text Available The paper considers filtration algorithms for untrustworthy analog information in APCS at TPP and NPP. These algorithms make it possible to assess, in real time, the credibility of continuously changing signals transmitted through communication channels.

2. REALIZATION OF VISUAL TECHNIQUE DIDACTIC APPROACH IN ALGORITHMIC TRAINING OF STUDENTS THROUGH INFORMATION AND COMMUNICATION TECHNOLOGIES OF EDUCATIONAL ENVIRONMENT

Directory of Open Access Journals (Sweden)

Sergii Voloshynov

2016-12-01

Full Text Available The article examines the development of visual learning theory, states the functions and peculiarities of applying visualization techniques in the modern studying process, defines the concept of a "visual learning environment", and discusses the didactic role of interactive and multimedia visualization. The author examines how the potential of cognitive visualization can be assessed in the algorithmic training of students through the information and communication technologies of an educational environment. The article specifies the functions of visual aids, describes how this principle is implemented in the modern educational process, and argues that interactive multimedia visualization stimulates the cognitive activity of students and activates the perceptual mechanisms of learning. Finally, it analyzes the problem of assessing the capacity of cognitive visualization in the training of future marine personnel using an information and communication educational environment.

3. Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life

KAUST Repository

Zenil, Hector

2018-02-18

We demonstrate how to apply and exploit the concept of algorithmic information dynamics in the characterization and classification of dynamic and persistent patterns, motifs and colliding particles, using, without loss of generality, Conway's Game of Life (GoL) cellular automaton as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced offer an alternative to computable measures such as entropy and compression algorithms, which are often insensitive to small changes and to features of a non-statistical nature in the study of evolving complex systems and their emergent structures.
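
The abstract contrasts entropy-style measures with algorithmic ones. A quick way to see the first rung of that hierarchy is a compression-based stand-in for algorithmic complexity: the two strings below have essentially identical symbol frequencies, hence the same naive entropy estimate, yet differ sharply in structure. The strings and the use of zlib are illustrative assumptions, not the paper's CTM/BDM machinery.

```python
import random
import zlib

def compressed_length(s):
    """Byte length of the zlib-compressed string: a crude, computable
    upper-bound proxy for algorithmic complexity."""
    return len(zlib.compress(s.encode()))

periodic = "01" * 64                                       # highly structured
rng = random.Random(0)
irregular = "".join(rng.choice("01") for _ in range(128))  # same alphabet and length

# A frequency-based entropy estimate scores both as roughly '50% ones';
# the compressor exploits the repeated pattern and separates them.
print(compressed_length(periodic), compressed_length(irregular))
```

Algorithmic-probability methods push this further, detecting structure (e.g. in small GoL motifs) that even compressors miss, but the compression gap above already shows why purely statistical measures are too coarse.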

4. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

Science.gov (United States)

Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

2010-01-01

A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b (sup i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
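
The Galerkin projection preprocessing step mentioned above, reusing a previously built subspace to form a good initial guess for a new right-hand side before restarting the Krylov iteration, can be sketched as follows. The matrix sizes and data are illustrative, not from the paper's Navier-Stokes test cases.

```python
import numpy as np

def galerkin_initial_guess(A, b, V):
    """Project the new right-hand side onto a previously built subspace
    span(V): solve the small system (V^T A V) y = V^T b, then x0 = V y.
    By construction the residual b - A x0 is orthogonal to span(V)."""
    small = V.T @ A @ V              # k x k projected operator
    y = np.linalg.solve(small, V.T @ b)
    return V @ y

rng = np.random.default_rng(1)
n, k = 50, 10
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned test matrix
V, _ = np.linalg.qr(rng.standard_normal((n, k)))    # orthonormal 'recycled' basis
b = rng.standard_normal(n)                          # new right-hand side

x0 = galerkin_initial_guess(A, b, V)
print(np.linalg.norm(V.T @ (b - A @ x0)))           # projected residual vanishes
```

In the full algorithm, V would hold (approximate) eigenvectors or Krylov vectors retained from earlier solves, and a GMRES cycle would then start from x0 instead of zero.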

5. Should the model for risk-informed regulation be game theory rather than decision theory?

Science.gov (United States)

Bier, Vicki M; Lin, Shi-Woei

2013-02-01

Risk analysts frequently view the regulation of risks as being largely a matter of decision theory. According to this view, risk analysis methods provide information on the likelihood and severity of various possible outcomes; this information should then be assessed using a decision-theoretic approach (such as cost/benefit analysis) to determine whether the risks are acceptable, and whether additional regulation is warranted. However, this view ignores the fact that in many industries (particularly industries that are technologically sophisticated and employ specialized risk and safety experts), risk analyses may be done by regulated firms, not by the regulator. Moreover, those firms may have more knowledge about the levels of safety at their own facilities than the regulator does. This creates a situation in which the regulated firm has both the opportunity-and often also the motive-to provide inaccurate (in particular, favorably biased) risk information to the regulator, and hence the regulator has reason to doubt the accuracy of the risk information provided by regulated parties. Researchers have argued that decision theory is capable of dealing with many such strategic interactions as well as game theory can. This is especially true in two-player, two-stage games in which the follower has a unique best strategy in response to the leader's strategy, as appears to be the case in the situation analyzed in this article. However, even in such cases, we agree with Cox that game-theoretic methods and concepts can still be useful. In particular, the tools of mechanism design, and especially the revelation principle, can simplify the analysis of such games because the revelation principle provides rigorous assurance that it is sufficient to analyze only games in which licensees truthfully report their risk levels, making the problem more manageable. Without that, it would generally be necessary to consider much more complicated forms of strategic behavior (including

6. Russian and Chinese Information Warfare: Theory and Practice

Science.gov (United States)

2004-06-01

[Report documentation page; only fragments of the text are recoverable. Topics listed include Russian concepts of information weapons: integral neurolinguistic programming, placing programs into the conscious or subconscious mind, subconscious suggestions that modify human behavior, generators of special rays, optical systems, computer psychotechnology, the mass media, and audiovisual and special effects.]

7. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms

International Nuclear Information System (INIS)

Hohage, Thorsten; Werner, Frank

2016-01-01

Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce Positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years. (topical review)

8. A Novel Dynamic Algorithm for IT Outsourcing Risk Assessment Based on Transaction Cost Theory

Directory of Open Access Journals (Sweden)

Guodong Cong

2015-01-01

Full Text Available Given the great risks involved in IT outsourcing, how to assess IT outsourcing risk has become a critical issue. However, most approaches to date fall short of the particular complexity of IT outsourcing risk, suffering from subjective bias, inaccuracy, or inefficiency. This paper proposes a dynamic algorithm for risk assessment. It first puts forward an extended three-layer transfer mechanism (risk factors, risks, and risk consequences) based on transaction cost theory (TCT) as the framework of risk analysis, which bridges the components of the three layers with preset transfer probabilities and impacts. It then establishes an equation group between risk factors and risk consequences, which makes the "attribution" more precise by tracking the specific sources that lead to a given loss. In each phase of the outsourcing lifecycle, both the likelihood and the loss of each risk factor and of each risk are obtained by solving the equation group with real data collected on risk consequences. In this "reverse" way, risk assessment becomes a responsive and interactive process driven by real data instead of subjective estimation, which improves accuracy and alleviates bias. A numerical case demonstrates the effectiveness of the algorithm compared with approaches put forward in other references.

9. Two- and three-dimensional nonlocal density functional theory for inhomogeneous fluids. 1. Algorithms and parallelization

International Nuclear Information System (INIS)

Frink, L.J.D.; Salinger, A.G.

2000-01-01

Fluids adsorbed near surfaces, near macromolecules, and in porous materials are inhomogeneous, exhibiting spatially varying density distributions. This inhomogeneity in the fluid plays an important role in controlling a wide variety of complex physical phenomena including wetting, self-assembly, corrosion, and molecular recognition. One of the key methods for studying the properties of inhomogeneous fluids in simple geometries has been density functional theory (DFT). However, there has been a conspicuous lack of calculations in complex two- and three-dimensional geometries. The computational difficulty arises from the need to perform nested integrals that are due to nonlocal terms in the free energy functional. These integral equations are expensive both in evaluation time and in memory requirements; however, the expense can be mitigated by intelligent algorithms and the use of parallel computers. This paper details the efforts to develop efficient numerical algorithms so that nonlocal DFT calculations in complex geometries that require two or three dimensions can be performed. The success of this implementation will enable the study of solvation effects at heterogeneous surfaces, in zeolites, in solvated (bio)polymers, and in colloidal suspensions

10. Information theory, animal communication, and the search for extraterrestrial intelligence

Science.gov (United States)

Doyle, Laurance R.; McCowan, Brenda; Johnston, Simon; Hanser, Sean F.

2011-02-01

We present ongoing research in the application of information theory to animal communication systems with the goal of developing additional detectors and estimators for possible extraterrestrial intelligent signals. Regardless of the species, for intelligence (i.e., complex knowledge) to be transmitted certain rules of information theory must still be obeyed. We demonstrate some preliminary results of applying information theory to socially complex marine mammal species (bottlenose dolphins and humpback whales) as well as arboreal squirrel monkeys, because they almost exclusively rely on vocal signals for their communications, producing signals which can be readily characterized by signal analysis. Metrics such as Zipf's Law and higher-order information-entropic structure are emerging as indicators of the communicative complexity characteristic of an "intelligent message" content within these animals' signals, perhaps not surprising given these species' social complexity. In addition to human languages, for comparison we also apply these metrics to pulsar signals—perhaps (arguably) the most "organized" of stellar systems—as an example of astrophysical systems that would have to be distinguished from an extraterrestrial intelligence message by such information theoretic filters. We also look at a message transmitted from Earth (Arecibo Observatory) that contains a lot of meaning but little information in the mathematical sense we define it here. We conclude that the study of non-human communication systems on our own planet can make a valuable contribution to the detection of extraterrestrial intelligence by providing quantitative general measures of communicative complexity. Studying the complex communication systems of other intelligent species on our own planet may also be one of the best ways to deprovincialize our thinking about extraterrestrial communication systems in general.
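
One of the metrics named above, Zipf's law, can be sketched as the slope of log frequency versus log rank over a signal repertoire; slopes near -1 are the signature often cited for human language. The token stream below is a synthetic power law, not animal data.

```python
import numpy as np
from collections import Counter

def zipf_slope(tokens):
    """Least-squares slope of log(frequency) vs log(rank) for the
    rank-ordered unit frequencies of a communication repertoire."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope

# Toy repertoire of 50 distinct 'signal units': unit k occurs about 1000/k
# times, so the fitted slope should come out close to -1.
tokens = [k for k in range(1, 51) for _ in range(round(1000 / k))]
print(zipf_slope(tokens))
```

In the SETI application sketched in the abstract, the same statistic (together with higher-order entropies) would be computed over candidate signal units extracted from a received transmission.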

11. Nonequilibrium thermodynamics and information theory: basic concepts and relaxing dynamics

Science.gov (United States)

Altaner, Bernhard

2017-11-01

Thermodynamics is based on the notions of energy and entropy. While energy is the elementary quantity governing physical dynamics, entropy is the fundamental concept in information theory. In this work, starting from first principles, we give a detailed didactic account of the relations between energy and entropy, and thus between physics and information theory. We show that thermodynamic process inequalities, like the second law, are equivalent to the requirement that an effective description for physical dynamics is strongly relaxing. From the perspective of information theory, strongly relaxing dynamics govern the irreversible convergence of a statistical ensemble towards the maximally non-committal probability distribution that is compatible with thermodynamic equilibrium parameters. In particular, Markov processes that converge to a thermodynamic equilibrium state are strongly relaxing. Our framework generalizes previous results to arbitrary open and driven systems, yielding novel thermodynamic bounds for idealized and real processes. This article is part of a special collection which features invited work from the best early-career researchers working within the scope of J. Phys. A; the project is part of the Journal of Physics series' 50th anniversary celebrations in 2017. Bernhard Altaner was selected by the Editorial Board of J. Phys. A as an Emerging Talent.

12. Preservation of information in Fourier theory based deconvolved nuclear spectra

International Nuclear Information System (INIS)

Madan, V.K.; Gopalakrishnan, K.R.; Sharma, R.C.; Rattan, S.S.

1995-01-01

Nuclear spectroscopy is extremely useful in internal radiation dosimetry for estimating the body burden due to gamma emitters. Analysis of nuclear spectra is concerned with the extraction of the qualitative and quantitative information embedded in the spectra. A spectral deconvolution method based on Fourier theory is probably the simplest method of deconvolving nuclear spectra. It is proved mathematically that the deconvolution method preserves the qualitative information. It is shown, using simulated spectra and an observed gamma-ray spectrum, that the method also preserves the quantitative information. This may provide a novel approach to information extraction from a deconvolved spectrum. The paper discusses the methodology, the mathematical analysis, and the results obtained by deconvolving spectra. (author). 6 refs., 2 tabs
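
A minimal version of Fourier-domain deconvolution illustrates the preservation claims: peak locations (qualitative information) and total counts (quantitative information) survive the operation. The small damping term eps is an assumption of this sketch, added to stabilize division where the response spectrum is tiny; the synthetic two-peak "spectrum" is likewise invented for illustration.

```python
import numpy as np

def fourier_deconvolve(observed, response, eps=1e-3):
    """Deconvolve by division in the Fourier domain; eps (Tikhonov-style)
    damps frequencies where the response transfer function is near zero."""
    O = np.fft.fft(observed)
    R = np.fft.fft(response)
    return np.real(np.fft.ifft(O * np.conj(R) / (np.abs(R) ** 2 + eps)))

n = 256
x = np.arange(n)
true = np.zeros(n)
true[80], true[150] = 100.0, 60.0          # two gamma lines
dist = np.minimum(x, n - x)                # circular distance from channel 0
response = np.exp(-0.5 * (dist / 3.0) ** 2)
response /= response.sum()                 # unit-area Gaussian detector response
observed = np.real(np.fft.ifft(np.fft.fft(true) * np.fft.fft(response)))

recovered = fourier_deconvolve(observed, response)
print(int(np.argmax(recovered)))           # peak channel preserved
print(recovered.sum())                     # total counts nearly preserved
```

The qualitative claim corresponds to the peak staying at channel 80; the quantitative claim corresponds to the integrated counts of the deconvolved spectrum matching the original to within the regularization error.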

13. Approach to estimation of level of information security at enterprise based on genetic algorithm

Science.gov (United States)

Stepanov, L. V.; Parinov, A. V.; Korotkikh, L. P.; Koltsov, A. S.

2018-05-01

In this article, we consider ways of formalizing the different types of information security threats and the vulnerabilities of an enterprise information system. Because ensuring the information security of any newly organized system is complex, approximate concepts and decision methods in the sphere of information security are expedient; one such approach is the genetic algorithm method. For enterprises in any field of activity, a comprehensive estimate of the level of security of information systems, taking into account the quantitative and qualitative factors that characterize the components of information security, is a relevant problem.
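
A minimal genetic algorithm of the general kind the abstract invokes might look as follows. The control/threat weights, the budget, and all GA parameters are invented for illustration; the paper's actual encoding of security factors is not specified here.

```python
import random

def genetic_search(fitness, n_bits, pop_size=40, generations=60, p_mut=0.02, seed=7):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. All parameters are illustrative defaults."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)          # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: choose security controls (bits) to maximize mitigated
# threat value under a cost budget; weights are made up for the example.
value = [5, 3, 8, 2, 7, 4, 6, 1]
cost = [2, 1, 3, 1, 2, 2, 3, 1]
budget = 8

def fitness(bits):
    c = sum(b * w for b, w in zip(bits, cost))
    v = sum(b * w for b, w in zip(bits, value))
    return v if c <= budget else v - 10 * (c - budget)  # penalize over-budget

best = genetic_search(fitness, n_bits=8)
print(best, fitness(best))
```

In the assessment setting described above, the fitness function would instead score a candidate configuration against the formalized threat and vulnerability factors, combining quantitative and qualitative components into one objective.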

14. EDITORIAL: Focus on Quantum Information and Many-Body Theory

Science.gov (United States)

Eisert, Jens; Plenio, Martin B.

2010-02-01

Quantum many-body models describing natural systems or materials and physical systems assembled piece by piece in the laboratory for the purpose of realizing quantum information processing share an important feature: intricate correlations that originate from the coherent interaction between a large number of constituents. In recent years it has become manifest that the cross-fertilization between research devoted to quantum information science and to quantum many-body physics leads to new ideas, methods, tools, and insights in both fields. Issues of criticality, quantum phase transitions, quantum order and magnetism that play a role in one field find relations to the classical simulation of quantum systems, to error correction and fault tolerance thresholds, to channel capacities and to topological quantum computation, to name but a few. The structural similarities of typical problems in both fields and the potential for pooling of ideas then become manifest. Notably, methods and ideas from quantum information have provided fresh approaches to long-standing problems in strongly correlated systems in the condensed matter context, including both numerical methods and conceptual insights. Focus on quantum information and many-body theory Contents TENSOR NETWORKS Homogeneous multiscale entanglement renormalization ansatz tensor networks for quantum critical systems M Rizzi, S Montangero, P Silvi, V Giovannetti and Rosario Fazio Concatenated tensor network states R Hübener, V Nebendahl and W Dür Entanglement renormalization in free bosonic systems: real-space versus momentum-space renormalization group transforms G Evenbly and G Vidal Finite-size geometric entanglement from tensor network algorithms Qian-Qian Shi, Román Orús, John Ove Fjærestad and Huan-Qiang Zhou Characterizing symmetries in a projected entangled pair state D Pérez-García, M Sanz, C E González-Guillén, M M Wolf and J I Cirac Matrix product operator representations B Pirvu, V Murg, J I Cirac

15. Hybrid iterative phase retrieval algorithm based on fusion of intensity information in three defocused planes.

Science.gov (United States)

Zeng, Fa; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan

2007-10-01

Phase retrieval technology is of broad interest because of its wide applications in many domains, such as adaptive optics, detection of laser quality, and precise measurement of optical surfaces. Here a hybrid iterative phase retrieval algorithm is proposed, based on the fusion of intensity information in three defocused planes. First, the conjugate gradient algorithm is applied to obtain a coarse solution for the phase distribution in the input plane; then the iterative angular spectrum method is applied in succession for a better retrieval result. This algorithm remains applicable even when the exact shape and size of the aperture in the input plane are unknown. Moreover, the algorithm always exhibits good convergence: the retrieved results are insensitive to the chosen positions of the three defocused planes and to the initial guess of the complex amplitude in the input plane, as proved by both simulations and further experiments.

16. Role of information theoretic uncertainty relations in quantum theory

Energy Technology Data Exchange (ETDEWEB)

Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz [FNSPE, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1 (Czech Republic); ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin (Germany); Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk [Department of Physics and Astronomy, University of Sussex, Falmer, Brighton, BN1 9QH (United Kingdom); Joo, Jaewoo, E-mail: j.joo@surrey.ac.uk [Advanced Technology Institute and Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom)

2015-04-15

Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
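
The (differential) Rényi entropy underlying these ITURs has a discrete counterpart that reduces to the Shannon entropy as the order alpha tends to 1. A small numerical sketch, with a distribution chosen purely for illustration:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Discrete Rényi entropy in bits; alpha -> 1 recovers Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # zero-probability outcomes contribute nothing
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
print(renyi_entropy(p, 1.0))   # Shannon entropy: 1.75 bits
print(renyi_entropy(p, 2.0))   # collision entropy, smaller than Shannon
```

Pairs of such entropies with conjugate orders are the ingredients of the generalized uncertainty relations discussed in the abstract; the entropy is non-increasing in alpha, which the example also exhibits.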

17. Role of information theoretic uncertainty relations in quantum theory

International Nuclear Information System (INIS)

Jizba, Petr; Dunningham, Jacob A.; Joo, Jaewoo

2015-01-01

Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed

18. Integrated information theory of consciousness: an updated account.

Science.gov (United States)

Tononi, G

2012-12-01

This article presents an updated account of integrated information theory of consciousness (IIT) and some of its implications. IIT stems from thought experiments that lead to phenomenological axioms (existence, compositionality, information, integration, exclusion) and corresponding ontological postulates. The information axiom asserts that every experience is specific - it is what it is by differing in its particular way from a large repertoire of alternatives. The integration axiom asserts that each experience is unified - it cannot be reduced to independent components. The exclusion axiom asserts that every experience is definite - it is limited to particular things and not others and flows at a particular speed and resolution. IIT formalizes these intuitions with postulates. The information postulate states that only "differences that make a difference" from the intrinsic perspective of a system matter: a mechanism generates cause-effect information if its present state has selective past causes and selective future effects within a system. The integration postulate states that only information that is irreducible matters: mechanisms generate integrated information only to the extent that the information they generate cannot be partitioned into that generated within independent components. The exclusion postulate states that only maxima of integrated information matter: a mechanism specifies only one maximally irreducible set of past causes and future effects - a concept. A complex is a set of elements specifying a maximally irreducible constellation of concepts, where the maximum is evaluated over elements and at the optimal spatiotemporal scale. Its concepts specify a maximally integrated conceptual information structure or quale, which is identical with an experience. Finally, changes in information integration upon exposure to the environment reflect a system's ability to match the causal structure of the world. After introducing an updated definition of

19. Informed consent in neurosurgery--translating ethical theory into action.

Science.gov (United States)

Schmitz, Dagmar; Reinacher, Peter C

2006-09-01

Although a main principle of medical ethics and law since the 1970s, standards of informed consent are regarded with great scepticism by many clinicians. By reviewing the reactions to and adoption of this principle of medical ethics in neurosurgery, the characteristic conflicts that emerge between theory and everyday clinical experience are emphasised and a modified conception of informed consent is proposed. The adoption and debate of informed consent in neurosurgery took place in two steps. Firstly, respect for patient autonomy was included into the ethical codes of the professional organisations. Secondly, the legal demands of the principle were questioned by clinicians. Informed consent is mainly interpreted in terms of freedom from interference and absolute autonomy. It lacks a constructive notion of physician-patient interaction in its effort to promote the best interest of the patient, which, however, potentially emerges from a reconsideration of the principle of beneficence. To avoid insufficient legal interpretations, informed consent should be understood in terms of autonomy and beneficence. A continuous interaction between the patient and the given physician is considered as an essential prerequisite for the realisation of the standards of informed consent.

20. Informed consent in neurosurgery—translating ethical theory into action

Science.gov (United States)

Schmitz, Dagmar; Reinacher, Peter C

2006-01-01

Objective Although a main principle of medical ethics and law since the 1970s, standards of informed consent are regarded with great scepticism by many clinicians. Methods By reviewing the reactions to and adoption of this principle of medical ethics in neurosurgery, the characteristic conflicts that emerge between theory and everyday clinical experience are emphasised and a modified conception of informed consent is proposed. Results The adoption and debate of informed consent in neurosurgery took place in two steps. Firstly, respect for patient autonomy was included into the ethical codes of the professional organisations. Secondly, the legal demands of the principle were questioned by clinicians. Informed consent is mainly interpreted in terms of freedom from interference and absolute autonomy. It lacks a constructive notion of physician–patient interaction in its effort to promote the best interest of the patient, which, however, potentially emerges from a reconsideration of the principle of beneficence. Conclusion To avoid insufficient legal interpretations, informed consent should be understood in terms of autonomy and beneficence. A continuous interaction between the patient and the given physician is considered as an essential prerequisite for the realisation of the standards of informed consent. PMID:16943326

1. New approaches in mathematical biology: Information theory and molecular machines

International Nuclear Information System (INIS)

Schneider, T.

1995-01-01

My research uses classical information theory to study genetic systems. Information theory was founded by Claude Shannon in the 1940s and has had an enormous impact on communications engineering and computer sciences. Shannon found a way to measure information. This measure can be used to precisely characterize the sequence conservation at nucleic-acid binding sites. The resulting methods, by completely replacing the use of ''consensus sequences'', provide better models for molecular biologists. An excess of conservation led us to do experimental work on bacteriophage T7 promoters and the F plasmid IncD repeats. The wonderful fidelity of telephone communications and compact disk (CD) music can be traced directly to Shannon's channel capacity theorem. When rederived for molecular biology, this theorem explains the surprising precision of many molecular events. Through connections with the Second Law of Thermodynamics and Maxwell's Demon, this approach also has implications for the development of technology at the molecular level. Discussions of these topics are held on the internet news group bionet.info-theo. (author). (Abstract only)
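Shannon's measure of sequence conservation can be sketched in a few lines. For DNA, the information at an aligned position is R_i = 2 - H_i bits, 2 bits being the maximum for four equiprobable bases; the binding sites below are invented for illustration, and the small-sample correction used in practice is omitted.

```python
import numpy as np
from collections import Counter

def information_content(sites):
    """Per-position information (bits) of aligned DNA binding sites,
    R_i = 2 - H_i, without the small-sample correction."""
    info = []
    for i in range(len(sites[0])):
        counts = Counter(s[i] for s in sites)
        p = np.array([c / len(sites) for c in counts.values()])
        h = -np.sum(p * np.log2(p))     # Shannon entropy at position i
        info.append(2.0 - h)
    return info

# Hypothetical aligned sites (TATAAT-like, with two variant positions).
sites = ["TATAAT", "TATAAT", "TACAAT", "TATACT"]
r = information_content(sites)
print([round(x, 2) for x in r])
```

Fully conserved positions score the maximum 2 bits; positions with variation score less, and the sum over positions is the total information of the site model.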

2. An Agent-Based Framework for E-Commerce Information Retrieval Management Using Genetic Algorithms

Directory of Open Access Journals (Sweden)

Floarea NASTASE

2009-01-01

Full Text Available The paper addresses the issue of improving retrieval performance management for retrieval from document collections that exist on the Internet. It also comes with a solution that uses the benefits of the agent technology and genetic algorithms in the process of the information retrieving management. The most important paradigms of information retrieval are mentioned having the goal to make more evident the advantages of using the genetic algorithms based one. Within the paper, also a genetic algorithm that can be use for the proposed solution is detailed and a comparative description between the dynamic and static proposed solution is made. In the end, new future directions are shown based on elements presented in this paper. The future results look very encouraging.

3. Parallel algorithm of real-time infrared image restoration based on total variation theory

Science.gov (United States)

Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

2015-10-01

Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. It converts the restoration process to an optimization problem of a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration technology with the TV-L1 model fully exploits the remote sensing data obtained and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm lends itself to parallelization. Therefore a parallel implementation of the TV-L1 filter based on multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. Quantitative analysis measuring the restored image quality relative to the input image is presented. Experiment results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can achieve the requirement of real-time image processing.
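As an illustration of the variational idea only (not the paper's TV-L1 model or its parallel implementation), a minimal smoothed-TV gradient descent with an L2 fidelity term can be sketched as follows; all parameter values are arbitrary.

```python
import numpy as np

def tv_denoise(f, lam=0.1, tau=0.2, eps=1e-3, n_iter=200):
    """Gradient descent on  min_u  lam*TV_eps(u) + 0.5*||u - f||^2,
    with a smoothed TV term; periodic boundaries via np.roll."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)   # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (lam * div - (u - f))      # descend the energy
    return u

# Synthetic test: a bright square in noise, as a stand-in for an IR scene.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The TV term suppresses noise in flat regions while penalizing edge gradients only linearly, which is the over-smoothing fix the abstract refers to.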

4. Why hydrological predictions should be evaluated using information theory

Directory of Open Access Journals (Sweden)

S. V. Weijs

2010-12-01

Full Text Available Probabilistic predictions are becoming increasingly popular in hydrology. Equally important are methods to test such predictions, given the topical debate on uncertainty analysis in hydrology. Also in the special case of hydrological forecasting, there is still discussion about which scores to use for their evaluation. In this paper, we propose to use information theory as the central framework to evaluate predictions. From this perspective, we hope to shed some light on what verification scores measure and should measure. We start from the ''divergence score'', a relative entropy measure that was recently found to be an appropriate measure for forecast quality. An interpretation of a decomposition of this measure provides insight into additive relations between climatological uncertainty, correct information, wrong information and remaining uncertainty. When the score is applied to deterministic forecasts, it follows that these increase uncertainty to infinity. In practice, however, deterministic forecasts tend to be judged far more mildly and are widely used. We resolve this paradoxical result by proposing that deterministic forecasts either are implicitly probabilistic or are implicitly evaluated with an underlying decision problem or utility in mind. We further propose that calibration of models representing a hydrological system should be based on information-theoretical scores, because this allows extracting all information from the observations and avoids learning from information that is not there. Calibration based on maximizing utility for society trains an implicit decision model rather than the forecasting system itself. This inevitably results in a loss or distortion of information in the data and more risk of overfitting, possibly leading to less valuable and informative forecasts. We also show this in an example. The final conclusion is that models should preferably be explicitly probabilistic and calibrated to maximize the
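The paradox discussed above is easy to reproduce numerically. A minimal sketch of the divergence score for binary events (the forecasts and outcomes below are invented for illustration): a deterministic forecast that is wrong even once drives the score to infinity, while a hedged probabilistic forecast stays finite.

```python
import numpy as np

def divergence_score(forecasts, outcomes):
    """Mean KL divergence from the observation (a degenerate, one-hot
    distribution) to the forecast; for binary events this reduces to the
    mean of -log2 of the probability assigned to what actually happened."""
    f = np.asarray(forecasts, float)
    o = np.asarray(outcomes, int)
    p_obs = np.where(o == 1, f, 1.0 - f)   # probability given to the outcome
    with np.errstate(divide="ignore"):
        return float(np.mean(-np.log2(p_obs)))

outcomes = [1, 0, 1, 1]
prob_fc  = [0.8, 0.3, 0.9, 0.6]   # probabilistic forecast
det_fc   = [1.0, 0.0, 1.0, 0.0]   # deterministic forecast, wrong on the last event

print(divergence_score(prob_fc, outcomes))   # finite
print(divergence_score(det_fc, outcomes))    # inf
```

The infinite penalty is exactly the "uncertainty to infinity" result for deterministic forecasts that the abstract then resolves.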

5. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

Science.gov (United States)

Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

2016-01-01

This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
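The discrete-time half of such a mixed model can be sketched with a standard Laplacian consensus iteration on a fixed undirected topology. The 4-agent path graph, step size and initial states below are chosen for illustration; the paper's time-scale treatment additionally interleaves continuous-time intervals.

```python
import numpy as np

# Fixed communication topology: a path graph of 4 agents.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
eps = 0.3                            # step size; stable for eps < 1/deg_max = 0.5 here

x = np.array([1.0, 3.0, -2.0, 6.0])  # initial agent states
avg = x.mean()                       # average is invariant for undirected graphs
for _ in range(200):
    x = x - eps * L @ x              # x(k+1) = (I - eps*L) x(k)

print(np.allclose(x, avg))
```

All agents converge to the initial average, the consensus value preserved by symmetric interaction rules.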

6. Evaluation of the efficiency of computer-aided spectra search systems based on information theory

International Nuclear Information System (INIS)

Schaarschmidt, K.

1979-01-01

Application of information theory allows objective evaluation of the efficiency of computer-aided spectra search systems. For this purpose, a significant number of search processes must be analyzed. The amount of information gained by computer application is considered as the difference between the entropy of the data bank and a conditional entropy depending on the proportion of unsuccessful search processes and ballast. The influence of the following factors can be estimated: volume, structure, and quality of the spectra collection stored, efficiency of the encoding instruction and the comparing algorithm, and subjective errors involved in the encoding of spectra. The relations derived are applied to two published storage and retrieval systems for infrared spectra. (Auth.)
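The entropy bookkeeping described above can be made concrete with invented numbers: the information gained by a search is the entropy of the data bank minus the conditional entropy that remains because of unsuccessful searches and ballast.

```python
import numpy as np

# Hypothetical figures: a data bank of 4096 equally likely reference spectra.
H_bank = np.log2(4096)                    # 12 bits of initial uncertainty

# Suppose 90% of queries return a unique hit (no residual uncertainty) and
# 10% return a "ballast" list of 16 candidate spectra.
H_cond = 0.9 * 0.0 + 0.1 * np.log2(16)    # remaining conditional entropy
info_gained = H_bank - H_cond
print(info_gained)                        # 12 - 0.4 = 11.6 bits per query
```

Bigger ballast lists or more failed searches raise the conditional entropy and lower the information gained, which is how the cited factors enter the evaluation.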

7. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

Science.gov (United States)

Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

2017-12-01

Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

8. Optimizing Sparse Representations of Kinetic Distributions via Information Theory

Science.gov (United States)

2017-07-31

Robert Martin and Daniel Eckhardt, Air Force Research Laboratory (AFMC), AFRL/RQRS, Edwards AFB, CA 93524-7013.

9. Properties of some nonlinear Schroedinger equations motivated through information theory

International Nuclear Information System (INIS)

Yuan, Liew Ding; Parwani, Rajesh R

2009-01-01

We update our understanding of nonlinear Schroedinger equations motivated through information theory. In particular we show that a q-deformation of the basic nonlinear equation leads to a perturbative increase in the energy of a system, thus favouring the simplest q = 1 case. Furthermore the energy minimisation criterion is shown to be equivalent, at leading order, to an uncertainty maximisation argument. The special value η = 1/4 for the interpolation parameter, where leading order energy shifts vanish, implies the preservation of existing supersymmetry in nonlinearised supersymmetric quantum mechanics. Physically, η might be encoding relativistic effects.

10. Surrogate Marker Evaluation from an Information Theory Perspective

OpenAIRE

2006-01-01

The last 20 years have seen lots of work in the area of surrogate marker validation, partly devoted to frame the evaluation in a multitrial framework, leading to definitions in terms of the quality of trial- and individual-level association between a potential surrogate and a true endpoint (Buyse et al., 2000, Biostatistics 1, 49–67). A drawback is that different settings have led to different measures at the individual level. Here, we use information theory to create a unified framework, lea...

11. Towards integrating control and information theories from information-theoretic measures to control performance limitations

CERN Document Server

Fang, Song; Ishii, Hideaki

2017-01-01

This book investigates the performance limitation issues in networked feedback systems. The fact that networked feedback systems consist of control and communication devices and systems calls for the integration of control theory and information theory. The primary contributions of this book lie in two aspects: the newly-proposed information-theoretic measures and the newly-discovered control performance limitations. We first propose a number of information notions to facilitate the analysis. Using those notions, classes of performance limitations of networked feedback systems, as well as state estimation systems, are then investigated. In general, the book presents a unique, cohesive treatment of performance limitation issues of networked feedback systems via an information-theoretic approach. This book is believed to be the first to treat the aforementioned subjects systematically and in a unified manner, offering a unique perspective differing from existing books.

12. An introductory review of information theory in the context of computational neuroscience.

Science.gov (United States)

McDonnell, Mark D; Ikeda, Shiro; Manton, Jonathan H

2011-07-01

This article introduces several fundamental concepts in information theory from the perspective of their origins in engineering. Understanding such concepts is important in neuroscience for two reasons. First, simply applying formulae from information theory without understanding the assumptions behind their definitions can lead to erroneous results and conclusions. Furthermore, this century will see a convergence of information theory and neuroscience; information theory will expand its foundations to incorporate biological processes more comprehensively, thereby helping to reveal how neuronal networks achieve their remarkable information processing abilities.

13. Integrating soil information into canopy sensor algorithms for improved corn nitrogen rate recommendation

Science.gov (United States)

Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...

14. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

Science.gov (United States)

de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

2013-06-01

Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children presenting in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these, for presence or absence and for terminology, with the DUTY CRF. Despite much agreement between the clinicians' accounts and the DUTY CRF, we identified a small number of potentially important symptoms and signs not included in the CRF, and some included items that could have been reworded to improve understanding and the final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

15. Medical image registration by combining global and local information: a chain-type diffeomorphic demons algorithm

International Nuclear Information System (INIS)

Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong

2013-01-01

The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm were derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, the methods using the difference of image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so the difference in a gradient magnitude image can be regarded as more reliable and robust against these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a straightforward choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm by combining the differences in both image intensity and gradient magnitude for medical image registration. Previous work had shown that the classic demons algorithm can be considered as an approximation of a second order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces which were derived from the gradients of the image and gradient magnitude image. We show that, in controlled experiments, this advantage is confirmed and yields fast convergence. (paper)
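For reference, the intensity-driven force of the classic (Thirion) demons algorithm can be sketched as below; the paper's chain-type variant adds an analogous term computed on the gradient-magnitude image. This is a single force-field evaluation only, not the full diffeomorphic registration loop, and the test images are synthetic.

```python
import numpy as np

def demons_force(fixed, moving):
    """Classic demons force u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2),
    evaluated pointwise in 2D; zero where the denominator vanishes."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)            # gradients of the fixed image
    denom = gx ** 2 + gy ** 2 + diff ** 2
    ux = np.where(denom > 0, diff * gx / denom, 0.0)
    uy = np.where(denom > 0, diff * gy / denom, 0.0)
    return ux, uy

# Synthetic pair: a square and a one-pixel-shifted copy of it.
fixed = np.zeros((16, 16)); fixed[4:12, 4:12] = 1.0
moving = np.roll(fixed, 1, axis=1)
ux, uy = demons_force(fixed, moving)
print(np.all(np.isfinite(ux)) and np.all(np.isfinite(uy)))
```

The `(m - f)^2` term in the denominator is what keeps the force bounded in low-gradient regions, the numerical weakness the gradient-magnitude channel is meant to further stabilize.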

16. Information flow, causality, and the classical theory of tachyons

International Nuclear Information System (INIS)

Basano, L.

1977-01-01

Causal paradoxes arising in the tachyon theory have been systematically solved by using the reinterpretation principle, as a consequence of which cause and effect no longer retain an absolute meaning. However, even in the tachyon theory, a cause is always seen to chronologically precede its effect, but this is obtained at the price of allowing cause and effect to be interchanged when required. A recent result has shown that this interchangeability of cause and effect must not be unlimited if heavy paradoxes are to be avoided. This partial recovery of the classical concept of causality has been expressed by the conjecture that transcendent tachyons cannot be absorbed by a tachyon detector. In this paper the directional properties of the flow of information between two observers in relative motion and its consequences on the logical self-consistency of the theory of superluminal particles are analyzed. It is shown that the above conjecture does not provide a satisfactory solution to the problem because it implies that tachyons of any speed cannot be intercepted by the same detector. (author)

17. Learning-based traffic signal control algorithms with neighborhood information sharing: An application for sustainable mobility

Energy Technology Data Exchange (ETDEWEB)

Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhu, Feng [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering; Ukkusuri, Satish V. [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering

2017-10-04

Here, this research applies the R-Markov Average Reward Technique based reinforcement learning (RL) algorithm, namely RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variant RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from test networks show significant reduction in emissions (CO, CO2, NOx, VOC, PM10) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
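Of the compared methods, tabular Q-learning is the simplest to sketch. The toy signal-control MDP below is invented for illustration (two queued approaches, two phases, reward for serving the queue); RMART differs by using an average-reward rather than a discounted target.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MDP: state = which approach has a queue, action = which phase
# gets green; reward 1 when the green phase serves the queued approach.
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

s = int(rng.integers(n_states))
for _ in range(5000):
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
    r = 1.0 if a == s else 0.0           # serving the queued approach pays off
    s_next = int(rng.integers(n_states)) # next queue arrives at random
    # Q-learning update: off-policy bootstrap on the greedy next action
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

print(np.argmax(Q[0]) == 0 and np.argmax(Q[1]) == 1)
```

After training, the greedy policy assigns green to whichever approach is queued, the behavior an adaptive controller is meant to learn.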

18. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

Institute of Scientific and Technical Information of China (English)

LÜ Wei-cai; XU Shao-quan

2004-01-01

When the similar single-difference methodology (SSDM) is used to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. To overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
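A minimal one-dimensional sketch of Kalman filtering applied to a noisy deformation series follows; the random-walk state model and all noise variances are assumed for illustration, not taken from the paper.

```python
import numpy as np

def kalman_filter(series, q=1e-3, r=0.25):
    """Scalar Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance (assumed)."""
    x, p = series[0], 1.0
    out = []
    for z in series:
        p = p + q                 # predict: state uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Synthetic deformation series: a slow, steady drift plus measurement noise.
rng = np.random.default_rng(0)
true_def = np.linspace(0.0, 5.0, 200)
measured = true_def + 0.5 * rng.standard_normal(200)
filtered = kalman_filter(measured)
print(np.std(filtered - true_def) < np.std(measured - true_def))
```

The filtered series tracks the drift with much less scatter, which is the stability improvement the abstract reports.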

19. Information Theory for Gabor Feature Selection for Face Recognition

Directory of Open Access Journals (Sweden)

Shen Linlin

2006-01-01

Full Text Available A discriminative and robust feature—kernel enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unified different Gabor filter definitions and proposed a training sample generation algorithm to reduce the effects caused by unbalanced number of samples available in different classes.
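The selection criterion, mutual information between a feature and the class labels, can be sketched with a crude histogram estimator; the synthetic features below stand in for Gabor responses, and the paper's kernel enhancement step is not reproduced.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (bits) between a continuous feature x and discrete labels y,
    estimated from a joint histogram; crude, but shows the selection rule."""
    cxy, _, _ = np.histogram2d(x, y, bins=(bins, len(set(y))))
    pxy = cxy / cxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
informative = labels + 0.3 * rng.standard_normal(500)   # tracks the class
noise = rng.standard_normal(500)                        # unrelated feature

scores = {"informative": mutual_information(informative, labels),
          "noise": mutual_information(noise, labels)}
print(scores["informative"] > scores["noise"])
```

Ranking candidate features by this score and keeping the top, nonredundant ones is the selection scheme the abstract describes.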

20. Information Theory for Gabor Feature Selection for Face Recognition

Science.gov (United States)

Shen, Linlin; Bai, Li

2006-12-01

A discriminative and robust feature—kernel enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unified different Gabor filter definitions and proposed a training sample generation algorithm to reduce the effects caused by unbalanced number of samples available in different classes.

1. FAST-PT II: an algorithm to calculate convolution integrals of general tensor quantities in cosmological perturbation theory

Energy Technology Data Exchange (ETDEWEB)

Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.; Hirata, Christopher M., E-mail: fang.307@osu.edu, E-mail: blazek@berkeley.edu, E-mail: mcewen.24@osu.edu, E-mail: hirata.10@osu.edu [Center for Cosmology and AstroParticle Physics, Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus OH 43210 (United States)

2017-02-01

Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

2. Edge Detection Algorithm Based on Fuzzy Logic Theory for a Local Vision System of Robocup Humanoid League

Directory of Open Access Journals (Sweden)

Andrea K. Perez-Hernandez

2013-06-01

Full Text Available This paper presents the development of an algorithm for edge extraction based on fuzzy logic theory. The method allows landmarks on the game field of the RoboCup Humanoid League to be recognized. The proposed algorithm describes the creation of a fuzzy inference system that evaluates the relationship between image pixels, finding variations in the grey levels of neighboring pixels. Subsequently, the OTSU method is applied to binarize the image obtained from the fuzzy process and so generate an image containing only the extracted edges; the algorithm is validated with Humanoid League images. Analysis of the results shows good performance of the algorithm: the proposal takes only 35% more processing time than traditional methods, while the extracted edges are 52% less susceptible to noise.
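The OTSU binarization step applied after the fuzzy stage is standard and easy to sketch: pick the threshold that maximizes the between-class variance over histogram cuts. The bimodal data below are synthetic stand-ins for a fuzzy edge map.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Otsu's method: return the cut maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 weight at each cut
    mu = np.cumsum(p * centers)             # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

# Synthetic bimodal "image": dark background and bright edge responses.
rng = np.random.default_rng(0)
dark = rng.normal(0.2, 0.05, 1000)
bright = rng.normal(0.8, 0.05, 1000)
t = otsu_threshold(np.concatenate([dark, bright]))
print(0.3 < t < 0.7)
```

The returned threshold falls between the two modes, so thresholding at `t` separates edge pixels from background.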

3. Designing mixed metal halide ammines for ammonia storage using density functional theory and genetic algorithms.

Science.gov (United States)

Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J; Vegge, Tejs

2014-09-28

Metal halide ammines have great potential as a future, high-density energy carrier in vehicles. So far known materials, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive, fuel cell applications, because the release of ammonia is a multi-step reaction, requiring too much heat to be supplied, making the total efficiency lower. Here, we apply density functional theory (DFT) calculations to predict new mixed metal halide ammines with improved storage capacities and the ability to release the stored ammonia in one step, at temperatures suitable for system integration with polymer electrolyte membrane fuel cells (PEMFC). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I) - almost 27,000 combinations, and have identified novel mixtures, with significantly improved storage capacities. The size of the search space and the chosen fitness function make it possible to verify that the found candidates are the best possible candidates in the search space, proving that the GA implementation is ideal for this kind of computational materials design, requiring calculations on less than two percent of the candidates to identify the global optimum.
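The GA machinery itself is generic; a minimal sketch with truncation selection, one-point crossover, single-gene mutation and elitism is shown below. The fitness function is a toy stand-in for the DFT-computed stability score, and all choices (population size, gene alphabet, hidden optimum) are illustrative.

```python
import random

random.seed(42)

# Toy stand-in for the DFT fitness: score a "composition" (a tuple of discrete
# choices, e.g. metal/halide indices) by distance to a hidden optimum.
TARGET = (3, 1, 4, 1, 5)
GENES = list(range(8))

def fitness(ind):
    return -sum(abs(a - b) for a, b in zip(ind, TARGET))

def mutate(ind):
    i = random.randrange(len(ind))
    return ind[:i] + (random.choice(GENES),) + ind[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [tuple(random.choice(GENES) for _ in range(5)) for _ in range(30)]
best_initial = max(pop, key=fitness)

for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                      # truncation selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(20)]
    pop = survivors + children                # elitism keeps the best so far

best = max(pop, key=fitness)
print(fitness(best_initial), fitness(best))
```

Because survivors are carried over unchanged, the best fitness never regresses; in the real search the expensive DFT evaluation replaces `fitness`, which is why evaluating under two percent of the 27,000 candidates can suffice.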

4. Prolegomena to a theory of nuclear information exchange

International Nuclear Information System (INIS)

Van Nuffelen, Dominique

1997-01-01

From the researcher's point of view, communication with the agricultural populations in case of radiological emergency cannot be anything other than the application of a theory of nuclear information exchange among social groups. Consequently, it is essential to work out such a theory, the prolegomena of which are exposed in this paper. It describes an experiment conducted at the 'Service de protection contre les radiations ionisantes' - Belgium (SPRI), and proposes an investigation within the scientific knowledge in this matter. The available empirical and theoretical data allow formulating pragmatic recommendations, the principal one being the necessity of creating, in a normal radiological situation, a number of message scenarios adapted to the agricultural populations. The author points out that, in order to be perfectly adapted, these scenarios must be negotiated between the emitter and the receiver. If this condition is satisfied, the information in case of nuclear emergency will really be an exchange of knowledge between experts and the agricultural population, i.e. a 'communication'.

5. Applied information science, engineering and technology selected topics from the field of production information engineering and IT for manufacturing : theory and practice

CERN Document Server

Tóth, Tibor

2014-01-01

The objective of the book is to give a selection from the papers, which summarize several important results obtained within the framework of the József Hatvany Doctoral School operating at the University of Miskolc, Hungary. In accordance with the three main research areas of the Doctoral School established for Information Science, Engineering and Technology, the papers can be classified into three groups. They are as follows: (1) Applied Computational Science; (2) Production Information Engineering (IT for Manufacturing included); (3) Material Stream Systems and IT for Logistics. As regards the first area, some papers deal with special issues of algorithms theory and its applications, with computing algorithms for engineering tasks, as well as certain issues of data base systems and knowledge intensive systems. Related to the second research area, the focus is on Production Information Engineering with special regard to discrete production processes. In the second research area the papers show some new inte...

6. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

CERN Document Server

Uhr, Leonard

1984-01-01

Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

7. Understanding family health information seeking: a test of the theory of motivated information management.

Science.gov (United States)

Hovick, Shelly R

2014-01-01

Although a family health history can be used to assess disease risk and increase health prevention behaviors, research suggests that few people have collected family health information. Guided by the Theory of Motivated Information Management, this study seeks to understand the barriers to and facilitators of interpersonal information seeking about family health history. Individuals who were engaged to be married (N = 306) were surveyed online and in person to understand how factors such as uncertainty, expectations for an information search, efficacy, and anxiety influence decisions and strategies for obtaining family health histories. The results supported the Theory of Motivated Information Management by demonstrating that individuals who experienced uncertainty discrepancies regarding family health history had greater intention to seek information from family members when anxiety was low, outcome expectancy was high, and communication efficacy was positive. Although raising uncertainty about family health history may be an effective tool for health communicators to increase communication among family members, low-anxiety situations may be optimal for information seeking. Health communication messages must also build confidence in people's ability to communicate with family to obtain the needed health information.

8. Efficiency and credit ratings: a permutation-information-theory analysis

International Nuclear Information System (INIS)

Bariviera, Aurelio Fernandez; Martinez, Lisana B; Zunino, Luciano; Belén Guercio, M; Rosso, Osvaldo A

2013-01-01

The role of credit rating agencies has been under severe scrutiny after the subprime crisis. In this paper we explore the relationship between credit ratings and informational efficiency of a sample of thirty-nine corporate bonds of US oil and energy companies from April 2008 to November 2012. For this purpose we use a powerful statistical tool, relatively new in the financial literature: the complexity–entropy causality plane. This representation space allows us to graphically classify the different bonds according to their degree of informational efficiency. We find that this classification agrees with the credit ratings assigned by Moody’s. In particular, we detect the formation of two clusters, which correspond to the global categories of investment and speculative grades. Regarding the latter cluster, two subgroups reflect distinct levels of efficiency. Additionally, we also find an intriguing absence of correlation between informational efficiency and firm characteristics. This allows us to conclude that the proposed permutation-information-theory approach provides an alternative practical way to justify bond classification. (paper)
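The complexity–entropy causality plane is built on the Bandt–Pompe distribution of ordinal patterns. A minimal sketch of its entropy coordinate (normalized permutation entropy) for a price or return series, not the authors' code, might look like:

```python
import math
from collections import Counter
from itertools import permutations

def ordinal_distribution(series, d=3):
    """Bandt-Pompe distribution of ordinal patterns of embedding dimension d."""
    patterns = Counter()
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        # The pattern is the order of indices when the window is sorted
        patterns[tuple(sorted(range(d), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    return {p: patterns.get(p, 0) / total for p in permutations(range(d))}

def permutation_entropy(series, d=3):
    """Normalized Shannon entropy of the ordinal distribution, in [0, 1]."""
    probs = ordinal_distribution(series, d).values()
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(math.factorial(d))
```

An informationally efficient (random-walk-like) series yields entropy near 1, while a deterministic series yields a value near 0; the plane pairs this coordinate with a statistical complexity measure.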

9. Hiding data selected topics : Rudolf Ahlswede’s lectures on information theory 3

CERN Document Server

Althöfer, Ingo; Deppe, Christian; Tamm, Ulrich

2016-01-01

Devoted to information security, this volume begins with a short course on cryptography, mainly based on lectures given by Rudolf Ahlswede at the University of Bielefeld in the mid 1990s. It was the second of his cycle of lectures on information theory which opened with an introductory course on basic coding theorems, as covered in Volume 1 of this series. In this third volume, Shannon’s historical work on secrecy systems is detailed, followed by an introduction to an information-theoretic model of wiretap channels, and such important concepts as homophonic coding and authentication. Once the theoretical arguments have been presented, comprehensive technical details of AES are given. Furthermore, a short introduction to the history of public-key cryptology, RSA and El Gamal cryptosystems is provided, followed by a look at the basic theory of elliptic curves, and algorithms for efficient addition in elliptic curves. Lastly, the important topic of “oblivious transfer” is discussed, which is strongly conne...

10. Defining information need in health - assimilating complex theories derived from information science.

Science.gov (United States)

Ormandy, Paula

2011-03-01

Key policy drivers worldwide include optimizing patients' roles in managing their care; focusing services around patients' needs and preferences; and providing information to support patients' contributions and choices. The term information need penetrates many policy documents. Information need is espoused as the foundation from which to develop patient-centred or patient-led services. Yet there is no clear definition as to what the term means or how patients' information needs inform and shape information provision and patient care. The assimilation of complex theories originating from information science has much to offer considerations of patient information need within the context of health care. Health-related research often focuses on the content of information patients prefer, not why they need information. This paper extends and applies knowledge of information behaviour to considerations of information need in health, exposing a working definition for patient information need that reiterates the importance of considering the patient's goals and understanding the patient's context/situation. A patient information need is defined as 'recognition that their knowledge is inadequate to satisfy a goal, within the context/situation that they find themselves at a specific point in the time'. This typifies the key concepts of national/international health policy, the centrality and importance of the patient. The proposed definition of patient information need provides a conceptual framework to guide health-care practitioners on what to consider and why when meeting the information needs of patients in practice. This creates a solid foundation from which to inform future research. © 2010 The Author. Health Expectations © 2010 Blackwell Publishing Ltd.

11. Trajectory Shape Analysis and Anomaly Detection Utilizing Information Theory Tools

Directory of Open Access Journals (Sweden)

Yuejun Guo

2017-06-01

Full Text Available In this paper, we propose to improve trajectory shape analysis by explicitly considering the speed attribute of trajectory data, and to successfully achieve anomaly detection. The shape of an object's motion trajectory is modeled using Kernel Density Estimation (KDE), making use of both the angle attribute of the trajectory and the speed of the moving object. An unsupervised clustering algorithm, based on the Information Bottleneck (IB) method, is employed for trajectory learning to obtain an adaptive number of trajectory clusters through maximizing the Mutual Information (MI) between the clustering result and a feature set of the trajectory data. Furthermore, we propose to effectively enhance the performance of IB by taking into account the clustering quality in each iteration of the clustering procedure. The trajectories are determined as either abnormal (infrequently observed) or normal by a measure based on Shannon entropy. Extensive tests on real-world and synthetic data show that the proposed technique behaves very well and outperforms the state-of-the-art methods.
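The final normal/abnormal decision can be illustrated with a small sketch based on the Shannon surprisal of cluster membership; the KDE modeling and IB clustering themselves are omitted here, and the bit threshold is an assumed parameter, not one from the paper:

```python
import math
from collections import Counter

def cluster_surprisal(labels):
    """Shannon surprisal -log2 p(c) of each cluster; rare clusters score high."""
    counts = Counter(labels)
    n = len(labels)
    return {c: -math.log2(k / n) for c, k in counts.items()}

def flag_abnormal(labels, threshold_bits=4.0):
    """Mark trajectories whose cluster is infrequently observed."""
    surprisal = cluster_surprisal(labels)
    return [surprisal[c] >= threshold_bits for c in labels]
```

A trajectory falling in a cluster seen once in 32 observations carries 5 bits of surprisal and is flagged, while members of a dominant cluster carry a fraction of a bit.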

12. BOOK REVIEW: Theory of Neural Information Processing Systems

Science.gov (United States)

Galla, Tobias

2006-04-01

It is difficult not to be amazed by the ability of the human brain to process, to structure and to memorize information. Even by the toughest standards the behaviour of this network of about 10^11 neurons qualifies as complex, and both the scientific community and the public take great interest in the growing field of neuroscience. The scientific endeavour to learn more about the function of the brain as an information processing system is here a truly interdisciplinary one, with important contributions from biology, computer science, physics, engineering and mathematics as the authors quite rightly point out in the introduction of their book. The role of the theoretical disciplines here is to provide mathematical models of information processing systems and the tools to study them. These models and tools are at the centre of the material covered in the book by Coolen, Kühn and Sollich. The book is divided into five parts, providing basic introductory material on neural network models as well as the details of advanced techniques to study them. A mathematical appendix complements the main text. The range of topics is extremely broad, still the presentation is concise and the book well arranged. To stress the breadth of the book let me just mention a few keywords here: the material ranges from the basics of perceptrons and recurrent network architectures to more advanced aspects such as Bayesian learning and support vector machines; Shannon's theory of information and the definition of entropy are discussed, and a chapter on Amari's information geometry is not missing either. Finally the statistical mechanics chapters cover Gardner theory and the replica analysis of the Hopfield model, not without being preceded by a brief introduction of the basic concepts of equilibrium statistical physics. The book also contains a part on effective theories of the macroscopic dynamics of neural networks. Many dynamical aspects of neural networks are usually hard to find in the

13. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

Science.gov (United States)

Aryanti, Aryanti; Mekongga, Ikhthison

2018-02-01

Data security and confidentiality is one of the most important aspects of information systems at the moment. One way to secure data is to use cryptography. In this study a data security system was developed by implementing the Rivest, Shamir, Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by combining the RSA and Vigenere Cipher algorithms and applying them to document files in Word, Excel, and PDF formats. The application covers the encryption and decryption of data and was created using PHP and MySQL. Data encryption is done on the transmitting side through RSA cryptographic calculations using the public key, followed by the Vigenere Cipher algorithm, which also uses the public key. On the receiving side, decryption proceeds with the Vigenere Cipher algorithm, still using the public key, and then the RSA algorithm using the private key. Test results show that the system can encrypt files, decrypt files and transmit files. Tests performed on the encryption and decryption of files of different sizes show that file size affects both processes: the larger the file, the longer encryption and decryption take.
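The hybrid flow described in the abstract can be sketched as follows. This is a toy illustration only: per-character textbook RSA with tiny primes, and a Vigenere-style shift applied to the resulting integer blocks, are simplifying assumptions for clarity rather than the paper's exact implementation, and nothing here is secure enough for real use:

```python
# Toy sketch: textbook RSA with tiny primes and no padding -- never use in practice.
def vigenere_blocks(blocks, key, modulus, decrypt=False):
    """Shift each RSA block by a key character (a Vigenere-style pass)."""
    sign = -1 if decrypt else 1
    return [(b + sign * ord(key[i % len(key)])) % modulus
            for i, b in enumerate(blocks)]

def rsa_apply(values, exponent, modulus):
    """Modular exponentiation of each block (encryption or decryption)."""
    return [pow(v, exponent, modulus) for v in values]

# Toy RSA key pair (real deployments use ~2048-bit moduli)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                                   # public exponent
d = pow(e, -1, phi)                      # private exponent (Python 3.8+)

plain = "SECRETDATA"
# Sender: RSA with the public key, then the Vigenere pass
sent = vigenere_blocks(rsa_apply([ord(c) for c in plain], e, n), "KEY", n)
# Receiver: undo the Vigenere pass, then RSA with the private key
received = "".join(
    chr(v) for v in rsa_apply(vigenere_blocks(sent, "KEY", n, decrypt=True), d, n))
```

The receiver inverts the two passes in reverse order, recovering the plaintext exactly.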

14. An Experimental Evaluation of the DQ-DHT Algorithm in a Grid Information Service

Science.gov (United States)

DQ-DHT is a resource discovery algorithm that combines the Dynamic Querying (DQ) technique used in unstructured peer-to-peer networks with an algorithm for efficient broadcast over a Distributed Hash Table (DHT). Similarly to DQ, DQ-DHT dynamically controls the query propagation on the basis of the desired number of results and the popularity of the resource to be located. Differently from DQ, DQ-DHT exploits the structural properties of a DHT to avoid message duplications, thus reducing the amount of network traffic generated by each query. The goal of this paper is to evaluate experimentally the amount of traffic generated by DQ-DHT compared to the DQ algorithm in a Grid infrastructure. A prototype of a Grid information service, which can use both DQ and DQ-DHT as resource discovery algorithm, has been implemented and deployed on the Grid'5000 infrastructure for evaluation. The experimental results presented in this paper show that DQ-DHT significantly reduces the amount of network traffic generated during the discovery process compared to the original DQ algorithm.

15. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

Directory of Open Access Journals (Sweden)

Aryanti Aryanti

2018-01-01

Full Text Available Data security and confidentiality is one of the most important aspects of information systems at the moment. One way to secure data is to use cryptography. In this study a data security system was developed by implementing the Rivest, Shamir, Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by combining the RSA and Vigenere Cipher algorithms and applying them to document files in Word, Excel, and PDF formats. The application covers the encryption and decryption of data and was created using PHP and MySQL. Data encryption is done on the transmitting side through RSA cryptographic calculations using the public key, followed by the Vigenere Cipher algorithm, which also uses the public key. On the receiving side, decryption proceeds with the Vigenere Cipher algorithm, still using the public key, and then the RSA algorithm using the private key. Test results show that the system can encrypt files, decrypt files and transmit files. Tests performed on the encryption and decryption of files of different sizes show that file size affects both processes: the larger the file, the longer encryption and decryption take.

16. Entropy, Information Theory, Information Geometry and Bayesian Inference in Data, Signal and Image Processing and Inverse Problems

Directory of Open Access Journals (Sweden)

2015-06-01

Full Text Available The main content of this review article is first to review the main inference tools using Bayes rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper is focused on the ways these tools have been used in data, signal and image processing and in the inverse problems which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent components analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, the Bayesian inference for general inverse problems. Some original materials concerning the approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) methods are also presented. VBA is used for proposing an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP), as well as the different expectation-maximization (EM) algorithms, as particular cases.
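Two of the basic tools the review surveys, Shannon entropy and the KL divergence, can be computed for discrete distributions in a few lines; a minimal sketch:

```python
import math

def entropy(p):
    """Shannon entropy H(p) in nats for a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    if any(qi == 0 and pi > 0 for pi, qi in zip(p, q)):
        return math.inf                  # p is not absolutely continuous w.r.t. q
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Under no constraints the maximum entropy principle selects the uniform distribution u over n outcomes, and D(p || u) = log n - H(p) measures exactly how far p falls short of that maximum.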

17. Evaluation of EMG processing techniques using Information Theory.

Science.gov (United States)

Farfán, Fernando D; Politti, Julio C; Felice, Carmelo J

2010-11-12

Electromyographic signals can be used in the biomedical engineering and/or rehabilitation fields as potential sources of control for prosthetics and orthotics. In such applications, digital processing techniques are necessary to follow efficiently and effectively the changes in the physiological characteristics produced by a muscular contraction. In this paper, two methods based on information theory are proposed to evaluate the processing techniques. These methods determine the amount of information that a processing technique is able to extract from EMG signals. The processing techniques evaluated with these methods were: absolute mean value (AMV), RMS values, variance values (VAR) and difference absolute mean value (DAMV). EMG signals from the middle deltoid during abduction and adduction movements of the arm in the scapular plane were recorded, for static and dynamic contractions. The optimal window length (segmentation), abduction and adduction movements and inter-electrode distance were also analyzed. Using the optimal segmentation (200 ms and 300 ms in static and dynamic contractions, respectively) the best processing techniques were: RMS, AMV and VAR in static contractions, and only the RMS in dynamic contractions. Using the RMS of the EMG signal, variations in the amount of information between the abduction and adduction movements were observed. Although the evaluation methods proposed here were applied to standard processing techniques, these methods can also be considered as alternative tools to evaluate new processing techniques in different areas of electrophysiology.
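The four evaluated features can be sketched as windowed computations over a raw EMG array; the sampling rate and window length below are assumed parameters chosen to match the abstract's 200 ms segmentation, not values fixed by the paper:

```python
import numpy as np

def emg_features(signal, fs=1000, window_ms=200):
    """Windowed EMG features from the abstract: AMV, RMS, VAR and DAMV."""
    size = int(fs * window_ms / 1000)     # samples per non-overlapping window
    feats = []
    for start in range(0, len(signal) - size + 1, size):
        w = signal[start:start + size]
        feats.append({
            "AMV": np.mean(np.abs(w)),            # absolute mean value
            "RMS": np.sqrt(np.mean(w ** 2)),      # root mean square
            "VAR": np.var(w, ddof=1),             # sample variance
            "DAMV": np.mean(np.abs(np.diff(w))),  # difference absolute mean value
        })
    return feats
```

Each window yields one feature vector, so a contraction recording becomes a sequence of vectors whose information content can then be compared across techniques.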

18. Evaluation of EMG processing techniques using Information Theory

Directory of Open Access Journals (Sweden)

Felice Carmelo J

2010-11-01

Full Text Available Abstract Background Electromyographic signals can be used in the biomedical engineering and/or rehabilitation fields as potential sources of control for prosthetics and orthotics. In such applications, digital processing techniques are necessary to follow efficiently and effectively the changes in the physiological characteristics produced by a muscular contraction. In this paper, two methods based on information theory are proposed to evaluate the processing techniques. Methods These methods determine the amount of information that a processing technique is able to extract from EMG signals. The processing techniques evaluated with these methods were: absolute mean value (AMV), RMS values, variance values (VAR) and difference absolute mean value (DAMV). EMG signals from the middle deltoid during abduction and adduction movements of the arm in the scapular plane were recorded, for static and dynamic contractions. The optimal window length (segmentation), abduction and adduction movements and inter-electrode distance were also analyzed. Results Using the optimal segmentation (200 ms and 300 ms in static and dynamic contractions, respectively) the best processing techniques were: RMS, AMV and VAR in static contractions, and only the RMS in dynamic contractions. Using the RMS of the EMG signal, variations in the amount of information between the abduction and adduction movements were observed. Conclusions Although the evaluation methods proposed here were applied to standard processing techniques, these methods can also be considered as alternative tools to evaluate new processing techniques in different areas of electrophysiology.

19. Information carriers and (reading them through) information theory in quantum chemistry.

Science.gov (United States)

Geerlings, Paul; Borgoo, Alex

2011-01-21

This Perspective discusses the reduction of the electronic wave function via the second-order reduced density matrix to the electron density ρ(r), which is the key ingredient in density functional theory (DFT) as a basic carrier of information. Simplifying further, the 1-normalized density function turns out to contain essentially the same information as ρ(r) and is even of preferred use as an information carrier when discussing the periodic properties along Mendeleev's table where essentially the valence electrons are at stake. The Kullback-Leibler information deficiency turns out to be the most interesting choice to obtain information on the differences in ρ(r) or σ(r) between two systems. To put it otherwise: when looking for the construction of a functional F_AB = F[ζ_A(r), ζ_B(r)] for extracting differences in information from an information carrier ζ(r) (i.e. ρ(r), σ(r)) for two systems A and B the Kullback-Leibler information measure ΔS is a particularly adequate choice. Examples are given, varying from atoms, to molecules and molecular interactions. Quantum similarity of atoms indicates that the shape function based KL information deficiency is the most appropriate tool to retrieve periodicity in the Periodic Table. The dissimilarity of enantiomers for which different information measures are presented at global and local (i.e. molecular and atomic) level leads to an extension of Mezey's holographic density theorem and shows numerical evidence that in a chiral molecule the whole molecule is pervaded by chirality. Finally Kullback-Leibler information profiles are discussed for intra- and intermolecular proton transfer reactions and a simple S_N2 reaction indicating that the theoretical information profile can be used as a companion to the energy based Hammond postulate to discuss the early or late transition state character of a reaction. All in all this Perspective's answer is positive to the question of whether an even simpler carrier of

20. Wireless sensor placement for structural monitoring using information-fusing firefly algorithm

Science.gov (United States)

Zhou, Guang-Dong; Yi, Ting-Hua; Xie, Mei-Xi; Li, Hong-Nan

2017-10-01

Wireless sensor networks (WSNs) are promising technology in structural health monitoring (SHM) applications for their low cost and high efficiency. The limited wireless sensors and restricted power resources in WSNs highlight the significance of optimal wireless sensor placement (OWSP) during designing SHM systems to enable the most useful information to be captured and to achieve the longest network lifetime. This paper presents a holistic approach, including an optimization criterion and a solution algorithm, for optimally deploying self-organizing multi-hop WSNs on large-scale structures. The combination of information effectiveness represented by the modal independence and the network performance specified by the network connectivity and network lifetime is first formulated to evaluate the performance of wireless sensor configurations. Then, an information-fusing firefly algorithm (IFFA) is developed to solve the OWSP problem. The step sizes drawn from a Lévy distribution are adopted to drive fireflies toward brighter individuals. Following the movement with Lévy flights, information about the contributions of wireless sensors to the objective function as carried by the fireflies is fused and applied to move inferior wireless sensors to better locations. The reliability of the proposed approach is verified via a numerical example on a long-span suspension bridge. The results demonstrate that the evaluation criterion provides a good performance metric of wireless sensor configurations, and the IFFA outperforms the simple discrete firefly algorithm.
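A minimal continuous firefly algorithm with heavy-tailed (Lévy-style) steps is sketched below on a toy objective. It illustrates only the attraction-plus-Lévy-flight mechanics the abstract builds on, not the paper's discrete, information-fusing IFFA, and every parameter value is an illustrative assumption:

```python
import math
import random

def levy_step(alpha=1.5):
    """Heavy-tailed step length via a simple Mantegna-style ratio draw."""
    u = random.gauss(0, 1)
    v = abs(random.gauss(0, 1)) + 1e-12
    return u / v ** (1 / alpha)

def firefly_minimize(f, dim=2, n=15, iters=100, beta0=1.0, gamma=1.0, scale=0.1):
    """Minimize f: brighter (lower-f) fireflies attract the others."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):   # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction decays
                    xs[i] = [a + beta * (b - a) + scale * levy_step()
                             for a, b in zip(xs[i], xs[j])]
    return min(xs, key=f)
```

The brightest firefly never moves, so the population's best objective value cannot worsen; the Lévy noise keeps the rest exploring rather than collapsing prematurely.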

1. Analyzing complex networks evolution through Information Theory quantifiers

International Nuclear Information System (INIS)

Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martin Gomez

2011-01-01

A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
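The first quantifier, the square root of the Jensen-Shannon divergence, can be computed directly from two discrete distributions (for instance, degree distributions of the network at two points in its evolution); a minimal sketch:

```python
import math

def shannon(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def jensen_shannon_distance(p, q):
    """Square root of the Jensen-Shannon divergence (a true metric), in bits."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    jsd = shannon(m) - (shannon(p) + shannon(q)) / 2
    return math.sqrt(max(jsd, 0.0))       # clamp tiny negative rounding error
```

The distance is symmetric, bounded by 1 bit for disjoint distributions, and zero only when the two distributions coincide, which makes it a convenient dissimilarity measure between network states.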

2. Analyzing complex networks evolution through Information Theory quantifiers

Energy Technology Data Exchange (ETDEWEB)

Carpi, Laura C., E-mail: Laura.Carpi@studentmail.newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Rosso, Osvaldo A., E-mail: rosso@fisica.ufmg.b [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellon II, Ciudad Universitaria, 1428 Ciudad de Buenos Aires (Argentina); Saco, Patricia M., E-mail: Patricia.Saco@newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Hidraulica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avenida Pellegrini 250, Rosario (Argentina); Ravetti, Martin Gomez, E-mail: martin.ravetti@dep.ufmg.b [Departamento de Engenharia de Producao, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, Belo Horizonte (31270-901), MG (Brazil)

2011-01-24

A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.

3. Entropy in quantum information theory - Communication and cryptography

DEFF Research Database (Denmark)

Majenz, Christian

in quantum Shannon theory. While immensely more entanglement-consuming, the variant of port based teleportation is interesting for applications like instantaneous non-local computation and attacks on quantum position-based cryptography. Port based teleportation cannot be implemented perfectly......, for vanishing error. As a byproduct, a new lower bound for the size of the program register for an approximate universal programmable quantum processor is derived. Finally, the mix is completed with a result in quantum cryptography. While quantum key distribution is the most well-known quantum cryptographic...... protocol, there has been increased interest in extending the framework of symmetric key cryptography to quantum messages. We give a new definition for information-theoretic quantum non-malleability, strengthening the previous definition by Ambainis et al. We show that quantum non-malleability implies secrecy...

4. Surrogate marker evaluation from an information theory perspective.

Science.gov (United States)

Alonso, Ariel; Molenberghs, Geert

2007-03-01

The last 20 years have seen a great deal of work in the area of surrogate marker validation, partly devoted to framing the evaluation in a multitrial framework, leading to definitions in terms of the quality of trial- and individual-level association between a potential surrogate and a true endpoint (Buyse et al., 2000, Biostatistics 1, 49-67). A drawback is that different settings have led to different measures at the individual level. Here, we use information theory to create a unified framework, leading to a definition of surrogacy with an intuitive interpretation, offering interpretational advantages, and applicable in a wide range of situations. Our method provides better insight into the chances of finding a good surrogate endpoint in a given situation. We further show that some of the previous proposals follow as special cases of our method. We illustrate our methodology using data from a clinical study in psychiatry.

5. Algorithmic Self

DEFF Research Database (Denmark)

Markham, Annette

This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

6. Finding an information concept suited for a universal theory of information.

Science.gov (United States)

Brier, Søren

2015-12-01

The view argued in this article is that if we want to define a universal concept of information covering subjective experiential and meaningful cognition - as well as intersubjective meaningful communication in nature, technology, society and life worlds - then the main problem is to decide which epistemological, ontological and philosophy of science framework the concept of information should be based on and integrated in. All the ontological attempts to create objective concepts of information result in concepts that cannot encompass the meaning and experience of embodied living and social systems. There is no conclusive evidence that the core of reality across nature, culture, life and mind is purely mathematical, logical or of a computational nature. Therefore the core of the information concept should not be based only on pure logical or mathematical rationality. We need to include interpretation, signification and meaning construction in our transdisciplinary framework for information as a basic aspect of reality alongside the physical, chemical and molecular biological. Dretske defines information as the content of new, true, meaningful, and understandable knowledge. According to this widely held definition, information in a transdisciplinary theory cannot be 'objective', but has to be relativized in relation to the receiver's knowledge, as also proposed by Floridi. It is difficult to produce a quantitative statement independently of a qualitative analysis based on some sort of relation to the human condition as a semiotic animal. I therefore suggest instead building information theories based on semiotics, starting from the basic relations of embodied living systems' meaningful cognition and communication. I agree with Peircean biosemiotics that all information must be part of real relational sign-processes manifesting as tokens. Copyright © 2015. Published by Elsevier Ltd.

7. Finding an Information Concept Suited for a Universal Theory of Information

DEFF Research Database (Denmark)

Brier, Søren

2015-01-01

There is no conclusive evidence that the core of reality across nature, culture, life and mind is purely mathematical, logical or of a computational nature. Therefore the core of the information concept should not be based only on pure logical or mathematical rationality. We need to include interpretation … definition, information in a transdisciplinary theory cannot be ‘objective’, but has to be relativized in relation to the receiver's knowledge, as also proposed by Floridi. It is difficult to produce a quantitative statement independently of a qualitative analysis based on some sort of relation to the human …

8. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

Energy Technology Data Exchange (ETDEWEB)

Sera White

2012-04-01

This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40 m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (approximately 3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
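
The slope correction described above can be sketched as follows. This is an illustrative reconstruction, not code from the thesis: the function names, the vehicle mass, and the per-mile conversion are assumptions.

```python
def road_slope(elev_ahead_m, elev_here_m, distance_m):
    """Instantaneous road slope (grade) from two elevation samples.

    The elevations would be looked up from the digital elevation model at
    the current and a slightly advanced GPS position.
    """
    return (elev_ahead_m - elev_here_m) / distance_m


def grade_energy_wh_per_mile(slope, mass_kg=1700.0):
    """Traction energy per mile attributable to grade alone (E = m*g*h).

    mass_kg is an illustrative PHEV mass; negative slopes yield negative
    values, i.e. energy an algorithm ignoring slope would misattribute.
    """
    g = 9.81                       # m/s^2
    meters_per_mile = 1609.34
    climb_m = slope * meters_per_mile      # net elevation change over a mile
    return mass_kg * g * climb_m / 3600.0  # joules -> watt-hours

print(round(grade_energy_wh_per_mile(-0.02), 1))  # 2% downhill -> -149.1
```

For a sustained downhill grade, the grade term is of the same order as the overestimates (up to 211 Wh/mile) reported in the abstract for elevation-loss trips.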

9. Optimal design of hydrometric monitoring networks with dynamic components based on Information Theory

Science.gov (United States)

Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri

2016-04-01

The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, Citizen Observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, for improving hydraulic and hydrological models. At this point, the following question arises: how can citizens, who are part of a citizen observatory, be optimally guided so that the data they collect and send are useful to improve modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4], and in particular on the concept of Joint Entropy, a measure of the amount of information contained in a set of random variables; in our case, these correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of simultaneously finding the minimum number of informative and non-redundant sensors needed for a given time, so that the best configuration of monitoring sites is found at every particular moment in time. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, UK and Italy and proves to have great potential to complement the existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resour. Res., 46(3), W03528 [2] Alfonso, L., A
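
The Joint Entropy criterion can be illustrated with a toy greedy selection over discretised sensor records. This is a hedged sketch of the idea only; the actual methodology solves a multiobjective problem over moving sensors, which this does not reproduce.

```python
import math
from collections import Counter

def joint_entropy(series_set):
    """Shannon joint entropy (bits) of several equal-length discretised series."""
    joint = Counter(zip(*series_set))
    n = sum(joint.values())
    return -sum((c / n) * math.log2(c / n) for c in joint.values())

def greedy_select(candidates, k):
    """Greedily pick k sensors maximising the joint entropy of their records,
    so redundant sensors (adding no new information) are passed over."""
    chosen = []
    while len(chosen) < k:
        best = max((name for name in candidates if name not in chosen),
                   key=lambda name: joint_entropy(
                       [candidates[c] for c in chosen] + [candidates[name]]))
        chosen.append(best)
    return chosen

sensors = {
    "A": [0, 0, 1, 1, 0, 1, 0, 1],
    "B": [0, 0, 1, 1, 0, 1, 0, 1],   # duplicate of A: redundant
    "C": [0, 1, 0, 1, 1, 0, 0, 1],   # independent of A: informative
}
print(greedy_select(sensors, 2))     # -> ['A', 'C'], skipping redundant B
```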

10. Algorithms, Interfaces, and the Circulation of Information: Interrogating the Epistemological Challenges of Facebook

Directory of Open Access Journals (Sweden)

Jannick Schou

2016-05-01

As social and political life increasingly takes place on social network sites, new epistemological questions have emerged. How can information disseminated through new media be understood and disentangled? How can potential hidden agendas or sources be identified? And what mechanisms govern what and how information is presented to the user? By drawing on existing research on the algorithms and interfaces underlying social network sites, this paper provides a discussion of Facebook and the epistemological challenges, potentials, and questions raised by the platform. The paper specifically discusses the ways in which interfaces shape how information can be accessed and processed by different kinds of users as well as the role of algorithms in pre-selecting what appears as representable information. A key argument of the paper is that Facebook, as a complex socio-technical network of human and non-human actors, has profound epistemological implications for how information can be accessed, understood, and circulated. In this sense, the user’s potential acquisition of information is shaped and conditioned by the technological structure of the platform. Building on these arguments, the paper suggests that new epistemological challenges deserve more scholarly attention, as they hold wide implications for both researchers and users.

11. A demand response modeling for residential consumers in smart grid environment using game theory based energy scheduling algorithm

Directory of Open Access Journals (Sweden)

S. Sofana Reka

2016-06-01

In this paper, a demand response modeling scheme is proposed for residential consumers using a game theory algorithm, the Generalized Tit for Tat (GTFT) Dominant Game based Energy Scheduler. The methodology is established as a work flow domain model between the utility and the user considering the smart grid framework. It exhibits an algorithm which schedules load usage by creating several possible tariffs for consumers such that demand is never raised. This can be done both individually and among multiple users of a community. The uniqueness behind the proposed demand response is that the tariff is calculated for all hours, and the load during the peak hours that can be rescheduled is shifted based on the Peak Average Ratio. To demonstrate the viability of the approach, simulation results for a general case of three domestic consumers are modeled and extended to a comparative performance evaluation with other algorithms, and the inference is analyzed.
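
The Peak Average Ratio (PAR) idea behind the rescheduling can be sketched as a toy load-flattening pass. The GTFT game dynamics and tariff computation are omitted, and all names and figures here are illustrative, not taken from the paper.

```python
def peak_average_ratio(load):
    """PAR of an hourly load profile: peak demand over average demand."""
    return max(load) / (sum(load) / len(load))

def shift_peak_load(load, shiftable):
    """Defer shiftable demand from above-average hours to below-average ones.

    load: hourly demand (kW); shiftable: the portion of each hour's demand
    that the consumer agrees to reschedule.
    """
    avg = sum(load) / len(load)
    load = load[:]
    pool = 0.0
    for h, demand in enumerate(load):          # harvest excess at peak hours
        cut = min(shiftable[h], max(0.0, demand - avg))
        load[h] -= cut
        pool += cut
    for h, demand in enumerate(load):          # refill the valley hours
        add = min(pool, max(0.0, avg - demand))
        load[h] += add
        pool -= add
    return load

before = [1.0, 1.0, 4.0, 1.0, 1.0]            # evening peak at hour 2
after = shift_peak_load(before, [0.0, 0.0, 3.0, 0.0, 0.0])
print(round(peak_average_ratio(before), 2), round(peak_average_ratio(after), 2))  # -> 2.5 1.0
```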

12. Inference with minimal Gibbs free energy in information field theory

International Nuclear Information System (INIS)

Ensslin, Torsten A.; Weig, Cornelius

2010-01-01

Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
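
The minimal Gibbs free energy construction can be stated compactly. Writing the information Hamiltonian as H(d, s) = -log P(d, s) and approximating the posterior by a Gaussian G(s - m, D) with mean m and covariance D, the Gibbs free energy is internal energy minus temperature times the Gaussian entropy (the notation here follows common IFT conventions and is not necessarily the paper's):

```latex
\mathcal{G}(m, D) \;=\; U - T S
\;=\; \big\langle H(d, s) \big\rangle_{\mathcal{G}(s - m,\, D)}
\;-\; \frac{T}{2} \log \det \left( 2 \pi e\, D \right)
```

Minimising this over m and D at T = 1 is equivalent to minimising the Kullback-Leibler divergence of the Gaussian from the exact posterior, which is the sense in which the Gaussian approximation has maximal cross information with it.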

13. Generalised perturbation theory and source of information through chemical measurements

International Nuclear Information System (INIS)

Lelek, V.; Marek, T.

2001-01-01

It is important to perform all analyses and collect all information from the operation of the new facility (which the transmutation demonstration unit will surely be) to verify that the operation corresponds to the forecast, or to correct the equations describing the facility. The behaviour of the molten salt reactor, and in particular its system of measurement, is very different from that of a solid fuel reactor. Key information from the long-time kinetics could be the nearly on-line knowledge of the fuel composition. In this work it is shown how to include this into the control and how to use such data for the correction of neutron cross-sections for the higher actinides or other characteristics. The problem of safety - the change of the boundary problem to the initial problem - is also mentioned. The problem is transformed into generalised perturbation theory, in which the adjoint function is obtained by solving equations whose right-hand side has the form of a source. Such an approach should be a theoretical basis for the calculation of the sensitivity coefficients. (authors)

14. Untangling the drivers of nonlinear systems with information theory

Science.gov (United States)

Wing, S.; Johnson, J.

2017-12-01

Many systems found in nature are nonlinear. The drivers of the system are often nonlinearly correlated with one another, which makes it a challenge to understand the effects of an individual driver. For example, solar wind velocity (Vsw) and density (nsw) are both found to correlate well with radiation belt fluxes and are thought to be drivers of the magnetospheric dynamics; however, the Vsw is anti-correlated with nsw, which can potentially confuse interpretation of these relationships as causal or coincidental. Information theory can untangle the drivers of these systems, describe the underlying dynamics, and offer constraints to modelers and theorists, leading to better understanding of the systems. Two examples are presented. In the first example, the solar wind drivers of geosynchronous electrons with energy range of 1.8-3.5 MeV are investigated using mutual information (MI), conditional mutual information (CMI), and transfer entropy (TE). The information transfer from Vsw to geosynchronous MeV electron flux (Je) peaks with a lag time (t) of 2 days. As previously reported, Je is anticorrelated with nsw with a lag of 1 day. However, this lag time and anticorrelation can be attributed mainly to the Je(t + 2 days) correlation with Vsw(t) and nsw(t + 1 day) anticorrelation with Vsw(t). Analyses of solar wind driving of the magnetosphere need to consider the large lag times, up to 3 days, in the (Vsw, nsw) anticorrelation. Using CMI to remove the effects of Vsw, the response of Je to nsw is 30% smaller and has a lag time < 24 hr, suggesting that the loss mechanism due to nsw or solar wind dynamic pressure has to start operating in < 24 hr. nsw transfers about 36% as much information as Vsw (the primary driver) to Je. Nonstationarity in the system dynamics is investigated using windowed TE. When the data is ordered according to high or low transfer entropy, it is possible to understand details of the triangle distribution that has been identified between Je(t + 2
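
The lag-scanning mutual information analysis can be sketched for discretised series with plug-in histogram estimators. The study's CMI and TE estimators are more elaborate; this only illustrates the MI-versus-lag scan used to locate the 2-day response peak.

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in MI estimate (bits) between two equal-length discretised series."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def lagged_mi(driver, response, lag):
    """MI between driver(t) and response(t + lag); scanning lag and locating
    the peak identifies the response time of the driven quantity."""
    return mutual_information(driver[:-lag], response[lag:])
```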

15. Comparing integral and incidental emotions: Testing insights from emotions as social information theory and attribution theory.

Science.gov (United States)

Hillebrandt, Annika; Barclay, Laurie J

2017-05-01

Studies have indicated that observers can infer information about others' behavioral intentions from others' emotions and use this information in making their own decisions. Integrating emotions as social information (EASI) theory and attribution theory, we argue that the interpersonal effects of emotions are not only influenced by the type of discrete emotion (e.g., anger vs. happiness) but also by the target of the emotion (i.e., how the emotion relates to the situation). We compare the interpersonal effects of emotions that are integral (i.e., related to the situation) versus incidental (i.e., lacking a clear target in the situation) in a negotiation context. Results from 4 studies support our general argument that the target of an opponent's emotion influences the degree to which observers attribute the emotion to their own behavior. These attributions influence observers' inferences regarding the perceived threat of an impasse or cooperativeness of an opponent, which can motivate observers to strategically adjust their behavior. Specifically, emotion target influenced concessions for both anger and happiness (Study 1, N = 254), with perceived threat and cooperativeness mediating the effects of anger and happiness, respectively (Study 2, N = 280). Study 3 (N = 314) demonstrated the mediating role of attributions and moderating role of need for closure. Study 4 (N = 193) outlined how observers' need for cognitive closure influences how they attribute incidental anger. We discuss theoretical implications related to the social influence of emotions as well as practical implications related to the impact of personality on negotiators' biases and behaviors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

16. An algorithm for high order strong coupling expansions: The mass gap in 3d pure Z2 lattice gauge theory

International Nuclear Information System (INIS)

Decker, K.; Hamburg Univ.

1985-12-01

An efficient description of all clusters contributing to the strong coupling expansion of the mass gap in three-dimensional pure Z2 lattice gauge theory is presented. This description is correct to all orders in the strong coupling expansion and is chosen in such a way that it remains valid in four dimensions for gauge group Z2. Relying on this description an algorithm has been constructed which generates and processes all the contributing graphs to the exact strong coupling expansion of the mass gap in the three-dimensional model in a fully automatic fashion. A major component of this algorithm can also be used to generate exact strong coupling expansions for the free energy log Z. The algorithm is correct to any order; thus the order of these expansions is only limited by the available computing power. The presentation of the algorithm is such that it can serve as a guide-line for the construction of a generalized one which would also generate exact strong coupling expansions for the masses of low-lying excited states of four-dimensional pure Yang-Mills theories. (orig.)

17. A DEEP CUT ELLIPSOID ALGORITHM FOR CONVEX-PROGRAMMING - THEORY AND APPLICATIONS

NARCIS (Netherlands)

FRENK, JBG; GROMICHO, J; ZHANG, S

1994-01-01

This paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. In each step the algorithm does not require more computational effort to construct these deep cuts than its corresponding central cut version. Rules that prevent

18. A Deep Cut Ellipsoid Algorithm for convex Programming: theory and Applications

NARCIS (Netherlands)

Frenk, J.B.G.; Gromicho Dos Santos, J.A.; Zhang, S.

1994-01-01

This paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. In each step the algorithm does not require more computational effort to construct these deep cuts than its corresponding central cut version. Rules that prevent

19. A deep cut ellipsoid algorithm for convex programming : Theory and applications

NARCIS (Netherlands)

J.B.G. Frenk (Hans); J.A.S. Gromicho (Joaquim); S. Zhang (Shuzhong)

1994-01-01

textabstractThis paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. In each step the algorithm does not require more computational effort to construct these deep cuts than its corresponding central cut version. Rules

20. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

International Nuclear Information System (INIS)

Vaegler, Sven; Sauer, Otto; Stsepankou, Dzmitry; Hesser, Juergen

2015-01-01

incorporates prior images as well as information about location dependent uncertainties of the prior images into the algorithm. The computer phantom and experimental data studies indicate the potential to lowering the radiation dose to the patient due to imaging while maintaining good image quality.

1. Information loss in effective field theory: Entanglement and thermal entropies

Science.gov (United States)

Boyanovsky, Daniel

2018-03-01

Integrating out high energy degrees of freedom to yield a low energy effective field theory leads to a loss of information with a concomitant increase in entropy. We obtain the effective field theory of a light scalar field interacting with heavy fields after tracing out the heavy degrees of freedom from the time evolved density matrix. The initial density matrix describes the light field in its ground state and the heavy fields in equilibrium at a common temperature T. For T = 0, we obtain the reduced density matrix in a perturbative expansion; it reveals an emergent mixed state as a consequence of the entanglement between light and heavy fields. We obtain the effective action that determines the time evolution of the reduced density matrix for the light field in a nonperturbative Dyson resummation of one-loop correlations of the heavy fields. The von Neumann entanglement entropy associated with the reduced density matrix is obtained for the nonresonant and resonant cases in the asymptotic long time limit. In the nonresonant case the reduced density matrix displays an incipient thermalization albeit with a wave-vector-, time- and coupling-dependent effective temperature as a consequence of memory of initial conditions. The entanglement entropy is time independent and is the thermal entropy for this effective, nonequilibrium temperature. In the resonant case the light field fully thermalizes with the heavy fields, the reduced density matrix loses memory of the initial conditions and the entanglement entropy becomes the thermal entropy of the light field. We discuss the relation between the entanglement entropy ultraviolet divergences and renormalization.

2. Cultural-Historical Activity Theory and Domain Analysis: Metatheoretical Implications for Information Science

Science.gov (United States)

Wang, Lin

2013-01-01

Background: Cultural-historical activity theory is an important theory in modern psychology. In recent years, it has drawn more attention from related disciplines including information science. Argument: This paper argues that activity theory and domain analysis which uses the theory as one of its bases could bring about some important…

3. An Effective Tri-Clustering Algorithm Combining Expression Data with Gene Regulation Information

Directory of Open Access Journals (Sweden)

Ao Li

2009-04-01

Motivation: Bi-clustering algorithms aim to identify sets of genes sharing similar expression patterns across a subset of conditions. However, direct interpretation or prediction of gene regulatory mechanisms may be difficult as only gene expression data is used. Information about gene regulators may also be available, most commonly about which transcription factors may bind to the promoter region and thus control the expression level of a gene. Thus a method to integrate gene expression and gene regulation information is desirable for clustering and analysis. Methods: By incorporating gene regulatory information with gene expression data, we define regulated expression values (REV) as indicators of how a gene is regulated by a specific factor. Existing bi-clustering methods are extended to a three-dimensional data space by developing a heuristic TRI-Clustering algorithm. An additional approach named the Automatic Boundary Searching algorithm (ABS) is introduced to automatically determine the boundary threshold. Results: Results based on incorporating ChIP-chip data representing transcription factor-gene interactions show that the algorithms are efficient and robust for detecting tri-clusters. Detailed analysis of the tri-cluster extracted from yeast sporulation REV data shows genes in this cluster exhibited significant differences during the middle and late stages. The implicated regulatory network was then reconstructed for further study of defined regulatory mechanisms. Topological and statistical analysis of this network demonstrated evidence of significant changes of TF activities during the different stages of yeast sporulation, and suggests this approach might be a general way to study regulatory networks undergoing transformations.

4. Informational and linguistic analysis of large genomic sequence collections via efficient Hadoop cluster algorithms.

Science.gov (United States)

Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele

2018-06-01

Information theoretic and compositional/linguistic analysis of genomes have a central role in bioinformatics, even more so since the associated methodologies are becoming very valuable also for epigenomic and meta-genomic studies. The kernel of those methods is based on the collection of k-mer statistics, i.e. how many times each k-mer in {A,C,G,T}^k occurs in a DNA sequence. Although this problem is computationally very simple and efficiently solvable on a conventional computer, the sheer amount of data available now in applications demands resorting to parallel and distributed computing. Indeed, those types of algorithms have been developed to collect k-mer statistics in the realm of genome assembly. However, they are so specialized to this domain that they do not extend easily to the computation of informational and linguistic indices, concurrently on sets of genomes. Following the approach, well established in many disciplines and increasingly successful in bioinformatics, of resorting to MapReduce and Hadoop to deal with 'Big Data' problems, we present KCH, the first set of MapReduce algorithms able to perform concurrently informational and linguistic analysis of large collections of genomic sequences on a Hadoop cluster. The benchmarking of KCH that we provide indicates that it is quite effective and versatile. It is also competitive with respect to the parallel and distributed algorithms highly specialized to k-mer statistics collection for genome assembly problems. In conclusion, KCH is a much needed addition to the growing number of algorithms and tools that use MapReduce for bioinformatics core applications. The software, including instructions for running it over Amazon AWS, as well as the datasets are available at http://www.di-srv.unisa.it/KCH. umberto.ferraro@uniroma1.it. Supplementary data are available at Bioinformatics online.
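
The k-mer statistics kernel at the heart of these analyses is simple to state serially; a sketch follows. KCH distributes this same count over a Hadoop cluster, in the usual MapReduce fashion of mappers emitting (k-mer, 1) pairs and reducers summing them.

```python
from collections import Counter
from itertools import product

def kmer_statistics(sequence, k):
    """Count how many times each k-mer over {A,C,G,T}^k occurs in a DNA string,
    reporting all 4^k possible k-mers, including the absent ones."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    return {''.join(p): counts[''.join(p)] for p in product('ACGT', repeat=k)}

stats = kmer_statistics("ACGTACGT", 2)
print(stats["AC"], stats["TA"], len(stats))   # -> 2 1 16
```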

5. Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.

Science.gov (United States)

Khennak, Ilyes; Drias, Habiba

2017-02-01

With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of several backgrounds are now using Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulties in forming appropriate queries to articulate their inquiries, which renders their search queries imprecise due to the use of unclear keywords. The use of these ambiguous and vague queries to describe the patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is Query Expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the on-line medical information database, show that the proposed approach is more effective and efficient compared to the baseline.

6. Bayesian or Laplacien inference, entropy and information theory and information geometry in data and signal processing

Science.gov (United States)

2015-01-01

The main object of this tutorial article is first to review the main inference tools using the Bayesian approach, entropy, information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to Bayes' rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we will study their use in different fields of data and signal processing, such as entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and, finally, general linear inverse problems.
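
The quantities this tutorial builds on have standard definitions; for data d, parameters θ, and densities p, q:

```latex
p(\theta \mid d) = \frac{p(d \mid \theta)\, p(\theta)}{p(d)},
\qquad
\mathrm{KL}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)}\, \mathrm{d}x,
\qquad
F(\theta) = \mathbb{E}_{x \sim p(\cdot \mid \theta)}\!\left[
  \left( \partial_\theta \log p(x \mid \theta) \right)^{2} \right]
```

The Maximum Entropy Principle then selects, among all densities consistent with the given constraints, the one maximising the entropy H(p) = -∫ p log p.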

7. A Pumping Algorithm for Ergodic Stochastic Mean Payoff Games with Perfect Information

Science.gov (United States)

Boros, Endre; Elbassioni, Khaled; Gurvich, Vladimir; Makino, Kazuhisa

In this paper, we consider two-person zero-sum stochastic mean payoff games with perfect information, or BWR-games, given by a digraph G = (V = V_B ∪ V_W ∪ V_R, E), with local rewards r: E → R, and three types of vertices: black V_B, white V_W, and random V_R. The game is played by two players, White and Black: When the play is at a white (black) vertex v, White (Black) selects an outgoing arc (v,u). When the play is at a random vertex v, a vertex u is picked with the given probability p(v,u). In all cases, Black pays White the value r(v,u). The play continues forever, and White aims to maximize (Black aims to minimize) the limiting mean (that is, average) payoff. It was recently shown in [7] that BWR-games are polynomially equivalent with the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games (SSGs), stochastic parity games, and Markov decision processes. In this paper, we give a new algorithm for solving BWR-games in the ergodic case, that is, when the optimal values do not depend on the initial position. Our algorithm solves a BWR-game by reducing it, using a potential transformation, to a canonical form in which the optimal strategies of both players and the value for every initial position are obvious, since a locally optimal move in it is optimal in the whole game. We show that this algorithm is pseudo-polynomial when the number of random nodes is constant. We also provide an almost matching lower bound on its running time, and show that this bound holds for a wider class of algorithms. Let us add that the general (non-ergodic) case is at least as hard as SSGs, for which no pseudo-polynomial algorithm is known.

8. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

Science.gov (United States)

Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

2017-12-19

Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is below a certain threshold. Overall, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
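
The three families of algorithms can be contrasted on a pair of repeated complex frames. This is a simplified two-repeat sketch of the generic idea, not the authors' exact estimators.

```python
import numpy as np

def octa_signals(c1, c2):
    """Flow contrast from two repeated complex OCT frames.

    Returns amplitude-difference, phase-difference and complex-difference
    images, the simplest representatives of the three algorithm families.
    """
    amp = np.abs(np.abs(c2) - np.abs(c1))        # amplitude-based
    phase = np.abs(np.angle(c2 * np.conj(c1)))   # phase-based
    cplx = np.abs(c2 - c1)                       # complex-based
    return amp, phase, cplx

# A voxel whose backscattered field rotates in phase only: the amplitude-based
# signal misses it, while the phase- and complex-based signals register flow.
amp, phase, cplx = octa_signals(np.array([1 + 0j]), np.array([0 + 1j]))
print(amp[0], round(float(phase[0]), 3), round(float(cplx[0]), 3))  # -> 0.0 1.571 1.414
```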

9. Algorithmization of problems on the personnel information support in the automatic chemical control systems at NPP

International Nuclear Information System (INIS)

Vilkov, N.Ya.; Kryukov, Yu.V.; Cheshun, A.V.

2001-01-01

In developing software for the standard algorithms of information support for efficient control of water chemistry operation (WCO) at NPP power units, an approach is introduced in which the chemical control systems are realized as quality-control systems for in-loop physical and chemical processes that evolve over time. Developing algorithms to process operational chemical-control data calls for statistical procedures that detect process anomalies at the early stages of their development more efficiently than the standard control procedures. The proposed procedure is used in a demonstration model of a system for diagnosing some typical causes of violations of the primary-circuit WCO of WWER-1000 power units (original in Russian)

10. A Modified Spatiotemporal Fusion Algorithm Using Phenological Information for Predicting Reflectance of Paddy Rice in Southern China

Directory of Open Access Journals (Sweden)

Mengxue Liu

2018-05-01

Full Text Available Satellite data for studying surface dynamics in heterogeneous landscapes are often unavailable due to frequent cloud contamination, low temporal resolution, and technological difficulties in developing satellites. A modified spatiotemporal fusion algorithm for predicting the reflectance of paddy rice is presented in this paper. The algorithm uses phenological information extracted from a moderate-resolution imaging spectroradiometer enhanced vegetation index time series to improve the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM). The algorithm is tested with satellite data on Yueyang City, China. The main contribution of the modified algorithm is the selection of similar neighborhood pixels by using phenological information to improve accuracy. Results show that the modified algorithm performs better than ESTARFM in visual inspection and quantitative metrics, especially for paddy rice. This modified algorithm provides not only new ideas for the improvement of spatiotemporal data fusion methods, but also technical support for the generation of remote sensing data with high spatial and temporal resolution.

11. Information theory in econophysics: stock market and retirement funds

Science.gov (United States)

Vogel, Eugenio; Saravia, G.; Astete, J.; Díaz, J.; Erribarren, R.; Riadi, F.

2013-03-01

Information theory can help to recognize magnetic phase transitions, which can be seen as a way to recognize different regimes. This is achieved by means of zippers specifically designed to compact data in a meaningful way, as is the case for the compressor wlzip. In the present contribution we first apply wlzip to the Chilean stock market, interpreting the compression rates for the files storing the minute-by-minute variation of the IPSA indicator. Agitated days yield poor compression rates while calm days yield high compressibility. We then correlate this behavior with the value of the five retirement funds related to the Chilean economy. It is found that the covariance between the profitability of the retirement funds and the compressibility of the IPSA values of the previous day is high for those funds investing in risky stocks. Surprisingly, there seems to be no great difference among the three riskier funds, contrary to what could be expected from the limitations on portfolio composition established by the laws that regulate this market.
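The compressor wlzip itself is not publicly documented in this record; as an illustration of the underlying idea only, the sketch below uses zlib as a stand-in compressor. A calm (regular) series compresses well, while an agitated (noisy) one does not, so the compression ratio acts as a simple regime detector.

```python
import random
import zlib

def compressibility(series):
    """Compressed-to-raw size ratio of a formatted series; lower = more regular."""
    raw = ",".join(f"{x:.2f}" for x in series).encode()
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
calm = [100.0 + 0.01 * (i % 3) for i in range(500)]              # quiet trading day
agitated = [100.0 + random.gauss(0.0, 5.0) for _ in range(500)]  # volatile day

# A calm series compresses far better than an agitated one.
print(compressibility(calm) < compressibility(agitated))  # True
```

The threshold separating "calm" from "agitated" would be set in a calibration step, as the abstract describes for wlzip.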

12. Signal Detection Theory-Based Information Processing for the Detection of Breast Cancer at Microwave Frequencies

National Research Council Canada - National Science Library

Nolte, Loren

2002-01-01

The hypothesis is that signal detection theory can improve performance in detecting breast tumors by guiding the development of task-oriented information processing techniques...

13. Information structures in economics studies in the theory of markets with imperfect information

CERN Document Server

Nermuth, Manfred

1982-01-01

This book is intended as a contribution to the theory of markets with imperfect information. The subject being nearly limitless, only certain selected topics are discussed. These are outlined in the Introduction (Ch. 0). The remainder of the book is divided into three parts. All results of economic significance are contained in Parts II & III. Part I introduces the main tools for the analysis, in particular the concept of an information structure. Although most of the material presented in Part I is not original, it is hoped that the detailed and self-contained exposition will help the reader to understand not only the following pages, but also the existing technical and variegated literature on markets with imperfect information. The mathematical prerequisites needed, but not explained in the text, rarely go beyond elementary calculus and probability theory. Whenever more advanced concepts are used, I have made an effort to give an intuitive explanation as well, so that the argument can also be followed o...

14. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information.

Science.gov (United States)

Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

2010-06-18

Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used not only for pathway drawing but also for circuit placement and WWW visualization, because of the harmonized appearance of its results. For pathway drawing, location information is essential for comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as the nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, a mapping of nodes to grid points that minimizes a cost function is sought. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways: the endothelial cell model, the Fas-induced apoptosis model, and the C. elegans cell fate simulation model. Owing to the positional constraints, all the results of our algorithm satisfy the location information, and hence more comprehensible layouts are obtained as compared to the spring embedder algorithm. From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to

15. Finite element methods for viscous incompressible flows a guide to theory, practice, and algorithms

CERN Document Server

Gunzburger, Max D

2012-01-01

In this book, the author examines mathematical aspects of finite element methods for the approximate solution of incompressible flow problems. The principal goal is to present some of the important mathematical results that are relevant to practical computations. In so doing, useful algorithms are also discussed. Although rigorous results are stated, no detailed proofs are supplied; rather, the intention is to present these results so that they can serve as a guide for the selection and, in certain respects, the implementation of algorithms.

16. Pangenesis as a source of new genetic information. The history of a now disproven theory.

Science.gov (United States)

Bergman, Gerald

2006-01-01

Evolution is based on natural selection of existing biological phenotypic traits. Natural selection can only eliminate traits. It cannot create new ones, requiring a theory to explain the origin of new genetic information. The theory of pangenesis was a major attempt to explain the source of new genetic information required to produce phenotypic variety. This theory, advocated by Darwin as the main source of genetic variety, has now been empirically disproved. It is currently a theory mainly of interest to science historians.

17. Informal meeting on recent developments in field theory

International Nuclear Information System (INIS)

Anon.

1977-12-01

A topical meeting on recent developments in field theory was organized by the International Centre for Theoretical Physics from 21 to 23 November 1977. The publication is a compilation of the abstracts of the lectures given. The major themes of the meeting were the problem of confinement, the quantization of Yang-Mills theories, and the topological aspects of field theories in flat and curved spaces.

18. Biologically inspired information theory: Adaptation through construction of external reality models by living systems.

Science.gov (United States)

Nakajima, Toshiyuki

2015-12-01

Higher animals act in the world using their external reality models to cope with the uncertain environment. Organisms that have not developed such information-processing organs may also have external reality models built in the form of their biochemical, physiological, and behavioral structures, acquired by natural selection through successful models constructed internally. Organisms subject to illusions would fail to survive in the material universe. How can organisms, or living systems in general, determine the external reality from within? This paper starts with a phenomenological model, in which the self constitutes a reality model developed through the mental processing of phenomena. Then, the it-from-bit concept is formalized using a simple mathematical model. For this formalization, my previous work on an algorithmic process is employed to constitute symbols referring to the external reality, called the inverse causality, with additional improvements to the previous work. Finally, as an extension of this model, the cognizers system model is employed to describe the self as one of many material entities in a world, each of which acts as a subject by responding to the surrounding entities. This model is used to propose a conceptual framework of information theory that can deal with both the qualitative (semantic) and quantitative aspects of the information involved in biological processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

19. Anticipated detection of favorable periods for wind energy production by means of information theory

Science.gov (United States)

Vogel, Eugenio; Saravia, Gonzalo; Kobe, Sigismund; Schumann, Rolf; Schuster, Rolf

Managing the electric power produced by different sources requires mixing the different response times they present. Thus, for instance, coal burning presents large time lags until operational conditions are reached, while hydroelectric generation can react in a matter of seconds or a few minutes to reach the desired productivity. Wind energy production (WEP) can be instantaneously fed to the network to save fuels with low thermal inertia (gas burning, for instance), but this source presents sudden variations within a few hours. We report here for the first time a method based on information theory to handle WEP. This method has been successful in detecting dynamical changes in magnetic transitions and variations of stock markets. An algorithm called wlzip, based on information recognition, is used to recognize the information content of a time series. We make use of publicly available energy data in Germany to simulate real applications. After a calibration process the system can recognize, directly on the WEP data, the onset of favorable periods of a desired strength. Optimization can lead to a few hours of anticipation, which is enough to control the mixture of WEP with other energy sources, thus saving fuels.

20. Studying the varied shapes of gold clusters by an elegant optimization algorithm that hybridizes the density functional tight-binding theory and the density functional theory

Science.gov (United States)

Yen, Tsung-Wen; Lim, Thong-Leng; Yoon, Tiem-Leong; Lai, S. K.

2017-11-01

We combined a new parametrized density functional tight-binding (DFTB) theory (Fihey et al. 2015) with an unbiased modified basin hopping (MBH) optimization algorithm (Yen and Lai 2015) and applied it to calculate the lowest energy structures of Au clusters. From the calculated topologies and their conformational changes, we find that this DFTB/MBH method is a necessary procedure for a systematic study of the structural development of Au clusters but is somewhat insufficient for a quantitative study. As a result, we propose an extended hybridized algorithm. This improved algorithm proceeds in two steps. In the first step, the DFTB theory is employed to calculate the total energy of the cluster; this step (running DFTB/MBH optimization for a given number of Monte Carlo steps) is meant to efficiently bring the Au cluster near the region of the lowest energy minimum, since the DFTB energy explicitly, albeit semi-quantitatively, accounts for the interactions of valence electrons with ions. Then, in the second step, the energy-minimum search continues with the DFTB energy function of the first step replaced by one calculated in full density functional theory (DFT). In these subsequent calculations, we couple the DFT energy with the same MBH strategy and proceed with the DFT/MBH optimization until the lowest energy value is found. We checked that this extended hybridized algorithm successfully predicts the twisted pyramidal structure for the Au40 cluster and correctly confirms the linear shape of C8, which our previous DFTB/MBH method failed to do. Perhaps more remarkable is the topological growth of Aun: it changes from planar (n = 3-11) → an oblate-like cage (n = 12-15) → a hollow-shape cage (n = 16-18) and finally a pyramidal-like cage (n = 19, 20). These varied forms of the clusters' shapes are consistent with those reported in the literature.
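The DFTB and DFT energy functions are not available here, so the following is only a sketch of the basin hopping strategy itself, using a Lennard-Jones cluster energy as a stand-in surrogate and SciPy's BFGS local minimizer. For the 4-atom LJ cluster the known global minimum is a regular tetrahedron with E = -6 in reduced units; all names and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(x):
    """Total Lennard-Jones energy (reduced units) of a cluster; x holds flat 3N coords."""
    pos = x.reshape(-1, 3)
    i, j = np.triu_indices(len(pos), k=1)
    r = np.linalg.norm(pos[i] - pos[j], axis=1)
    return float(np.sum(4.0 * (r ** -12 - r ** -6)))

def basin_hop(x0, steps=60, stepsize=0.35, seed=0):
    """Minimal basin hopping: local relaxation + random jump, keep the lower minimum."""
    rng = np.random.default_rng(seed)
    best = minimize(lj_energy, x0, method="BFGS").x
    for _ in range(steps):
        trial = minimize(lj_energy, best + rng.normal(0.0, stepsize, best.size),
                         method="BFGS").x
        if lj_energy(trial) < lj_energy(best):
            best = trial
    return best

# Start from a slightly puckered square of 4 atoms.
x0 = np.array([0, 0, 0,  1.1, 0, 0,  0, 1.1, 0,  1.1, 1.1, 0.3], dtype=float)
best = basin_hop(x0)
print(round(lj_energy(best), 2))  # global minimum of LJ4 is -6.0 (tetrahedron)
```

In the hybridized scheme of the abstract, the surrogate energy of the first stage would be swapped for the expensive first-principles energy in a second stage of the same hopping loop.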

1. An Optimal Joint User Association and Power Allocation Algorithm for Secrecy Information Transmission in Heterogeneous Networks

Directory of Open Access Journals (Sweden)

Rong Chai

2017-01-01

Full Text Available In recent years, heterogeneous radio access technologies have experienced rapid development and gradually achieved effective coordination and integration, resulting in heterogeneous networks (HetNets). In this paper, we consider the downlink secure transmission of HetNets where the information transmission from base stations (BSs) to legitimate users is subject to the interception of eavesdroppers. In particular, we stress the problem of joint user association and power allocation of the BSs. To achieve data transmission in a secure and energy-efficient manner, we introduce the concept of secrecy energy efficiency, defined as the ratio of the secrecy transmission rate to the power consumption of the BSs, and formulate joint user association and power allocation as an optimization problem which maximizes the joint secrecy energy efficiency of all the BSs under the power constraint of the BSs and the minimum data rate constraint of the user equipment (UE). By equivalently transforming the optimization problem into two subproblems, namely the power allocation subproblem and the user association subproblem of the BSs, and applying an iterative method and the Kuhn-Munkres (K-M) algorithm to solve them, respectively, the optimal user association and power allocation strategies can be obtained. Numerical results demonstrate that the proposed algorithm outperforms previously proposed algorithms.

2. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

Science.gov (United States)

Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

2018-03-01

Neural network methods have recently been applied in the development of information systems and software for predicting series dynamics. They are more flexible in comparison with existing analogues and are capable of taking into account the nonlinearities of the series. In this paper, we propose a modified algorithm for predicting series dynamics, which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction by the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points and the corresponding time values, formed using the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting series dynamics, or be one part of a forecasting system. The efficiency of predicting the evolution of series dynamics for a short-term one-step and a long-term multi-step forecast by the classical multilayer perceptron method and the modified algorithm is compared using synthetic and real data. The result of this modification is the minimization of the iterative error that arises when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative prediction.

3. Metal artifact reduction algorithm based on model images and spatial information

Energy Technology Data Exchange (ETDEWEB)

Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

2011-10-01

Computed tomography (CT) has become one of the most favorable choices for diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water was improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact contaminated CT images.
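The abstract does not give the exact clustering features or weighting function, so the following is only a minimal sketch of the model-image step: plain Lloyd's k-means on pixel intensity, with optionally weighted pixel coordinates standing in for "spatial information" (the `spatial_weight` knob is an assumption, not the authors' formulation).

```python
import numpy as np

def kmeans_model_image(img, k=3, iters=20, spatial_weight=0.0, seed=0):
    """Build a piecewise-constant model image by k-means on (intensity, x, y) features."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([img.ravel(),
                      spatial_weight * xx.ravel(),
                      spatial_weight * yy.ravel()], axis=1).astype(float)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old center if cluster empties
                centers[j] = feats[labels == j].mean(axis=0)
    # Replace each pixel by its cluster's mean intensity.
    return centers[labels, 0].reshape(h, w)

# Toy "CT slice": noisy water background (~0 HU) with a bone-like block (~1000 HU).
img = np.random.default_rng(1).normal(0.0, 20.0, (64, 64))
img[20:30, 20:30] += 1000.0
model = kmeans_model_image(img, k=2)
```

In the paper's pipeline, the difference between `img` and `model` would then drive the projection-domain weighting before filtered back-projection.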

4. Research on Quantum Algorithms at the Institute for Quantum Information and Matter

Science.gov (United States)

2016-05-29

Publications from this project include: Spyridon Michalakis, "Quantization of Hall Conductance for Interacting Electrons on a Torus," Commun. Math. Phys. (09 2014): 433; I. H. Kim, "Long-range entanglement is necessary for a topological storage of quantum information," Phys. Rev. Lett. (accepted) (08 2013): 80503; John Preskill and Sumit Sijher, "Protected gates for topological quantum field theories," Journal of Mathematical Physics (01 2016): 22201.

5. Behavioral data requirements for translating cognitive theories into computer software algorithms

International Nuclear Information System (INIS)

Meister, D.

1992-01-01

This paper reviews the characteristics of cognitive theories and their links to behavioral science and advanced intelligent systems. Cognitive theories model human cognition, perception, and communication. They suggest the human functions the system should have, serve as a philosophical basis for system development, and provide abstract design guidelines. The underlying assumption behind this paper is that if the cognitive theories are to have any value at all, they must be translated into usable systems. A process for testing a cognitive theory in terms of conceptual criteria, behavioral predictions and tests, and software development and tests, is suggested. Criteria for measuring the problem solving success of the advanced system are described. A theory of the system as an intelligent problem solver is presented. (author)

6. Improved PSO algorithm based on chaos theory and its application to design flood hydrograph

Directory of Open Access Journals (Sweden)

Si-Fang Dong

2010-06-01

Full Text Available The deficiencies of basic particle swarm optimization (bPSO are its ubiquitous prematurity and its inability to seek the global optimal solution when optimizing complex high-dimensional functions. To overcome such deficiencies, the chaos-PSO (COSPSO algorithm was established by introducing the chaos optimization mechanism and a global particle stagnation-disturbance strategy into bPSO. In the improved algorithm, chaotic movement was adopted for the particles' initial movement trajectories to replace the former stochastic movement, and the chaos factor was used to guide the particles' path. When the global particles were stagnant, the disturbance strategy was used to keep the particles in motion. Five benchmark optimizations were introduced to test COSPSO, and they proved that COSPSO can remarkably improve efficiency in optimizing complex functions. Finally, a case study of COSPSO in calculating design flood hydrographs demonstrated the applicability of the improved algorithm.
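One ingredient of the COSPSO idea described above, replacing stochastic initialization with chaotic movement, can be sketched as follows; the chaos-factor path guidance and stagnation-disturbance strategy are omitted, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def logistic_chaos(n, dim, x=0.7, mu=4.0):
    """Chaotic sequence from the logistic map x <- mu*x*(1-x), values in (0, 1)."""
    out = np.empty((n, dim))
    for i in range(n):
        for d in range(dim):
            x = mu * x * (1.0 - x)
            out[i, d] = x
    return out

def cospso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """PSO with chaotic initialization (a minimal sketch of the COSPSO variant)."""
    dim = len(lo)
    pos = lo + logistic_chaos(n, dim) * (hi - lo)   # chaotic, not stochastic, init
    vel = np.zeros_like(pos)
    rng = np.random.default_rng(0)
    pbest = pos.copy()
    pval = np.array([f(p) for p in pos])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        better = vals < pval
        pbest[better], pval[better] = pos[better], vals[better]
        g = pbest[np.argmin(pval)]
    return g, float(pval.min())

sphere = lambda x: float(np.sum(x ** 2))            # benchmark function
best, val = cospso(sphere, np.full(5, -10.0), np.full(5, 10.0))
```

On simple benchmarks like the sphere function this converges to near zero; the paper's benchmarks and flood-hydrograph application are of course far harder.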

7. Local versus nonlocal information in quantum-information theory: Formalism and phenomena

International Nuclear Information System (INIS)

Horodecki, Michal; Horodecki, Ryszard; Synak-Radtke, Barbara; Horodecki, Pawel; Oppenheim, Jonathan; Sen, Aditi; Sen, Ujjwal

2005-01-01

In spite of many results in quantum information theory, the complex nature of compound systems is far from clear. In general the information is a mixture of local and nonlocal ('quantum') information. It is important from both pragmatic and theoretical points of view to know the relationships between the two components. To make this point more clear, we develop and investigate the quantum-information processing paradigm in which parties sharing a multipartite state distill local information. The amount of information which is lost because the parties must use a classical communication channel is the deficit. This scheme can be viewed as complementary to the notion of distilling entanglement. After reviewing the paradigm in detail, we show that the upper bound for the deficit is given by the relative entropy distance to so-called pseudoclassically correlated states; the lower bound is the relative entropy of entanglement. This implies, in particular, that any entangled state is informationally nonlocal - i.e., has nonzero deficit. We also apply the paradigm to defining the thermodynamical cost of erasing entanglement. We show the cost is bounded from below by relative entropy of entanglement. We demonstrate the existence of several other nonlocal phenomena which can be found using the paradigm of local information. For example, we prove the existence of a form of nonlocality without entanglement and with distinguishability. We analyze the deficit for several classes of multipartite pure states and obtain that in contrast to the GHZ state, the Aharonov state is extremely nonlocal. We also show that there do not exist states for which the deficit is strictly equal to the whole informational content (bound local information). We discuss the relation of the paradigm with measures of classical correlations introduced earlier. It is also proved that in the one-way scenario, the deficit is additive for Bell diagonal states. We then discuss complementary features of

8. Practice Evaluation Strategies Among Social Workers: Why an Evidence-Informed Dual-Process Theory Still Matters.

Science.gov (United States)

Davis, Thomas D

2017-01-01

Practice evaluation strategies range in style from the formal-analytic tools of single-subject designs, rapid assessment instruments, algorithmic steps in evidence-informed practice, and computer software applications, to the informal-interactive tools of clinical supervision, consultation with colleagues, use of client feedback, and clinical experience. The purpose of this article is to provide practice researchers in social work with an evidence-informed theory that is capable of explaining both how and why social workers use practice evaluation strategies to self-monitor the effectiveness of their interventions in terms of client change. The author delineates the theoretical contours and consequences of what is called dual-process theory. Drawing on evidence-informed advances in the cognitive and social neurosciences, the author identifies among everyday social workers a theoretically stable, informal-interactive tool preference that is a cognitively necessary, sufficient, and stand-alone preference that requires neither the supplementation nor balance of formal-analytic tools. The author's delineation of dual-process theory represents a theoretical contribution in the century-old attempt to understand how and why social workers evaluate their practice the way they do.

9. Properties of the numerical algorithms for problems of quantum information technologies: Benefits of deep analysis

Science.gov (United States)

2016-10-01

In recent years, quantum information technologies (QIT) have developed rapidly, but their implementation still faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts: theoretical and experimental. The theoretical part covers features such as sequential and parallel complexity, macro structure, and the visual information graph. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability, and a set of more specific dynamic characteristics of the implementation. This approach allowed us to identify bottlenecks and generate ideas for efficiency improvement.
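The one-qubit transformation kernel mentioned above is, in its textbook formulation (not the AlgoWiki implementation), a tensor contraction over one axis of the state vector:

```python
import numpy as np

def apply_1q_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    # Reshape so the target qubit becomes its own axis, contract, restore shape.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # |000>
for q in range(n):
    state = apply_1q_gate(state, H, q, n)
# H on every qubit of |000> yields the uniform superposition, amplitude 1/sqrt(8).
```

The parallelization difficulty analyzed in the work stems from the strided memory access this contraction implies for large n.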

10. Pre-Game-Theory Based Information Technology (GAMBIT) Study

National Research Council Canada - National Science Library

Polk, Charles

2003-01-01

.... The generic GAMBIT scenario has been characterized as Dynamic Hierarchical Gaming (DHG). Game theory is not yet ready to fully support analysis of DHG, though existing partial analysis suggests that a full treatment is practical in the midterm...

11. Standalone Mobile Application for Shipping Services Based on Geographic Information System and A-Star Algorithm

Science.gov (United States)

Gunawan, D.; Marzuki, I.; Candra, A.

2018-03-01

Geographic Information Systems (GIS) play an essential role in shipping-service applications. By utilizing GIS, a courier can find the route to deliver goods to customers. This research proposes a standalone mobile application that provides the shortest route to the destinations by utilizing a geographic information system with the A-Star algorithm. The application is intended to be usable even where no Internet network is available. It can handle several drop-off points and calculates the shortest route that passes through all of them. According to the conducted testing, the number of drop-off points that can be calculated is influenced by the specification of the smartphone: more destinations require more smartphone resources and time to process.
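A minimal A-Star sketch on a 4-connected grid with the Manhattan-distance heuristic is shown below; real road routing runs on a graph of road segments with travel-time costs, so this is only a schematic illustration of the algorithm, not the application's implementation.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with 1 are blocked. Returns the shortest path."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
route = a_star(grid, (0, 0), (3, 3))
```

Visiting several drop-off points, as the application does, would repeat such a shortest-path query between consecutive stops and order the stops heuristically.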

12. Using game theory for perceptual tuned rate control algorithm in video coding

Science.gov (United States)

2005-03-01

This paper proposes a game-theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality and to maintain a stable buffer level.
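For macroblocks with minimum acceptable ("disagreement") bit demands, the equal-weight Nash Bargaining Solution has a simple closed form: maximizing the product of surpluses under a fixed bit budget gives each macroblock its disagreement point plus an equal share of the surplus. This sketch omits the coding-complexity and perceptual weights the paper folds into the game.

```python
import numpy as np

def nash_bargaining_bits(budget, disagreement):
    """Equal-weight Nash bargaining split of a frame's bit budget among macroblocks.

    Maximizing prod(b_i - d_i) subject to sum(b_i) = budget and b_i >= d_i
    yields b_i = d_i + (budget - sum(d)) / n.
    """
    d = np.asarray(disagreement, dtype=float)
    surplus = budget - d.sum()
    assert surplus >= 0, "budget below the minimum acceptable bits"
    return d + surplus / len(d)

# Four macroblocks with different minimum bit demands, 1000 bits for the frame.
bits = nash_bargaining_bits(1000, [100, 150, 50, 200])  # -> [225, 275, 175, 325]
```

With per-macroblock weights, the surplus split becomes proportional to the weights instead of equal, which is how perceptual importance would enter.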

13. Who Owns Educational Theory? Big Data, Algorithms and the Expert Power of Education Data Science

Science.gov (United States)

Williamson, Ben

2017-01-01

"Education data science" is an emerging methodological field which possesses the algorithm-driven technologies required to generate insights and knowledge from educational big data. This article consists of an analysis of the Lytics Lab, Stanford University's laboratory for research and development in learning analytics, and the Center…

14. A Novel Control Algorithm for Static Series Compensators by Use of PQR Instantaneous Power Theory

DEFF Research Database (Denmark)

Lee, Sang-Joon; Kim, Hyosung; Sul, Seung-Ki

2004-01-01

in coordinates is very simple and clear, has better steady state and dynamic performance. The controlled variables in coordinates are then inversely transformed to the original coordinates without time delay, generating control signals to SSCs. The control algorithm can be used for various kinds of SSCs...

15. A Global Convergence Theory for General Trust-Region-Based Algorithms for Equality Constrained Optimization

National Research Council Canada - National Science Library

Dennis, John E; El-Alem, Mahmoud; Maciel, Maria C

1995-01-01

.... The normal component need not be computed accurately. The theory requires a quasi-normal component to satisfy a fraction of Cauchy decrease condition on the quadratic model of the linearized constraints...

16. Quantum entanglement in non-local games, graph parameters and zero-error information theory

NARCIS (Netherlands)

Scarpa, G.

2013-01-01

We study quantum entanglement and some of its applications in graph theory and zero-error information theory. In Chapter 1 we introduce entanglement and other fundamental concepts of quantum theory. In Chapter 2 we address the question of how much quantum correlations generated by entanglement can

17. Finding Commonalities: Social Information Processing and Domain Theory in the Study of Aggression

Science.gov (United States)

Nucci, Larry

2004-01-01

The Arsenio and Lemerise (this issue) proposal integrating social information processing (SIP) and domain theory to study children's aggression is evaluated from a domain theory perspective. Basic tenets of domain theory rendering it compatible with SIP are discussed as well as points of divergence. Focus is directed to the proposition that…

18. Noniterative accurate algorithm for the exact exchange potential of density-functional theory

International Nuclear Information System (INIS)

Cinal, M.; Holas, A.

2007-01-01

An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature

19. Practical mathematical optimization basic optimization theory and gradient-based algorithms

CERN Document Server

Snyman, Jan A

2018-01-01

This textbook presents a wide range of tools for a course in mathematical optimization for upper undergraduate and graduate students in mathematics, engineering, computer science, and other applied sciences. Basic optimization principles are presented with emphasis on gradient-based numerical optimization strategies and algorithms for solving both smooth and noisy discontinuous optimization problems. Attention is also paid to the difficulties of expensive function evaluations and the existence of multiple minima that often unnecessarily inhibit the use of gradient-based methods. This second edition addresses further advancements of gradient-only optimization strategies to handle discontinuities in objective functions. New chapters discuss the construction of surrogate models as well as new gradient-only solution strategies and numerical optimization using Python. A special Python module is electronically available (via springerlink) that makes the new algorithms featured in the text easily accessible and dir...

20. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

CERN Document Server

Cheng, Hong

2015-01-01

This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen

1. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

Science.gov (United States)

Harikumar, G; Bresler, Y

1999-01-01

We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.

2. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

Energy Technology Data Exchange (ETDEWEB)

Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Lab. (INL), Idaho Falls, ID (United States)

2015-09-01

The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as the core damage probability. This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs given a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of the possible solutions (a.k.a., the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.
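The surrogate-driven loop described above can be sketched on a toy 1-D problem. This is a minimal illustration, assuming a 1-nearest-neighbour surrogate and a hand-rolled score that balances exploration (distance from existing samples) with exploitation (closeness of the predicted response to a failure threshold); the scoring rule, threshold, and candidate grid are illustrative and not the RISMC/RAVEN implementation.

```python
def surrogate_predict(samples, x):
    """1-nearest-neighbour prediction from (input, response) pairs."""
    nearest = min(samples, key=lambda s: abs(s[0] - x))
    return nearest[1]

def next_sample(samples, candidates, threshold=0.5):
    """Choose the candidate that is far from existing samples yet whose
    predicted response lies near the risk-significant boundary."""
    def score(x):
        dist = min(abs(x - s[0]) for s in samples)                   # exploration
        boundary = -abs(surrogate_predict(samples, x) - threshold)   # exploitation
        return dist + boundary
    return max(candidates, key=score)

# toy simulator: "core damage" (response 1.0) whenever the input exceeds 0.6
samples = [(0.0, 0.0), (1.0, 1.0)]
for _ in range(5):
    x = next_sample(samples, [i / 20 for i in range(21)])
    samples.append((x, 1.0 if x > 0.6 else 0.0))
```

Each iteration refits the (trivial) surrogate and queries the expensive simulator only at the most informative point, which is the essence of adaptive sampling compared to blind Monte-Carlo.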

3. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

International Nuclear Information System (INIS)

Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph

2015-01-01

The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as the core damage probability. This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs given a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of the possible solutions (a.k.a., the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.

4. Mathematics Education as a Proving-Ground for Information-Processing Theories.

Science.gov (United States)

Greer, Brian, Ed.; Verschaffel, Lieven, Ed.

1990-01-01

Five papers discuss the current and potential contributions of information-processing theory to our understanding of mathematical thinking as those contributions affect the practice of mathematics education. It is concluded that information-processing theories need to be supplemented in various ways to more adequately reflect the complexity of…

5. Actor-network Theory and cartography of controversies in Information Science

OpenAIRE

LOURENÇO, Ramon Fernandes; TOMAÉL, Maria Inês

2018-01-01

Abstract The present study aims to discuss the interactions between the Actor-Network Theory and the Cartography of Controversies method in Information Science research. A literature review was conducted on books, scholarly articles, and any other sources addressing the Actor-Network Theory and the Cartography of Controversies. The understanding of the theoretical assumptions that guide the Actor-Network Theory allows examining important aspects of Information Science research, seeking to identif...

6. A Trust Evaluation Algorithm for Wireless Sensor Networks Based on Node Behaviors and D-S Evidence Theory

Directory of Open Access Journals (Sweden)

Jiangwen Wan

2011-01-01

Full Text Available For wireless sensor networks (WSNs), many factors, such as mutual interference of wireless links, battlefield applications and nodes exposed to the environment without good physical protection, result in the sensor nodes being more vulnerable to attack and compromise. In order to address this network security problem, a novel trust evaluation algorithm defined as NBBTE (Node Behavioral Strategies Banding Belief Theory of the Trust Evaluation Algorithm) is proposed, which integrates the approach of node behavioral strategies and modified evidence theory. According to the behaviors of sensor nodes, a variety of trust factors and coefficients related to the network application are established to obtain direct and indirect trust values through calculating a weighted average of the trust factors. Meanwhile, the fuzzy set method is applied to form the basic input vector of evidence. On this basis, the evidence difference is calculated between the indirect and direct trust values, which is fed to the revised D-S evidence combination rule to finally synthesize the integrated trust value of nodes. The simulation results show that NBBTE can effectively identify malicious nodes and reflects the characteristic of trust values being 'hard to acquire and easy to lose'. Furthermore, the proposed scheme has an outstanding advantage in terms of illustrating the real contribution of different nodes to trust evaluation.
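The weighted-average step described in the abstract can be sketched as follows. This is a minimal illustration: the factor names, weights, and the alpha mixing parameter are hypothetical placeholders, not the NBBTE values, and the full D-S evidence combination is replaced here by a simple convex combination.

```python
def direct_trust(factors, weights):
    """Direct trust as a weighted mean of behavioural trust factors
    (e.g. packet-forwarding rate, reporting consistency)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(f * w for f, w in zip(factors, weights))

def indirect_trust(recommendations):
    """Indirect trust as the mean of neighbours' recommendations."""
    return sum(recommendations) / len(recommendations)

def integrated_trust(direct, indirect, alpha=0.7):
    # alpha favours first-hand evidence over hearsay, reflecting the
    # 'hard to acquire and easy to lose' character of trust
    return alpha * direct + (1 - alpha) * indirect

d = direct_trust([0.9, 0.8, 0.95], [0.5, 0.3, 0.2])
i = indirect_trust([0.85, 0.7])
t = integrated_trust(d, i)
```

In the actual algorithm the direct/indirect values are fuzzified into basic probability assignments and fused with the revised Dempster-Shafer combination rule rather than linearly mixed.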

7. A trust evaluation algorithm for wireless sensor networks based on node behaviors and D-S evidence theory.

Science.gov (United States)

Feng, Renjian; Xu, Xiaofeng; Zhou, Xiang; Wan, Jiangwen

2011-01-01

For wireless sensor networks (WSNs), many factors, such as mutual interference of wireless links, battlefield applications and nodes exposed to the environment without good physical protection, result in the sensor nodes being more vulnerable to attack and compromise. In order to address this network security problem, a novel trust evaluation algorithm defined as NBBTE (Node Behavioral Strategies Banding Belief Theory of the Trust Evaluation Algorithm) is proposed, which integrates the approach of node behavioral strategies and modified evidence theory. According to the behaviors of sensor nodes, a variety of trust factors and coefficients related to the network application are established to obtain direct and indirect trust values through calculating a weighted average of the trust factors. Meanwhile, the fuzzy set method is applied to form the basic input vector of evidence. On this basis, the evidence difference is calculated between the indirect and direct trust values, which is fed to the revised D-S evidence combination rule to finally synthesize the integrated trust value of nodes. The simulation results show that NBBTE can effectively identify malicious nodes and reflects the characteristic of trust values being 'hard to acquire and easy to lose'. Furthermore, the proposed scheme has an outstanding advantage in terms of illustrating the real contribution of different nodes to trust evaluation.

8. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

Science.gov (United States)

Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

2015-09-01

The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability motivates strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to respectively control the vertical and the horizontal slope-dependent penalization terms, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and also conclude that, for longer time steps, the optimal

9. The urban informal economy in Ethiopia: theory and empirical ...

African Journals Online (AJOL)

Eastern Africa Social Science Research Review ... data to explore the roles and characteristics of the informal sector in urban centers of Ethiopia, ... informal sources, 4) the level of income per person varied sharply among the various sectors.

10. Boosting foundations and algorithms

CERN Document Server

Schapire, Robert E

2012-01-01

Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

11. Algorithmic cryptanalysis

CERN Document Server

Joux, Antoine

2009-01-01

Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

12. Reference group theory with implications for information studies: a theoretical essay

Directory of Open Access Journals (Sweden)

E. Murell Dawson

2001-01-01

Full Text Available This article explores the role and implications of reference group theory in relation to the field of library and information science. Reference group theory is based upon the principle that people take the standards of significant others as a basis for making self-appraisals, comparisons, and choices regarding need and use of information. Research that applies concepts of reference group theory to various sectors of library and information studies can provide data useful in enhancing areas such as information-seeking research, special populations, and uses of information. Implications are promising that knowledge gained from like research can be beneficial in helping information professionals better understand the role theory plays in examining ways in which people manage their information and social worlds.

13. Teaching Qualitative Research: Using Theory to Inform Practice

Science.gov (United States)

Sallee, Margaret W.

2010-01-01

This article considers how theories of instructional scaffolding--which call for a skilled expert to teach a novice a new task by breaking it into smaller pieces--might be employed in graduate-level qualitative methods courses. The author discusses how she used instructional scaffolding in the design and delivery of a qualitative methods course…

14. Information and Uncertainty in the Theory of Monetary Policy

OpenAIRE

Wagner, Helmut

2007-01-01

Theory and practice of monetary policy have changed significantly over the past three decades. A very important part of today's monetary policy is management of the expectations of private market participants. Publishing and justifying the central bank's best forecast of inflation, output, and the instrument rate is argued to be the most effective way to manage those expectations.

15. Genetic algorithm for lattice gauge theory on SU(2) and U(1) on 4 dimensional lattice, how to hitchhike to thermal equilibrium state

International Nuclear Information System (INIS)

Yamaguchi, A.; Sugamoto, A.

2000-01-01

Applying a genetic algorithm to lattice gauge theory is found to be an effective method to minimize the action of the gauge field on a lattice. In 4 dimensions, the critical point and the Wilson loop behaviour of SU(2) lattice gauge theory, as well as the phase transition of U(1) theory, have been studied. A proper coding method has been developed in order to avoid the increase of necessary memory and the overload of calculation for the genetic algorithm. How hitchhikers toward the equilibrium state appear against kidnappers is clarified
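The encode/select/crossover loop of such a genetic algorithm can be sketched on a drastically simplified stand-in problem. This toy minimises a ferromagnetic action over Z2-valued link variables (±1) on a 1-D ring; the real SU(2)/U(1) action, the 4-D lattice, and the paper's memory-saving coding scheme are far richer.

```python
import random

random.seed(0)
N = 16  # number of links on a 1-D periodic lattice

def action(links):
    # toy ferromagnetic action: minimised (value -N) when all neighbouring
    # links agree, mimicking the role of the gauge action to be minimised
    return -sum(links[i] * links[(i + 1) % N] for i in range(N))

def crossover(a, b):
    cut = random.randrange(1, N)          # one-point crossover
    return a[:cut] + b[cut:]

def mutate(links, rate=0.05):
    return [-s if random.random() < rate else s for s in links]

pop = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=action)                  # lowest action first
    parents = pop[:10]                    # truncation selection with elitism
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = min(pop, key=action)               # ground state has action = -N
```

Because the fittest individuals are carried over unchanged (elitism), the best action is non-increasing across generations, which is the "hitchhiking toward equilibrium" behaviour in miniature.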

16. Informed baseline subtraction of proteomic mass spectrometry data aided by a novel sliding window algorithm.

Science.gov (United States)

Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J

2016-01-01

Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship, resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required. The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of
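The naive sliding-window baseline that the paper's 'continuous' line segment algorithm outperforms can be sketched as follows. This is a minimal illustration: the baseline at each point is the minimum intensity within a fixed half-window and subtraction clips at zero; the window size and toy spectrum are illustrative.

```python
def sliding_window_baseline(intensities, half_window):
    """Naive O(n * window) baseline: local minimum in a sliding window."""
    n = len(intensities)
    baseline = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        baseline.append(min(intensities[lo:hi]))
    return baseline

def subtract_baseline(intensities, half_window):
    base = sliding_window_baseline(intensities, half_window)
    return [max(0.0, y - b) for y, b in zip(intensities, base)]

# toy spectrum: flat non-peptide background near 1.0 with two peaks
spectrum = [1.0, 1.2, 5.0, 1.1, 1.0, 4.0, 1.0]
corrected = subtract_baseline(spectrum, 2)
```

The quadratic-in-window cost of this approach is exactly what makes the paper's line segment formulation attractive; a monotone-deque variant would already reduce the minimum computation to O(n).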

17. Information theory and signal transduction systems: from molecular information processing to network inference.

Science.gov (United States)

Mc Mahon, Siobhan S; Sim, Aaron; Filippi, Sarah; Johnson, Robert; Liepe, Juliane; Smith, Dominic; Stumpf, Michael P H

2014-11-01

Sensing and responding to the environment are two essential functions that all biological organisms need to master for survival and successful reproduction. Developmental processes are marshalled by a diverse set of signalling and control systems, ranging from systems with simple chemical inputs and outputs to complex molecular and cellular networks with non-linear dynamics. Information theory provides a powerful and convenient framework in which such systems can be studied; but it also provides the means to reconstruct the structure and dynamics of molecular interaction networks underlying physiological and developmental processes. Here we supply a brief description of its basic concepts and introduce some useful tools for systems and developmental biologists. Along with a brief but thorough theoretical primer, we demonstrate the wide applicability and biological application-specific nuances by way of different illustrative vignettes. In particular, we focus on the characterisation of biological information processing efficiency, examining cell-fate decision making processes, gene regulatory network reconstruction, and efficient signal transduction experimental design. Copyright © 2014 Elsevier Ltd. All rights reserved.
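The central quantity in such analyses, the mutual information between a signalling input and a cellular response, can be estimated directly from paired observations. This is a minimal plug-in estimator for discrete data; real applications need bias corrections for small samples and discretisation choices for continuous readouts.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits:
    I = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# a perfectly reliable channel carries 1 bit about a binary input
print(mutual_information([(0, 0), (1, 1), (0, 0), (1, 1)]))  # → 1.0
```

An independent input/response pairing, e.g. all four combinations equally often, yields 0 bits, quantifying a signalling pathway that transmits nothing.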

18. Understanding the aerosol information content in multi-spectral reflectance measurements using a synergetic retrieval algorithm

Directory of Open Access Journals (Sweden)

D. Martynenko

2010-11-01

Full Text Available An information content analysis for the multi-wavelength SYNergetic AErosol Retrieval algorithm SYNAER was performed to quantify the number of independent pieces of information that can be retrieved. In particular, the capability of SYNAER to discern various aerosol types is assessed. This information content depends on the aerosol optical depth, the surface albedo spectrum and the observation geometry. The theoretical analysis is performed for a large number of scenarios with various geometries and surface albedo spectra for ocean, soil and vegetation. When the surface albedo spectrum and its accuracy are known under cloud-free conditions, the reflectance measurements used in SYNAER are able to provide 2–4 degrees of freedom that can be attributed to retrieval parameters: aerosol optical depth, aerosol type and surface albedo.

The focus of this work is placed on an information content analysis with emphasis on the aerosol type classification. This analysis is applied to synthetic reflectance measurements for 40 predefined aerosol mixtures of different basic components, given by sea salt, mineral dust, biomass burning and diesel aerosols, water soluble and water insoluble aerosols. The range of aerosol parameters considered through the 40 mixtures covers the natural variability of tropospheric aerosols. After the information content analysis performed in Holzer-Popp et al. (2008), there was a necessity to compare the derived degrees of freedom with the retrieved aerosol optical depth for different aerosol types, which is the main focus of this paper.

Principal component analysis was used to determine the correspondence between the degrees of freedom for signal in the retrieval and the derived aerosol types. The main results of the analysis indicate a correspondence between the major groups of aerosol types (water soluble aerosol, soot, mineral dust and sea salt) and the degrees of freedom in the algorithm, and show the ability of the SYNAER to

19. Algorithms for biomagnetic source imaging with prior anatomical and physiological information

Energy Technology Data Exchange (ETDEWEB)

Hughett, Paul William [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

1995-12-01

This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
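The linear MMSE estimator that OCLIM generalises can be sketched numerically. This is a minimal illustration, assuming a small random lead-field matrix and simple diagonal prior/noise covariances; the dimensions and covariances are illustrative, not those of the 387-source neuromagnetic problem.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 8, 20                      # 8 magnetic sensors, 20 candidate sources
A = rng.standard_normal((m, p))   # forward (lead-field) matrix: y = A x + n
Cx = np.eye(p)                    # prior source covariance
Cn = 0.1 * np.eye(m)              # sensor noise covariance

x_true = rng.standard_normal(p)
y = A @ x_true + rng.multivariate_normal(np.zeros(m), Cn)

# MMSE (Wiener-type) estimator with statistical prior information:
#   x_hat = Cx A^T (A Cx A^T + Cn)^{-1} y
G = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
x_hat = G @ y
```

With Cx proportional to the identity and Cn → 0 this reduces to the weighted pseudoinverse; richer priors (nonuniform source variances, spatial correlations) are what let the estimator resolve more sources than sensors.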

20. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

CERN Document Server

Trobec, Roman

2015-01-01

This book is concentrated on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes

1. Designing mixed metal halide ammines for ammonia storage using density functional theory and genetic algorithms

DEFF Research Database (Denmark)

Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J.

2014-01-01

Metal halide ammines have great potential as a future, high-density energy carrier in vehicles. So far known materials, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive, fuel cell applications, because the release of ammonia is a multi-step reaction, requiring too much heat for use with polymer electrolyte membrane fuel cells (PEMFC). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I) – almost 27000 combinations – and have identified novel mixtures with significantly improved storage...

2. Combinatorial theory of the semiclassical evaluation of transport moments II: Algorithmic approach for moment generating functions

Energy Technology Data Exchange (ETDEWEB)

Berkolaiko, G. [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J. [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)

2013-12-15

Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.

3. Information needs of generalists and specialists using online best-practice algorithms to answer clinical questions.

Science.gov (United States)

Cook, David A; Sorensen, Kristi J; Linderbaum, Jane A; Pencille, Laurie J; Rhodes, Deborah J

2017-07-01

To better understand clinician information needs and learning opportunities by exploring the use of best-practice algorithms across different training levels and specialties. We developed interactive online algorithms (care process models [CPMs]) that integrate current guidelines, recent evidence, and local expertise to represent cross-disciplinary best practices for managing clinical problems. We reviewed CPM usage logs from January 2014 to June 2015 and compared usage across specialty and provider type. During the study period, 4009 clinicians (2014 physicians in practice, 1117 resident physicians, and 878 nurse practitioners/physician assistants [NP/PAs]) viewed 140 CPMs a total of 81,764 times. Usage varied from 1 to 809 views per person, and from 9 to 4615 views per CPM. Residents and NP/PAs viewed CPMs more often than practicing physicians. Among 2742 users with known specialties, generalists (N = 1397) used CPMs more often (mean 31.8, median 7 views) than specialists (N = 1345; mean 6.8, median 2; P < .0001). The topics used by specialists largely aligned with topics within their specialties. The top 20% of available CPMs (28/140) collectively accounted for 61% of uses. In all, 2106 clinicians (52%) returned to the same CPM more than once (average 7.8 views per topic; median 4, maximum 195). Generalists revisited topics more often than specialists (mean 8.8 vs 5.1 views per topic; P < .0001). CPM usage varied widely across topics, specialties, and individual clinicians. Frequently viewed and recurrently viewed topics might warrant special attention. Specialists usually view topics within their specialty and may have unique information needs. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

4. Selection of security system design via games of imperfect information and multi-objective genetic algorithm

International Nuclear Information System (INIS)

Lins, Isis Didier; Rêgo, Leandro Chaves; Moura, Márcio das Chagas

2013-01-01

This work analyzes the strategic interaction between a defender and an intelligent attacker by means of a game and reliability framework involving a multi-objective approach and imperfect information, so as to support decision-makers in choosing efficiently designed security systems. A multi-objective genetic algorithm is used to determine the optimal security system configurations representing the tradeoff between the probability of a successful defense and the acquisition and operational costs. Games with imperfect information are considered, in which the attacker has limited knowledge about the actual security system. The types of security alternatives are readily observable, but the number of redundancies actually implemented in each security subsystem is not known. The proposed methodology is applied to an illustrative example considering power transmission lines in the Northeast of Brazil, which are often targets for attackers who aim to sell the aluminum conductors. The empirical results show that the framework succeeds in handling this sort of strategic interaction. -- Highlights: ► Security components must have feasible costs and must be reliable. ► The optimal design of security systems considers a multi-objective approach. ► Games of imperfect information enable the choice of non-dominated configurations. ► MOGA, reliability and games support the entire defender's decision process. ► The selection of effective security systems may discourage attacker's actions
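The "non-dominated configurations" in the highlights refer to Pareto optimality: a configuration survives selection if no other is at least as good in both objectives and strictly better in one. A minimal sketch of that selection step, with made-up (defense probability, cost) pairs rather than any values from the paper:

```python
# Hypothetical sketch of Pareto (non-dominated) selection over security
# configurations scored as (defense probability, cost); maximize the first,
# minimize the second. The example configurations are illustrative.
def dominates(a, b):
    """a dominates b if a is no worse in both objectives and better in one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(configs):
    # Keep every configuration that no other configuration dominates.
    return [c for c in configs
            if not any(dominates(o, c) for o in configs if o is not c)]

configs = [(0.95, 120.0), (0.90, 80.0), (0.85, 100.0), (0.99, 200.0)]
print(pareto_front(configs))   # (0.85, 100.0) is dominated by (0.90, 80.0)
```

In a full MOGA this filter would be applied to each generation's population before crossover and mutation.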

5. FCJ-198 New International Information Order (NIIO) Revisited: Global Algorithmic Governance and Neocolonialism

Directory of Open Access Journals (Sweden)

Danny Butt

2016-03-01

Full Text Available The field of Internet governance has been dominated by Euro-American actors and has largely resisted consideration of a holistic and integrative rights-based agenda, confining itself to narrow discussions on the technical stability of Internet Protocol resources and debates about nation-state involvement in multistakeholder governance of those resources. In light of the work of Edward Snowden documenting the close relationship between government security agencies and dominant social media platforms, this paper revisits the relevance of the New International Information Order (NIIO), a conceptualisation of the global politics of information described at the 1973 Fourth Summit Conference of the Non-Aligned Movement of nations in Algiers. This paper argues that critical analysis of the oligopolistic structure of “platforms” and their algorithmic forms of governance can build a more inclusive movement toward social justice by extending the NIIO framework’s emphasis on decolonisation, collective ownership of strategic information resources, and documentation of powerful transnational entities.

6. A Novel Object Tracking Algorithm Based on Compressed Sensing and Entropy of Information

Directory of Open Access Journals (Sweden)

Ding Ma

2015-01-01

Full Text Available Object tracking has always been a hot research topic in the field of computer vision; its purpose is to track objects with specific characteristics or representations and to estimate information about them, such as their locations, sizes, and rotation angles in the current frame. Object tracking in complex scenes usually encounters various challenges, such as location change, dimension change, illumination change, perception change, and occlusion. This paper proposes a novel object tracking algorithm based on compressed sensing and information entropy to address these challenges. First, objects are characterized by Haar (Haar-like) and ORB features. Second, the dimensions of the computation space of the Haar and ORB features are effectively reduced through compressed sensing. Then the above-mentioned features are fused based on information entropy. Finally, in the particle filter framework, the object location is obtained by selecting candidate object locations in the current frame from the local context neighboring the optimal locations in the last frame. Our extensive experimental results demonstrate that this method effectively addresses the challenges of perception change, illumination change, and large-area occlusion, achieving better performance than existing approaches such as MIL and CT.
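The dimensionality-reduction step the abstract describes is typically done by multiplying the high-dimensional feature vector with a sparse random measurement matrix. A minimal sketch under assumed dimensions (the paper's actual feature sizes and matrix are not given here):

```python
import numpy as np

# Sketch of the compressed-sensing projection: map a high-dimensional
# Haar-like feature vector to a low dimension with a sparse random matrix.
# The dimensions and sparsity pattern below are illustrative assumptions.
rng = np.random.default_rng(0)
d_high, d_low = 10_000, 50

# Entries in {-1, 0, +1}: such sparse matrices approximately preserve
# pairwise distances with high probability (Johnson-Lindenstrauss style).
R = rng.choice([-1.0, 0.0, 1.0], size=(d_low, d_high), p=[1/6, 2/3, 1/6])

features = rng.standard_normal(d_high)   # stand-in for a Haar feature vector
compressed = R @ features
print(compressed.shape)                  # (50,)
```

The same fixed matrix `R` would be reused for every frame so that compressed features remain comparable over time.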

7. Using theories of behaviour change to inform interventions for addictive behaviours.

Science.gov (United States)

Webb, Thomas L; Sniehotta, Falko F; Michie, Susan

2010-11-01

This paper reviews a set of theories of behaviour change that are used outside the field of addiction and considers their relevance for this field. Ten theories are reviewed in terms of (i) the main tenets of each theory, (ii) the implications of the theory for promoting change in addictive behaviours and (iii) studies in the field of addiction that have used the theory. An augmented feedback loop model based on Control Theory is used to organize the theories and to show how different interventions might achieve behaviour change. Briefly, each theory provided the following recommendations for intervention: Control Theory: prompt behavioural monitoring, Goal-Setting Theory: set specific and challenging goals, Model of Action Phases: form 'implementation intentions', Strength Model of Self-Control: bolster self-control resources, Social Cognition Models (Protection Motivation Theory, Theory of Planned Behaviour, Health Belief Model): modify relevant cognitions, Elaboration Likelihood Model: consider targets' motivation and ability to process information, Prototype Willingness Model: change perceptions of the prototypical person who engages in behaviour and Social Cognitive Theory: modify self-efficacy. There are a range of theories in the field of behaviour change that can be applied usefully to addiction, each one pointing to a different set of modifiable determinants and/or behaviour change techniques. Studies reporting interventions should describe theoretical basis, behaviour change techniques and mode of delivery accurately so that effective interventions can be understood and replicated. © 2010 The Authors. Journal compilation © 2010 Society for the Study of Addiction.

8. Critical Theory as a foundation for Pragmatic Information Systems Design

OpenAIRE

Gerald Benoît

2001-01-01

This paper considers how the perspectives, communication models, and linguistic behaviors of information-system designers and end-users differ. A critique of these differences is made by applying Habermas's communicative action principles. An empirical study of human-human information seeking, based on those principles, indicates which behaviors are predictors of successful interactions and are therefore candidates for integration into computerized information systems.

9. Stakeholder theory and reporting information The case of performance prism

Directory of Open Access Journals (Sweden)

Bartłomiej Nita

2016-07-01

Full Text Available The aim of the paper is to explain the stakeholder theory in the context of performance measurement in integrated reporting. Main research methods used in the article include logical reasoning, critical analysis of academic literature, and observation. The principal result of the discussion is included in the statement that the stakeholder theory in the field of accounting is reflected in the so-called integrated reporting. Moreover, among the large variety of performance measurement methods, such as balanced scorecard and others, the concept of performance prism can be considered as the only method that fully takes into account the wide range of stakeholders. The analysis performed leads to the conclusion that development in accounting research takes into account the objectives of an organization in the context of the so-called corporate social responsibility as well as performance reporting oriented towards the communication of the company with its environment and the various stakeholder groups.

10. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems

Directory of Open Access Journals (Sweden)

D'Onofrio David J

2012-03-01

Full Text Available The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

11. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

Science.gov (United States)

D'Onofrio, David J; Abel, David L; Johnson, Donald E

2012-03-14

The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

12. The logic of logistics theory, algorithms, and applications for logistics management

CERN Document Server

Simchi-Levi, David; Bramel, Julien

2014-01-01

Fierce competition in today's global market provides a powerful motivation for developing ever more sophisticated logistics systems. This book, written for the logistics manager and researcher, presents a survey of the modern theory and application of logistics. The goal of the book is to present the state of the art in the science of logistics management. This third edition includes new chapters on the subjects of game theory, the power of process flexibility, supply chain competition and collaboration. Among the other materials new to the edition are sections on discrete convex analysis and its applications to stochastic inventory models, as well as extended discussions of integrated inventory and pricing models. The material presents a timely and authoritative survey of the field that will make an invaluable companion to the work of many researchers and practitioners.   Review of earlier edition:   "The present book focuses on the application of operational research and mathematical modelling technique...

13. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

Directory of Open Access Journals (Sweden)

Zhao Hong-hao

2016-01-01

Full Text Available Recently, network traffic has been increasing exponentially due to all kinds of applications, such as the mobile Internet, smart cities, smart transportation, the Internet of Things, and so on. End-to-end network traffic thus becomes more important for traffic engineering, yet end-to-end traffic estimation is usually highly difficult. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distributed normal process. Then Bayes theory is used to characterize the end-to-end network traffic. By calculating the parameters, the model is determined correctly. Simulation results show that our approach is feasible and effective.
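For an i.i.d. normal process with known variance, the Bayesian parameter calculation the abstract alludes to has a standard conjugate form: a normal prior on the mean yields a normal posterior. A sketch with synthetic traffic samples and assumed prior values (not the paper's actual data or priors):

```python
import numpy as np

# Sketch of a conjugate Bayes update for traffic modeled as i.i.d. normal
# with known observation variance. All numbers below are illustrative.
rng = np.random.default_rng(1)
traffic = rng.normal(loc=100.0, scale=10.0, size=500)  # synthetic Mb/s samples

sigma2 = 10.0 ** 2            # assumed known observation variance
mu0, tau2 = 80.0, 50.0 ** 2   # prior mean and prior variance on the mean

# Posterior precision is the sum of prior and data precisions;
# the posterior mean is the precision-weighted average.
n = traffic.size
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (mu0 / tau2 + traffic.sum() / sigma2)
print(round(post_mean, 1), round(post_var, 3))
```

With 500 samples the data dominate the prior, so the posterior mean sits close to the empirical mean while the posterior variance shrinks toward sigma2/n.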

14. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

OpenAIRE

Zhao Hong-hao; Meng Fan-bo; Zhao Si-wen; Zhao Si-hang; Lu Yi

2016-01-01

Recently, network traffic has been increasing exponentially due to all kinds of applications, such as the mobile Internet, smart cities, smart transportation, the Internet of Things, and so on. End-to-end network traffic thus becomes more important for traffic engineering, yet end-to-end traffic estimation is usually highly difficult. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distrib...

15. Internet information triangulation: Design theory and prototype evaluation

NARCIS (Netherlands)

Wijnhoven, Alphonsus B.J.M.; Brinkhuis, Michel

2014-01-01

Many discussions exist regarding the credibility of information on the Internet. Similar discussions happen on the interpretation of social scientific research data, for which information triangulation has been proposed as a useful method. In this article, we explore a design theory—consisting of a

16. Innovations in information retrieval perspectives for theory and practice

CERN Document Server

Foster, Allen

2011-01-01

The advent of various information retrieval (IR) technologies and approaches to storage and retrieval provide communities with opportunities for mass documentation, digitization, and the recording of information in different forms. This book introduces and contextualizes these developments and looks at supporting research in IR.

17. On the assessment of visual communication by information theory

Science.gov (United States)

Huck, Friedrich O.; Fales, Carl L.

1993-01-01

This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

18. Information Theory Broadens the Spectrum of Molecular Ecology and Evolution.

Science.gov (United States)

Sherwin, W B; Chao, A; Jost, L; Smouse, P E

2017-12-01

Information or entropy analysis of diversity is used extensively in community ecology, and has recently been exploited for prediction and analysis in molecular ecology and evolution. Information measures belong to a spectrum (or q profile) of measures whose contrasting properties provide a rich summary of diversity, including allelic richness (q=0), Shannon information (q=1), and heterozygosity (q=2). We present the merits of information measures for describing and forecasting molecular variation within and among groups, comparing forecasts with data, and evaluating underlying processes such as dispersal. Importantly, information measures directly link causal processes and divergence outcomes, have a straightforward relationship to allele frequency differences (including a monotonicity that q=2 lacks), and show additivity across hierarchical layers such as ecology, behaviour, cellular processes, and nongenetic inheritance. Copyright © 2017 Elsevier Ltd. All rights reserved.
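The q profile mentioned here is conventionally computed as Hill numbers (effective numbers of alleles) from a frequency vector; q=0 recovers richness, q=1 the exponential of Shannon entropy, and q=2 the inverse Simpson index (related to heterozygosity). A short sketch with made-up allele frequencies:

```python
import numpy as np

# Sketch of the diversity q profile (Hill numbers) from allele frequencies.
# The frequency vector below is illustrative, not data from the paper.
def hill_number(freqs, q):
    p = np.asarray(freqs, dtype=float)
    p = p[p > 0]
    if q == 1:
        # q = 1 is defined as the limit: exp of Shannon entropy.
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

p = [0.5, 0.3, 0.2]
profile = [round(hill_number(p, q), 3) for q in (0, 1, 2)]
print(profile)   # richness, exp(Shannon), inverse Simpson
```

As q grows, rare alleles contribute less, so the profile is non-increasing in q.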

19. Metaheuristic Algorithms Applied to Bioenergy Supply Chain Problems: Theory, Review, Challenges, and Future

Directory of Open Access Journals (Sweden)

Krystel K. Castillo-Villar

2014-11-01

Full Text Available Bioenergy is a new source of energy that accounts for a substantial portion of renewable energy production in many countries. The production of bioenergy is expected to increase due to its unique advantages, such as no harmful emissions and abundance. Supply-related problems are the main obstacles precluding increased use of biomass (which is bulky and has low energy density) to produce bioenergy. To overcome this challenge, large-scale optimization models need to be solved to enable decision makers to plan, design, and manage bioenergy supply chains. Therefore, the use of effective optimization approaches is of great importance. The traditional mathematical methods (such as linear, integer, and mixed-integer programming) frequently fail to find optimal solutions for non-convex and/or large-scale models, whereas metaheuristics are efficient approaches for finding near-optimal solutions that use fewer computational resources. This paper presents a comprehensive review by studying and analyzing the application of metaheuristics to solve bioenergy supply chain models, as well as the exclusive challenges of the mathematical problems applied in the bioenergy supply chain field. The reviewed metaheuristics include: (1) population approaches, such as ant colony optimization (ACO), the genetic algorithm (GA), particle swarm optimization (PSO), and the bee colony algorithm (BCA); and (2) trajectory approaches, such as tabu search (TS) and simulated annealing (SA). Based on the outcomes of this literature review, the integrated design and planning of bioenergy supply chains problem has been solved primarily by implementing the GA. The production process optimization was addressed primarily by using both the GA and PSO. The supply chain network design problem was treated by utilizing the GA and ACO. The truck and task scheduling problem was solved using the SA and the TS, where the trajectory-based methods proved to outperform the population
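Of the trajectory approaches the review covers, simulated annealing is the simplest to sketch: a single candidate wanders through the search space, always accepting improvements and occasionally accepting worse moves with a temperature-controlled probability. A toy instance on a one-dimensional cost function (the cost, neighborhood, and cooling schedule are illustrative assumptions, not any model from the reviewed literature):

```python
import math
import random

# Illustrative simulated annealing sketch: minimize a toy convex cost.
def anneal(cost, x0, neighbor, t0=1.0, cooling=0.995, steps=5000, seed=42):
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / t), which lets the search escape local optima early on.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
        t *= cooling          # geometric cooling schedule
    return x, fx

cost = lambda x: (x - 3.0) ** 2 + 1.0
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, best_cost = anneal(cost, x0=-10.0, neighbor=neighbor)
print(round(best, 2), round(best_cost, 3))
```

In a scheduling application the state would be a truck/task assignment and the neighbor move a swap of two tasks, but the accept/cool loop is unchanged.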

20. Assessment of various failure theories for weight and cost optimized laminated composites using genetic algorithm

Energy Technology Data Exchange (ETDEWEB)

Goyal, T. [Indian Institute of Technology Kanpur. Dept. of Aerospace Engineering, UP (India); Gupta, R. [Infotech Enterprises Ltd., Hyderabad (India)

2012-07-01

1. The information a history, a theory, a flood

CERN Document Server

Gleick, James

2011-01-01

Winner of the Royal Society Winton Prize for Science Books 2012, the world's leading prize for popular science writing. We live in the information age. But every era of history has had its own information revolution: the invention of writing, the composition of dictionaries, the creation of the charts that made navigation possible, the discovery of the electronic signal, the cracking of the genetic code. In 'The Information' James Gleick tells the story of how human beings use, transmit and keep what they know. From African talking drums to Wikipedia, from Morse code to the 'bit', it is a fascinating account of the modern age's defining idea and a brilliant exploration of how information has revolutionised our lives.

2. Reflections on the Right to Information Based on Citizenship Theories

Directory of Open Access Journals (Sweden)

Vitor Gentilli

2007-06-01

Full Text Available In modern societies, structured as representative democracies, all rights to some extent are related to the right to information: the enlargement of participation in citizenship presupposes an enlargement of the right to information as a premise. It is a right which encourages the exercising of citizenship and affords the citizens access to and criticism of the instruments necessary for the full exercising of the group of citizenship rights. The right to information can have characteristics of emancipation or of tutelage. An emancipating right is a right to freedom, a right whose basic presupposition is freedom of choice. Accordingly, the maxim which could sum up the ethical issue of the right to information would be: give maximum publicity to everything which refers to the public sphere and keep secret that which refers to the private sphere.

3. Multi-Sensor Information Fusion for Optimizing Electric Bicycle Routes Using a Swarm Intelligence Algorithm

Science.gov (United States)

Villarubia, Gabriel; De Paz, Juan F.; Bajo, Javier

2017-01-01

The use of electric bikes (e-bikes) has grown in popularity, especially in large cities where overcrowding and traffic congestion are common. This paper proposes an intelligent engine management system for e-bikes which uses the information collected from sensors to optimize battery energy and time. The intelligent engine management system consists of a built-in network of sensors in the e-bike, which is used for multi-sensor data fusion; the collected data is analysed and fused and on the basis of this information the system can provide the user with optimal and personalized assistance. The user is given recommendations related to battery consumption, sensors, and other parameters associated with the route travelled, such as duration, speed, or variation in altitude. To provide a user with these recommendations, artificial neural networks are used to estimate speed and consumption for each of the segments of a route. These estimates are incorporated into evolutionary algorithms in order to make the optimizations. A comparative analysis of the results obtained has been conducted for when routes were travelled with and without the optimization system. From the experiments, it is evident that the use of an engine management system results in significant energy and time savings. Moreover, user satisfaction increases as the level of assistance adapts to user behavior and the characteristics of the route. PMID:29088087

4. Multi-Sensor Information Fusion for Optimizing Electric Bicycle Routes Using a Swarm Intelligence Algorithm

Directory of Open Access Journals (Sweden)

Daniel H. De La Iglesia

2017-10-01

Full Text Available The use of electric bikes (e-bikes) has grown in popularity, especially in large cities where overcrowding and traffic congestion are common. This paper proposes an intelligent engine management system for e-bikes which uses the information collected from sensors to optimize battery energy and time. The intelligent engine management system consists of a built-in network of sensors in the e-bike, which is used for multi-sensor data fusion; the collected data is analysed and fused and on the basis of this information the system can provide the user with optimal and personalized assistance. The user is given recommendations related to battery consumption, sensors, and other parameters associated with the route travelled, such as duration, speed, or variation in altitude. To provide a user with these recommendations, artificial neural networks are used to estimate speed and consumption for each of the segments of a route. These estimates are incorporated into evolutionary algorithms in order to make the optimizations. A comparative analysis of the results obtained has been conducted for when routes were travelled with and without the optimization system. From the experiments, it is evident that the use of an engine management system results in significant energy and time savings. Moreover, user satisfaction increases as the level of assistance adapts to user behavior and the characteristics of the route.

5. Multi-Sensor Information Fusion for Optimizing Electric Bicycle Routes Using a Swarm Intelligence Algorithm.

Science.gov (United States)

De La Iglesia, Daniel H; Villarrubia, Gabriel; De Paz, Juan F; Bajo, Javier

2017-10-31

The use of electric bikes (e-bikes) has grown in popularity, especially in large cities where overcrowding and traffic congestion are common. This paper proposes an intelligent engine management system for e-bikes which uses the information collected from sensors to optimize battery energy and time. The intelligent engine management system consists of a built-in network of sensors in the e-bike, which is used for multi-sensor data fusion; the collected data is analysed and fused and on the basis of this information the system can provide the user with optimal and personalized assistance. The user is given recommendations related to battery consumption, sensors, and other parameters associated with the route travelled, such as duration, speed, or variation in altitude. To provide a user with these recommendations, artificial neural networks are used to estimate speed and consumption for each of the segments of a route. These estimates are incorporated into evolutionary algorithms in order to make the optimizations. A comparative analysis of the results obtained has been conducted for when routes were travelled with and without the optimization system. From the experiments, it is evident that the use of an engine management system results in significant energy and time savings. Moreover, user satisfaction increases as the level of assistance adapts to user behavior and the characteristics of the route.
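The optimization stage described above, feeding per-segment estimates into an evolutionary algorithm, can be illustrated with a simple (1+1) evolution strategy. The consumption/time model below is a crude stand-in for the paper's trained neural networks, and the segment count, weights, and mutation scale are all assumptions:

```python
import random

# Illustrative (1+1) evolution strategy: choose per-segment assistance
# levels in [0, 1] to trade battery drain against trip time.
# The fitness model is a stand-in, not the paper's ANN estimates.
SEGMENTS = 8

def fitness(levels):
    battery = sum(l ** 2 for l in levels)         # more assist, more drain
    time = sum(1.0 / (0.5 + l) for l in levels)   # more assist, faster segment
    return battery + 5.0 * time                   # weighted single objective

def evolve(steps=2000, seed=7):
    rng = random.Random(seed)
    best = [rng.uniform(0.0, 1.0) for _ in range(SEGMENTS)]
    best_fit = fitness(best)
    for _ in range(steps):
        # Mutate every segment slightly, clamp to the valid range,
        # and keep the child only if it improves the fitness.
        child = [min(1.0, max(0.0, l + rng.gauss(0, 0.1))) for l in best]
        f = fitness(child)
        if f < best_fit:
            best, best_fit = child, f
    return best, best_fit

levels, fit = evolve()
print(round(fit, 2))
```

A population-based evolutionary algorithm, as the paper uses, would keep many candidate assistance profiles and recombine them, but the fitness evaluation per route segment has the same shape.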

6. Improved Inverse Kinematics Algorithm Using Screw Theory for a Six-DOF Robot Manipulator

OpenAIRE

Chen, Qingcheng; Zhu, Shiqiang; Zhang, Xuequn

2015-01-01

Based on screw theory, a novel improved inverse-kinematics approach for a type of six-DOF serial robot, “Qianjiang I”, is proposed in this paper. The common kinematics model of the robot is based on the Denavit-Hartenberg (D-H) notation method while its inverse kinematics has inefficient calculation and complicated solution, which cannot meet the demands of online real-time application. To solve this problem, this paper presents a new method to improve the efficiency of the inverse kinematics...

7. The logic of logistics theory, algorithms, and applications for logistics and supply chain management

CERN Document Server

Simchi-Levi, David; Bramel, Julien

2005-01-01

Fierce competition in today's global market provides a powerful motivation for developing ever more sophisticated logistics systems. This book, written for the logistics manager and researcher, presents a survey of the modern theory and application of logistics. The goal of the book is to present the state of the art in the science of logistics management. As a result, the authors have written a timely and authoritative survey of this field that many practitioners and researchers will find makes an invaluable companion to their work.

8. Robust algorithms and system theory applied to the reconstruction of primary and secondary vertices

International Nuclear Information System (INIS)

Fruehwirth, R.; Liko, D.; Mitaroff, W.; Regler, M.

1990-01-01

Filter techniques from system theory have recently been applied to the estimation of track and vertex parameters. In this paper, vertex fitting by the Kalman filter method is discussed. These techniques have been applied to the identification of short-lived decay vertices in the case of high multiplicities as expected at LEP (Monte Carlo data in the DELPHI detector). The need for further robustification of the Kalman filter method in this context is then discussed. Finally, results of an application with real data at a heavy-ion experiment (NA36) are presented. Here the vertex fit is used to select the interaction point among possible targets
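At the core of Kalman vertex fitting, each track contributes a measurement that refines the current vertex estimate via the standard measurement update. A heavily simplified sketch, reducing each "track" to a direct linear measurement of a 2-D vertex position (the matrices and noise levels are illustrative, not a real track model):

```python
import numpy as np

# Minimal Kalman measurement update, applied track by track to a vertex.
def kalman_update(x, P, z, H, R):
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)      # state update from the residual
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.zeros(2)          # prior vertex position
P = np.eye(2) * 100.0    # large prior uncertainty
H = np.eye(2)            # each "track" observes the position directly
R = np.eye(2) * 0.5      # per-track measurement noise

# Three noisy track measurements of the same vertex.
for z in [np.array([1.1, 2.0]), np.array([0.9, 2.2]), np.array([1.0, 1.9])]:
    x, P = kalman_update(x, P, z, H, R)
print(np.round(x, 2))
```

The robustification mentioned in the abstract typically means down-weighting tracks with large residuals so a single mismeasured track cannot drag the vertex estimate.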

9. Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.

Science.gov (United States)

Waheed, Khuram; Salem, Fathi M

2005-07-01

10. Human vision is determined based on information theory

Science.gov (United States)

2016-11-01

It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
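The energy peak that the abstract contrasts against follows from Wien's displacement law; computing it shows that the Sun's intensity maximum falls near 501 nm, not at the photopic 555 nm, which is the gap the entropy argument is meant to explain. A quick check (the solar temperature used is the commonly quoted effective surface temperature):

```python
# Wien's displacement law: the wavelength of maximum spectral intensity
# for a blackbody is lambda_peak = b / T.
WIEN_B = 2.897771955e-3   # m*K, Wien displacement constant
T_SUN = 5778.0            # K, commonly quoted effective solar temperature

lambda_peak_nm = WIEN_B / T_SUN * 1e9
print(round(lambda_peak_nm))   # ~501 nm, versus 555 nm for photopic vision
```

Note the peak of a blackbody spectrum shifts depending on whether it is expressed per unit wavelength or per unit frequency; the value above is the per-wavelength peak.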

11. IEEE International Symposium on Information Theory (ISIT): Abstracts of Papers, Held in Ann Arbor, Michigan on 6-9 October 1986.

Science.gov (United States)

1986-10-01

code algorithms of Adler-Coopersmith-Hassner and Karabed-Marcus, which exploit techniques of symbolic dynamics to derive systematic code construction... procedures for finite and infinite memory channels. (The paper of Adler-Coopersmith-Hassner received the 1985 Information Theory Group Paper Award...) Research Center K69/802, 650 Harry Road, San Jose, CA 95120, USA. We continue here the work of Adler, Coopersmith, and Hassner (see IEEE-IT 29, 5-22

12. Theory of impossible worlds: Toward a physics of information.

Science.gov (United States)

Buscema, Paolo Massimo; Sacco, Pier Luigi; Della Torre, Francesca; Massini, Giulia; Breda, Marco; Ferilli, Guido

2018-05-01

In this paper, we introduce an innovative approach to the fusion between datasets in terms of attributes and observations, even when they are not related at all. With our technique, starting from datasets representing independent worlds, it is possible to analyze a single global dataset, and transferring each dataset onto the others is always possible. This procedure allows a deeper perspective in the study of a problem, by offering the chance of looking into it from other, independent points of view. Even unrelated datasets create a metaphoric representation of the problem, useful in terms of speed of convergence and predictive results, preserving the fundamental relationships in the data. In order to extract such knowledge, we propose a new learning rule named double backpropagation, by which an auto-encoder concurrently codifies all the different worlds. We test our methodology on different datasets and different issues, to underline the power and flexibility of the Theory of Impossible Worlds.

13. Application of fuzzy C-Means Algorithm for Determining Field of Interest in Information System Study STTH Medan

Science.gov (United States)

Rahman Syahputra, Edy; Agustina Dalimunthe, Yulia; Irvan

2017-12-01

Many students are confused when choosing their field of specialization and ultimately choose one that does not suit them, for various reasons, such as simply following a friend or choosing from the many areas of interest on offer without knowing whether they have the competencies the chosen field requires. This research aims to apply a clustering method, the Fuzzy C-Means algorithm, to classify students into fields of interest. The Fuzzy C-Means algorithm is one of the easiest and most frequently used algorithms in data-grouping techniques because it produces efficient estimates and does not require many parameters. Several studies have concluded that the Fuzzy C-Means algorithm can be used to group data based on certain attributes. In this research, the Fuzzy C-Means algorithm is used to classify student data based on grades in core subjects for the selection of a specialization field. This study also tests the accuracy of the Fuzzy C-Means algorithm in determining the area of interest. The study was conducted on the STT-Harapan Medan Information System Study program, and the object of research is the grades of all 2012 students of the STT-Harapan Medan Information System Study Program. This research is expected to identify the specialization field that matches each student's ability, based on prerequisite course grades.
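Fuzzy C-Means alternates between computing membership-weighted cluster centers and updating soft memberships from distances to those centers. A minimal sketch on toy one-feature grade data; the cluster count, fuzzifier m, and data are illustrative assumptions, not values from the study:

```python
import numpy as np

# Illustrative fuzzy C-means on toy course marks (one feature per student).
def fcm(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)         # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        # Membership-weighted cluster centers.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                 # guard against zero distance
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(axis=1, keepdims=True)
    return U, centers

X = np.array([[55.0], [60.0], [58.0], [85.0], [90.0], [88.0]])  # toy marks
U, centers = fcm(X)
print(np.round(centers.ravel(), 1))
```

Each student ends up with a degree of membership in every cluster, which is what lets borderline students be flagged rather than forced into a single field of interest.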

14. Product-oriented design theory for digital information services: A literature review.

NARCIS (Netherlands)

Wijnhoven, Alphonsus B.J.M.; Kraaijenbrink, Jeroen

2008-01-01

Purpose – The purpose of this paper is to present a structured literature review, design concepts, and research propositions related to a product-oriented design theory for information services. Information services facilitate the exchange of information goods with or without transforming these goods.

15. Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing

Science.gov (United States)

Schrauben, Julie E.

2010-01-01

LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which processing of information occurs. These processes are dependent upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…

16. Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I

Energy Technology Data Exchange (ETDEWEB)

Thomas, L. (ed.)

1979-01-01

The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting, held at the Argonne National Laboratory, Argonne, Illinois, June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.

17. Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I

International Nuclear Information System (INIS)

Thomas, L.

1979-01-01

The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting, held at the Argonne National Laboratory, Argonne, Illinois, June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.

18. Algorithmic mathematics

CERN Document Server

Hougardy, Stefan

2016-01-01

Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
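Two of the fundamental algorithms the book names can be stated in a few lines each; the book implements them in C++, but the logic is the same in any language (shown here in Python for brevity).

```python
def gcd(a, b):
    # Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)
    # until the remainder is zero; the last nonzero value is the gcd.
    while b:
        a, b = b, a % b
    return a

def sieve(n):
    # Sieve of Eratosthenes: cross out the multiples of each prime
    # up to sqrt(n); whatever remains marked is prime.
    is_prime = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return [p for p, flag in enumerate(is_prime) if flag]

print(gcd(252, 198))   # 18
print(sieve(30))       # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```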

19. Relation between entropy functional of Keizer and information theory

International Nuclear Information System (INIS)

Freidkin, E.S.; Nettleton, R.E.

1990-01-01

An equation given by Keizer, which relates the second-order functional derivative of the steady-state entropy to the inverse fluctuation correlation function, is satisfied by the information-theoretic entropy if the equation is extended to arbitrary nonequilibrium states.
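Schematically, a relation of the kind the abstract describes can be written as follows; this is a sketch in generic notation (Keizer's exact formulation and sign conventions may differ):

```latex
\frac{\delta^{2} S}{\delta a_{i}\,\delta a_{j}}\bigg|_{\mathrm{ss}}
  = -\,k_{B}\,\bigl(\sigma^{-1}\bigr)_{ij},
\qquad
\sigma_{ij} = \langle \delta a_{i}\,\delta a_{j} \rangle ,
```

i.e. the second-order functional derivative of the steady-state entropy $S$ with respect to the extensive variables $a_i$ equals the negative inverse of the fluctuation correlation matrix $\sigma$. The abstract's claim is that the information-theoretic (maximum-entropy) entropy satisfies the same relation once it is extended to arbitrary nonequilibrium states.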

20. A Modified Method Combined with a Support Vector Machine and Bayesian Algorithms in Biological Information

Directory of Open Access Journals (Sweden)

Wen-Gang Zhou

2015-06-01

With the deepening of research in genomics and proteomics, the number of new protein sequences has expanded rapidly. Given the obvious shortcomings of the traditional experimental method, namely high cost and low efficiency, computational methods for protein localization prediction have attracted much attention due to their convenience and low cost. Among machine learning techniques, neural networks and support vector machines (SVMs) are often used as learning tools. Due to its complete theoretical framework, the SVM has been widely applied. In this paper, we improve on the existing support vector machine algorithm and develop a new algorithm that combines it with Bayesian methods. The proposed algorithm improves calculation efficiency and eliminates defects of the original algorithm. According to our verification, the method has proved to be valid. At the same time, it reduces calculation time and improves prediction efficiency.
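The abstract does not give the exact combination rule, so the following is only a sketch of one plausible way to combine an SVM with a Bayesian classifier: a linear SVM trained by subgradient descent on the hinge loss, a Gaussian naive Bayes model, and a simple average of their squashed scores. The toy data and every name here are illustrative, not the paper's method or dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for protein feature vectors: two well-separated classes.
X0 = rng.normal(loc=-2.0, size=(40, 4))
X1 = rng.normal(loc=+2.0, size=(40, 4))
X = np.vstack([X0, X1])
y = np.array([-1] * 40 + [+1] * 40)

# --- Linear SVM via subgradient descent on the regularized hinge loss ---
w, b, lam, lr = np.zeros(4), 0.0, 0.01, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1                                    # margin violators
    gw = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(X)
    gb = -y[mask].sum() / len(X)
    w -= lr * gw
    b -= lr * gb

# --- Gaussian naive Bayes: per-class, per-feature normal densities ---
def nb_score(X, X_neg, X_pos):
    def logpdf(X, Xc):
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-6
        return (-0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)).sum(axis=1)
    return logpdf(X, X_pos) - logpdf(X, X_neg)            # log-likelihood ratio

# --- Combine: squash each score to a comparable scale, then average ---
svm_s = X @ w + b
nb_s = nb_score(X, X0, X1)
combined = np.tanh(svm_s) + np.tanh(nb_s)
pred = np.where(combined >= 0, 1, -1)
acc = (pred == y).mean()
print(acc)
```

Averaging squashed scores is only one of many fusion schemes (stacking or a Bayesian posterior over the SVM output are alternatives); it is used here because it needs no extra held-out data.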