WorldWideScience

Sample records for algorithmic information theory

  1. Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory

    Science.gov (United States)

    Kitazono, Jun; Kanai, Ryota; Oizumi, Masafumi

    2018-03-01

    The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information ($\Phi$) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has previously been shown that if a measure of $\Phi$ satisfies a mathematical property, submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of $\Phi$ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of $\Phi$ by evaluating its accuracy on simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure $\Phi$ in large systems within a practical amount of time.
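
    To make the exponential cost concrete, the toy sketch below (our own illustration, not the authors' code) enumerates all 2^(n-1) - 1 bipartitions of a small system and returns the cut minimizing a user-supplied `phi`; the submodular case replaces this loop with a Queyranne-style polynomial-time minimization. The measure `toy_phi` is a placeholder assumption.

```python
from itertools import combinations

def mip_exhaustive(elements, phi):
    """Exhaustive Minimum Information Partition search: try every
    bipartition and keep the one with minimal information loss `phi`.
    The number of bipartitions is 2**(n-1) - 1, hence exponential cost."""
    elements = list(elements)
    n = len(elements)
    best_cut, best_phi = None, float("inf")
    rest = elements[1:]
    # Fix element 0 in one part so each bipartition is visited only once.
    for k in range(n - 1):
        for combo in combinations(rest, k):
            part_a = [elements[0], *combo]
            part_b = [e for e in elements if e not in part_a]
            loss = phi(part_a, part_b)
            if loss < best_phi:
                best_cut, best_phi = (part_a, part_b), loss
    return best_cut, best_phi

# Placeholder measure: penalize unbalanced cuts (a real Phi would use the
# system's transition probabilities).
toy_phi = lambda a, b: abs(len(a) - len(b))
print(mip_exhaustive(range(4), toy_phi))
```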

  2. [Prediction of regional soil quality based on mutual information theory integrated with decision tree algorithm].

    Science.gov (United States)

    Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu

    2012-02-01

    In this paper, to precisely capture the spatial distribution characteristics of regional soil quality, the main factors affecting it, such as soil type, land use pattern, lithology type, topography, road, and industry type, were considered: mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See5 (C5.0) was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model built on the variables selected by mutual information was markedly higher than that of the model using all variables, and the former model's prediction accuracy, whether for decision trees or decision rules, exceeded 80%. Applicable to both continuous and categorical data, the method of mutual information theory integrated with a decision tree not only reduces the number of input parameters for the decision tree algorithm, but also predicts and assesses regional soil quality effectively.
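
    A minimal sketch of this two-stage pipeline, with scikit-learn's mutual information estimator and decision tree standing in for the paper's environmental factors and its See5 (C5.0) classifier; the synthetic data and the choice to keep four factors are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # 8 candidate environmental factors
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # grade depends on factors 0 and 2

# Stage 1: rank factors by mutual information with the soil-quality grade.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-4:]                      # retain the 4 most informative

# Stage 2: train the decision tree on the reduced factor set only.
Xtr, Xte, ytr, yte = train_test_split(X[:, keep], y, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xtr, ytr)
print("selected factors:", keep, "accuracy:", tree.score(Xte, yte))
```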

  3. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
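
    The conventional error-reduction algorithm that the paper recovers as a special case alternates between the measured Fourier magnitude and object-domain constraints. A minimal NumPy sketch follows; the toy object, support, and iteration count are our assumptions, and the MEM-type generalization would modify the object-domain update.

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Classic error-reduction iteration: impose the measured Fourier
    magnitude, transform back, and impose the object-domain constraints
    (support and non-negativity)."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_mag.shape) * support       # random start
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))    # keep measured magnitude
        g = np.real(np.fft.ifft2(G))
        g = np.clip(g, 0, None) * support             # non-negativity + support
    return g

# Toy demo: recover a non-negative object from its Fourier magnitude.
true = np.zeros((32, 32)); true[12:20, 10:22] = 1.0
mag = np.abs(np.fft.fft2(true))
support = np.zeros_like(true); support[8:24, 6:26] = 1.0
rec = error_reduction(mag, support)
print("residual magnitude error:", np.abs(np.abs(np.fft.fft2(rec)) - mag).mean())
```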

  4. Algorithmic information theory mathematics of digital information processing

    CERN Document Server

    Seibt, Peter

    2007-01-01

    Treats the mathematics of many important areas in digital information processing. This book covers, in a unified presentation, five topics: Data Compression, Cryptography, Sampling (Signal Theory), Error Control Codes, and Data Reduction. It is useful for teachers, students, and practitioners in Electronic Engineering, Computer Science, and Mathematics.

  5. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Huimin Lu

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of the algorithm (information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system) are simulated and modeled. The algorithm fully exploits the innovative-thinking capabilities of knowledge in information fusion and attempts to translate the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can reach the optimal solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information.

  6. Galois theory and algorithms for linear differential equations

    NARCIS (Netherlands)

    Put, Marius van der

    2005-01-01

    This paper is an informal introduction to differential Galois theory. It surveys recent work on differential Galois groups, related algorithms and some applications.

  7. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  8. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied.

  9. Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.

    Science.gov (United States)

    Petry, Frederick E.; And Others

    1993-01-01

    Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…

  10. Understanding how replication processes can maintain systems away from equilibrium using Algorithmic Information Theory.

    Science.gov (United States)

    Devine, Sean D

    2016-02-01

    Replication can be envisaged as a computational process that is able to generate and maintain order far from equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact on a system. The capability of replicated structures to access high-quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements for maintaining a system far from equilibrium. Under Landauer's principle, where destabilising processes operating under the second law of thermodynamics change the information content, or the algorithmic entropy, of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside intervention. Both diversity in replicated structures and the coupling of different replicated systems increase the ability of the system (or systems) to self-regulate in a changing environment, as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and in providing broad descriptions of system behaviour.
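
    For orientation, Landauer's bound is straightforward to evaluate: countering a change of ΔH bits costs at least ΔH · k_B · T · ln 2 joules of free energy. A small worked example, with the temperature and ΔH chosen arbitrarily for illustration:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (illustrative)
delta_H = 1e9        # change in algorithmic entropy, bits (illustrative)

# Landauer's principle: erasing (or countering) one bit of entropy costs
# at least k_B * T * ln(2) joules of free energy.
E_min = delta_H * k_B * T * math.log(2)
print(f"minimum energy to counter {delta_H:.0e} bits: {E_min:.3e} J")
# -> roughly 2.9e-12 J: tiny per bit, but it lower-bounds what a
#    replicating structure must dissipate to stay far from equilibrium.
```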

  11. Learning theory of distributed spectral algorithms

    International Nuclear Information System (INIS)

    Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

    2017-01-01

    Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms.
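
    A minimal sketch of the divide-and-conquer strategy for one spectral algorithm, kernel ridge regression (the filter 1/(λ + σ) applied to the kernel spectrum), in plain NumPy. Averaging the per-block predictors is the standard distributed scheme; the Gaussian kernel, bandwidth, regularization, and toy data are all illustrative assumptions.

```python
import numpy as np

def krr_fit(X, y, lam=1e-2, gamma=10.0):
    """Kernel ridge regression on one data block (Gaussian kernel)."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return X, alpha, gamma

def krr_predict(model, Xq):
    X, alpha, gamma = model
    return np.exp(-gamma * (Xq[:, None] - X[None, :]) ** 2) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 600)
y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=600)

# Divide and conquer: fit on m disjoint blocks, then average the predictors.
m = 6
models = [krr_fit(Xb, yb)
          for Xb, yb in zip(np.array_split(X, m), np.array_split(y, m))]
Xq = np.linspace(0, 1, 200)
pred = np.mean([krr_predict(md, Xq) for md in models], axis=0)
print("test RMSE:", np.sqrt(np.mean((pred - np.sin(2 * np.pi * Xq)) ** 2)))
```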

  12. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Kennedy, A.D.

    1989-01-01

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered.
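
    The Hybrid Monte Carlo step the lectures build up to is compact enough to sketch. Below is a toy version for a single Gaussian degree of freedom; the "action" S(q) = q²/2 is our stand-in for a lattice action, and it shows the momentum refresh, leapfrog integration, and the Metropolis correction that removes the finite-step-size error.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_S(q):          # toy action S(q) = q**2 / 2
    return q

def hmc_step(q, eps=0.2, n_steps=10):
    """One Hybrid Monte Carlo update: refresh the momentum, integrate
    Hamilton's equations with the reversible, area-preserving leapfrog
    scheme, then accept/reject with a Metropolis test."""
    p = rng.normal()
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_S(q_new)        # half step in momentum
    for _ in range(n_steps - 1):
        q_new += eps * p_new                  # full step in position
        p_new -= eps * grad_S(q_new)          # full step in momentum
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_S(q_new)        # final half step
    dH = (0.5 * p_new**2 + 0.5 * q_new**2) - (0.5 * p**2 + 0.5 * q**2)
    return q_new if rng.random() < np.exp(-dH) else q

samples, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print("sample variance (target 1.0):", np.var(samples))
```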

  13. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  14. Interior point algorithms theory and analysis

    CERN Document Server

    Ye, Yinyu

    2011-01-01

    The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject.

  15. Epistemology as Information Theory: From Leibniz to Omega

    OpenAIRE

    Chaitin, G. J.

    2005-01-01

    In 1686 in his Discours de Metaphysique, Leibniz points out that if an arbitrarily complex theory is permitted then the notion of "theory" becomes vacuous because there is always a theory. This idea is developed in the modern theory of algorithmic information, which deals with the size of computer programs and provides a new view of Godel's work on incompleteness and Turing's work on uncomputability. Of particular interest is the halting probability Omega, whose bits are irreducible, i.e., maximally unknowable mathematical facts…

  16. Quantum Information Theory - an Invitation

    Science.gov (United States)

    Werner, Reinhard F.

    Quantum information and quantum computers have received a lot of public attention recently. Quantum computers have been advertised as a kind of warp drive for computing, and indeed the promise of the algorithms of Shor and Grover is to perform computations which are extremely hard or even provably impossible on any merely ``classical'' computer. In this article I give an account of the basic concepts of quantum information theory, staying as much as possible in the area of general agreement. The article is divided into two parts. The first (up to the end of Sect. 2.5) is mostly in plain English, centered around the exploration of what can or cannot be done with quantum systems as information carriers. The second part, Sect. 2.6, then gives a description of the mathematical structures and of some of the tools needed to develop the theory.

  17. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed; the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented; an approach based on representing links by closed braids is described; the known algorithms for solving the word problem and the conjugacy problem for braid groups are described; and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given, together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory.

  18. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  19. Nonequilibrium molecular dynamics theory, algorithms and applications

    CERN Document Server

    Todd, Billy D

    2017-01-01

    Written by two specialists with over twenty-five years of experience in the field, this valuable text presents a wide range of topics within the growing field of nonequilibrium molecular dynamics (NEMD). It introduces theories which are fundamental to the field - namely, nonequilibrium statistical mechanics and nonequilibrium thermodynamics - and provides state-of-the-art algorithms and advice for designing reliable NEMD code, as well as examining applications for both atomic and molecular fluids. It discusses homogenous and inhomogenous flows and pays considerable attention to highly confined fluids, such as nanofluidics. In addition to statistical mechanics and thermodynamics, the book covers the themes of temperature and thermodynamic fluxes and their computation, the theory and algorithms for homogenous shear and elongational flows, response theory and its applications, heat and mass transport algorithms, applications in molecular rheology, highly confined fluids (nanofluidics), the phenomenon of slip and...

  20. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

    The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control.   The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from a theory; (3) discuss their advantages and drawbacks, areas of applicability, give recommendations and examples.

  1. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  2. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    Science.gov (United States)

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as that given by the centralized clustering algorithms.
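
    A rough sketch of the consensus idea: each node computes local centroid statistics, and repeated weighted averaging over the network replaces the fusion center. The ring topology, weight matrix, toy data, and iteration counts are illustrative assumptions, and the k-means++ initialization from the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# 4 nodes in a ring (strongly connected); each observes its own data slice.
data = [rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0, 0, 3, 3)]
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])  # doubly stochastic weights

def consensus(values, rounds=50):
    """Average consensus: each node repeatedly replaces its value with a
    weighted average of its neighbors'; all values converge to the mean."""
    v = np.array(values, dtype=float)
    for _ in range(rounds):
        v = np.einsum('ij,j...->i...', W, v)
    return v

k, centroids = 2, rng.normal(size=(2, 2))
for _ in range(10):                        # distributed Lloyd iterations
    sums, counts = [], []
    for X in data:                         # local assignment step at each node
        lab = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        sums.append(np.array([X[lab == j].sum(0) for j in range(k)]))
        counts.append(np.array([(lab == j).sum() for j in range(k)]))
    # Consensus replaces the fusion center: agree on global sums and counts.
    avg_sums, avg_counts = consensus(sums), consensus(counts)
    centroids = avg_sums[0] / np.maximum(avg_counts[0], 1e-9)[:, None]
print("centroids:\n", centroids)
```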

  3. Linear programming mathematics, theory and algorithms

    CERN Document Server

    1996-01-01

    Linear Programming provides an in-depth look at simplex based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This then is followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.

  4. Parallelization of a spherical Sn transport theory algorithm

    International Nuclear Information System (INIS)

    Haghighat, A.

    1989-01-01

    The work described in this paper derives a parallel algorithm for an R-dependent spherical S_N transport theory algorithm and studies its performance by testing different sample problems. The S_N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of S_N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Weinke and Hommoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S_N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S_N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins).

  5. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient, exact algorithms for solving problems, which distinguishes it from the simpler heuristic approaches often described elsewhere. The book contains many concise and elegant proofs of difficult results. Intended for students…

  6. Higher arithmetic an algorithmic introduction to number theory

    CERN Document Server

    Edwards, Harold M

    2008-01-01

    Although number theorists have sometimes shunned and even disparaged computation in the past, today's applications of number theory to cryptography and computer security demand vast arithmetical computations. These demands have shifted the focus of studies in number theory and have changed attitudes toward computation itself. The important new applications have attracted a great many students to number theory, but the best reason for studying the subject remains what it was when Gauss published his classic Disquisitiones Arithmeticae in 1801: Number theory is the equal of Euclidean geometry--some would say it is superior to Euclidean geometry--as a model of pure, logical, deductive thinking. An arithmetical computation, after all, is the purest form of deductive argument. Higher Arithmetic explains number theory in a way that gives deductive reasoning, including algorithms and computations, the central role. Hands-on experience with the application of algorithms to computational examples enables students to m...

  7. Neighborhood Hypergraph Based Classification Algorithm for Incomplete Information System

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2015-01-01

    The problem of classification in incomplete information systems is a hot issue in intelligent information processing. The hypergraph is a new intelligent method for machine learning. However, it is hard to process an incomplete information system with the traditional hypergraph, for two reasons: (1) the hyperedges are generated randomly in the traditional hypergraph model; (2) the existing methods are unsuitable for incomplete information systems because of their missing values. In this paper, we propose a novel classification algorithm for incomplete information systems based on the hypergraph model and rough set theory. First, we initialize the hypergraph. Second, we classify the training set by the neighborhood hypergraph. Third, under the guidance of rough set theory, we replace the poor hyperedges. After that, we obtain a good classifier. The proposed approach is tested on 15 data sets from the UCI machine learning repository and compared with some existing methods, such as C4.5, SVM, Naive Bayes, and KNN. The experimental results show that the proposed algorithm has better performance in terms of Precision, Recall, AUC, and F-measure.

  8. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  9. Genetic algorithm based on virus theory of evolution for traveling salesman problem; Virus shinkaron ni motozuku identeki algorithm no junkai salesman mondai eno oyo

    Energy Technology Data Exchange (ETDEWEB)

    Kubota, N. [Osaka Inst. of Technology, Osaka (Japan); Fukuda, T. [Nagoya University, Nagoya (Japan)

    1998-05-31

    This paper deals with a virus-evolutionary genetic algorithm. Genetic algorithms (GAs) have demonstrated their effectiveness in optimization problems. In general, GAs simulate the survival of the fittest by natural selection and heredity, following Darwin's theory of evolution. However, several other evolutionary hypotheses, such as the neutral theory of molecular evolution, Imanishi's evolutionary theory, serial symbiosis theory, and the virus theory of evolution, have been proposed in addition to Darwinism. The virus theory of evolution is based on the view that virus transduction is a key mechanism for transporting segments of DNA across species. This paper proposes a genetic algorithm based on the virus theory of evolution (VE-GA), which has two populations: a host population and a virus population. The VE-GA is composed of genetic operators and virus operators such as reverse transcription and incorporation. The reverse transcription operator transcribes virus genes onto the chromosome of a host individual, and the incorporation operator creates a new virus genotype from a host individual. These operators make it possible to transmit segments of DNA between individuals in the host population. The VE-GA therefore realizes not only vertical but also horizontal propagation of genetic information. The VE-GA is applied to the traveling salesman problem in order to show its effectiveness.
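
    A loose sketch of the two virus operators on TSP tours; this is our own simplified rendition under stated assumptions (greedy acceptance of beneficial infections, truncation selection), not Kubota and Fukuda's exact scheme. Incorporation excises a sub-tour from a host; reverse transcription writes a virus's sub-tour back into a host.

```python
import random

random.seed(0)
CITIES = [(random.random(), random.random()) for _ in range(20)]

def tour_len(t):
    return sum(((CITIES[t[i]][0] - CITIES[t[i-1]][0]) ** 2 +
                (CITIES[t[i]][1] - CITIES[t[i-1]][1]) ** 2) ** 0.5
               for i in range(len(t)))

def reverse_transcription(host, virus):
    """Write the virus's city segment into a host tour: keep the segment
    contiguous and splice the remaining cities around it in original order."""
    rest = [c for c in host if c not in virus]
    cut = random.randrange(len(rest) + 1)
    return rest[:cut] + virus + rest[cut:]

def incorporation(host, seg_len=4):
    """Create a new virus genotype by excising a random segment of a host."""
    i = random.randrange(len(host) - seg_len)
    return host[i:i + seg_len]

hosts = [random.sample(range(20), 20) for _ in range(30)]
viruses = [incorporation(h) for h in hosts[:10]]

for _ in range(200):
    # Horizontal transmission: each virus infects a random host.
    for v in range(len(viruses)):
        h = random.randrange(len(hosts))
        child = reverse_transcription(hosts[h], viruses[v])
        if tour_len(child) < tour_len(hosts[h]):    # keep beneficial infections
            hosts[h] = child
            viruses[v] = incorporation(child)       # virus picks up new genes
    # Ordinary vertical inheritance: truncation selection on the hosts.
    hosts.sort(key=tour_len)
    hosts = hosts[:15] + [reverse_transcription(h, incorporation(h))
                          for h in hosts[:15]]
print("best tour length:", tour_len(hosts[0]))
```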

  10. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; thus an automated approach is a must. We discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics ("adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune while achieving better subjective image quality. With appropriate adjustments, the criterion can be used to assess the whole imaging system (sensor plus post-processing).

  11. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed

    2014-11-01

    This paper presents a new maximum power point tracking algorithm based on the hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, which has been validated by simulation tests under different working conditions. © 2014 IEEE.

  12. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Benmansour, K.; Boucherit, M. S.; Tadjine, M.

    2014-01-01

    This paper presents a new maximum power point tracking algorithm based on the hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, which has been validated by simulation tests under different working conditions. © 2014 IEEE.

  13. An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems

    KAUST Repository

    Zenil, Hector

    2017-09-08

    We introduce a conceptual framework and an interventional calculus to steer and manipulate systems based on their intrinsic algorithmic probability, using the universal principles of the theory of computability and algorithmic information. By applying sequences of controlled interventions to systems and networks, we estimate how changes in their algorithmic information content are reflected in positive/negative shifts towards and away from randomness. The strong connection between approximations to algorithmic complexity (the size of the shortest generating mechanism) and causality induces a sequence of perturbations ranking the network elements by their steering capabilities. This new dimension unmasks a separation between causal and non-causal components, providing a suite of powerful parameter-free algorithms of wide applicability, ranging from optimal dimension reduction and maximal-randomness analysis to system control. We introduce methods for reprogramming systems that do not require full knowledge of, or access to, the system's actual kinetic equations or any probability distributions. A causal interventional analysis of synthetic and regulatory biological networks reveals how algorithmic reprogramming qualitatively reshapes the system's dynamic landscape. For example, during cellular differentiation we find a decrease in the number of elements corresponding to a transition away from randomness, and a combination of the system's intrinsic properties and its capability to be algorithmically reprogrammed can reconstruct an epigenetic landscape. The interventional calculus is broadly applicable to predictive causal inference in systems such as networks, and is of relevance to a variety of machine and causal learning techniques driving model-based approaches to better understanding and manipulating complex systems.
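
    A toy rendition of the perturbation calculus: delete each node of a random graph and record how a computable complexity estimate shifts. Here zlib compression is a crude, assumed stand-in for the paper's algorithmic-complexity approximations (such as the Block Decomposition Method), and the graph itself is arbitrary.

```python
import zlib
import numpy as np

def c(adj):
    """Crude computable proxy for algorithmic complexity: the length of
    the compressed adjacency matrix (zlib stands in for better estimators)."""
    return len(zlib.compress(np.packbits(adj).tobytes(), 9))

rng = np.random.default_rng(3)
n = 32
adj = (rng.random((n, n)) < 0.1).astype(np.uint8)
adj = np.triu(adj, 1); adj = adj + adj.T           # simple undirected graph

# Perturbation calculus: remove each node in turn and record the shift in
# the complexity estimate; ranking elements by their shift separates the
# strongly structure-carrying elements from the noise-like ones.
base = c(adj)
shifts = []
for v in range(n):
    keep = [u for u in range(n) if u != v]
    shifts.append(c(adj[np.ix_(keep, keep)]) - base)
order = np.argsort(shifts)
print("largest-shift (structure-carrying) nodes:", order[:5])
print("smallest-shift (noise-like) nodes:", order[-5:])
```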

  14. Monte Carlo algorithms for lattice gauge theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-05-01

    Various techniques are reviewed which have been used in numerical simulations of lattice gauge theories. After formulating the problem, the Metropolis et al. algorithm and some interesting variations are discussed. The numerous proposed schemes for including fermionic fields in the simulations are summarized. Langevin, microcanonical, and hybrid approaches to simulating field theories via differential evolution in a fictitious time coordinate are treated. Some speculations are made on new approaches to fermionic simulations
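
    The Metropolis et al. algorithm is easiest to see in the simplest lattice gauge theory. The sketch below simulates two-dimensional Z2 gauge theory, where flipping a link only requires the two "staples" that close its plaquettes; lattice size, coupling, and sweep count are illustrative. In two dimensions the average plaquette should approach tanh(beta), a handy correctness check.

```python
import numpy as np

rng = np.random.default_rng(4)
L, beta = 16, 0.7
U = rng.choice([-1, 1], size=(2, L, L))          # Z2 link variables U[mu, x, y]

def staples(mu, x, y):
    """Sum over the products of neighboring links closing the two
    plaquettes containing link (mu, x, y); Z2 links are self-inverse."""
    xp, yp, xm, ym = (x + 1) % L, (y + 1) % L, (x - 1) % L, (y - 1) % L
    if mu == 0:                                  # x-direction link
        return (U[1, xp, y] * U[0, x, yp] * U[1, x, y] +
                U[1, xp, ym] * U[0, x, ym] * U[1, x, ym])
    return (U[0, x, y] * U[1, xp, y] * U[0, x, yp] +
            U[0, xm, y] * U[0, xm, yp] * U[1, xm, y])

def metropolis_sweep():
    """Propose flipping every link once; accept with min(1, exp(-dS))."""
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                dS = 2 * beta * U[mu, x, y] * staples(mu, x, y)
                if dS <= 0 or rng.random() < np.exp(-dS):
                    U[mu, x, y] *= -1

for _ in range(200):
    metropolis_sweep()

plaq = np.mean([U[0, x, y] * U[1, (x + 1) % L, y] *
                U[0, x, (y + 1) % L] * U[1, x, y]
                for x in range(L) for y in range(L)])
print("average plaquette:", plaq, " tanh(beta):", np.tanh(beta))
```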

  15. The application of quadtree algorithm for information integration in the high-level radioactive waste geological disposal

    International Nuclear Information System (INIS)

    Gao Min; Zhong Xia; Huang Shutao

    2008-01-01

    A multi-source database for high-level radioactive waste (HLW) geological disposal aims to advance the informatization of geological work on HLW. Covering multi-dimensional, multi-source information and the integration of data and applications, and involving both computer software and hardware, the paper gives a preliminary analysis of the data resources of the Beishan area, Gansu Province. It introduces an approach based on GIS technology and methods and the open-source GDAL library, and discusses how the quadtree algorithm can be applied in an information resources management system to achieve full sharing, rapid retrieval, and related functions. A more detailed description of the characteristics of the existing data resources, the theory of the spatial data retrieval algorithm, and the programming design and implementation ideas is given in the paper.
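
    The quadtree itself is simple to sketch. Below is a generic point quadtree with insertion and box queries, a minimal illustration under our own assumptions rather than the system's actual code or the GDAL API.

```python
import random

class Quadtree:
    """Minimal point quadtree: recursively split a square region into four
    quadrants so spatial queries can skip most of the stored points."""
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size, self.capacity = x, y, size, capacity
        self.points, self.children = [], None

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False                      # point lies outside this region
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py)); return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x + dx, self.y + dy, h, self.capacity)
                         for dx in (0, h) for dy in (0, h)]
        for p in self.points:                 # push points down to children
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx, qy, qs):
        """Return all stored points inside the axis-aligned box (qx, qy, qs)."""
        if (qx > self.x + self.size or qx + qs < self.x or
                qy > self.y + self.size or qy + qs < self.y):
            return []                         # box misses this region entirely
        found = [p for p in self.points
                 if qx <= p[0] <= qx + qs and qy <= p[1] <= qy + qs]
        if self.children:
            for c in self.children:
                found += c.query(qx, qy, qs)
        return found

random.seed(1)
qt = Quadtree(0, 0, 100)
for _ in range(500):
    qt.insert(random.uniform(0, 100), random.uniform(0, 100))
print(len(qt.query(20, 20, 10)), "points in a 10x10 window")
```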

  16. Astrophysical data analysis with information field theory

    International Nuclear Information System (INIS)

    Enßlin, Torsten

    2014-01-01

    Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular, as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.

  17. Astrophysical data analysis with information field theory

    Science.gov (United States)

    Enßlin, Torsten

    2014-12-01

    Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular, as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.

  18. Astrophysical data analysis with information field theory

    Energy Technology Data Exchange (ETDEWEB)

    Enßlin, Torsten, E-mail: ensslin@mpa-garching.mpg.de [Max Planck Institut für Astrophysik, Karl-Schwarzschild-Straße 1, D-85748 Garching, Germany and Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, D-80539 München (Germany)

    2014-12-05

    Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular, as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.

  19. Inference algorithms and learning theory for Bayesian sparse factor analysis

    International Nuclear Information System (INIS)

    Rattray, Magnus; Sharp, Kevin; Stegle, Oliver; Winn, John

    2009-01-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  20. Inference algorithms and learning theory for Bayesian sparse factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rattray, Magnus; Sharp, Kevin [School of Computer Science, University of Manchester, Manchester M13 9PL (United Kingdom); Stegle, Oliver [Max-Planck-Institute for Biological Cybernetics, Tuebingen (Germany); Winn, John, E-mail: magnus.rattray@manchester.ac.u [Microsoft Research Cambridge, Roger Needham Building, Cambridge, CB3 0FB (United Kingdom)

    2009-12-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  1. Research on electricity consumption forecast based on mutual information and random forests algorithm

    Science.gov (United States)

    Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu

    2018-02-01

    Traditional power forecasting models cannot efficiently take various factors into account, nor can they identify the most relevant factors. In this paper, mutual information from information theory and the artificial-intelligence random forests algorithm are introduced into medium- and long-term electricity demand prediction. Mutual information can identify the highly related factors based on the average mutual information between candidate variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm was used to build a separate forecasting model for each industry according to its correlated factors. Electricity consumption data from Jiangsu Province is taken as a practical example, and the above methods are compared with methods that disregard mutual information and industry structure. The simulation results show that the above method is scientific, effective, and can provide higher prediction accuracy.

  2. Theory and Algorithms for Global/Local Design Optimization

    National Research Council Canada - National Science Library

    Watson, Layne T; Guerdal, Zafer; Haftka, Raphael T

    2005-01-01

    The motivating application for this research is the global/local optimal design of composite aircraft structures such as wings and fuselages, but the theory and algorithms are more widely applicable...

  3. Data clustering theory, algorithms, and applications

    CERN Document Server

    Gan, Guojun; Wu, Jianhong

    2007-01-01

    Cluster analysis is an unsupervised process that divides a set of objects into homogeneous groups. This book starts with basic information on cluster analysis, including the classification of data and the corresponding similarity measures, followed by the presentation of over 50 clustering algorithms in groups according to some specific baseline methodologies such as hierarchical, center-based, and search-based methods. As a result, readers and users can easily identify an appropriate algorithm for their applications and compare novel ideas with existing results. The book also provides examples of clustering applications to illustrate the advantages and shortcomings of different clustering architectures and algorithms. Application areas include pattern recognition, artificial intelligence, information technology, image processing, biology, psychology, and marketing. Readers also learn how to perform cluster analysis with the C/C++ and MATLAB® programming languages.

  4. Nonsmooth Optimization Algorithms, System Theory, and Software Tools

    Science.gov (United States)

    1993-04-13

    Report under AFOSR grant 90-0068, "Nonsmooth Optimization Algorithms, System Theory, and Software Tools"; author: Elijah Polak, Professor and Principal Investigator.

  5. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl

  6. A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network.

    Science.gov (United States)

    Chen, Yuzhong; Weng, Shining; Guo, Wenzhong; Xiong, Naixue

    2016-02-19

    Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency.
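
    To make the game-theoretic framing concrete, here is a toy multi-player aggregation game solved by best-response dynamics. The utility (a shared coverage benefit minus an individual transmission cost weighted by redundancy) is our own illustrative stand-in, not the paper's utility function or its equilibrium analysis.

```python
import numpy as np

# Toy aggregation game: each sensor chooses how much of its reading to
# forward (0 = stay silent, 1 = send everything). The utility trades the
# shared value of coverage against an individual cost that grows with
# how redundant the sensor's data already is within the cluster.
n = 6
redundancy = np.array([0.9, 0.8, 0.3, 0.7, 0.2, 0.5])  # overlap with others

def utility(i, x):
    coverage = 1 - np.prod(1 - x * (1 - redundancy))   # shared benefit
    cost = 0.4 * x[i] * redundancy[i]                  # own transmission cost
    return coverage - cost

x = np.zeros(n)
for _ in range(50):                                    # best-response dynamics
    for i in range(n):
        x[i] = max((0.0, 0.5, 1.0),
                   key=lambda a: utility(i, np.r_[x[:i], a, x[i+1:]]))
print("equilibrium strategies:", x)
```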

  7. A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network

    Directory of Open Access Journals (Sweden)

    Yuzhong Chen

    2016-02-01

    Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency.

  8. The use of network theory to model disparate ship design information

    Science.gov (United States)

    Rigterink, Douglas; Piks, Rebecca; Singer, David J.

    2014-06-01

    This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms both separately and as a multiplex network to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.
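
    A small sketch of this workflow with NetworkX; the three toy graphs merely stand in for the passageway, electrical, and fire-fighting systems, and overlaying the layers by composition is a crude, assumed substitute for a full multiplex analysis.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy stand-ins for three shipboard systems sharing a compartment node set.
passageways = nx.cycle_graph(8)                  # looped corridors
electrical = nx.star_graph(7)                    # hub: the main switchboard
fire_main = nx.path_graph(8)                     # a single fire-main run

for name, g in [("passageways", passageways), ("electrical", electrical),
                ("fire main", fire_main)]:
    # Per-layer metrics: centrality flags critical compartments, and
    # community detection reveals each system's internal zoning.
    deg = nx.degree_centrality(g)
    comms = greedy_modularity_communities(g)
    print(f"{name:12s} max centrality {max(deg.values()):.2f}, "
          f"{len(comms)} communities")

# Crude multiplex view: overlay the layers on the shared nodes and
# re-detect communities to see how the systems interact.
multiplex = nx.compose_all([passageways, electrical, fire_main])
print("overlay communities:", len(greedy_modularity_communities(multiplex)))
```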

  9. Fundamentals of information theory and coding design

    CERN Document Server

    Togneri, Roberto

    2003-01-01

    In a clear, concise, and modular format, this book introduces the fundamental concepts and mathematics of information and coding theory. The authors emphasize how a code is designed and discuss the main properties and characteristics of different coding algorithms along with strategies for selecting the appropriate codes to meet specific requirements. They provide comprehensive coverage of source and channel coding, address arithmetic, BCH, and Reed-Solomon codes and explore some more advanced topics such as PPM compression and turbo codes. Worked examples and sets of basic and advanced exercises in each chapter reinforce the text's clear explanations of all concepts and methodologies.
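
    As a taste of the source-coding side the book covers, here is a compact Huffman code construction; this is the standard textbook algorithm, not code from the book itself.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code: repeatedly merge the two least frequent
    subtrees, so frequent symbols end up with short codewords."""
    heap = [[f, i, sym] for i, (sym, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], i, (lo, hi)]); i += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node[2], tuple):           # internal node: recurse
            walk(node[2][0], prefix + "0"); walk(node[2][1], prefix + "1")
        else:                                    # leaf: record the codeword
            codes[node[2]] = prefix or "0"
    walk(heap[0], "")
    return codes

text = "information theory"
codes = huffman_code(text)
bits = sum(len(codes[c]) for c in text)
print(codes)
print(f"{bits} bits vs {8 * len(text)} bits uncompressed")
```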

  10. Multiscale Monte Carlo algorithms in statistical mechanics and quantum field theory

    Energy Technology Data Exchange (ETDEWEB)

    Lauwers, P G

    1990-12-01

    Conventional Monte Carlo simulation algorithms for models in statistical mechanics and quantum field theory are afflicted by problems caused by their locality. They become highly inefficient if investigations of critical or nearly-critical systems, i.e., systems with important large-scale phenomena, are undertaken. We present two types of multiscale approaches that alleviate problems of this kind: stochastic cluster algorithms and multigrid Monte Carlo simulation algorithms. Another formidable computational problem in simulations of phenomenologically relevant field theories with fermions is the need for frequently inverting the Dirac operator. This inversion can be accelerated considerably by means of deterministic multigrid methods, very similar to the ones used for the numerical solution of differential equations.
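
    A minimal example of a stochastic cluster algorithm: the Wolff update for the 2D Ising model. The lattice size, coupling, and update count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
L, beta = 32, 0.44                  # near the 2D Ising critical coupling
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1 - np.exp(-2 * beta)       # Wolff bond probability

def wolff_update():
    """Grow and flip one stochastic cluster. Flipping whole clusters is
    what defeats the critical slowing down of purely local updates."""
    seed = (rng.integers(L), rng.integers(L))
    s0 = spins[seed]
    stack, cluster = [seed], {seed}
    while stack:
        x, y = stack.pop()
        for nbr in (((x + 1) % L, y), ((x - 1) % L, y),
                    (x, (y + 1) % L), (x, (y - 1) % L)):
            if nbr not in cluster and spins[nbr] == s0 and rng.random() < p_add:
                cluster.add(nbr); stack.append(nbr)
    for site in cluster:
        spins[site] = -s0

for _ in range(300):
    wolff_update()
print("|magnetization| per spin:", abs(spins.sum()) / L**2)
```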

  11. Support vector machines optimization based theory, algorithms, and extensions

    CERN Document Server

    Deng, Naiyang; Zhang, Chunhua

    2013-01-01

    Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs)-classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twi

  12. Parallel/vector algorithms for the spherical SN transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.; Mattis, R.E.

    1990-01-01

    This paper discusses vector and parallel processing of a 1-D curvilinear (i.e., spherical) S_N transport theory algorithm on the Cornell National SuperComputer Facility (CNSF) IBM 3090/600E. Two different vector algorithms were developed and parallelized based on angular decomposition. It is shown that significant speedups are attainable. For example, for problems with large granularity, using 4 processors, the parallel/vector algorithm achieves speedups (for wall-clock time) of more than 4.5 relative to the old serial/scalar algorithm. Furthermore, this work has demonstrated the existing potential for the development of faster processing vector and parallel algorithms for multidimensional curvilinear geometries.

  13. NIFTY - Numerical Information Field Theory. A versatile PYTHON library for signal inference

    Science.gov (United States)

    Selig, M.; Bell, M. R.; Junklewitz, H.; Oppermann, N.; Reinecke, M.; Greiner, M.; Pachajoa, C.; Enßlin, T. A.

    2013-06-01

    NIFTy (Numerical Information Field Theory) is a software package designed to enable the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTy offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. Thereby, the correct normalization of operations on fields is taken care of automatically without concerning the user. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTy permits its user to rapidly prototype algorithms in 1D, and then apply the developed code in higher-dimensional settings of real world problems. The set of spaces on which NIFTy operates comprises point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those. The functionality and diversity of the package is demonstrated by a Wiener filter code example that successfully runs without modification regardless of the space on which the inference problem is defined. NIFTy homepage http://www.mpa-garching.mpg.de/ift/nifty/; Excerpts of this paper are part of the NIFTy source code and documentation.
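
    For a known linear measurement with known signal and noise power, the Wiener filter mentioned above reduces to one multiplication per Fourier mode. The plain-NumPy sketch below illustrates that operation, which NIFTy abstracts over arbitrary spaces; it does not use the NIFTy API, and the signal power is read off the truth purely for the demo.

```python
import numpy as np

rng = np.random.default_rng(6)
n, sigma = 512, 0.3

# Synthetic smooth signal; data d = s + white noise (response R = identity).
t = np.linspace(0, 1, n, endpoint=False)
s = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
d = s + sigma * rng.normal(size=n)

# Per-mode signal and noise power. The Wiener setting assumes these are
# known a priori; here the signal power is taken from the truth for the demo.
S = np.abs(np.fft.rfft(s)) ** 2
N = sigma ** 2 * n                    # white-noise power per Fourier mode

# The Wiener filter is a single per-mode multiplication: m(k) = S/(S+N) d(k).
m = np.fft.irfft(np.fft.rfft(d) * S / (S + N), n)
print("rms error of raw data:  ", np.std(d - s))
print("rms error of Wiener map:", np.std(m - s))
```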

  14. Information theory of molecular systems

    CERN Document Server

    Nalewajski, Roman F

    2006-01-01

    As well as providing a unified outlook on physics, Information Theory (IT) has numerous applications in chemistry and biology owing to its ability to provide a measure of the entropy/information contained within probability distributions and criteria of their information "distance" (similarity) and independence. Information Theory of Molecular Systems applies standard IT to classical problems in the theory of electronic structure and chemical reactivity. The book starts by introducing the basic concepts of modern electronic structure/reactivity theory based upon the Density Functional Theory

  15. The use of network theory to model disparate ship design information

    Directory of Open Access Journals (Sweden)

    Douglas Rigterink

    2014-06-01

    This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms both separately and as a multiplex network to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.

  17. Algorithms, architectures and information systems security

    CERN Document Server

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security.

  18. Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss

    KAUST Repository

    Zenil, Hector

    2018-02-16

    The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology. Cellular and molecular networks in biology are among the prime examples. Accordingly, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet, current techniques require a predefined metric upon which to minimize the data size. Here we introduce a family of parameter-free algorithms based on (algorithmic) information theory that are designed to minimize the loss of any (computably enumerable) property contributing to the object's algorithmic content, and thus important to preserve in a process of data dimension reduction, by forcing the algorithm to delete the least important features first. Being independent of any particular criterion, they are universal in a fundamental mathematical sense. Using suboptimal approximations by efficient (polynomial-time) estimations, we demonstrate how to preserve network properties, outperforming other (leading) algorithms for network dimension reduction. Our method preserves all graph-theoretic indices measured, ranging from the degree distribution and clustering coefficient to edge betweenness and degree and eigenvector centralities. We conclude and demonstrate numerically that our parameter-free Minimal Information Loss Sparsification (MILS) method is robust, has the potential to maximize the preservation of all recursively enumerable features in data and networks, and achieves results equal to or significantly better than other data reduction and network sparsification methods.
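
    A toy version of the idea is easy to sketch. The snippet below greedily deletes the edge whose removal changes an algorithmic-content estimate the least; the zlib-compressed size of the adjacency matrix is used as a crude computable stand-in for the estimators the paper employs, so this illustrates the principle rather than the authors' MILS implementation.

        import zlib
        import numpy as np

        def c_approx(adj: np.ndarray) -> int:
            # Crude computable proxy for algorithmic content: zlib-compressed size
            # of the bit-packed adjacency matrix (stand-in for the paper's estimators).
            return len(zlib.compress(np.packbits(adj).tobytes(), 9))

        def mils_step(adj: np.ndarray) -> np.ndarray:
            # Delete the one edge whose removal perturbs the complexity estimate
            # the least, i.e. the least informative edge.
            base = c_approx(adj)
            def loss(edge):
                i, j = edge
                trial = adj.copy()
                trial[i, j] = trial[j, i] = 0
                return abs(c_approx(trial) - base)
            i, j = min(zip(*np.nonzero(np.triu(adj))), key=loss)
            adj[i, j] = adj[j, i] = 0
            return adj

        # Example: one sparsification step on a small random undirected graph.
        rng = np.random.default_rng(1)
        adj = np.triu((rng.random((12, 12)) < 0.3).astype(np.uint8), 1)
        mils_step(adj + adj.T)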

  19. An information theory criteria based blind method for enumerating active users in DS-CDMA system

    Science.gov (United States)

    Samsami Khodadad, Farid; Abed Hodtani, Ghosheh

    2014-11-01

    In this paper, a new blind algorithm for enumerating active users in asynchronous direct-sequence code division multiple access (DS-CDMA) under a multipath channel scenario is proposed. The proposed method is based on information theory criteria. There are two main categories of information criteria widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNRs), whereas AIC is preferred at lower SNRs. Subsequently, we propose an SNR-compliant method, based on subspace analysis and a training genetic algorithm, to obtain the performance of both. Moreover, our method uses only a single antenna, unlike previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and they confirm the efficiency of the method.
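
    The penalty-function difference the abstract refers to is concrete in the classic eigenvalue-based formulation of Wax and Kailath. Below is a sketch of the MDL rule (AIC replaces the 0.5*k*(2p-k)*log N penalty with 2k(2p-k)); the paper's subspace/genetic-algorithm hybrid is not reproduced here.

        import numpy as np

        def mdl_enumerate(eigvals, n_snapshots):
            # Wax-Kailath MDL rule: pick the model order k minimizing a code length
            # built from the eigenvalues of the sample covariance matrix.
            lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
            p = lam.size
            scores = []
            for k in range(p):
                tail = lam[k:]                                  # noise-subspace eigenvalues
                g = np.exp(np.mean(np.log(tail)))               # geometric mean
                a = np.mean(tail)                               # arithmetic mean
                loglik = -n_snapshots * (p - k) * np.log(g / a)
                penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
                scores.append(loglik + penalty)
            return int(np.argmin(scores))

        # Two strong sources in 6 channels: four noise eigenvalues cluster near 1.
        print(mdl_enumerate([9.0, 6.0, 1.1, 1.0, 0.95, 0.9], n_snapshots=200))  # -> 2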

  20. INFORMATIONAL-METHODICAL SUPPORT OF THE COURSE «MATHEMATICAL LOGIC AND THEORY OF ALGORITHMS»

    Directory of Open Access Journals (Sweden)

    Y. I. Sinko

    2010-06-01

    This article examines the basic principles of a technique for training future mathematics teachers in the foundations of mathematical logic and the theory of algorithms at Kherson State University with the use of information technologies. A general description is given of the functioning of the methodical system for learning mathematical logic with information technologies, in the variant where those technologies are provided by «MatLog», an integrated specialized software environment for educational purposes.

  1. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, HistFactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.
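
    For context, the conventional behavior being improved on is HistFactory's piecewise-linear "vertical" interpolation between the nominal and ±1 σ templates, sketched below; the information-geometry algorithm of the talk replaces this interpolation rule.

        import numpy as np

        def piecewise_linear_morph(nominal, up, down, alpha):
            # Reproduces `up` at alpha = +1 and `down` at alpha = -1, bin by bin.
            nominal, up, down = map(np.asarray, (nominal, up, down))
            if alpha >= 0:
                return nominal + alpha * (up - nominal)
            return nominal + alpha * (nominal - down)

        nom = np.array([10.0, 20.0])
        up, down = np.array([12.0, 19.0]), np.array([9.0, 22.0])
        print(piecewise_linear_morph(nom, up, down, 0.5))   # halfway to the up variation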

  2. Quantum biological information theory

    CERN Document Server

    Djordjevic, Ivan B

    2016-01-01

    This book is a self-contained, tutorial-based introduction to quantum information theory and quantum biology. It serves as a single-source reference to the topic for researchers in bioengineering, communications engineering, electrical engineering, applied mathematics, biology, computer science, and physics. The book provides all the essential principles of the quantum biological information theory required to describe the quantum information transfer from DNA to proteins, the sources of genetic noise and genetic errors as well as their effects. Integrates quantum information and quantum biology concepts; Assumes only knowledge of basic concepts of vector algebra at undergraduate level; Provides a thorough introduction to basic concepts of quantum information processing, quantum information theory, and quantum biology; Includes in-depth discussion of the quantum biological channel modelling, quantum biological channel capacity calculation, quantum models of aging, quantum models of evolution, quantum models o...

  3. Novel information theory techniques for phonon spectroscopy

    International Nuclear Information System (INIS)

    Hague, J P

    2007-01-01

    The maximum entropy method (MEM) and spectral reverse Monte Carlo (SRMC) techniques are applied to the determination of the phonon density of states (PDOS) from heat-capacity data. The approach presented here takes advantage of the standard integral transform relating the PDOS with the specific heat at constant volume. MEM and SRMC are highly successful numerical approaches for inverting integral transforms. The formalism and algorithms necessary to carry out the inversion of specific heat curves are introduced, and where possible, I have concentrated on algorithms and experimental details for practical usage. Simulated data are used to demonstrate the accuracy of the approach. The main strength of the techniques presented here is that the resulting spectra are always physical: the computed PDOS is always positive, and properly applied information theory techniques show only statistically significant detail. The treatment set out here provides a simple, cost-effective and reliable method to determine phonon properties of new materials. In particular, the new technique is expected to be very useful for establishing where interesting phonon modes and properties can be found, before spending time at large scale facilities.
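
    The integral transform in question is, presumably, the standard harmonic-lattice relation between the specific heat and the PDOS g(ω); inverting this smooth kernel for g(ω) is the ill-posed step that MEM and SRMC regularize:

        $$ C_V(T) = k_B \int_0^{\infty} \mathrm{d}\omega \, g(\omega) \left( \frac{\hbar\omega}{2 k_B T} \right)^{2} \sinh^{-2}\!\left( \frac{\hbar\omega}{2 k_B T} \right) $$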

  4. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    Science.gov (United States)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent systems with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens on a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
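
    A minimal numerical picture of the setting, under assumed parameters (a path-graph topology and a 60% communication duty cycle, both hypothetical): states obey the continuous consensus flow while agents communicate and are simply held between communication windows.

        import numpy as np

        L = np.array([[ 1, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]], dtype=float)   # Laplacian of a 3-agent path graph
        x = np.array([0.0, 2.0, 5.0])               # initial states
        dt, t = 1e-3, 0.0
        for step in range(20000):
            communicating = (t % 1.0) < 0.6          # agents talk 60% of each period
            if communicating:
                x = x - dt * L @ x                   # continuous-time consensus flow
            # otherwise states are held constant (no information exchange)
            t += dt
        print(x)  # all entries approach the average of the initial states (7/3)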

  5. A general algorithm for distributing information in a graph

    OpenAIRE

    Aji, Srinivas M.; McEliece, Robert J.

    1997-01-01

    We present a general "message-passing" algorithm for distributing information in a graph. This algorithm may help us to understand the approximate correctness of both the Gallager-Tanner-Wiberg algorithm and the turbo-decoding algorithm.
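
    As a toy instance of such message passing (the generalized distributive law behind the algorithms named above), the sketch below computes an exact marginal on a three-variable chain by multiplying the two messages arriving at the middle variable; the factor tables are hypothetical.

        import numpy as np

        psi12 = np.array([[1.0, 0.5], [0.5, 1.0]])   # pairwise factor on (x1, x2)
        psi23 = np.array([[1.0, 0.2], [0.2, 1.0]])   # pairwise factor on (x2, x3)

        m12 = psi12.T @ np.ones(2)     # message x1 -> x2: sum_x1 psi12(x1, x2)
        m32 = psi23 @ np.ones(2)       # message x3 -> x2: sum_x3 psi23(x2, x3)
        p2 = m12 * m32
        p2 /= p2.sum()                 # marginal of x2 from the incoming messages
        print(p2)                      # [0.5, 0.5] by symmetry of these factors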

  6. Science and information theory

    CERN Document Server

    Brillouin, Léon

    1962-01-01

    A classic source for exploring the connections between information theory and physics, this text is geared toward upper-level undergraduates and graduate students. The author, a giant of 20th-century mathematics, applies the principles of information theory to a variety of issues, including Maxwell's demon, thermodynamics, and measurement problems. 1962 edition.

  7. A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization

    OpenAIRE

    Zhang, Jiapu

    2011-01-01

    Numerical global optimization methods are often very time consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but effective canonical duality theory (CDT) unified global optimization algorithm. The convergence of this algorithm is proved in this paper. More importantly, for this CDT-unified algorithm, numerous...

  8. Introduction to quantum information science

    CERN Document Server

    Hayashi, Masahito; Kawachi, Akinori; Kimura, Gen; Ogawa, Tomohiro

    2015-01-01

    This book presents the basics of quantum information, e.g., foundations of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The required knowledge is only elementary calculus and linear algebra. In this way the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundations of quantum theory. This book describes them from a more operational viewpoint, which is suitable for quantum information, while traditional textbooks of quantum theory lack this viewpoint. The current book builds on Shor's algorithm, Grover's algorithm, and the Deutsch-Jozsa algorithm as basic algorithms. To treat several topics in quantum information, this book covers several kinds of information quantities in quantum systems, including von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book contains quantum teleport...

  9. Information Design Theories

    Science.gov (United States)

    Pettersson, Rune

    2014-01-01

    Information design has practical and theoretical components. As an academic discipline we may view information design as a combined discipline, a practical theory, or as a theoretical practice. So far information design has incorporated facts, influences, methods, practices, principles, processes, strategies, and tools from a large number of…

  10. Constructor theory of information

    Science.gov (United States)

    Deutsch, David; Marletto, Chiara

    2015-01-01

    We propose a theory of information expressed solely in terms of which transformations of physical systems are possible and which are impossible—i.e. in constructor-theoretic terms. It includes conjectured, exact laws of physics expressing the regularities that allow information to be physically instantiated. Although these laws are directly about information, independently of the details of particular physical instantiations, information is not regarded as an a priori mathematical or logical concept, but as something whose nature and properties are determined by the laws of physics alone. This theory solves a problem at the foundations of existing information theory, namely that information and distinguishability are each defined in terms of the other. It also explains the relationship between classical and quantum information, and reveals the single, constructor-theoretic property underlying the most distinctive phenomena associated with the latter, including the lack of in-principle distinguishability of some states, the impossibility of cloning, the existence of pairs of variables that cannot simultaneously have sharp values, the fact that measurement processes can be both deterministic and unpredictable, the irreducible perturbation caused by measurement, and locally inaccessible information (as in entangled systems). PMID:25663803

  11. Theories of information behavior

    CERN Document Server

    Erdelez, Sandra; McKechnie, Lynne

    2005-01-01

    This unique book presents authoritative overviews of more than 70 conceptual frameworks for understanding how people seek, manage, share, and use information in different contexts. A practical and readable reference to both well-established and newly proposed theories of information behavior, the book includes contributions from 85 scholars from 10 countries. Each theory description covers origins, propositions, methodological implications, usage, links to related conceptual frameworks, and listings of authoritative primary and secondary references. The introductory chapters explain key concepts, theory–method connections, and the process of theory development.

  12. Information and meaning: revisiting Shannon's theory of communication and extending it to address today's technical problems.

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Travis LaDell

    2009-12-01

    This paper has three goals. The first is to review Shannon's theory of information and the subsequent advances leading to today's statistics-based text analysis algorithms, showing that the semantics of the text is neglected. The second goal is to propose an extension of Shannon's original model that can take into account semantics, where the 'semantics' of a message is understood in terms of the intended or actual changes on the recipient of a message. The third goal is to propose several lines of research that naturally fall out of the proposed model. Each computational approach to solving some problem rests on an underlying model or set of models that describe how key phenomena in the real world are represented and how they are manipulated. These models are both liberating and constraining. They are liberating in that they suggest a path of development for new tools and algorithms. They are constraining in that they intentionally ignore other potential paths of development. Modern statistical-based text analysis algorithms have a specific intellectual history and set of underlying models rooted in Shannon's theory of communication. For Shannon, language is treated as a stochastic generator of symbol sequences. Shannon himself, subsequently Weaver, and at least one of his predecessors are all explicit in their decision to exclude semantics from their models. This rejection of semantics as 'irrelevant to the engineering problem' is elegant and combined with developments particularly by Salton and subsequently by Latent Semantic Analysis, has led to a whole collection of powerful algorithms and an industry for data mining technologies. However, the kinds of problems currently facing us go beyond what can be accounted for by this stochastic model. Today's problems increasingly focus on the semantics of specific pieces of information. And although progress is being made with the old models, it seems natural to develop or

  13. System parameter identification information criteria and algorithms

    CERN Document Server

    Chen, Badong; Hu, Jinchun; Principe, Jose C

    2013-01-01

    Recently, criterion functions based on information theoretic measures (entropy, mutual information, information divergence) have attracted attention and become an emerging area of study in the signal processing and system identification domain. This book presents a systematic framework for system identification and information processing, investigating system identification from an information theory point of view. The book is divided into six chapters, which cover the information needed to understand the theory and application of system parameter identification.

  14. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.

  15. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A., E-mail: mcewen.24@osu.edu, E-mail: fang.307@osu.edu, E-mail: hirata.10@osu.edu, E-mail: blazek@berkeley.edu [Center for Cosmology and AstroParticle Physics, Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus OH 43210 (United States)

    2016-09-01

    We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
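
    The core trick, expressing the input spectrum as a superposition of power laws via an FFT in log k, can be sketched in a few lines. This reproduces only the decomposition step, not the mode-coupling machinery of the public code, and the toy spectrum and bias exponent nu are illustrative.

        import numpy as np

        N = 256
        k = np.logspace(-3, 1, N)                        # log-spaced wavenumbers
        P = k / (1.0 + k ** 2) ** 2                      # toy input spectrum
        nu = -2.0                                        # real "bias" exponent (tunable)

        dlnk = np.log(k[1] / k[0])
        c = np.fft.fft(P * k ** (-nu)) / N               # power-law coefficients c_m
        eta = 2.0 * np.pi * np.fft.fftfreq(N, d=dlnk)    # imaginary exponents eta_m

        # P(k_n) = sum_m c_m k_n^nu (k_n / k_0)^(i eta_m): exact at the sample points.
        P_rec = k ** nu * np.real(
            sum(c[m] * (k / k[0]) ** (1j * eta[m]) for m in range(N))
        )
        print(np.max(np.abs(P_rec / P - 1.0)))           # ~ machine precision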

  16. Quantum information theory

    CERN Document Server

    Wilde, Mark M

    2017-01-01

    Developing many of the major, exciting, pre- and post-millennium developments from the ground up, this book is an ideal entry point for graduate students into quantum information theory. Significant attention is given to quantum mechanics for quantum information theory, and careful studies of the important protocols of teleportation, superdense coding, and entanglement distribution are presented. In this new edition, readers can expect to find over 100 pages of new material, including detailed discussions of Bell's theorem, the CHSH game, Tsirelson's theorem, the axiomatic approach to quantum channels, the definition of the diamond norm and its interpretation, and a proof of the Choi–Kraus theorem. Discussion of the importance of the quantum dynamic capacity formula has been completely revised, and many new exercises and references have been added. This new edition will be welcomed by the upcoming generation of quantum information theorists and the already established community of classical information theo...

  17. Hiding data - selected topics: Rudolf Ahlswede's lectures on information theory 3

    CERN Document Server

    Althöfer, Ingo; Deppe, Christian; Tamm, Ulrich

    2016-01-01

    Devoted to information security, this volume begins with a short course on cryptography, mainly based on lectures given by Rudolf Ahlswede at the University of Bielefeld in the mid 1990s. It was the second of his cycle of lectures on information theory which opened with an introductory course on basic coding theorems, as covered in Volume 1 of this series. In this third volume, Shannon’s historical work on secrecy systems is detailed, followed by an introduction to an information-theoretic model of wiretap channels, and such important concepts as homophonic coding and authentication. Once the theoretical arguments have been presented, comprehensive technical details of AES are given. Furthermore, a short introduction to the history of public-key cryptology, RSA and El Gamal cryptosystems is provided, followed by a look at the basic theory of elliptic curves, and algorithms for efficient addition in elliptic curves. Lastly, the important topic of “oblivious transfer” is discussed, which is strongly conne...

  18. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  19. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    Science.gov (United States)

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.

  20. Designing and implementing of improved cryptographic algorithm using modular arithmetic theory

    Directory of Open Access Journals (Sweden)

    Maryam Kamarzarrin

    2015-05-01

    Maintaining the privacy and security of people's information are two of the most important principles of an electronic health plan. One method of ensuring the privacy and security of information is the use of a public-key cryptography system. In this paper, we compare two algorithms, the Common and Fast Exponentiation algorithms, for enhancing the efficiency of public-key cryptography. We show that a system designed with the Fast Exponentiation algorithm has higher speed and performance but lower power consumption and occupied space compared with the Common Exponentiation algorithm. Although systems designed with the Common Exponentiation algorithm have slower speed and lower performance, designs based on this algorithm are less complex and easier to produce than those based on the Fast Exponentiation algorithm. In this paper, we examine and compare these two methods of exponentiation, and also observe the performance impact of the two approaches implemented in hardware with the VHDL language on an FPGA.
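
    The "fast" method in question is, presumably, the standard binary square-and-multiply exponentiation; a Python sketch (the paper's implementations are in VHDL) makes the speed difference plain: O(log e) modular multiplications instead of the O(e) of naive repeated multiplication.

        def fast_mod_exp(base: int, exp: int, mod: int) -> int:
            # Binary square-and-multiply modular exponentiation.
            result, base = 1, base % mod
            while exp:
                if exp & 1:                  # this bit contributes a multiply
                    result = result * base % mod
                base = base * base % mod     # square for the next bit
                exp >>= 1
            return result

        # RSA-style check against Python's built-in three-argument pow.
        assert fast_mod_exp(7, 65537, 2**61 - 1) == pow(7, 65537, 2**61 - 1)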

  1. New Aspects of Probabilistic Forecast Verification Using Information Theory

    Science.gov (United States)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
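
    The two headline scores are simple to compute for a binary event; the sketch below uses hypothetical forecast probabilities (e.g., the fraction of ensemble members predicting the event) and shows the two quantities the BS-versus-IGN approximation relates.

        import numpy as np

        # Hypothetical forecast probabilities p and observed outcomes o (0/1).
        p = np.array([0.9, 0.6, 0.2, 0.8])
        o = np.array([1, 0, 0, 1])

        brier = np.mean((p - o) ** 2)                 # Brier Score

        p_obs = np.where(o == 1, p, 1.0 - p)          # probability given to what occurred
        ign = np.mean(-np.log2(p_obs))                # Ignorance Score (bits)
        print(brier, ign)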

  2. Dynamic statistical information theory

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In recent years we extended Shannon static statistical information theory to dynamic processes and established a Shannon dynamic statistical information theory, whose core is the evolution law of dynamic entropy and dynamic information. We also proposed a corresponding Boltzmann dynamic statistical information theory. Based on the fact that the state variable evolution equation of respective dynamic systems, i.e. the Fokker-Planck equation and the Liouville diffusion equation, can be regarded as their information symbol evolution equation, we derived the nonlinear evolution equations of Shannon dynamic entropy density and dynamic information density and the nonlinear evolution equations of Boltzmann dynamic entropy density and dynamic information density, which describe, respectively, the evolution law of dynamic entropy and dynamic information. The evolution equations of these two kinds of dynamic entropies and dynamic informations show in unison that the time rate of change of dynamic entropy densities is caused by their drift, diffusion and production in state variable space inside the systems and coordinate space in the transmission processes; and that the time rate of change of dynamic information densities originates from their drift, diffusion and dissipation in state variable space inside the systems and coordinate space in the transmission processes. Entropy and information have been combined with the state and its law of motion of the systems. Furthermore we presented the formulas of two kinds of entropy production rates and information dissipation rates, the expressions of two kinds of drift information flows and diffusion information flows. We proved that two kinds of information dissipation rates (or the decrease rates of the total information) were equal to their corresponding entropy production rates (or the increase rates of the total entropy) in the same dynamic system. We obtained the formulas of two kinds of dynamic mutual informations and dynamic channel

  3. Practice Evaluation Strategies Among Social Workers: Why an Evidence-Informed Dual-Process Theory Still Matters.

    Science.gov (United States)

    Davis, Thomas D

    2017-01-01

    Practice evaluation strategies range in style from the formal-analytic tools of single-subject designs, rapid assessment instruments, algorithmic steps in evidence-informed practice, and computer software applications, to the informal-interactive tools of clinical supervision, consultation with colleagues, use of client feedback, and clinical experience. The purpose of this article is to provide practice researchers in social work with an evidence-informed theory that is capable of explaining both how and why social workers use practice evaluation strategies to self-monitor the effectiveness of their interventions in terms of client change. The author delineates the theoretical contours and consequences of what is called dual-process theory. Drawing on evidence-informed advances in the cognitive and social neurosciences, the author identifies among everyday social workers a theoretically stable, informal-interactive tool preference that is a cognitively necessary, sufficient, and stand-alone preference that requires neither the supplementation nor balance of formal-analytic tools. The author's delineation of dual-process theory represents a theoretical contribution in the century-old attempt to understand how and why social workers evaluate their practice the way they do.

  4. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov's framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon's framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
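
    In the same compression-based spirit (though not the authors' exact estimator), one can approximate "describing x in terms of y" by letting a real compressor use y as a preset dictionary; the relative complexity is then the extra cost compared with the best self-referential description.

        import zlib

        def c(x: bytes, zdict: bytes = b"") -> int:
            # Compressed length of x, optionally with a preset reference dictionary:
            # a computable stand-in for (conditional) Kolmogorov complexity.
            comp = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(level=9)
            return len(comp.compress(x) + comp.flush())

        def relative_complexity(x: bytes, y: bytes) -> int:
            # Compression power lost when describing x via y rather than via x itself.
            return c(x, zdict=y) - c(x, zdict=x)

        x = b"the quick brown fox jumps over the lazy dog " * 20
        print(relative_complexity(x, x))                  # 0: y = x describes x perfectly
        print(relative_complexity(x, b"zqxjkv " * 20))    # > 0: unrelated y is a poor model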

  5. A statistical mechanical interpretation of algorithmic information theory: Total statistical mechanical interpretation based on physical argument

    International Nuclear Information System (INIS)

    Tadaki, Kohtaro

    2010-01-01

    The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical strictness as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which actualizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
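
    For reference, the partition function at the heart of this interpretation is, in the notation of Tadaki's earlier papers (with U an optimal prefix-free machine and |p| the program length in bits; T = 1 recovers Chaitin's halting probability Ω):

        $$ Z(T) \;=\; \sum_{p \,\in\, \operatorname{dom} U} 2^{-|p|/T} $$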

  6. An introduction to information theory

    CERN Document Server

    Reza, Fazlollah M

    1994-01-01

    Graduate-level study for engineering students presents elements of modern probability theory, information theory, coding theory, more. Emphasis on sample space, random variables, capacity, etc. Many reference tables and extensive bibliography. 1961 edition.

  7. Quantum information and relativity theory

    International Nuclear Information System (INIS)

    Peres, Asher; Terno, Daniel R.

    2004-01-01

    This article discusses the intimate relationship between quantum mechanics, information theory, and relativity theory. Taken together these are the foundations of present-day theoretical physics, and their interrelationship is an essential part of the theory. The acquisition of information from a quantum system by an observer occurs at the interface of classical and quantum physics. The authors review the essential tools needed to describe this interface, i.e., Kraus matrices and positive-operator-valued measures. They then discuss how special relativity imposes severe restrictions on the transfer of information between distant systems and the implications of the fact that quantum entropy is not a Lorentz-covariant concept. This leads to a discussion of how it comes about that Lorentz transformations of reduced density matrices for entangled systems may not be completely positive maps. Quantum field theory is, of course, necessary for a consistent description of interactions. Its structure implies a fundamental tradeoff between detector reliability and localizability. Moreover, general relativity produces new and counterintuitive effects, particularly when black holes (or, more generally, event horizons) are involved. In this more general context the authors discuss how most of the current concepts in quantum information theory may require a reassessment

  8. Web multimedia information retrieval using improved Bayesian algorithm.

    Science.gov (United States)

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

    The main thrust of this paper is the application of a novel data mining approach to the log of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining, and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and help to eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors proposed an improved Bayesian algorithm for data mining. Experiments proved that the proposed algorithm was efficient.

  9. Imaging for dismantlement verification: Information management and analysis algorithms

    International Nuclear Information System (INIS)

    Robinson, S.M.; Jarman, K.D.; Pitts, W.K.; Seifert, A.; Misner, A.C.; Woodring, M.L.; Myjak, M.J.

    2012-01-01

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.

  10. The theory of quantum information

    CERN Document Server

    Watrous, John

    2018-01-01

    This largely self-contained book on the theory of quantum information focuses on precise mathematical formulations and proofs of fundamental facts that form the foundation of the subject. It is intended for graduate students and researchers in mathematics, computer science, and theoretical physics seeking to develop a thorough understanding of key results, proof techniques, and methodologies that are relevant to a wide range of research topics within the theory of quantum information and computation. The book is accessible to readers with an understanding of basic mathematics, including linear algebra, mathematical analysis, and probability theory. An introductory chapter summarizes these necessary mathematical prerequisites, and starting from this foundation, the book includes clear and complete proofs of all results it presents. Each subsequent chapter includes challenging exercises intended to help readers to develop their own skills for discovering proofs concerning the theory of quantum information.

  11. Fast filtering algorithm based on vibration systems and neural information exchange and its application to micro motion robot

    International Nuclear Information System (INIS)

    Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu

    2014-01-01

    This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are demonstrated by comparing its filtering performance with various filtering methods, such as the fast wavelet transform algorithm, the particle filtering method, and our previously developed single-degree-of-freedom vibration system filtering algorithm, in both simulation and practical settings. The comparisons indicate that a significant advantage of the proposed fast filtering algorithm is its extremely fast filtering speed with good filtering performance. Further, the developed fast filtering algorithm is applied to the navigation and positioning system of a micro motion robot, whose signal preprocessing imposes demanding real-time requirements. The preprocessed data is then used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)

  12. Information theory and coding solved problems

    CERN Document Server

    Ivaniš, Predrag

    2017-01-01

    This book offers a comprehensive overview of information theory and error control coding, using a different approach than the existing literature. The chapters are organized according to the Shannon system model, where one block affects the others. A relatively brief theoretical introduction is provided at the beginning of every chapter, including a few additional examples and explanations, but without any proofs. A short overview of some aspects of abstract algebra is given at the end of the corresponding chapters. Characteristically complex examples with many illustrations and tables are chosen to provide detailed insight into the nature of the problem. Some limiting cases are presented to illustrate the connections with the theoretical bounds. The numerical values are carefully selected to provide in-depth explanations of the described algorithms. Although the examples in the different chapters can be considered separately, they are mutually connected and the conclusions for one considered proble...

  13. Automation of Algorithmic Tasks for Virtual Laboratories Based on Automata Theory

    Directory of Open Access Journals (Sweden)

    Evgeniy A. Efimchik

    2016-03-01

    This work describes an automata model of the standard algorithm for constructing a correct solution of algorithmic tests. The described model allows a formal determination of the variant complexity of an algorithmic test and serves as a basis for determining complexity functions, including the collision concept: a situation of uncertainty in which a choice must be made, while fulfilling the task, between alternatives with various priorities. The influence of collisions on the automata model and its inner structure is described. The model and complexity functions are applied in virtual laboratories when designing algorithms that construct variants of predetermined complexity in real time, and algorithms for evaluating students' solutions with respect to collisions. The results of the work are applied to the development of virtual laboratories, which are used in the practical part of a massive online course on graph theory.

  14. The logic of logistics: theory, algorithms and applications for logistics management

    Directory of Open Access Journals (Sweden)

    Claudio Barbieri da Cunha

    2010-04-01

    In this text the author presents a review of the book "The logic of logistics: theory, algorithms and applications for logistics management", by Julien Bramel and David Simchi-Levi, published by Springer-Verlag in 1997.

  15. Wave theory of information

    CERN Document Server

    Franceschetti, Massimo

    2017-01-01

    Understand the relationship between information theory and the physics of wave propagation with this expert guide. Balancing fundamental theory with engineering applications, it describes the mechanism and limits for the representation and communication of information using electromagnetic waves. Information-theoretic laws relating functional approximation and quantum uncertainty principles to entropy, capacity, mutual information, rate distortion, and degrees of freedom of band-limited radiation are derived and explained. Both stochastic and deterministic approaches are explored, and applications for sensing and signal reconstruction, wireless communication, and networks of multiple transmitters and receivers are reviewed. With end-of-chapter exercises and suggestions for further reading enabling in-depth understanding of key concepts, it is the ideal resource for researchers and graduate students in electrical engineering, physics and applied mathematics looking for a fresh perspective on classical informat...

  16. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

    2011-01-01

    The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. This algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear ones and exploiting properties of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and especially suitable for high signal-to-noise systems.
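
    For orientation, the conventional analysis that the proposed three-image algorithm improves on extracts the three signals from the phase-stepping curve I_k = a0 + a1 cos(2*pi*k/M + phi); with M = 3 equally spaced steps this reduces to a three-point discrete Fourier analysis (a generic sketch, not the paper's low-dose retrieval):

        import numpy as np

        def retrieve(I):
            # I: stack of 3 images taken at grating steps 2*pi*k/3, shape (3, H, W).
            # Returns the mean (attenuation), phase (refraction) and visibility
            # (scattering) signals; sample and reference scans are processed alike.
            I = np.asarray(I, dtype=float)
            theta = 2.0 * np.pi * np.arange(3) / 3.0
            a0 = I.mean(axis=0)                                      # mean intensity
            c1 = np.tensordot(np.exp(-1j * theta), I, axes=1) / 3.0  # 1st Fourier coeff.
            return a0, np.angle(c1), 2.0 * np.abs(c1) / a0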

  17. Chemical Thermodynamics and Information Theory with Applications

    CERN Document Server

    Graham, Daniel J

    2011-01-01

    Thermodynamics and information theory touch every facet of chemistry. However, the physical chemistry curriculum digested by students worldwide is still heavily skewed toward heat/work principles established more than a century ago. Rectifying this situation, Chemical Thermodynamics and Information Theory with Applications explores applications drawn from the intersection of thermodynamics and information theory--two mature and far-reaching fields. In an approach that intertwines information science and chemistry, this book covers topics such as the informational aspects of thermodynamic state equations.

  18. AN EDUCATIONAL THEORY MODEL--(SIGGS), AN INTEGRATION OF SET THEORY, INFORMATION THEORY, AND GRAPH THEORY WITH GENERAL SYSTEMS THEORY.

    Science.gov (United States)

    MACCIA, ELIZABETH S.; AND OTHERS

    AN ANNOTATED BIBLIOGRAPHY OF 20 ITEMS AND A DISCUSSION OF ITS SIGNIFICANCE WAS PRESENTED TO DESCRIBE CURRENT UTILIZATION OF SUBJECT THEORIES IN THE CONSTRUCTION OF AN EDUCATIONAL THEORY. ALSO, A THEORY MODEL WAS USED TO DEMONSTRATE CONSTRUCTION OF A SCIENTIFIC EDUCATIONAL THEORY. THE THEORY MODEL INCORPORATED SET THEORY (S), INFORMATION THEORY…

  19. Characterization and visualization of RNA secondary structure Boltzmann ensemble via information theory.

    Science.gov (United States)

    Lin, Luan; McKerrow, Wilson H; Richards, Bryce; Phonsom, Chukiat; Lawrence, Charles E

    2018-03-05

    The nearest neighbor model and associated dynamic programming algorithms allow for the efficient estimation of the RNA secondary structure Boltzmann ensemble. However, because a given RNA secondary structure only contains a fraction of the possible helices that could form from a given sequence, the Boltzmann ensemble is multimodal. Several methods exist for clustering structures and finding those modes. However, less focus is given to exploring the underlying reasons for this multimodality: the presence of conflicting basepairs. Information theory, or more specifically mutual information, provides a method to identify those basepairs that are key to the secondary structure. To this end, we find the most informative basepairs and visualize the effect of these basepairs on the secondary structure. Knowing whether a most informative basepair is present tells us not only the status of the particular pair but also provides a large amount of information about which other pairs are present or not present. We find that a few basepairs account for a large amount of the structural uncertainty. The identification of these pairs indicates small changes to sequence or stability that will have a large effect on structure. We provide a novel algorithm that uses mutual information to identify the key basepairs that lead to a multimodal Boltzmann distribution. We then visualize the effect of these pairs on the overall Boltzmann ensemble.
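
    The central quantity is elementary to compute from a sample of the ensemble: treat the presence of each base pair as a binary random variable over sampled structures and take the mutual information between two such indicators. The toy data below is hypothetical; real input would come from a Boltzmann sampler.

        import numpy as np

        def pair_mutual_information(samples, pair_a, pair_b):
            # MI (bits) between the presence indicators of two base pairs across a
            # sample of secondary structures, each given as a set of (i, j) pairs.
            a = np.array([pair_a in s for s in samples], dtype=int)
            b = np.array([pair_b in s for s in samples], dtype=int)
            mi = 0.0
            for u in (0, 1):
                for v in (0, 1):
                    p_uv = np.mean((a == u) & (b == v))
                    p_u, p_v = np.mean(a == u), np.mean(b == v)
                    if p_uv > 0:
                        mi += p_uv * np.log2(p_uv / (p_u * p_v))
            return mi

        # Toy ensemble: (1, 10) and (2, 9) always co-occur -> 1 bit of shared information.
        structs = [{(1, 10), (2, 9)}, {(1, 10), (2, 9)}, {(3, 8)}, {(3, 8), (4, 7)}]
        print(pair_mutual_information(structs, (1, 10), (2, 9)))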

  20. Scheduling theory, algorithms, and systems

    CERN Document Server

    Pinedo, Michael L

    2016-01-01

    This new edition of the well-established text Scheduling: Theory, Algorithms, and Systems provides an up-to-date coverage of important theoretical models in the scheduling literature as well as important scheduling problems that appear in the real world. The accompanying website includes supplementary material in the form of slide-shows from industry as well as movies that show actual implementations of scheduling systems. The main structure of the book, as per previous editions, consists of three parts. The first part focuses on deterministic scheduling and the related combinatorial problems. The second part covers probabilistic scheduling models; in this part it is assumed that processing times and other problem data are random and not known in advance. The third part deals with scheduling in practice; it covers heuristics that are popular with practitioners and discusses system design and implementation issues. All three parts of this new edition have been revamped, streamlined, and extended. The reference...

  1. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    AL-Salhi, Yahya E. A.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR), an information concealing algorithm for quantum image steganography is presented that clusters uniform image blocks around the concrete least significant Qu-block (LSQB). Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, we use the Con-Steg algorithm to conceal the clustered image blocks. Concealing information in the Fourier domain of an image can improve the security of the image information; thus we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on Quantum Fourier Transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
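
    The classical LSB baseline that the LSQu-block scheme lifts to quantum images is worth seeing concretely; below is a minimal byte-level sketch, treating grayscale pixels as a flat byte array.

        def lsb_embed(pixels: bytes, message: bytes) -> bytearray:
            # Write the message bits, MSB first, into the pixels' least significant bits.
            bits = [(byte >> k) & 1 for byte in message for k in range(7, -1, -1)]
            assert len(bits) <= len(pixels), "cover image too small"
            out = bytearray(pixels)
            for i, bit in enumerate(bits):
                out[i] = (out[i] & 0xFE) | bit
            return out

        def lsb_extract(pixels: bytes, n_bytes: int) -> bytes:
            # Recover n_bytes previously hidden with lsb_embed.
            bits = [p & 1 for p in pixels[:8 * n_bytes]]
            return bytes(
                sum(bit << (7 - k) for k, bit in enumerate(bits[i:i + 8]))
                for i in range(0, len(bits), 8)
            )

        stego = lsb_embed(bytes(range(256)), b"hi")
        assert lsb_extract(stego, 2) == b"hi"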

  2. Applied information science, engineering and technology selected topics from the field of production information engineering and IT for manufacturing : theory and practice

    CERN Document Server

    Tóth, Tibor

    2014-01-01

    The objective of the book is to give a selection from the papers, which summarize several important results obtained within the framework of the József Hatvany Doctoral School operating at the University of Miskolc, Hungary. In accordance with the three main research areas of the Doctoral School established for Information Science, Engineering and Technology, the papers can be classified into three groups. They are as follows: (1) Applied Computational Science; (2) Production Information Engineering (IT for Manufacturing included); (3) Material Stream Systems and IT for Logistics. As regards the first area, some papers deal with special issues of algorithms theory and its applications, with computing algorithms for engineering tasks, as well as certain issues of data base systems and knowledge intensive systems. Related to the second research area, the focus is on Production Information Engineering with special regard to discrete production processes. In the second research area the papers show some new inte...

  3. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

  4. Algorithmic and experimental methods in algebra, geometry, and number theory

    CERN Document Server

    Decker, Wolfram; Malle, Gunter

    2017-01-01

    This book presents state-of-the-art research and survey articles that highlight work done within the Priority Program SPP 1489 “Algorithmic and Experimental Methods in Algebra, Geometry and Number Theory”, which was established and generously supported by the German Research Foundation (DFG) from 2010 to 2016. The goal of the program was to substantially advance algorithmic and experimental methods in the aforementioned disciplines, to combine the different methods where necessary, and to apply them to central questions in theory and practice. Of particular concern was the further development of freely available open source computer algebra systems and their interaction in order to create powerful new computational tools that transcend the boundaries of the individual disciplines involved.  The book covers a broad range of topics addressing the design and theoretical foundations, implementation and the successful application of algebraic algorithms in order to solve mathematical research problems. It off...

  5. Rudolf Ahlswede’s lectures on information theory

    CERN Document Server

    Althöfer, Ingo; Deppe, Christian; Tamm, Ulrich

    Volume 1: The volume “Storing and Transmitting Data” is based on Rudolf Ahlswede's introductory course on "Information Theory I" and presents an introduction to Shannon Theory. Readers, whether familiar or unfamiliar with the technical intricacies of Information Theory, will benefit considerably from working through the book; especially Chapter VI, with its lively comments and uncensored insider views from the world of science and research, offers informative and revealing insights. This is the first of several volumes that will serve as a collected research documentation of Rudolf Ahlswede’s lectures on information theory. Each volume includes comments from an invited well-known expert. Holger Boche contributed his insights in the supplement of the present volume. Classical information processing concerns the main tasks of gaining knowledge and the storage, transmission and hiding of data. The first task is the prime goal of Statistics. For the next two, Shannon presented an impressive mathematical theory called Informat...

  6. The discrete Fourier transform theory, algorithms and applications

    CERN Document Server

    Sundararajan, D

    2001-01-01

    This authoritative book provides comprehensive coverage of practical Fourier analysis. It develops the concepts right from the basics and gradually guides the reader to the advanced topics. It presents the latest and practically efficient DFT algorithms, as well as the computation of discrete cosine and Walsh-Hadamard transforms. The large number of visual aids such as figures, flow graphs and flow charts makes the mathematical topic easy to understand. In addition, the numerous examples and the set of C-language programs (a supplement to the book) help greatly in understanding the theory and

  7. Walking pattern classification and walking distance estimation algorithms using gait phase information.

    Science.gov (United States)

    Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen

    2012-10-01

    This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree based on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm. The walking distance estimation algorithm consists of the processes of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithm have been validated by a series of experiments. The accuracy of the proposed walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively. The accuracy of the proposed walking distance estimation algorithm was 96.42% over a walking distance.
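
    The distance-estimation step described above reduces to counting steps and multiplying by an estimated step length. A minimal sketch of that idea follows, assuming heel-strike timestamps have already been extracted from the gait-phase information; the function name and the constant step length are illustrative assumptions, not the paper's per-step estimation.

```python
def estimate_walking_distance(heel_strikes, step_length_m=0.7):
    """Distance = (number of steps) x (assumed constant step length).

    heel_strikes : heel-strike timestamps in seconds, one per step
    step_length_m: hypothetical constant step length in meters
    """
    step_count = len(heel_strikes)      # one step per detected heel strike
    return step_count * step_length_m

# Ten detected heel strikes with a 0.7 m step length -> 7.0 m walked.
strikes = [0.0, 0.6, 1.2, 1.8, 2.4, 3.0, 3.6, 4.2, 4.8, 5.4]
print(estimate_walking_distance(strikes))
```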

  8. Genre theory in information studies

    CERN Document Server

    Andersen, Jack

    2015-01-01

    This book highlights the important role genre theory plays within information studies. It illustrates how modern genre studies inform and enrich the study of information, and conversely how the study of information makes its own independent contributions to the study of genre.

  9. Information theory and rate distortion theory for communications and compression

    CERN Document Server

    Gibson, Jerry

    2013-01-01

    This book is very specifically targeted to problems in communications and compression by providing the fundamental principles and results in information theory and rate distortion theory for these applications and presenting methods that have proved and will prove useful in analyzing and designing real systems. The chapters contain treatments of entropy, mutual information, lossless source coding, channel capacity, and rate distortion theory; however, it is the selection, ordering, and presentation of the topics within these broad categories that is unique to this concise book. While the cover

  10. Rational hybrid Monte Carlo algorithm for theories with unknown spectral bounds

    International Nuclear Information System (INIS)

    Kogut, J. B.; Sinclair, D. K.

    2006-01-01

    The Rational Hybrid Monte Carlo (RHMC) algorithm extends the Hybrid Monte Carlo algorithm for lattice QCD simulations to situations involving fractional powers of the determinant of the quadratic Dirac operator. This avoids the updating increment (dt) dependence of observables which plagues the Hybrid Molecular-dynamics (HMD) method. The RHMC algorithm uses rational approximations to fractional powers of the quadratic Dirac operator. Such approximations are only available when positive upper and lower bounds to the operator's spectrum are known. We apply the RHMC algorithm to simulations of two theories for which a positive lower spectral bound is unknown: lattice QCD with staggered quarks at finite isospin chemical potential, and lattice QCD with massless staggered quarks and chiral 4-fermion interactions (χQCD). A choice of lower bound is made in each case, and the properties of the RHMC simulations these define are studied. Our choices of lower bounds are justified by comparing measurements with those from HMD simulations, and by comparing different choices of lower bounds.

  11. Cooperative Localization for Multi-AUVs Based on GM-PHD Filters and Information Entropy Theory

    Directory of Open Access Journals (Sweden)

    Lichuan Zhang

    2017-10-01

    Cooperative localization (CL) is considered a promising method for underwater localization with respect to multiple autonomous underwater vehicles (multi-AUVs). In this paper, we propose a CL algorithm based on information entropy theory and the probability hypothesis density (PHD) filter, aiming to enhance the global localization accuracy of the follower. In the proposed framework, the follower carries lower-cost navigation systems, whereas the leaders carry better ones. Meanwhile, the leaders acquire the followers’ observations, including both measurements and clutter. The PHD filters are then utilized on the leaders and the results are communicated to the followers. The followers then perform a weighted summation based on all received messages and obtain a final positioning result. Based on information entropy theory and the PHD filter, the follower is able to acquire precise knowledge of its position.
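
    The fusion step described above can be illustrated with a short sketch: the follower weights each leader's position estimate by how certain (low-entropy) it is. The exponential entropy weighting below is an assumption standing in for the paper's exact rule.

```python
import numpy as np

def fuse_estimates(positions, covariances):
    """Combine leaders' 2D position estimates by weighted summation.

    Weights fall off exponentially with the differential entropy of each
    Gaussian estimate, 0.5*log((2*pi*e)^2 * det(P)) -- an assumed rule
    standing in for the paper's exact entropy-based weighting.
    """
    H = np.array([0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(P))
                  for P in covariances])
    w = np.exp(-H)
    w /= w.sum()
    return sum(wi * np.asarray(p) for wi, p in zip(w, positions))

pos = [np.array([10.0, 5.0]), np.array([10.4, 4.8])]
cov = [np.eye(2) * 4.0, np.eye(2) * 1.0]
print(fuse_estimates(pos, cov))  # pulled toward the lower-entropy estimate
```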

  12. Generating information-rich high-throughput experimental materials genomes using functional clustering via multitree genetic programming and information theory.

    Science.gov (United States)

    Suram, Santosh K; Haber, Joel A; Jin, Jian; Gregoire, John M

    2015-04-13

    High-throughput experimental methodologies are capable of synthesizing, screening and characterizing vast arrays of combinatorial material libraries at a very rapid rate. These methodologies strategically employ tiered screening wherein the number of compositions screened decreases as the complexity, and very often the scientific information obtained from a screening experiment, increases. The algorithm used for down-selection of samples from a higher-throughput screening experiment to a lower-throughput screening experiment is vital in achieving information-rich experimental materials genomes. The fundamental science of material discovery lies in the establishment of composition-structure-property relationships, motivating the development of advanced down-selection algorithms which consider the information value of the selected compositions, as opposed to simply selecting the best performing compositions from a high throughput experiment. Identification of property fields (composition regions with distinct composition-property relationships) in high throughput data enables down-selection algorithms to employ advanced selection strategies, such as the selection of representative compositions from each field or selection of compositions that span the composition space of the highest performing field. Such strategies would greatly enhance the generation of data-driven discoveries. We introduce an informatics-based clustering of composition-property functional relationships using a combination of information theory and multitree genetic programming concepts for identification of property fields in a composition library. We demonstrate our approach using a complex synthetic composition-property map for a 5 at. % step ternary library consisting of four distinct property fields and finally explore the application of this methodology for capturing relationships between composition and catalytic activity for the oxygen evolution reaction for 5429 catalyst compositions in a

  13. An introduction to single-user information theory

    CERN Document Server

    Alajaji, Fady

    2018-01-01

    This book presents a succinct and mathematically rigorous treatment of the main pillars of Shannon’s information theory, discussing the fundamental concepts and indispensable results of Shannon’s mathematical theory of communications. It includes five meticulously written core chapters (with accompanying problems), emphasizing the key topics of information measures; lossless and lossy data compression; channel coding; and joint source-channel coding for single-user (point-to-point) communications systems. It also features two appendices covering necessary background material in real analysis and in probability theory and stochastic processes. The book is ideal for a one-semester foundational course on information theory for senior undergraduate and entry-level graduate students in mathematics, statistics, engineering, and computing and information sciences. A comprehensive instructor’s solutions manual is available.

  14. Aerosol Retrievals from Proposed Satellite Bistatic Lidar Observations: Algorithm and Information Content

    Science.gov (United States)

    Alexandrov, M. D.; Mishchenko, M. I.

    2017-12-01

    Accurate aerosol retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. We suggest addressing this ill-posedness by flying a bistatic lidar system. Such a system would consist of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and an additional platform hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar. Thus, bistatic lidar observations will be free of the deficiencies affecting both monostatic lidar measurements (caused by the highly limited information content) and passive photopolarimetric measurements (caused by vertical integration and surface reflection). We present a preliminary aerosol retrieval algorithm for a bistatic lidar system consisting of a high spectral resolution lidar (HSRL) and an additional receiver flown in formation with it at a scattering angle of 165 degrees. This algorithm was applied to synthetic data generated using Mie-theory computations. The model/retrieval parameters in our tests were the effective radius and variance of the aerosol size distribution, the complex refractive index of the particles, and their number concentration. Both mono- and bimodal aerosol mixtures were considered. Our algorithm allowed for a definitive evaluation of error propagation from measurements to retrievals using a Monte Carlo technique, which involves random distortion of the observations and statistical characterization of the resulting retrieval errors. Our tests demonstrated that supplementing a conventional monostatic HSRL with an additional receiver dramatically increases the information content of the measurements and allows for a sufficiently accurate characterization of tropospheric aerosols.
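
    The Monte Carlo error-propagation procedure mentioned above is straightforward to sketch: distort the observations at random, re-run the retrieval on each distorted copy, and characterize the spread of the results. The `retrieve` function below is a hypothetical placeholder, not the paper's retrieval algorithm.

```python
import numpy as np

def retrieve(obs):
    # Placeholder retrieval: a real implementation would invert the
    # scattering model; here we just summarize the observation vector.
    return obs.mean()

def monte_carlo_errors(obs, noise_sigma=0.02, n_trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    baseline = retrieve(obs)
    draws = [retrieve(obs * (1.0 + noise_sigma * rng.standard_normal(obs.shape)))
             for _ in range(n_trials)]
    errors = np.asarray(draws) - baseline
    return errors.mean(), errors.std()   # bias and spread of the retrieval

bias, spread = monte_carlo_errors(np.linspace(0.8, 1.2, 50))
print(f"bias={bias:.5f}, std={spread:.5f}")
```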

  15. Introduction to quantum information science

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Masahito [Nagoya Univ. (Japan). Graduate School of Mathematics; Ishizaka, Satoshi [Hiroshima Univ., Higashi-Hiroshima (Japan). Graduate School of Integrated Arts and Sciences; Kawachi, Akinori [Tokyo Institute of Technology (Japan). Dept. of Mathematical and Computing Sciences; Kimura, Gen [Shibaura Institute of Technology, Saitama (Japan). College of Systems Engineering and Science; Ogawa, Tomohiro [Univ. of Electro-Communications, Tokyo (Japan). Graduate School of Information Systems

    2015-04-01

    Presents the mathematical foundation for quantum information in a very didactic way. Summarizes all required mathematical knowledge in linear algebra. Supports teaching and learning with more than 100 exercises with solutions. Includes brief descriptions of recent results with references. This book presents the basics of quantum information, e.g., the foundation of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The required knowledge is only elementary calculus and linear algebra; this way the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundation of quantum theory. This book describes it from a more operational viewpoint, which is suitable for quantum information, while traditional textbooks of quantum theory lack this viewpoint. The book builds on Shor's algorithm, Grover's algorithm, and the Deutsch-Jozsa algorithm as basic algorithms. To treat several topics in quantum information, the book covers several kinds of information quantities in quantum systems, including von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book contains quantum teleportation, quantum dense coding, and quantum data compression. In particular, the conversion theory of entanglement via local operations and classical communication is treated too. This theory provides the quantification of entanglement, which coincides with von Neumann entropy. The next part treats quantum hypothesis testing: the decision problem between two candidates for the unknown state is given. The asymptotic performance of this problem is characterized by information quantities. Using this result, the optimal performance of classical information transmission via a noisy quantum channel is derived. Quantum information transmission via a noisy quantum channel by quantum error

  16. Introduction to quantum information science

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Ishizaka, Satoshi; Kawachi, Akinori; Kimura, Gen; Ogawa, Tomohiro

    2015-01-01

    Presents the mathematical foundation for quantum information in a very didactic way. Summarizes all required mathematical knowledge in linear algebra. Supports teaching and learning with more than 100 exercises with solutions. Includes brief descriptions of recent results with references. This book presents the basics of quantum information, e.g., the foundation of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The required knowledge is only elementary calculus and linear algebra; this way the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundation of quantum theory. This book describes it from a more operational viewpoint, which is suitable for quantum information, while traditional textbooks of quantum theory lack this viewpoint. The book builds on Shor's algorithm, Grover's algorithm, and the Deutsch-Jozsa algorithm as basic algorithms. To treat several topics in quantum information, the book covers several kinds of information quantities in quantum systems, including von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book contains quantum teleportation, quantum dense coding, and quantum data compression. In particular, the conversion theory of entanglement via local operations and classical communication is treated too. This theory provides the quantification of entanglement, which coincides with von Neumann entropy. The next part treats quantum hypothesis testing: the decision problem between two candidates for the unknown state is given. The asymptotic performance of this problem is characterized by information quantities. Using this result, the optimal performance of classical information transmission via a noisy quantum channel is derived. Quantum information transmission via a noisy quantum channel by quantum error correction are

  17. A potential theory approach to an algorithm of conceptual space partitioning

    Directory of Open Access Journals (Sweden)

    Roman Urban

    2017-12-01

    This paper proposes a new classification algorithm for the partitioning of a conceptual space. The algorithms used until now have mostly been based on the theory of Voronoi diagrams. This paper proposes an approach based on potential theory, with the criteria for measuring similarities between objects in the conceptual space based on the Newtonian potential function. The notion of a fuzzy prototype, which generalizes the previous definition of a prototype, is introduced. Furthermore, the necessary conditions that a natural concept must meet are discussed. Instead of convexity, as proposed by Gärdenfors, the notion of geodesically convex sets is used: if a concept corresponds to a set which is geodesically convex, it is a natural concept. This definition applies, for example, if the conceptual space is a Euclidean space. As a by-product of the construction of the algorithm, an extension of the conceptual space to d-dimensional Riemannian manifolds is obtained.
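
    A minimal sketch of classification by potential, as described above: an object is assigned to the concept whose prototype points exert the strongest Newtonian-type potential at the object's location. Representing a fuzzy prototype as a finite point set is an assumption made here for illustration.

```python
import numpy as np

def potential(x, points):
    # Newtonian-type potential of unit "masses" at the prototype points.
    d = np.linalg.norm(points - x, axis=1)
    return np.sum(1.0 / np.maximum(d, 1e-9))   # guard against d == 0

def classify(x, prototypes):
    """prototypes: dict mapping concept label -> (n, dim) array of points."""
    x = np.asarray(x, dtype=float)
    return max(prototypes, key=lambda c: potential(x, prototypes[c]))

protos = {"red":  np.array([[1.0, 0.0], [0.9, 0.2]]),
          "blue": np.array([[0.0, 1.0]])}
print(classify([0.8, 0.1], protos))   # -> red
```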

  18. An information theory account of cognitive control.

    Science.gov (United States)

    Fan, Jin

    2014-01-01

    Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory.

  19. An information theory account of cognitive control

    Directory of Open Access Journals (Sweden)

    Jin eFan

    2014-09-01

    Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory.

  20. A short course in quantum information theory. An approach from theoretical physics. 2. ed.

    International Nuclear Information System (INIS)

    Diosi, Lajos

    2011-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the author has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapters to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error correction. From the reviews of the first edition: "The best things about this book are its brevity and clarity. In around 100 pages it provides a tutorial introduction to quantum information theory, including problems and solutions. ... it's worth a look if you want to quickly get up to speed with the language and central concepts of quantum information theory, including the background classical information theory." (Craig Savage, Australian Physics, Vol. 44 (2), 2007). (orig.)

  1. Sorting a distribution theory

    CERN Document Server

    Mahmoud, Hosam M

    2011-01-01

    A cutting-edge look at the emerging distributional theory of sorting Research on distributions associated with sorting algorithms has grown dramatically over the last few decades, spawning many exact and limiting distributions of complexity measures for many sorting algorithms. Yet much of this information has been scattered in disparate and highly specialized sources throughout the literature. In Sorting: A Distribution Theory, leading authority Hosam Mahmoud compiles, consolidates, and clarifies the large volume of available research, providing a much-needed, comprehensive treatment of the

  2. Information Theory for Information Science: Antecedents, Philosophy, and Applications

    Science.gov (United States)

    Losee, Robert M.

    2017-01-01

    This paper provides an historical overview of the theoretical antecedents leading to information theory, specifically those useful for understanding and teaching information science and systems. Information may be discussed in a philosophical manner and at the same time be measureable. This notion of information can thus be the subject of…

  3. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  4. Development of the algorithm for obtaining 3-dimensional information using the structured light

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Dong Uk; Lee, Jae Hyub; Kim, Chung Soo [Korea University of Technology and Education, Cheonan (Korea)

    1998-03-01

    The utilization of robots in atomic power plants or nuclear-related facilities has grown rapidly. In order to perform preassigned jobs using a robot in nuclear-related facilities, advanced technology for extracting 3D information of objects is essential. We have studied an algorithm to extract 3D information of objects using laser slit light and a camera, and developed the following hardware system and algorithms. (1) We have designed and fabricated a hardware system which consists of a laser light source and two cameras; the hardware system can be easily installed on a robot. (2) In order to reduce the occlusion problem when measuring 3D information using laser slit light and a camera, we have studied a system with laser slit light and two cameras and developed an algorithm to synthesize the 3D information obtained from the two cameras. (3) For easy use of the obtained 3D information, we expressed it in a digital distance image format and developed an algorithm to interpolate the 3D information at points where it was not obtained. (4) In order to simplify calibration of the camera parameters, we have also designed and fabricated an LED plate, and developed an algorithm for detecting the center position of each LED automatically. The efficiency of the developed algorithms and hardware system is confirmed by experimental results. 16 refs., 26 figs., 1 tab. (Author)
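
    The record does not give the system's exact calibration model, but laser-slit systems of this kind typically recover depth by intersecting a camera ray with the known laser plane. A hedged 2D sketch of that triangulation relation follows; all parameter values are hypothetical.

```python
import math

def slit_depth(u, f, b, alpha):
    """Depth of a lit point, 2D simplification.

    Camera pinhole at the origin looking along +z; u is the image
    coordinate and f the focal length (same units). The laser source sits
    at (b, 0) and emits a sheet at angle alpha from the +z axis.
    Intersecting the camera ray x = (u/f)*z with the laser line
    x = b + z*tan(alpha) gives z = b / (u/f - tan(alpha)).
    """
    denom = u / f - math.tan(alpha)
    if abs(denom) < 1e-12:
        raise ValueError("camera ray is parallel to the laser sheet")
    return b / denom

# Example: 10 cm baseline, 8 mm focal length, sheet tilted -10 degrees.
print(slit_depth(u=0.001, f=0.008, b=0.1, alpha=math.radians(-10)))  # ~0.33 m
```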

  5. The g-theorem and quantum information theory

    Energy Technology Data Exchange (ETDEWEB)

    Casini, Horacio; Landea, Ignacio Salazar; Torroba, Gonzalo [Centro Atómico Bariloche and CONICET,S.C. de Bariloche, Río Negro, R8402AGP (Argentina)

    2016-10-25

    We study boundary renormalization group flows between boundary conformal field theories in 1+1 dimensions using methods of quantum information theory. We define an entropic g-function for theories with impurities in terms of the relative entanglement entropy, and we prove that this g-function decreases along boundary renormalization group flows. This entropic g-theorem is valid at zero temperature, and is independent from the g-theorem based on the thermal partition function. We also discuss the mutual information in boundary RG flows, and how it encodes the correlations between the impurity and bulk degrees of freedom. Our results provide a quantum-information understanding of (boundary) RG flow as increase of distinguishability between the UV fixed point and the theory along the RG flow.

  6. Information Foraging Theory: A Framework for Intelligence Analysis

    Science.gov (United States)

    2014-11-01

    …oceanographic information, human intelligence (HUMINT), open-source intelligence (OSINT), and information provided by other governmental departments [1][5]… Acronyms: HUMINT = Human Intelligence; IFT = Information Foraging Theory; LSA = Latent Semantic Similarity; MVT = Marginal Value Theorem; OFT = Optimal Foraging Theory; OSINT = Open-Source Intelligence.

  7. Fast clustering algorithm for large ECG data sets based on CS theory in combination with PCA and K-NN methods.

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2014-01-01

    Long-term recording of Electrocardiogram (ECG) signals plays an important role in health care systems for diagnostic and treatment purposes of heart diseases. Clustering and classification of the collected data are essential parts of detecting concealed information of P-QRS-T waves in long-term ECG recording. Currently used algorithms have their share of drawbacks: 1) clustering and classification cannot be done in real time; 2) they suffer from high energy consumption and sampling load. These drawbacks motivated us to develop a novel optimized clustering algorithm which can easily scan large ECG datasets to enable low-power long-term ECG recording. In this paper, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory as a random sampling procedure. Then, two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC), followed by sorting the data using the K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers, are applied to the proposed algorithm. We show that our algorithm based on PCA features in combination with the K-NN classifier performs better than the other methods. The proposed algorithm outperforms existing algorithms, increasing classification accuracy by 11%. In addition, the proposed algorithm achieves classification accuracies for the K-NN and PNN classifiers, and a Receiver Operating Characteristic (ROC) area, of 99.98%, 99.83%, and 99.75%, respectively.
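
    A compact sketch of the pipeline the abstract describes: a compressed-sensing-style random projection, PCA dimensionality reduction, and nearest-neighbour classification. Data shapes and parameter choices below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 512))     # 200 ECG beats, 512 samples each (toy)
y = rng.integers(0, 2, size=200)        # toy beat labels

# 1) CS-style random projection to m << 512 measurements.
m = 64
Phi = rng.standard_normal((512, m)) / np.sqrt(m)
Xc = X @ Phi

# 2) PCA: project the compressed beats onto the top-k principal components.
Xc0 = Xc - Xc.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc0, full_matrices=False)
Xp = Xc0 @ Vt[:8].T

# 3) 1-NN classification of a query beat against the rest.
def nn_predict(query, data, labels):
    return labels[np.argmin(np.linalg.norm(data - query, axis=1))]

print(nn_predict(Xp[0], Xp[1:], y[1:]))
```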

  8. The Quantitative Theory of Information

    DEFF Research Database (Denmark)

    Topsøe, Flemming; Harremoës, Peter

    2008-01-01

    Information Theory as developed by Shannon and followers is becoming more and more important in a number of sciences. The concepts appear to be just the right ones with intuitively appealing operational interpretations. Furthermore, the information theoretical quantities are connected by powerful...

  9. Generalized information theory: aims, results, and open problems

    International Nuclear Information System (INIS)

    Klir, George J.

    2004-01-01

    The principal purpose of this paper is to present a comprehensive overview of generalized information theory (GIT): a research program whose objective is to develop a broad treatment of uncertainty-based information, not restricted to classical notions of uncertainty. After a brief overview of classical information theories, a broad framework for formalizing uncertainty and the associated uncertainty-based information of a great spectrum of conceivable types is sketched. The various theories of imprecise probabilities that have already been developed within this framework are then surveyed, focusing primarily on some important unifying principles applying to all these theories. This is followed by introducing two higher levels of the theories of imprecise probabilities: (i) the level of measuring the amount of relevant uncertainty (predictive, retrodictive, prescriptive, diagnostic, etc.) in any situation formalizable in each given theory, and (ii) the level of some methodological principles of uncertainty, which are contingent upon the capability to measure uncertainty and the associated uncertainty-based information. Various issues regarding both the measurement of uncertainty and the uncertainty principles are discussed. Again, the focus is on unifying principles applicable to all the theories. Finally, the current status of GIT is assessed and future research in the area is discussed

  10. What Density Functional Theory could do for Quantum Information

    Science.gov (United States)

    Mattsson, Ann

    2015-03-01

    The Hohenberg-Kohn theorem of Density Functional Theory (DFT), and extensions thereof, tells us that all properties of a system of electrons can be determined through their density, which uniquely determines the many-body wave-function. Given access to the appropriate, universal, functionals of the density we would, in theory, be able to determine all observables of any electronic system, without explicit reference to the wave-function. On the other hand, the wave-function is at the core of Quantum Information (QI), with the wave-function of a set of qubits being the central computational resource in a quantum computer. While there is seemingly little overlap between DFT and QI, reliance upon observables form a key connection. Though the time-evolution of the wave-function and associated phase information is fundamental to quantum computation, the initial and final states of a quantum computer are characterized by observables of the system. While observables can be extracted directly from a system's wave-function, DFT tells us that we may be able to intuit a method for extracting them from its density. In this talk, I will review the fundamentals of DFT and how these principles connect to the world of QI. This will range from DFT's utility in the engineering of physical qubits, to the possibility of using it to efficiently (but approximately) simulate Hamiltonians at the logical level. The apparent paradox of describing algorithms based on the quantum mechanical many-body wave-function with a DFT-like theory based on observables will remain a focus throughout. The ultimate goal of this talk is to initiate a dialog about what DFT could do for QI, in theory and in practice. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. Angular discretization errors in transport theory

    International Nuclear Information System (INIS)

    Nelson, P.; Yu, F.

    1992-01-01

    Elements of information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global errors of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global errors tend to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed.

  12. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  13. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of creating on this basis a device unique in its computational power and operating principle, named the quantum computer, are considered. The main blocks of quantum logic, schemes for implementing quantum computations, as well as some effective quantum algorithms known today, intended to realize the advantages of quantum computation over classical computation, are presented here. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of quantum computers, and methods of quantum error correction are described.

  14. Generalized Net Model of the Cognitive and Neural Algorithm for Adaptive Resonance Theory 1

    Directory of Open Access Journals (Sweden)

    Todor Petkov

    2013-12-01

    Artificial neural networks are inspired by biological properties of human and animal brains. One type of neural network is called ART [4]. The abbreviation ART stands for Adaptive Resonance Theory, which was invented by Stephen Grossberg in 1976 [5]. ART represents a family of neural networks. It is a cognitive and neural theory that describes how the brain autonomously learns to categorize, recognize and predict objects and events in a changing world. In this paper we introduce a GN model that represents the ART1 neural network learning algorithm [1]. The purpose of this model is to explain when an input vector will be clustered or rejected among all nodes by the network. It can also be used for explanation and optimization of the ART1 learning algorithm.
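
    A minimal sketch of the ART1 accept/reject behavior the GN model explains: a binary input resonates with a category only if the match passes the vigilance test; otherwise the category is suppressed and the search continues. The choice rule below is simplified relative to full ART1.

```python
import numpy as np

def art1_choose(x, categories, rho=0.75):
    """x: binary vector; categories: list of binary weight vectors;
    rho: vigilance parameter in (0, 1]. Returns the index of the category
    x resonates with, or None (caller should then create a new category).
    """
    remaining = list(range(len(categories)))
    while remaining:
        # Simplified choice rule: largest overlap with the input.
        j = max(remaining, key=lambda k: int(np.sum(categories[k] & x)))
        match = np.sum(categories[j] & x) / max(np.sum(x), 1)
        if match >= rho:
            return j             # resonance: x is clustered into category j
        remaining.remove(j)      # reset: category j rejected, search goes on
    return None                  # every category failed the vigilance test

cats = [np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])]
print(art1_choose(np.array([1, 1, 1, 0]), cats, rho=0.6))  # -> 0 (match 2/3)
```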

  15. Information theory in analytical chemistry

    National Research Council Canada - National Science Library

    Eckschlager, Karel; Danzer, Klaus

    1994-01-01

    Contents: The aim of analytical chemistry - Basic concepts of information theory - Identification of components - Qualitative analysis - Quantitative analysis - Multicomponent analysis - Optimum analytical...

  16. Information theory in molecular biology

    OpenAIRE

    Adami, Christoph

    2004-01-01

    This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon's theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the...

  17. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas put forward by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  18. e-DMDAV: A new privacy preserving algorithm for wearable enterprise information systems

    Science.gov (United States)

    Zhang, Zhenjiang; Wang, Xiaoni; Uden, Lorna; Zhang, Peng; Zhao, Yingsi

    2018-04-01

    Wearable devices have been widely used in many fields to improve the quality of people's lives. More and more data on individuals and businesses are collected by statistical organizations through those devices. Almost all of this data holds confidential information. Statistical Disclosure Control (SDC) seeks to protect statistical data in such a way that it can be released without giving away confidential information that can be linked to specific individuals or entities. The MDAV (Maximum Distance to Average Vector) algorithm is an efficient micro-aggregation algorithm belonging to SDC. However, the MDAV algorithm cannot withstand homogeneity and background-knowledge attacks, because it was designed for static numerical data. This paper proposes a systematic dynamic-updating anonymity algorithm based on MDAV, called the e-DMDAV algorithm. This algorithm introduces a new parameter and a table to ensure that each cluster holds k records and that the range of distinct values in each cluster is no less than e, for numerical and non-numerical datasets. This new algorithm has been evaluated and compared with the MDAV algorithm. The simulation results show that the new algorithm outperforms MDAV in terms of minimizing distortion and disclosure risk with a similar computational cost.
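
    For orientation, here is a sketch of the classic MDAV micro-aggregation loop that e-DMDAV extends; the e-DMDAV additions (the extra parameter and the table enforcing at least e distinct values per cluster) are not reproduced here.

```python
import numpy as np

def mdav(X, k=3):
    """Partition the rows of X into clusters of (mostly) k records each."""
    idx = list(range(len(X)))
    clusters = []

    def take_cluster(p):
        nonlocal idx
        near = sorted(idx, key=lambda i: np.linalg.norm(X[i] - X[p]))[:k]
        clusters.append(near)
        idx = [i for i in idx if i not in near]

    while len(idx) >= 3 * k:
        centroid = X[idx].mean(axis=0)
        r = max(idx, key=lambda i: np.linalg.norm(X[i] - centroid))  # extreme
        s = max(idx, key=lambda i: np.linalg.norm(X[i] - X[r]))      # opposite
        take_cluster(r)
        take_cluster(s)
    if len(idx) >= 2 * k:
        centroid = X[idx].mean(axis=0)
        take_cluster(max(idx, key=lambda i: np.linalg.norm(X[i] - centroid)))
    clusters.append(idx)       # the remaining k..2k-1 records
    return clusters

X = np.random.default_rng(1).standard_normal((10, 2))
print(mdav(X, k=3))            # three clusters of sizes 3, 3 and 4
```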

  19. Evaluation of the efficiency of computer-aided spectra search systems based on information theory

    International Nuclear Information System (INIS)

    Schaarschmidt, K.

    1979-01-01

    Application of information theory allows objective evaluation of the efficiency of computer-aided spectra search systems. For this purpose, a significant number of search processes must be analyzed. The amount of information gained by computer application is considered as the difference between the entropy of the data bank and a conditional entropy depending on the proportion of unsuccessful search processes and ballast. The influence of the following factors can be estimated: the volume, structure, and quality of the stored spectra collection; the efficiency of the encoding instruction and the comparison algorithm; and subjective errors involved in the encoding of spectra. The relations derived are applied to two published storage and retrieval systems for infrared spectra. (Auth.)
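
    The evaluation idea can be illustrated with a toy calculation: the information gained per search is the entropy of the data bank minus the conditional entropy left by failures and ballast. The binary-failure simplification and the numbers below are assumptions, not values from the paper.

```python
import math

def information_gain(n_spectra, p_fail):
    """Bits gained per search under a crude binary-failure model."""
    h_bank = math.log2(n_spectra)        # entropy of a uniform data bank
    h_cond = p_fail * h_bank             # residual uncertainty after failures
    return h_bank - h_cond

print(information_gain(n_spectra=50_000, p_fail=0.15))   # ~13.3 bits
```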

  20. An information integration theory of consciousness

    Directory of Open Access Journals (Sweden)

    Tononi Giulio

    2004-11-01

    Background: Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition? Presentation of the hypothesis: This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation – the availability of a very large number of conscious experiences; and integration – the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the Φ value of a complex of elements. Φ is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Φ>0 that is not part of a subset of higher Φ. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex. Testing the hypothesis: The information integration theory accounts, in a principled manner, for several neurobiological observations

  1. Intercept Algorithm for Maneuvering Targets Based on Differential Geometry and Lyapunov Theory

    Directory of Open Access Journals (Sweden)

    Yunes Sh. ALQUDSI

    2018-03-01

    Nowadays, homing guidance is utilized in existing and under-development air defense systems (ADS) to effectively intercept targets. Targets have become smarter, capable of flying and maneuvering professionally, and the tendency to design missiles with small warheads has grown; there is therefore pressure to produce more precise and accurate missile guidance systems, based on intelligent algorithms, that ensure effective interception of highly maneuverable targets. The aim of this paper is to present an intelligent guidance algorithm that effectively and precisely intercepts maneuverable and smart targets by virtue of differential geometry (DG) concepts. The intercept geometry and engagement kinematics, in addition to the direct intercept condition, are developed and expressed in DG terms. The guidance algorithm is then developed by virtue of DG and Lyapunov theory. The study concludes with a 2D engagement simulation with illustrative examples, demonstrating that the derived DG guidance algorithm is a generalized guidance approach and that the well-known proportional navigation (PN) guidance law is a subset of this approach.
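
    Since the abstract notes that proportional navigation (PN) emerges as a special case of the DG approach, a minimal 2D PN sketch (not the paper's DG algorithm) may help fix ideas: the commanded lateral acceleration is the navigation constant times the closing velocity times the line-of-sight rate.

```python
import numpy as np

def pn_accel(r_m, v_m, r_t, v_t, N=4.0):
    """2D PN command: a = N * Vc * lambda_dot (all inputs are 2D vectors)."""
    r = r_t - r_m                                    # line-of-sight vector
    v = v_t - v_m                                    # relative velocity
    lam_dot = (r[0] * v[1] - r[1] * v[0]) / (r @ r)  # LOS rotation rate
    vc = -(r @ v) / np.linalg.norm(r)                # closing velocity
    return N * vc * lam_dot                          # lateral accel command

a = pn_accel(np.zeros(2), np.array([300.0, 0.0]),
             np.array([5000.0, 1000.0]), np.array([-200.0, 0.0]))
print(f"commanded lateral acceleration: {a:.1f} m/s^2")   # ~37.7
```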

  2. A New Recommendation Algorithm Based on User’s Dynamic Information in Complex Social Network

    Directory of Open Access Journals (Sweden)

    Jiujun Cheng

    2015-01-01

    The development of recommendation systems has been accompanied by research on data sparsity, cold start, scalability, and privacy protection problems. Even though many papers have proposed different improved recommendation algorithms to solve those problems, there is still plenty of room for improvement. In a complex social network, we can take full advantage of dynamic information such as a user’s hobbies, social relationships, and historical logs to improve the performance of a recommendation system. In this paper, we propose a new recommendation algorithm based on social users’ dynamic information to solve the cold start problem of the traditional collaborative filtering algorithm while also considering dynamic factors. The algorithm takes into account users’ response information, dynamic interests, and the classic similarity measure of the collaborative filtering algorithm. We then compare the newly proposed recommendation algorithm with the traditional user-based collaborative filtering algorithm and present findings from the experiments. The results demonstrate that the new algorithm achieves better recommendation performance than the collaborative filtering algorithm in the cold start scenario.
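
    For context, a sketch of the classic user-based collaborative-filtering core that the proposed algorithm extends with dynamic information; the dynamic-interest and response terms themselves are not reproduced here.

```python
import numpy as np

def predict_rating(R, u, item):
    """R: user-item rating matrix with 0 = unrated. Predict R[u, item]
    from cosine-similar users who rated the item (zeros are kept in the
    similarity computation -- a common simplification)."""
    raters = [v for v in range(R.shape[0]) if v != u and R[v, item] > 0]

    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return (a @ b) / (na * nb) if na > 0 and nb > 0 else 0.0

    sims = np.array([cos(R[u], R[v]) for v in raters])
    if len(raters) == 0 or sims.sum() == 0:
        return R[R > 0].mean()          # cold-start fallback: global mean
    ratings = np.array([R[v, item] for v in raters])
    return (sims @ ratings) / sims.sum()

R = np.array([[5, 3, 0], [4, 3, 4], [1, 1, 5]], dtype=float)
print(predict_rating(R, u=0, item=2))   # ~4.25, weighted toward user 1
```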

  3. Algorithms for selecting informative marker panels for population assignment.

    Science.gov (United States)

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
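
    The greedy procedure compared above admits a compact sketch: at each step, add the marker whose inclusion most improves the panel's assignment accuracy as a whole. The accuracy model below is a toy stand-in for accuracy estimated on simulated individuals.

```python
def greedy_panel(markers, panel_accuracy, panel_size):
    """Forward selection: grow the panel by the marker that maximizes the
    accuracy of the panel as a whole, not individual informativeness."""
    panel, remaining = [], list(markers)
    for _ in range(panel_size):
        best = max(remaining, key=lambda m: panel_accuracy(panel + [m]))
        panel.append(best)
        remaining.remove(best)
    return panel

# Toy accuracy model with diminishing returns (illustrative only).
info = {"m1": 0.30, "m2": 0.25, "m3": 0.10, "m4": 0.05}
acc = lambda panel: 1.0 - 0.5 * 0.2 ** sum(info[m] for m in panel)
print(greedy_panel(list(info), acc, panel_size=2))   # -> ['m1', 'm2']
```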

  4. Reasonable fermionic quantum information theories require relativity

    International Nuclear Information System (INIS)

    Friis, Nicolai

    2016-01-01

    We show that any quantum information theory based on anticommuting operators must be supplemented by a superselection rule deeply rooted in relativity to establish a reasonable notion of entanglement. While quantum information may be encoded in the fermionic Fock space, the unrestricted theory has a peculiar feature: the marginals of bipartite pure states need not have identical entropies, which leads to an ambiguous definition of entanglement. We solve this problem, by proving that it is removed by relativity, i.e., by the parity superselection rule that arises from Lorentz invariance via the spin-statistics connection. Our results hence unveil a fundamental conceptual inseparability of quantum information and the causal structure of relativistic field theory. (paper)

  5. Quantum theory from first principles an informational approach

    CERN Document Server

    D'Ariano, Giacomo Mauro; Perinotti, Paolo

    2017-01-01

    Quantum theory is the soul of theoretical physics. It is not just a theory of specific physical systems, but rather a new framework with universal applicability. This book shows how we can reconstruct the theory from six information-theoretical principles, by rebuilding the quantum rules from the bottom up. Step by step, the reader will learn how to master the counterintuitive aspects of the quantum world, and how to efficiently reconstruct quantum information protocols from first principles. Using intuitive graphical notation to represent equations, and with shorter and more efficient derivations, the theory can be understood and assimilated with exceptional ease. Offering a radically new perspective on the field, the book contains an efficient course of quantum theory and quantum information for undergraduates. The book is aimed at researchers, professionals, and students in physics, computer science and philosophy, as well as the curious outsider seeking a deeper understanding of the theory.

  6. Bellman Ford algorithm - in Routing Information Protocol (RIP)

    Science.gov (United States)

    Krianto Sulaiman, Oris; Mahmud Siregar, Amir; Nasution, Khairuddin; Haramaini, Tasliyah

    2018-04-01

    A large-scale network needs routing that can handle a large number of users; one solution for coping with a large-scale network is to use a routing protocol. There are two types of routing: static and dynamic. A static route is entered manually by the network admin, while a dynamic route is formed automatically based on the existing network. Dynamic routing is efficient for extensive networks because routes are formed automatically. The Routing Information Protocol (RIP) is a dynamic routing protocol that uses the Bellman-Ford algorithm, which searches for the best path across the network by leveraging the cost of each link; thus, with the Bellman-Ford algorithm, RIP can optimize existing networks.
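
    The Bellman-Ford computation that RIP relies on is short enough to show in full: relax every link repeatedly until the distance estimates stabilize (at most |V|-1 passes). The 4-router topology below is illustrative.

```python
def bellman_ford(n, edges, src):
    """n: number of nodes; edges: list of (u, v, cost); src: source node.
    Returns shortest distances from src (None = unreachable)."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w   # relax the link u -> v
                changed = True
        if not changed:
            break                       # converged early
    return [d if d < INF else None for d in dist]

# 4-router example; RIP itself would use a hop count of 1 on every link.
links = [(0, 1, 1), (1, 0, 1), (1, 2, 1), (2, 1, 1),
         (2, 3, 1), (3, 2, 1), (0, 3, 5), (3, 0, 5)]
print(bellman_ford(4, links, src=0))   # -> [0, 1, 2, 3]
```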

  7. Towards a critical theory of information

    Directory of Open Access Journals (Sweden)

    Christian Fuchs

    2009-11-01

    The debate on redistribution and recognition between the critical theorists Nancy Fraser and Axel Honneth provides an opportunity to renew the discussion of the relationship between base and superstructure in critical social theory. Critical information theory needs to be aware of the economic, political, and cultural demands it must make in struggles for ending domination and oppression, and of the unifying role that the economy and class play in these demands and struggles. Objective and subjective information concepts are based on an underlying worldview of reification. Reification endangers human existence. Information as process and relation enables political and ethical alternatives that have radical implications for society.

  8. Critical Theory and Information Studies: A Marcusean Infusion

    Science.gov (United States)

    Pyati, Ajit K.

    2006-01-01

    In the field of library and information science, also known as information studies, critical theory is often not included in debates about the discipline's theoretical foundations. This paper argues that the critical theory of Herbert Marcuse, in particular, has a significant contribution to make to the field of information studies. Marcuse's…

  9. Processing Information in Quantum Decision Theory

    OpenAIRE

    Yukalov, V. I.; Sornette, D.

    2008-01-01

    A survey is given summarizing the state of the art of describing information processing in Quantum Decision Theory, which has been recently advanced as a novel variant of decision making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intended actions. The theory characterizes entangled decision making, non-commutativity of subsequent decisions, and intention int...

  10. Information theory and the ethylene genetic network.

    Science.gov (United States)

    González-García, José S; Díaz, José

    2011-10-01

    The original aim of Information Theory (IT) was to solve a purely technical problem: to increase the performance of communication systems, which are constantly affected by interferences that diminish the quality of the transmitted information. That is, the theory deals only with the problem of transmitting with maximal precision the symbols constituting a message. In Shannon's theory, messages are characterized only by their probabilities, regardless of their value or meaning. As for its present-day status, it is generally acknowledged that Information Theory has solid mathematical foundations and fruitful strong links with Physics in both theoretical and experimental areas. However, many applications of Information Theory to Biology are limited to using it as a technical tool to analyze biopolymers, such as DNA, RNA or protein sequences. The main point of discussion about the applicability of IT to explain the information flow in biological systems is that in a classic communication channel the symbols that constitute the coded message are transmitted one by one, in an independent form, through a noisy communication channel, and noise can alter each of the symbols, distorting the message; in contrast, in a genetic communication channel the coded messages are not transmitted in the form of symbols but by signaling cascades. Consequently, the information flow from the emitter to the effector is due to a series of coupled physicochemical processes that must ensure the accurate transmission of the message. In this review we discuss a novel proposal to overcome this difficulty, which consists of modeling gene expression with a stochastic approach that allows Shannon entropy (H) to be directly used to measure the amount of uncertainty that the genetic machinery has in relation to the correct decoding of a message transmitted into the nucleus by a signaling pathway. From the value of H we can define a function I that measures the amount of
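
    The central quantity in the stochastic approach described above is the Shannon entropy H = -Σ p log2(p) of the distribution over expression states, read as the decoder's uncertainty about the transmitted message. A minimal sketch with illustrative distributions:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) in bits for a discrete distribution (zero entries are dropped)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)))

# Uniform over 4 expression states: maximal uncertainty, H = 2 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))     # 2.0
# Sharply peaked: the decoder is nearly certain, H ~ 0.42 bits.
print(shannon_entropy([0.94, 0.02, 0.02, 0.02]))
```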

  11. A fingerprint classification algorithm based on combination of local and global information

    Science.gov (United States)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular-point detection commonly considers only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of a fingerprint. First, we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the group of singular points. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.
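
    The final weighted-combination step can be sketched in a few lines: each candidate class is scored by mixing the local singular-point quality with the global orientation-model reliability. The weight alpha and the numbers are illustrative assumptions, not the paper's values.

```python
def classify_fingerprint(candidates, alpha=0.6):
    """candidates: dict class -> (local_quality, global_reliability),
    both in [0, 1]; alpha weights the local term (hypothetical value)."""
    score = lambda q: alpha * q[0] + (1.0 - alpha) * q[1]
    return max(candidates, key=lambda c: score(candidates[c]))

cands = {"whorl": (0.82, 0.70), "left_loop": (0.75, 0.90), "arch": (0.40, 0.35)}
print(classify_fingerprint(cands))   # -> left_loop with alpha = 0.6
```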

  12. Realism and Antirealism in Informational Foundations of Quantum Theory

    Directory of Open Access Journals (Sweden)

    Tina Bilban

    2014-08-01

    Full Text Available Zeilinger-Brukner's informational foundations of quantum theory, a theory based on Zeilinger's foundational principle for quantum mechanics that an elementary system carries one bit of information, explains seemingly unintuitive quantum behavior with a simple theoretical framework. It is based on the notion that a distinction between reality and information cannot be made, therefore they are the same. As the critics of informational foundations of quantum theory show, this antirealistic move traps the theory in a tautology, where information only refers to itself, while the relationships outside the information, with the help of which the nature of information would be defined, are lost and the questions "Whose information? Information about what?" cannot be answered. The critics' solution is a return to realism, where the observer's effects on the information are neglected. We show that the radical antirealism of informational foundations of quantum theory is not necessary and that the return to realism is not the only way forward. A comprehensive approach that exceeds mere realism and antirealism is also possible: we can consider both sources of the constraints on the information, those coming from the observer and those coming from the observed system/nature/reality. The information is always the observer's information about the observed. Such a comprehensive philosophical approach can still support the theoretical framework of informational foundations of quantum theory: If we take that one bit is the smallest amount of information in the form of which the observed reality can be grasped by the observer, we can say that an elementary system (grasped and defined as such by the observer) correlates to one bit of information. Our approach thus explains all the features of quantum behavior explained by informational foundations of quantum theory: the wave function and its collapse, entanglement, complementarity and quantum randomness. However, it does

  13. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schönemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger's algorithm [B1], [B2]) and tangent cone orderings (Mora's algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring in 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
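    The record's own examples are SINGULAR commands, which are not reproduced here; as a hedged stand-in, the following Python/SymPy sketch performs the analogous Gröbner basis computation for a global well-ordering. The ideal is an arbitrary illustration, not one of the paper's examples.

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')

# Standard-basis computation for a global well-ordering (lexicographic),
# roughly what a SINGULAR std() call does for such orderings.
G = groebner([x**2 + y**2 + z**2 - 1, x*y - z], x, y, z, order='lex')
for g in G.exprs:
    print(g)
```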

  14. How to Produce a Transdisciplinary Information Concept for a Universal Theory of Information?

    DEFF Research Database (Denmark)

    Brier, Søren

    2017-01-01

    …concept of information as a difference that makes a difference and in Luhmann's triple autopoietic communication based system theory, where information is always a part of a message. Charles Sanders Peirce's pragmaticist semiotics differs from other paradigms in that it integrates logic and information in interpretative semiotics. I therefore suggest alternatively building information theories based on semiotics from the basic relations of embodied living systems' meaningful cognition and communication. I agree with Peircean biosemiotics that all transdisciplinary information concepts, in order to work across the natural, technical, social and humanistic sciences, must be defined as a part of real relational meaningful sign-processes manifesting as tokens. Thus Peirce's information theory is empirically based in a realistic worldview, which through modern biosemiotics includes all living systems.

  15. Optimal interconnection trees in the plane theory, algorithms and applications

    CERN Document Server

    Brazil, Marcus

    2015-01-01

    This book explores fundamental aspects of geometric network optimisation with applications to a variety of real world problems. It presents, for the first time in the literature, a cohesive mathematical framework within which the properties of such optimal interconnection networks can be understood across a wide range of metrics and cost functions. The book makes use of this mathematical theory to develop efficient algorithms for constructing such networks, with an emphasis on exact solutions.  Marcus Brazil and Martin Zachariasen focus principally on the geometric structure of optimal interconnection networks, also known as Steiner trees, in the plane. They show readers how an understanding of this structure can lead to practical exact algorithms for constructing such trees.  The book also details numerous breakthroughs in this area over the past 20 years, features clearly written proofs, and is supported by 135 colour and 15 black and white figures. It will help graduate students, working mathematicians, ...

  16. Implications of Information Theory for Computational Modeling of Schizophrenia.

    Science.gov (United States)

    Silverstein, Steven M; Wibral, Michael; Phillips, William A

    2017-10-01

    Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory, such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio, can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.

  17. Fringe pattern analysis for optical metrology theory, algorithms, and applications

    CERN Document Server

    Servin, Manuel; Padilla, Moises

    2014-01-01

    The main objective of this book is to present the basic theoretical principles and practical applications for the classical interferometric techniques and the most advanced methods in the field of modern fringe pattern analysis applied to optical metrology. A major novelty of this work is the presentation of a unified theoretical framework based on the Fourier description of phase shifting interferometry using the Frequency Transfer Function (FTF) along with the theory of stochastic processes for the straightforward analysis and synthesis of phase shifting algorithms with desired properties such

  18. TURING MACHINE AS UNIVERSAL ALGORITHM EXECUTOR AND ITS APPLICATION IN THE PROCESS OF HIGH-SCHOOL STUDENTS' ADVANCED STUDY OF ALGORITHMIZATION AND PROGRAMMING FUNDAMENTALS

    Directory of Open Access Journals (Sweden)

    Oleksandr B. Yashchyk

    2016-05-01

    Full Text Available The article discusses the importance of studying the notion of algorithm and its formal specification using Turing machines. It identifies the basic hypothesis of Turing's theory of algorithms, reviews the work of modern researchers devoted to this issue, and sets out the main principles of the Turing machine as an abstract mathematical model. It then analyzes how including the topic "Study and Application of the Turing Machine as a Universal Algorithm Executor" in an Informatics course helps form the components of students' information competence and information culture and develops their logical thinking.
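    For classroom use of the kind the article advocates, a Turing machine is easy to simulate. This is a generic sketch; the transition-table format and the bit-flipping example machine are our own illustration, not material from the article.

```python
def run_turing_machine(transitions, tape, state='q0', blank='_', max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions: {(state, symbol): (new_state, write_symbol, move)} with
    move in {'L', 'R'}; the machine halts when no transition applies.
    """
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break                              # halt: no applicable rule
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    cells = [tape[i] for i in sorted(tape)]
    return state, ''.join(cells).strip(blank)

# Toy machine for classroom use: flips every bit of a binary word.
flip = {
    ('q0', '0'): ('q0', '1', 'R'),
    ('q0', '1'): ('q0', '0', 'R'),
}
print(run_turing_machine(flip, '10110'))       # -> ('q0', '01001')
```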

  19. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    Science.gov (United States)

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data, without the necessity to assume a noise model. These methods have previously been combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise-induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph-based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path-finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest-path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph-based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of
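    A hedged sketch of the graph idea: if each edge carries a probability-like weight (the numbers below are invented stand-ins for the wild-bootstrap cone-of-uncertainty weights), a shortest-path search over the negative log weights yields the most probable connection, corresponding to the first of the two path-finding strategies mentioned above.

```python
import math
import networkx as nx

# Hypothetical 4-voxel graph; attribute 'p' stands in for the bootstrap-derived
# probability that the fiber orientation supports stepping along that edge.
G = nx.Graph()
edges = [('A', 'B', 0.9), ('B', 'C', 0.8), ('A', 'D', 0.5), ('D', 'C', 0.4)]
for u, v, p in edges:
    # Dijkstra minimizes sums, so store -log(p): a minimal sum of -log(p)
    # corresponds to a maximal product of edge probabilities.
    G.add_edge(u, v, p=p, w=-math.log(p))

cost = nx.dijkstra_path_length(G, 'A', 'C', weight='w')
path = nx.dijkstra_path(G, 'A', 'C', weight='w')
print(path, math.exp(-cost))   # ['A', 'B', 'C'] with probability 0.72
```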

  20. On Representation in Information Theory

    Directory of Open Access Journals (Sweden)

    Joseph E. Brenner

    2011-09-01

    Full Text Available Semiotics is widely applied in theories of information. Following the original triadic characterization of reality by Peirce, the linguistic processes involved in information—production, transmission, reception, and understanding—would all appear to be interpretable in terms of signs and their relations to their objects. Perhaps the most important of these relations is that of representation: one entity standing for or representing some other. For example, an index—one of the three major kinds of signs—is said to represent something by being directly related to its object. My position, however, is that the concept of symbolic representations having such roles in information, as intermediaries, is fraught with the same difficulties as in representational theories of mind. I have proposed an extension of logic to complex real phenomena, including mind and information (Logic in Reality; LIR), most recently at the 4th International Conference on the Foundations of Information Science (Beijing, August 2010). LIR provides explanations for the evolution of complex processes, including information, that do not require any entities other than the processes themselves. In this paper, I discuss the limitations of the standard relation of representation. I argue that more realistic pictures of informational systems can be provided by reference to information as an energetic process, following the categorial ontology of LIR. This approach enables naïve, anti-realist conceptions of anti-representationalism to be avoided, and enables an approach to both information and meaning in the same novel logical framework.

  1. Quantum algorithms and learning theory

    NARCIS (Netherlands)

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) Consider a search space of N elements. One of these elements is "marked" and our goal is to find it. We describe a quantum algorithm to solve this problem
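    The search problem sketched in contribution 1) is the one Grover's algorithm addresses; assuming that formulation, a classical statevector simulation of the quadratic-speedup search looks as follows. This is illustrative only, not code from the thesis.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classical statevector simulation of Grover's algorithm."""
    N = 2 ** n_qubits
    iterations = int(round(np.pi / 4 * np.sqrt(N)))   # optimal ~ (pi/4) sqrt(N)
    state = np.full(N, 1 / np.sqrt(N))                # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1.0                         # oracle: flip marked phase
        state = 2 * state.mean() - state              # inversion about the mean
    return int(np.argmax(state ** 2)), float(state[marked] ** 2)

index, prob = grover_search(n_qubits=6, marked=42)
print(index, round(prob, 4))      # finds 42 with probability close to 1
```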

  2. Framelets and wavelets algorithms, analysis, and applications

    CERN Document Server

    Han, Bin

    2017-01-01

    Marking a distinct departure from the perspectives of frame theory and discrete transforms, this book provides a comprehensive mathematical and algorithmic introduction to wavelet theory. As such, it can be used as either a textbook or reference guide. As a textbook for graduate mathematics students and beginning researchers, it offers detailed information on the basic theory of framelets and wavelets, complemented by self-contained elementary proofs, illustrative examples/figures, and supplementary exercises. Further, as an advanced reference guide for experienced researchers and practitioners in mathematics, physics, and engineering, the book addresses in detail a wide range of basic and advanced topics (such as multiwavelets/multiframelets in Sobolev spaces and directional framelets) in wavelet theory, together with systematic mathematical analysis, concrete algorithms, and recent developments in and applications of framelets and wavelets. Lastly, the book can also be used to teach on or study selected spe...

  3. Fixed Orientation Interconnection Problems: Theory, Algorithms and Applications

    DEFF Research Database (Denmark)

    Zachariasen, Martin

    Interconnection problems have natural applications in the design of integrated circuits (or chips). A modern chip consists of billions of transistors that are connected by metal wires on the surface of the chip. These metal wires are routed on a (fairly small) number of layers in such a way that electrically independent nets do not intersect each other. Traditional manufacturing technology limits the orientations of the wires to be either horizontal or vertical — and is known as Manhattan architecture. Over the last decade there has been a growing interest in general architectures, where more than two… a significant step forward, both concerning theory and algorithms, for the fixed orientation Steiner tree problem. In addition, the work maintains a close link to applications and generalizations motivated by chip design.

  4. Information theory based approaches to cellular signaling.

    Science.gov (United States)

    Waltermann, Christian; Klipp, Edda

    2011-10-01

    Cells interact with their environment and have to react adequately to internal and external changes, such as changes in nutrient composition, physical properties like temperature or osmolarity, and other stresses. More specifically, they must be able to evaluate whether the external change is significant or just in the range of noise. Based on multiple external parameters they have to compute an optimal response. Cellular signaling pathways are considered as the major means of information perception and transmission in cells. Here, we review different attempts to quantify information processing on the level of individual cells. We refer to Shannon entropy, mutual information, and informal measures of signaling pathway cross-talk and specificity. Information theory in systems biology has been successfully applied to the identification of optimal pathway structures, mutual information and entropy as system response in sensitivity analysis, and quantification of input and output information. While the study of information transmission within the framework of information theory in technical systems is an advanced field with high impact in engineering and telecommunication, its application to biological objects and processes is still restricted to specific fields such as neuroscience, structural and molecular biology. However, in systems biology, dealing with a holistic understanding of biochemical systems and cellular signaling, only recently a number of examples for the application of information theory have emerged. This article is part of a Special Issue entitled Systems Biology of Microorganisms. Copyright © 2011 Elsevier B.V. All rights reserved.
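    A minimal sketch of the mutual information measure mentioned in this review, applied to an invented stimulus/response joint distribution (rows are stimulus levels, columns are pathway outputs; the numbers are illustrative, not data from the paper):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)       # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Hypothetical pathway: X = stimulus level (low/high), Y = pathway output.
joint = [[0.40, 0.10],
         [0.05, 0.45]]
print(f"I(X;Y) = {mutual_information(joint):.3f} bits")
```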

  5. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    Science.gov (United States)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information impact the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is small. The complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  6. Assessment of the information content of patterns: an algorithm

    Science.gov (United States)

    Daemi, M. Farhang; Beurle, R. L.

    1991-12-01

    A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose which confirms the results of the preliminary investigation. Use of the algorithm facilitates much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for analysis of more realistic patterns. Owing to the volume of calculation involved in these algorithms, extensive computing facilities were necessary. The major part of the work was carried out using an ICL 3900 series mainframe computer as well as other powerful workstations such as a RISC architecture MIPS machine.

  7. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model for the information processing of NPP operators and ii) quantifying the model. To resolve the problems of previous information-theory-based approaches, i.e., single-channel approaches, we first develop an information processing model having multiple stages, which contains information flows. Then the uncertainty of the information is quantified using Conant's model, a kind of information theory

  8. Client-controlled case information: a general system theory perspective.

    Science.gov (United States)

    Fitch, Dale

    2004-07-01

    The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of controller and controlled system, as well as entropy and negentropy, are applied to the information flow and autopoietic behavior as they relate to the boundary-maintaining functions of today's organizations. The author's conclusions synthesize general system theory and human services values to lay the foundation for an information-sharing framework for human services in the 21st century.

  9. Realization of seven-qubit Deutsch-Jozsa algorithm on NMR quantum computer

    International Nuclear Information System (INIS)

    Wei Daxiu; Yang Xiaodong; Luo Jun; Sun Xianping; Zeng Xizhi; Liu Maili; Ding Shangwu

    2002-01-01

    In recent years, remarkable progress has been made in the experimental realization of quantum information, especially based on nuclear magnetic resonance (NMR). Among quantum algorithms, the Deutsch-Jozsa algorithm has been widely studied. It can be realized on an NMR quantum computer and can also be simplified using Cirac's scheme. The principle of the Deutsch-Jozsa quantum algorithm is first analyzed; the authors then implement the seven-qubit Deutsch-Jozsa algorithm on an NMR quantum computer
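    A classical statevector simulation conveys what the Deutsch-Jozsa algorithm decides. This sketch uses the phase-oracle form and invented test functions; the paper's seven-qubit NMR implementation is a physical experiment, not this simulation.

```python
import numpy as np

def hadamard_all(state):
    """Apply a Hadamard gate to every qubit of a statevector."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    n = int(np.log2(state.size))
    for k in range(n):
        s = state.reshape(2 ** k, 2, -1)       # isolate qubit k
        state = np.einsum('ab,ibj->iaj', H, s).reshape(-1)
    return state

def deutsch_jozsa(f, n):
    """One-query Deutsch-Jozsa with a phase oracle |x> -> (-1)^f(x) |x>."""
    N = 2 ** n
    state = hadamard_all(np.eye(N)[0])         # H^n on |0...0>
    state = np.array([(-1.0) ** f(x) for x in range(N)]) * state  # oracle
    state = hadamard_all(state)
    # Probability of measuring |0...0> is 1 iff f is constant, 0 iff balanced.
    return 'constant' if state[0] ** 2 > 0.5 else 'balanced'

print(deutsch_jozsa(lambda x: 1, 7))                      # a constant f
print(deutsch_jozsa(lambda x: bin(x).count('1') % 2, 7))  # parity is balanced
```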

  10. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  11. Figuring Control in the Algorithmic Era

    DEFF Research Database (Denmark)

    Markham, Annette; Bossen, Claus

    Drawing on actor network theory, we follow how algorithms, information, selfhood and identity-for-others tangle in interesting and unexpected ways. Starting with simple moments in everyday life that might be described as having implications for ‘control,’ we focus attention on the ways in which t...

  12. Elaborations of grounded theory in information research: arenas/social worlds theory, discourse and situational analysis

    OpenAIRE

    Vasconcelos, A.C.; Sen, B.A.; Rosa, A.; Ellis, D.

    2012-01-01

    This paper explores elaborations of Grounded Theory in relation to Arenas/Social Worlds Theory. The notions of arenas and social worlds were present in early applications of Grounded Theory but have not been as much used or recognised as the general Grounded Theory approach, particularly in the information studies field. The studies discussed here are therefore very unusual in information research. The empirical contexts of these studies are those of (1) the role of discourse in the organisat...

  13. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

  14. Quantum: information theory: technological challenge

    International Nuclear Information System (INIS)

    Calixto, M.

    2001-01-01

    The new Quantum Information Theory augurs powerful machines that obey the entangled logic of the subatomic world. Parallelism, entanglement, teleportation, no-cloning and quantum cryptography are typical peculiarities of this novel way of understanding computation. (Author) 24 refs

  15. Client-Controlled Case Information: A General System Theory Perspective

    Science.gov (United States)

    Fitch, Dale

    2004-01-01

    The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of…

  16. Optimization and Control of Bilinear Systems Theory, Algorithms, and Applications

    CERN Document Server

    Pardalos, Panos M

    2008-01-01

    Covers developments in bilinear systems theory. Focuses on the control of open physical processes functioning in a non-equilibrium mode. Emphasis is on three primary disciplines: modern differential geometry, control of dynamical systems, and optimization theory. Includes applications to the fields of quantum and molecular computing, control of physical processes, biophysics, superconducting magnetism, and physical information science

  17. Fast half-sibling population reconstruction: theory and algorithms.

    Science.gov (United States)

    Dexter, Daniel; Brown, Daniel G

    2013-07-12

    Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full sibling relationships, few exist for half-siblings. We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate.

  18. A molecular dynamics algorithm for simulation of field theories in the canonical ensemble

    International Nuclear Information System (INIS)

    Kogut, J.B.; Sinclair, D.K.

    1986-01-01

    We add a single scalar degree of freedom ("demon") to the microcanonical ensemble which converts its molecular dynamics into a simulation method for the canonical ensemble (euclidean path integral) of the underlying field theory. This generalization of the microcanonical molecular dynamics algorithm simulates the field theory at fixed coupling with a completely deterministic procedure. We discuss the finite size effects of the method, the equipartition theorem and ergodicity. The method is applied to the planar model in two dimensions and SU(3) lattice gauge theory with four species of light, dynamical quarks in four dimensions. The method is much less sensitive to its discrete time step than conventional Langevin equation simulations of the canonical ensemble. The method is a straightforward generalization of a procedure introduced by S. Nose for molecular physics. (orig.)
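    The "demon" here plays the role of the auxiliary variable in Nose's scheme. As a toy illustration only, with invented parameters and applied to a single 1D harmonic oscillator (a notoriously hard case for ergodicity, so averages are only roughly canonical), the extra variable steers the kinetic energy toward the target temperature:

```python
import numpy as np

# Toy Nose-Hoover-style "demon" thermostat for a 1D harmonic oscillator.
T, Q, dt, steps = 1.0, 1.0, 0.005, 200_000   # target temp, demon mass, step
x, v, xi = 1.0, 0.5, 0.0                     # position, velocity, demon
kin = []
for _ in range(steps):
    v += dt * (-x - xi * v)      # force plus demon "friction"
    x += dt * v
    xi += dt * (v * v - T) / Q   # demon absorbs/releases kinetic energy
    kin.append(0.5 * v * v)

print(f"<KE> = {np.mean(kin):.2f}  (canonical value T/2 = {T / 2:.2f})")
```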

  19. A Location-Based Business Information Recommendation Algorithm

    Directory of Open Access Journals (Sweden)

    Shudong Liu

    2015-01-01

    Full Text Available Recently, much research on location-based information recommendation (e.g., POIs, ads) has been done in both academia and industry. In this paper, we first construct a region-based location graph (RLG), in which region nodes connect with user nodes and business information nodes, and then we propose a location-based recommendation algorithm over the RLG, which combines users' short-range mobility formed by daily activity with their long-distance mobility formed by social network ties, and can thus recommend both local and long-distance business information to users. Moreover, it can combine user-based collaborative filtering with item-based collaborative filtering, and it can alleviate the cold-start problem from which traditional recommender systems often suffer. Empirical studies on large-scale real-world data from Yelp demonstrate that our method outperforms other methods in recommendation accuracy.

  20. An informational theory of privacy

    NARCIS (Netherlands)

    Schottmuller, C.; Jann, Ole

    2016-01-01

    We develop a theory that explains how and when privacy can increase welfare. Without privacy, some individuals misrepresent their preferences, because they will otherwise be statistically discriminated against. This "chilling effect" hurts them individually, and impairs information aggregation. The

  1. Econophysics: from Game Theory and Information Theory to Quantum Mechanics

    Science.gov (United States)

    Jimenez, Edward; Moya, Douglas

    2005-03-01

    Rationality is the universal invariant among human behavior, the physical laws of the universe, and ordered, complex biological systems. Econophysics is both the use of physical concepts in Finance and Economics, and the use of Information Economics in Physics. In particular, we show that it is possible to obtain the principles of Quantum Mechanics using Information and Game Theory.

  2. Genetic algorithm to solve the problems of lectures and practicums scheduling

    Science.gov (United States)

    Syahputra, M. F.; Apriani, R.; Sawaluddin; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.

    2018-02-01

    Generally, the scheduling process is done manually. However, this method has a low accuracy level, and one scheduled class may collide with another. When scheduling theory classes and practicums, numerous problems arise, such as collisions between lecturers' teaching schedules, collisions between class schedules, practicum lessons that collide with theory classes, and the limited number of classrooms available. In this research, a genetic algorithm is implemented to perform the scheduling of theory class and practicum timetables. The algorithm processes data containing lists of lecturers, courses, and classrooms, obtained from the information technology department at the University of Sumatera Utara. The result of the scheduling process using the genetic algorithm is an optimal timetable that conforms to the available time slots, classrooms, courses, and lecturer schedules.
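    A toy sketch of the genetic-algorithm approach described above; the course names, collision-counting fitness, and operators are our own minimal illustration, not the paper's implementation or dataset.

```python
import random

# Each course gets a (timeslot, room) gene; fitness = number of collisions
# (same lecturer or same room occupying the same slot).
COURSES = [('Algorithms', 'Dr.A'), ('Databases', 'Dr.A'),
           ('Networks', 'Dr.B'), ('Practicum', 'Dr.B')]
SLOTS, ROOMS = range(4), range(2)

def collisions(genome):
    seen_room, seen_lect, c = set(), set(), 0
    for (name, lect), (slot, room) in zip(COURSES, genome):
        c += ((slot, room) in seen_room) + ((slot, lect) in seen_lect)
        seen_room.add((slot, room))
        seen_lect.add((slot, lect))
    return c

def random_genome():
    return [(random.choice(SLOTS), random.choice(ROOMS)) for _ in COURSES]

def evolve(pop_size=50, generations=200, mutation=0.1):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=collisions)                 # fittest (fewest collisions) first
        if collisions(pop[0]) == 0:
            break
        parents, children = pop[:pop_size // 2], []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(COURSES))          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:                   # random reassignment
                child[random.randrange(len(child))] = (
                    random.choice(SLOTS), random.choice(ROOMS))
            children.append(child)
        pop = parents + children
    return min(pop, key=collisions)

best = evolve()
print(collisions(best), best)    # 0 collisions and the timetable found
```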

  3. Securing mobile ad hoc networks using danger theory-based artificial immune algorithm.

    Science.gov (United States)

    Abdelhaq, Maha; Alsaqour, Raed; Abdelhaq, Shawkat

    2015-01-01

    A mobile ad hoc network (MANET) is a set of mobile, decentralized, and self-organizing nodes that are used in special cases, such as in the military. MANET properties render the environment of this network vulnerable to different types of attacks, including black hole, wormhole and flooding-based attacks. Flooding-based attacks are among the most dangerous attacks, aiming to consume all network resources and thus paralyze the functionality of the whole network. Therefore, the objective of this paper is to investigate the capability of a danger theory-based artificial immune algorithm called the mobile dendritic cell algorithm (MDCA) to detect flooding-based attacks in MANETs. The MDCA applies the dendritic cell algorithm (DCA) to secure the MANET with additional improvements. The MDCA is tested and validated using the Qualnet v7.1 simulation tool. This work also introduces a new simulation module for a flooding attack called the resource consumption attack (RCA) using Qualnet v7.1. The results highlight the high efficiency of the MDCA in detecting RCAs in MANETs.

  4. Data structures theory and practice

    CERN Document Server

    Berztiss, A T

    1971-01-01

    Computer Science and Applied Mathematics: Data Structures: Theory and Practice focuses on the processes, methodologies, principles, and approaches involved in data structures, including algorithms, decision trees, Boolean functions, lattices, and matrices. The book first offers information on set theory, functions and relations, and graph theory. Discussions focus on linear formulas of digraphs, isomorphism of digraphs, basic definitions in the theory of digraphs, Boolean functions and forms, lattices, indexed sets, algebra of sets, and ordered pairs and related concepts. The text then examines

  5. Information processing theory in the early design stages

    DEFF Research Database (Denmark)

    Cash, Philip; Kreye, Melanie

    2014-01-01

    …suggestions for improvements and support. One theory that may be particularly applicable to the early design stages is Information Processing Theory (IPT), as it is linked to the design process with regard to the key concepts considered. IPT states that designers search for information if they perceive uncertainty with regard to the knowledge necessary to solve a design challenge. They then process this information and compare if the new knowledge they have gained covers the previous knowledge gap. In engineering design, uncertainty plays a key role, particularly in the early design stages, which has been… The new knowledge is shared between the design team to reduce ambiguity with regards to its meaning and to build a shared understanding – reducing perceived uncertainty. Thus, we propose that Information-Processing Theory is suitable to describe designer activity in the early design stages…

  6. Comment on Gallistel: behavior theory and information theory: some parallels.

    Science.gov (United States)

    Nevin, John A

    2012-05-01

    In this article, Gallistel proposes information theory as an approach to some enduring problems in the study of operant and classical conditioning. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. REALIZATION OF VISUAL TECHNIQUE DIDACTIC APPROACH IN ALGORITHMIC TRAINING OF STUDENTS THROUGH INFORMATION AND COMMUNICATION TECHNOLOGIES OF EDUCATIONAL ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Sergii Voloshynov

    2016-12-01

    Full Text Available The article examines the development of visual learning theory, describes the functions and peculiarities of realizing visual techniques in the modern studying process, defines the concept of a "visual learning environment", and discusses the didactic role of interactive and multimedia visualization. The author examines the problem of determining the potential of cognitive visualization in the algorithmic training of students through the information and communication technologies of the educational environment. The article specifies the functions of visual aids and the features of implementing this principle in the modern educational process, and argues that interactive multimedia visualization stimulates students' cognitive activity and activates the perceptive mechanisms of learning. It also analyzes the significance of cognitive visualization when training future marine personnel in an informational and communicative educational environment.

  8. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for a photovoltaic (PV) power generation system. Integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is made able to automatically tune the tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach is proposed on the basis of a particle swarm optimization (PSO) algorithm to reduce the complexity in the determination of weighting values. At the end of this work, a simulated improvement in the tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller
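    The conventional perturb-and-observe loop that this paper builds on can be sketched as follows; the paper's adaptive, extension-theory/PSO-tuned step size is replaced here by a fixed step, and the P-V curve is invented for the example.

```python
def pv_power(v):
    """Toy single-peak P-V curve with its MPP near 17 V (invented numbers)."""
    return max(0.0, 80.0 - 0.5 * (v - 17.0) ** 2)

def po_mppt_step(v, p, v_prev, p_prev, step=0.5):
    """One perturb-and-observe update: keep direction if power rose, else reverse."""
    if p >= p_prev:
        return v + step if v >= v_prev else v - step
    return v - step if v >= v_prev else v + step

v_prev, v = 10.0, 10.5
p_prev = pv_power(v_prev)
for _ in range(60):
    p = pv_power(v)
    v, v_prev, p_prev = po_mppt_step(v, p, v_prev, p_prev), v, p

# The operating point climbs the curve and then oscillates around the MPP.
print(f"settled near v = {v:.1f} V, p = {pv_power(v):.1f} W")
```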

  9. A Moving Object Detection Algorithm Based on Color Information

    International Nuclear Information System (INIS)

    Fang, X H; Xiong, W; Hu, B J; Wang, L T

    2006-01-01

    This paper presents a new moving object detection algorithm aimed at fast detection and localization of moving objects. It uses a pixel and its neighbors as an image vector to represent that pixel, models each chrominance component as a mixture of Gaussians, and sets up a separate Gaussian mixture model for each YUV chrominance component. To make full use of spatial information, color segmentation and the background model are combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background.
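    A hedged sketch of per-pixel mixture-of-Gaussians background modeling using OpenCV's MOG2, a standard implementation in the same family; the paper's vector pixels and separate per-YUV-component models are only approximated here, and the input file name is hypothetical.

```python
import cv2

cap = cv2.VideoCapture('traffic.mp4')   # hypothetical input clip
mog = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)   # model in YUV space
    mask = mog.apply(yuv)                          # 255 foreground, 127 shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    cv2.imshow('moving objects', mask)
    if cv2.waitKey(30) == 27:                      # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```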

  10. Writing, Proofreading and Editing in Information Theory

    Directory of Open Access Journals (Sweden)

    J. Ricardo Arias-Gonzalez

    2018-05-01

    Full Text Available Information is a physical entity amenable to be described by an abstract theory. The concepts associated with the creation and post-processing of the information have not, however, been mathematically established, despite being broadly used in many fields of knowledge. Here, inspired by how information is managed in biomolecular systems, we introduce writing, entailing any bit string generation, and revision, as comprising proofreading and editing, in information chains. Our formalism expands the thermodynamic analysis of stochastic chains made up of material subunits to abstract strings of symbols. We introduce a non-Markovian treatment of operational rules over the symbols of the chain that parallels the physical interactions responsible for memory effects in material chains. Our theory underlies any communication system, ranging from human languages and computer science to gene evolution.

  11. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms

    International Nuclear Information System (INIS)

    Hohage, Thorsten; Werner, Frank

    2016-01-01

    Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce Positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years. (topical review)
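    One classic member of the maximum-likelihood family reviewed here is the Richardson-Lucy/EM iteration, shown below without a penalty term; the blur operator and data are invented for a 1-D illustration and are not from the review.

```python
import numpy as np

def richardson_lucy(y, A, iters=500, eps=1e-12):
    """EM iteration maximizing the Poisson likelihood of y ~ Poisson(A @ x)."""
    x = np.full(A.shape[1], y.sum() / A.shape[1])    # flat positive start
    norm = A.T @ np.ones(A.shape[0])                 # column sums of A
    for _ in range(iters):
        x = x * (A.T @ (y / (A @ x + eps))) / (norm + eps)
    return x

# Invented 1-D deblurring problem: A is a width-5 moving-average blur.
n = 50
A = np.zeros((n, n))
for i in range(n):
    A[i, max(0, i - 2):i + 3] = 1 / 5
x_true = np.zeros(n)
x_true[[10, 30]] = 100.0                             # two point sources
y = np.random.default_rng(0).poisson(A @ x_true)     # Poisson counts
x_hat = richardson_lucy(y, A)
print(np.round(x_hat[[10, 30]]))    # mass re-concentrates near the sources
```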

  12. Mathematical Foundations of Quantum Information and Computation and Its Applications to Nano- and Bio-systems

    CERN Document Server

    Ohya, Masanori

    2011-01-01

    This monograph provides a mathematical foundation  to  the theory of quantum information and computation, with applications to various open systems including nano and bio systems. It includes introductory material on algorithm, functional analysis, probability theory, information theory, quantum mechanics and quantum field theory. Apart from standard material on quantum information like quantum algorithm and teleportation, the authors discuss findings on the theory of entropy in C*-dynamical systems, space-time dependence of quantum entangled states, entangling operators, adaptive dynamics, relativistic quantum information, and a new paradigm for quantum computation beyond the usual quantum Turing machine. Also, some important applications of information theory to genetics and life sciences, as well as recent experimental and theoretical discoveries in quantum photosynthesis are described.

  13. An information theory framework for dynamic functional domain connectivity.

    Science.gov (United States)

    Vergara, Victor M; Miller, Robyn; Calhoun, Vince

    2017-06-01

    Dynamic functional network connectivity (dFNC) analyzes the time evolution of coherent activity in the brain. In this technique dynamic changes are considered for the whole brain. This paper proposes an information theory framework to measure information flowing among subsets of functional networks called functional domains. Our method aims at estimating the bits of information contained and shared among domains. The succession of dynamic functional states is estimated at the domain level. Information quantity is based on the probabilities of observing each dynamic state. Mutual information measurement is then obtained from probabilities across domains. Thus, we named this value the cross domain mutual information (CDMI). Strong CDMIs were observed in relation to the subcortical domain. Domains related to sensory input, motor control and the cerebellum form another CDMI cluster. Information flow among other domains was seldom found. Other methods of dynamic connectivity focus on whole brain dFNC matrices. In the current framework, information theory is applied to states estimated from pairs of multi-network functional domains. In this context, we apply information theory to measure information flow across functional domains. Identified CDMI clusters point to known information pathways in the basal ganglia and also among areas of sensory input, patterns found in static functional connectivity. In contrast, CDMI across brain areas of higher-level cognitive processing follows a different pattern that indicates scarce information sharing. These findings show that employing information theory to formally measure information flow through brain domains reveals additional features of functional connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Feature extraction algorithm for space targets based on fractal theory

    Science.gov (United States)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential for extending the life of satellites and reducing launch and operating costs, satellite servicing, including conducting repairs, upgrading and refueling spacecraft on-orbit, will become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking reliability requirements for image tracking in space surveillance systems. Machine vision has been applied to the estimation of the relative pose of spacecraft, and feature extraction is the basis of relative pose estimation. This paper presents a fractal-geometry-based edge extraction algorithm that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method maps the gray-level image to a fractal dimension distribution using the Differential Box-Counting (DBC) approach of fractal theory to restrain the noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the problem of relative pose estimation for spacecraft.
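    A minimal sketch of the Differential Box-Counting estimate mentioned above, in one common variant; the box sizes and the test image are arbitrary illustrations, not the paper's configuration.

```python
import numpy as np

def fractal_dimension_dbc(img, sizes=(2, 4, 8, 16, 32)):
    """Differential box-counting estimate of a gray-level image's fractal dimension."""
    img = np.asarray(img, dtype=float)
    M = min(img.shape)
    img = img[:M, :M]
    G = img.max() + 1.0                    # gray-level range
    log_inv_r, log_N = [], []
    for s in sizes:
        h = s * G / M                      # box height in gray levels
        N = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                # boxes needed to cover the intensity surface over this block
                N += int(np.ceil((block.max() + 1.0) / h) - np.ceil(block.min() / h))
        log_inv_r.append(np.log(M / s))
        log_N.append(np.log(N))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)   # D = slope of log N vs log(1/r)
    return slope

rng = np.random.default_rng(1)
noise = rng.integers(0, 256, size=(128, 128))    # a maximally rough surface
print(round(fractal_dimension_dbc(noise), 2))    # close to 3, as expected for noise
```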

  15. Information theory, spectral geometry, and quantum gravity.

    Science.gov (United States)

    Kempf, Achim; Martin, Robert

    2008-01-18

    We show that there exists a deep link between the two disciplines of information theory and spectral geometry. This allows us to obtain new results on a well-known quantum gravity motivated natural ultraviolet cutoff which describes an upper bound on the spatial density of information. Concretely, we show that, together with an infrared cutoff, this natural ultraviolet cutoff beautifully reduces the path integral of quantum field theory on curved space to a finite number of ordinary integrations. We then show, in particular, that the subsequent removal of the infrared cutoff is safe.

  16. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.
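    As a small taste of the FIR design topics such a book covers, a windowed-sinc low-pass filter via SciPy; the parameter choices below are illustrative, not taken from the text.

```python
import numpy as np
from scipy import signal

fs = 8000                                   # sample rate, Hz
taps = signal.firwin(numtaps=101, cutoff=1000, fs=fs)   # low-pass at 1 kHz

# Filter a 500 Hz + 2 kHz mixture: the 2 kHz component is attenuated.
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t)
y = signal.lfilter(taps, 1.0, x)

w, h = signal.freqz(taps, fs=fs)            # frequency response in Hz
print(f"gain at  500 Hz: {abs(h[np.argmin(abs(w - 500))]):.3f}")
print(f"gain at 2000 Hz: {abs(h[np.argmin(abs(w - 2000))]):.3e}")
```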

  17. Optimal design of hydrometric monitoring networks with dynamic components based on Information Theory

    Science.gov (United States)

    Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri

    2016-04-01

    The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, Citizen Observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, to improve hydraulic and hydrological models. At this point, the following question arises: how can citizens, who are part of a citizen observatory, be optimally guided so that the data they collect and send is useful to improve modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4]. In particular, it uses the concept of Joint Entropy as a measure of the amount of information contained in a set of random variables, which, in our case, correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of getting simultaneously the minimum number of informative and non-redundant sensors needed for a given time, so that the best configuration of monitoring sites is found at every particular moment in time. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, UK and Italy and proves to have a great potential to complement the existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resour. Res., 46(3), W03528 [2] Alfonso, L., A

  18. IMMAN: free software for information theory-based chemometric analysis.

    Science.gov (United States)

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty are incorporated to the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA

  19. Financial markets theory equilibrium, efficiency and information

    CERN Document Server

    Barucci, Emilio

    2017-01-01

    This work, now in a thoroughly revised second edition, presents the economic foundations of financial markets theory from a mathematically rigorous standpoint and offers a self-contained critical discussion based on empirical results. It is the only textbook on the subject to include more than two hundred exercises, with detailed solutions to selected exercises. Financial Markets Theory covers classical asset pricing theory in great detail, including utility theory, equilibrium theory, portfolio selection, mean-variance portfolio theory, CAPM, CCAPM, APT, and the Modigliani-Miller theorem. Starting from an analysis of the empirical evidence on the theory, the authors provide a discussion of the relevant literature, pointing out the main advances in classical asset pricing theory and the new approaches designed to address asset pricing puzzles and open problems (e.g., behavioral finance). Later chapters in the book contain more advanced material, including on the role of information in financial markets, non-c...

  20. Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics

    Science.gov (United States)

    Wolpert, David H.

    2005-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information-theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean-field theory in statistical physics; it also shows that those topics are fundamentally one and the same.

  1. Studying the varied shapes of gold clusters by an elegant optimization algorithm that hybridizes the density functional tight-binding theory and the density functional theory

    Science.gov (United States)

    Yen, Tsung-Wen; Lim, Thong-Leng; Yoon, Tiem-Leong; Lai, S. K.

    2017-11-01

    We combined a new parametrized density functional tight-binding (DFTB) theory (Fihey et al. 2015) with an unbiased modified basin hopping (MBH) optimization algorithm (Yen and Lai 2015) and applied it to calculate the lowest energy structures of Au clusters. From the calculated topologies and their conformational changes, we find that this DFTB/MBH method is a necessary procedure for a systematic study of the structural development of Au clusters but is somewhat insufficient for a quantitative study. As a result, we propose an extended hybridized algorithm. This improved algorithm proceeds in two steps. In the first step, the DFTB theory is employed to calculate the total energy of the cluster; this step (running the DFTB/MBH optimization for a given number of Monte Carlo steps) is meant to efficiently bring the Au cluster near to the region of the lowest energy minimum, since the cluster as a whole has explicitly considered the interactions of valence electrons with ions, albeit semi-quantitatively. Then, in the second step, the energy-minimum search continues with the energy function calculated by the DFTB theory in the first step skillfully replaced by one calculated in the full density functional theory (DFT). In these subsequent calculations, we couple the DFT energy with the MBH strategy and proceed with the DFT/MBH optimization until the lowest energy value is found. We checked that this extended hybridized algorithm successfully predicts the twisted pyramidal structure for the Au40 cluster and also correctly confirms the linear shape of C8, which our previous DFTB/MBH method failed to do. Perhaps more remarkable is the topological growth of Aun: it changes from planar (n = 3-11) → an oblate-like cage (n = 12-15) → a hollow-shape cage (n = 16-18) and finally a pyramidal-like cage (n = 19, 20). These varied forms of the cluster's shapes are consistent with those reported in the literature.
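    The basin-hopping strategy itself is easy to demonstrate with SciPy's implementation; here a classical Lennard-Jones cluster stands in for the expensive DFTB/DFT energy surfaces used in the paper, so this is illustrative only.

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat):
    """Total Lennard-Jones energy of a cluster given flattened coordinates."""
    xyz = flat.reshape(-1, 3)
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    r = d[np.triu_indices(len(xyz), k=1)]     # unique pair distances
    return float(np.sum(4.0 * (r ** -12 - r ** -6)))

rng = np.random.default_rng(0)
x0 = rng.uniform(-2.0, 2.0, size=13 * 3)      # random 13-atom start
result = basinhopping(lj_energy, x0, niter=100, stepsize=0.5, seed=0)
# With enough hops this approaches the known LJ13 minimum, about -44.327.
print(round(result.fun, 3))
```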

  2. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    International Nuclear Information System (INIS)

    Sundararaman, Ravishankar; Goddard, William A. III; Arias, Tomas A.

    2017-01-01

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  3. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    Science.gov (United States)

    Sundararaman, Ravishankar; Goddard, William A.; Arias, Tomas A.

    2017-03-01

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
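
    The fixed-potential idea can be illustrated with a toy model: instead of fixing the electron number N, one minimizes the grand free energy E(N) - mu*N so that dE/dN matches the chemical potential mu set by the electrode. The quadratic E(N) below is an assumed stand-in for a self-consistent DFT energy.

        import numpy as np
        from scipy.optimize import minimize_scalar

        N0, E0, U = 10.0, -50.0, 0.8              # assumed toy parameters
        def energy(N):
            return E0 + U * (N - N0)**2           # stand-in for a self-consistent energy E(N)

        mu = -1.2                                 # the electrode potential fixes the chemical potential

        # fixed-potential calculation: minimize the grand free energy E(N) - mu*N
        res = minimize_scalar(lambda N: energy(N) - mu * N, bounds=(0, 20), method='bounded')
        print("optimal electron number:", res.x)  # stationarity condition dE/dN = mu
        print("analytic check:", N0 + mu / (2 * U))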

  4. Reference group theory with implications for information studies: a theoretical essay

    Directory of Open Access Journals (Sweden)

    E. Murell Dawson

    2001-01-01

    This article explores the role and implications of reference group theory in relation to the field of library and information science. Reference group theory is based upon the principle that people take the standards of significant others as a basis for making self-appraisals, comparisons, and choices regarding need and use of information. Research that applies concepts of reference group theory to various sectors of library and information studies can provide data useful in enhancing areas such as information-seeking research, special populations, and uses of information. A promising implication is that knowledge gained from such research can help information professionals better understand the role theory plays in examining the ways in which people manage their information and social worlds.

  5. IEEE International Symposium on Information Theory (ISIT): Abstracts of Papers, Held in Ann Arbor, Michigan on 6-9 October 1986.

    Science.gov (United States)

    1986-10-01

    code algorithms of Adler-Coopersmith-Hassner and Karabed-Marcus, which exploit techniques of symbolic dynamics to derive systematic code construction... procedures for finite and infinite memory channels. (The paper of Adler-Coopersmith-Hassner received the 1985 Information Theory Group Paper Award...) Research Center K69/802, 650 Harry Road, San Jose, CA 95120, USA. We continue here the work of Adler, Coopersmith, and Hassner (see IEEE-IT 29, 5-22...

  6. Information Theory and Plasma Turbulence

    International Nuclear Information System (INIS)

    Dendy, R. O.

    2009-01-01

    Information theory, applied directly to measured signals, yields new perspectives on, and quantitative knowledge of, the physics of strongly nonlinear and turbulent phenomena in plasmas. It represents a new and productive element of the topical research programmes that use modern techniques to characterise strongly nonlinear signals from plasmas, and that address global plasma behaviour from a complex systems perspective. We here review some pioneering studies of mutual information in solar wind and magnetospheric plasmas, using techniques tested on standard complex systems.
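
    As a concrete illustration of the kind of quantity involved, the sketch below estimates the mutual information between two measured signals with a simple histogram (plug-in) estimator; the coupled noisy sinusoids are stand-ins for solar wind or magnetospheric time series.

        import numpy as np

        def mutual_information(x, y, bins=16):
            """Histogram (plug-in) estimate of I(X;Y) in bits."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            mask = pxy > 0
            return np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask]))

        t = np.linspace(0, 100, 5000)             # two coupled "signals": y lags x
        x = np.sin(t) + 0.3 * np.random.randn(t.size)
        y = np.sin(t - 0.5) + 0.3 * np.random.randn(t.size)
        print(mutual_information(x, y))           # clearly positive: shared information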

  7. Quantum theory informational foundations and foils

    CERN Document Server

    Spekkens, Robert

    2016-01-01

    This book provides the first unified overview of the burgeoning research area at the interface between Quantum Foundations and Quantum Information. Topics include: operational alternatives to quantum theory, information-theoretic reconstructions of the quantum formalism, mathematical frameworks for operational theories, and device-independent features of the set of quantum correlations. Powered by the injection of fresh ideas from the field of Quantum Information and Computation, the foundations of Quantum Mechanics are in the midst of a renaissance. The last two decades have seen an explosion of new results and research directions, attracting broad interest in the scientific community. The variety and number of different approaches, however, make it challenging for a newcomer to obtain a big picture of the field and of its high-level goals. Here, fourteen original contributions from leading experts in the field cover some of the most promising research directions that have emerged in the new wave of quant...

  8. Wireless sensor placement for structural monitoring using information-fusing firefly algorithm

    Science.gov (United States)

    Zhou, Guang-Dong; Yi, Ting-Hua; Xie, Mei-Xi; Li, Hong-Nan

    2017-10-01

    Wireless sensor networks (WSNs) are a promising technology in structural health monitoring (SHM) applications because of their low cost and high efficiency. The limited number of wireless sensors and the restricted power resources in WSNs highlight the significance of optimal wireless sensor placement (OWSP) when designing SHM systems, so that the most useful information can be captured and the longest network lifetime achieved. This paper presents a holistic approach, including an optimization criterion and a solution algorithm, for optimally deploying self-organizing multi-hop WSNs on large-scale structures. The combination of information effectiveness, represented by modal independence, and network performance, specified by network connectivity and network lifetime, is first formulated to evaluate the performance of wireless sensor configurations. Then, an information-fusing firefly algorithm (IFFA) is developed to solve the OWSP problem. Step sizes drawn from a Lévy distribution are adopted to drive fireflies toward brighter individuals. Following the movement with Lévy flights, information about the contributions of wireless sensors to the objective function, as carried by the fireflies, is fused and applied to move inferior wireless sensors to better locations. The reliability of the proposed approach is verified via a numerical example on a long-span suspension bridge. The results demonstrate that the evaluation criterion provides a good performance metric of wireless sensor configurations, and that the IFFA outperforms the simple discrete firefly algorithm.
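
    The Lévy-flight ingredient can be sketched with Mantegna's algorithm for drawing heavy-tailed steps; the attraction factor and step scale below are illustrative choices, not the IFFA parameters of the paper.

        import numpy as np
        from math import gamma, pi, sin

        def levy_step(dim, beta=1.5):
            """One Lévy-stable step (Mantegna's algorithm), heavy-tailed for beta < 2."""
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = np.random.randn(dim) * sigma
            v = np.random.randn(dim)
            return u / np.abs(v) ** (1 / beta)

        def move(firefly, brighter, alpha=0.01):
            """Firefly-style move toward a brighter individual, perturbed by a Lévy flight."""
            return firefly + 0.9 * (brighter - firefly) + alpha * levy_step(firefly.size)

        print(move(np.zeros(3), np.ones(3)))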

  9. Information theory applied to econophysics: stock market behaviors

    Science.gov (United States)

    Vogel, Eugenio E.; Saravia, Gonzalo

    2014-08-01

    The use of data compressor techniques has made it possible to recognize magnetic transitions and their associated critical temperatures [E.E. Vogel, G. Saravia, V. Cortez, Physica A 391, 1591 (2012)]. In the present paper we introduce some new concepts associated with data recognition and extend the use of these techniques to econophysics, exploring the variations of stock market indicators and showing that information theory can help to recognize different regimes. Modifications and further developments of the previously introduced data compressor wlzip are presented, yielding two measurements. Additionally, we introduce an algorithm that allows tuning the number of significant digits on which the data compression acts, complemented with an appropriate method to round off the truncation. The application is done to IPSA, the main indicator of the Chilean Stock Market, during the year 2010, chosen both for the availability of quality data and for a rare event: the earthquake of 27 February of that year, as of now the sixth strongest earthquake ever recorded by instruments (8.8 on the Richter scale) according to the United States Geological Survey. Over the year 2010 different regimes are recognized. Calm days show larger compression than agitated days, allowing for classification and recognition. The focus then turns to selected days, showing that it is possible to recognize different regimes with the data of the last hour (60 entries), allowing actions to be determined in a safer way. The "day of the week" effect is weakly present, but the "hour of the day" effect is clearly present; its causes and implications are discussed. This effect also establishes the influence of the Asian, European and American stock markets over the smaller Chilean Stock Market. Finally, dynamical studies are conducted, intended to find a system that can help detect sudden market variations in real time; it is found that information theory can be really helpful in this respect.
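
    The windowed-compressibility idea can be sketched with a generic compressor; wlzip itself is specialized, so zlib serves as a stand-in here, with the series rounded to a tunable number of digits as described above.

        import zlib
        import numpy as np

        def compressibility(series, digits=3):
            """Compression ratio of a numeric series rounded to a given precision."""
            text = ",".join(f"{v:.{digits}f}" for v in series).encode()
            return len(zlib.compress(text)) / len(text)   # low ratio = calm, regular regime

        calm = 100 + 0.1 * np.sin(np.arange(300) / 5)     # toy "index": a calm stretch
        agitated = 100 + np.cumsum(np.random.randn(300))  # followed by an agitated stretch
        for name, window in [("calm", calm), ("agitated", agitated)]:
            print(name, round(compressibility(window), 3))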

  10. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information.

    Science.gov (United States)

    Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

    2010-06-18

    Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used, not only for pathway drawing but also for circuit placement, www visualization and so on, because of the harmonized appearance of its results. For pathway drawing, location information is essential for comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as the nuclear inner membrane, the nuclear outer membrane, and the plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, a mapping of nodes to grid points that minimizes a cost function is searched for. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways: an endothelial cell model, a Fas-induced apoptosis model, and a C. elegans cell fate simulation model. Owing to the positional constraints, all the results of our algorithm satisfy the location information, and hence more comprehensible layouts are obtained compared to the spring embedder algorithm. A comparison of the number of crossings shows that the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to
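
    For orientation, a bare spring embedder iteration (Fruchterman-Reingold-style forces, with no grid mapping or compartment constraints) looks roughly like the sketch below; the constants are illustrative.

        import numpy as np

        def spring_layout_step(pos, edges, k=1.0, step=0.05):
            """One spring embedder iteration: all-pairs repulsion plus edge attraction."""
            delta = pos[:, None, :] - pos[None, :, :]
            dist = np.linalg.norm(delta, axis=-1) + 1e-9
            disp = ((k * k / dist**2)[:, :, None] * delta).sum(axis=1)  # repulsion k^2/d
            for i, j in edges:                                          # attraction d^2/k
                d = pos[i] - pos[j]
                f = np.linalg.norm(d) / k * d
                disp[i] -= f
                disp[j] += f
            return pos + step * disp

        pos = np.random.rand(5, 2)
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
        for _ in range(200):
            pos = spring_layout_step(pos, edges)
        print(pos)   # positional (location) constraints would be imposed on top of this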

  11. An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems

    KAUST Repository

    Zenil, Hector; Kiani, Narsis A.; Marabita, Francesco; Deng, Yue; Elias, Szabolcs; Schmidt, Angelika; Ball, Gordon; Tegner, Jesper

    2017-01-01

    By applying sequences of controlled interventions to systems and networks, we estimate how changes in their algorithmic information content are reflected in positive/negative shifts towards and away from randomness. The strong connection between approximations

  12. Comparative analysis of different variants of the Uzawa algorithm in problems of the theory of elasticity for incompressible materials

    Directory of Open Access Journals (Sweden)

    Nikita E. Styopin

    2016-09-01

    Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
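
    For reference, the classic Uzawa iteration for such saddle-point systems can be sketched as follows; the small dense matrices are illustrative stand-ins for the finite-element matrices, with p playing the role of the pressure multiplier enforcing incompressibility.

        import numpy as np

        def uzawa(A, B, f, g, tau=0.5, iters=200):
            """Uzawa iteration for the saddle-point system [[A, B^T], [B, 0]] (u, p) = (f, g)."""
            u = np.zeros(A.shape[0])
            p = np.zeros(B.shape[0])
            for _ in range(iters):
                u = np.linalg.solve(A, f - B.T @ p)   # inner solve (an FEM solver in practice)
                p = p + tau * (B @ u - g)             # gradient ascent on the multiplier
            return u, p

        rng = np.random.default_rng(0)
        A = np.eye(4) * 4 + rng.random((4, 4)) * 0.1; A = (A + A.T) / 2   # SPD block
        B = rng.random((2, 4))
        f, g = rng.random(4), rng.random(2)
        u, p = uzawa(A, B, f, g)
        print(np.linalg.norm(B @ u - g))              # constraint residual shrinks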

  13. Experiments in Discourse Analysis Impact on Information Classification and Retrieval Algorithms.

    Science.gov (United States)

    Morato, Jorge; Llorens, J.; Genova, G.; Moreiro, J. A.

    2003-01-01

    Discusses the inclusion of contextual information in indexing and retrieval systems to improve results, and the ability to carry out text analysis by means of linguistic knowledge. Presents research that investigated whether discourse variables have an impact on information retrieval and classification algorithms. (Author/LRW)

  14. Applications of quantum information theory to quantum gravity

    International Nuclear Information System (INIS)

    Smolin, L.

    2005-01-01

    Full text: I describe work by and with Fotini Markopoulou and Olaf Dreyer on the application of quantum information theory to quantum gravity. A particular application to black hole physics is described, which treats the black hole horizon as an open system in interaction with an environment, namely the degrees of freedom in the bulk spacetime. This allows us to elucidate which quantum states of a general horizon contribute to the entropy of a Schwarzschild black hole. This case serves as an example of how methods from quantum information theory may help to elucidate how the classical limit emerges from a background-independent quantum theory of gravity. (author)

  15. Entropy, Information Theory, Information Geometry and Bayesian Inference in Data, Signal and Image Processing and Inverse Problems

    Directory of Open Access Journals (Sweden)

    Ali Mohammad-Djafari

    2015-06-01

    The main content of this review article is first to review the main inference tools using Bayes rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper is focused on the ways these tools have been used in data, signal and image processing and in the inverse problems which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent components analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, Bayesian inference for general inverse problems. Some original material concerning approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) methods is also presented. VBA is used to propose an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP) estimation, as well as the different expectation-maximization (EM) algorithms, as particular cases.

  16. Activity System Theory Approach to Healthcare Information System

    OpenAIRE

    Bai, Guohua

    2004-01-01

    A healthcare information system is a very complex system and has to be approached from systemic perspectives. This paper presents an Activity System Theory (AST) approach that integrates systems thinking and social psychology. In the first part of the paper, the activity system theory is presented, and in particular a recursive model of human activity systems is introduced. A project, 'Integrated Mobile Information System for Diabetic Healthcare (IMIS)', is then used to demonstrate a practical application of th...

  17. Efficient multitasking of the SU(3) lattice gauge theory algorithm on the CRAY X-MP

    International Nuclear Information System (INIS)

    Kuba, D.W.; Moriarty, K.J.M.

    1985-01-01

    The Monte Carlo lattice gauge theory algorithm with the Metropolis et al. updating procedure is vectorized and multitasked on the four-processor CRAY X-MP and results in a code with a link-update-time, in 64-bit arithmetic and 10 hits-per-link, of 11.0 μs on a 16^4 lattice, the fastest link-update-time so far achieved. The program calculates the Wilson loops of size up to L/2 × L/2 for an L^4 lattice for SU(3) gauge theory. (orig./HSI)
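
    The flavor of the Metropolis multi-hit update can be conveyed with a toy 2D compact U(1) model, far simpler than SU(3); for clarity this sketch recomputes the full action at every hit, whereas production codes use local staples.

        import numpy as np

        L, beta, hits = 8, 1.0, 10
        theta = np.zeros((2, L, L))                   # link angles: direction, x, y
        rng = np.random.default_rng(1)

        def action(th):
            """Sum of beta * (1 - cos(plaquette)) over all sites, periodic boundaries."""
            plaq = (th[0] + np.roll(th[1], -1, axis=0)
                    - np.roll(th[0], -1, axis=1) - th[1])
            return beta * np.sum(1 - np.cos(plaq))

        for sweep in range(20):
            for mu in range(2):
                for x in range(L):
                    for y in range(L):
                        for _ in range(hits):         # several Metropolis hits per link
                            old, S_old = theta[mu, x, y], action(theta)
                            theta[mu, x, y] = old + rng.uniform(-0.5, 0.5)
                            if rng.random() >= np.exp(S_old - action(theta)):
                                theta[mu, x, y] = old # reject the proposal
        print(action(theta))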

  18. An introductory review of information theory in the context of computational neuroscience.

    Science.gov (United States)

    McDonnell, Mark D; Ikeda, Shiro; Manton, Jonathan H

    2011-07-01

    This article introduces several fundamental concepts in information theory from the perspective of their origins in engineering. Understanding such concepts is important in neuroscience for two reasons. First, simply applying formulae from information theory without understanding the assumptions behind their definitions can lead to erroneous results and conclusions. Second, this century will see a convergence of information theory and neuroscience; information theory will expand its foundations to incorporate biological processes more comprehensively, thereby helping to reveal how neuronal networks achieve their remarkable information-processing abilities.

  19. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    With the rapid growth of Internet-based recruiting, there are a great number of personal resumes among recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font sizes, font colours, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they rely strongly on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this goal, we design a novel feature, Writing Style, to model sentence syntax information. Besides a word index and a punctuation index, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify the different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

  20. Mean field theory of EM algorithm for Bayesian grey scale image restoration

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Tanaka, Kazuyuki

    2003-01-01

    The EM algorithm for Bayesian grey scale image restoration is investigated in the framework of mean field theory. Our model system is identical to the infinite-range random field Q-Ising model. The maximum marginal likelihood method is applied to the determination of hyper-parameters. We calculate exactly both the data-averaged mean square error between the original image and its maximizer of posterior marginal estimate, and the data-averaged marginal likelihood function. After evaluating the hyper-parameter dependence of the data-averaged marginal likelihood function, we derive analytically the EM algorithm which updates the hyper-parameters to obtain the maximum likelihood estimate. The time evolutions of the hyper-parameters and of the so-called Q function are obtained. The relation between the speed of convergence of the hyper-parameters and the shape of the Q function is explained from the viewpoint of dynamics.

  1. Improved Inverse Kinematics Algorithm Using Screw Theory for a Six-DOF Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Qingcheng Chen

    2015-10-01

    Based on screw theory, a novel improved inverse-kinematics approach for a type of six-DOF serial robot, "Qianjiang I", is proposed in this paper. The common kinematics model of the robot is based on the Denavit-Hartenberg (D-H) notation method, while its inverse kinematics involves inefficient calculation and complicated solution procedures, which cannot meet the demands of online real-time application. To solve this problem, this paper presents a new method to improve the efficiency of the inverse kinematics solution by introducing screw theory. Unlike other methods, the proposed method establishes only two coordinate frames, namely the inertial frame and the tool frame; the screw motion of each link is carried out based on the inertial frame, ensuring definite geometric meaning. Furthermore, we adopt a new inverse kinematics algorithm, developing an improved sub-problem method along with the Paden-Kahan sub-problems. This method has high efficiency and can be applied in real-time industrial operation. It is convenient to select the desired solutions directly from among multiple solutions by examining their clear geometric meaning. Finally, the effectiveness and reliability of the new algorithm are analysed and verified in comparative experiments carried out on the six-DOF serial robot "Qianjiang I".
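
    The first Paden-Kahan sub-problem, a building block of such screw-theory solvers, admits a compact closed form; the sketch below is a generic textbook version, not the paper's improved sub-problem method.

        import numpy as np

        def pk_subproblem1(omega, r, p, q):
            """Paden-Kahan sub-problem 1: angle theta rotating point p about the axis
            (unit direction omega through point r) so that it lands on q."""
            u, v = p - r, q - r
            up = u - omega * (omega @ u)      # project out the axis component
            vp = v - omega * (omega @ v)
            return np.arctan2(omega @ np.cross(up, vp), up @ vp)

        # rotate (1,0,0) about the z-axis through the origin onto (0,1,0): expect pi/2
        omega = np.array([0.0, 0.0, 1.0])
        theta = pk_subproblem1(omega, np.zeros(3),
                               np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
        print(theta)                          # 1.5707...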

  2. Density functional theory and evolution algorithm calculations of elastic properties of AlON

    Energy Technology Data Exchange (ETDEWEB)

    Batyrev, I. G.; Taylor, D. E.; Gazonas, G. A.; McCauley, J. W. [U.S. Army Research Laboratory, Aberdeen Proving Ground, Maryland 21005 (United States)

    2014-01-14

    Different models for aluminum oxynitride (AlON) were calculated using density functional theory and optimized using an evolutionary algorithm. Evolutionary algorithm and density functional theory (DFT) calculations starting from several models of AlON with different Al or O vacancy locations and different positions for the N atoms relative to the vacancy were carried out. The results show that the constant anion model [McCauley et al., J. Eur. Ceram. Soc. 29(2), 223 (2009)] with a random distribution of N atoms not adjacent to the Al vacancy has the lowest energy configuration. The lowest energy structure is in reasonable agreement with experimental X-ray diffraction spectra. The optimized structure of a 55-atom unit cell was used to construct 220- and 440-atom models for simulation cells using DFT with a Gaussian basis set. Cubic elastic constant predictions were found to approach the experimentally determined AlON single crystal elastic constants as the model size increased from 55 to 440 atoms. The pressure dependence of the elastic constants found from simulated stress-strain relations was in overall agreement with experimental measurements of polycrystalline and single crystal AlON. Calculated IR intensity and Raman spectra are compared with available experimental data.

  3. Basic Knowledge for Market Principle: Approaches to the Price Coordination Mechanism by Using Optimization Theory and Algorithm

    Science.gov (United States)

    Aiyoshi, Eitaro; Masuda, Kazuaki

    On the basis of market fundamentalism, new types of social systems with the market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, there are few textbooks in science and technology which explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization and (2) the Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them in designing the mechanisms of more complicated markets.
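
    The multiplier-as-price interpretation can be made concrete with a tiny dual-ascent (tatonnement) example, in which the Lagrange multiplier of the supply constraint converges to the market-clearing price; the utilities and step size are assumed for illustration.

        import numpy as np

        # two consumers with utility u_i(x) = a_i * log(x) and one good of total supply S:
        # maximize sum_i u_i(x_i) subject to sum_i x_i = S; the multiplier lam is the price
        a, S = np.array([2.0, 3.0]), 10.0

        lam = 1.0                        # initial price guess
        for _ in range(200):
            x = a / lam                  # each agent's demand: argmax of a_i*log(x) - lam*x
            lam += 0.05 * (x.sum() - S)  # price rises with excess demand (dual ascent)
        print(lam, x)                    # converges to lam = sum(a)/S = 0.5, x = (4, 6)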

  4. Algorithmic crystal chemistry: A cellular automata approach

    International Nuclear Information System (INIS)

    Krivovichev, S. V.

    2012-01-01

    Atomic-molecular mechanisms of crystal growth can be modeled based on crystallochemical information using cellular automata (a particular case of finite deterministic automata). In particular, the formation of heteropolyhedral layered complexes in uranyl selenates can be modeled by applying a one-dimensional three-colored cellular automaton. The use of the theory of computation (in particular, the theory of automata) in crystallography allows one to interpret crystal growth as a computational process (the realization of an algorithm or program with a finite number of steps).
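
    A one-dimensional three-colored cellular automaton of the kind mentioned can be sketched in a few lines; the random rule table below is an assumption standing in for crystallochemically derived rules.

        import numpy as np

        K = 3                                   # three colors, e.g. three polyhedron types
        rng = np.random.default_rng(7)
        rule = rng.integers(K, size=(K, K, K))  # new color from (left, center, right)

        row = rng.integers(K, size=40)          # initial layer
        for step in range(10):                  # each row = the next heteropolyhedral layer
            left, right = np.roll(row, 1), np.roll(row, -1)
            row = rule[left, row, right]
            print("".join(str(c) for c in row))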

  5. Geometrical methods in learning theory

    International Nuclear Information System (INIS)

    Burdet, G.; Combe, Ph.; Nencka, H.

    2001-01-01

    The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine

  6. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  7. Advancing Theory? Landscape Archaeology and Geographical Information Systems

    Directory of Open Access Journals (Sweden)

    Di Hu

    2012-05-01

    This paper will focus on how Geographical Information Systems (GIS) have been applied in Landscape Archaeology from the late 1980s to the present. GIS, a tool for organising and analysing spatial information, has exploded in popularity, but we still lack a systematic overview of how it has contributed to archaeological theory, specifically Landscape Archaeology. This paper will examine whether and how GIS has advanced archaeological theory through a historical review of its application in archaeology.

  8. Genetic algorithm for lattice gauge theory on SU(2) and U(1) on 4 dimensional lattice, how to hitchhike to thermal equilibrium state

    International Nuclear Information System (INIS)

    Yamaguchi, A.; Sugamoto, A.

    2000-01-01

    Applying a genetic algorithm to lattice gauge theory is found to be an effective method of minimizing the action of the gauge field on a lattice. In 4 dimensions, the critical point and the Wilson loop behaviour of SU(2) lattice gauge theory, as well as the phase transition of the U(1) theory, have been studied. A proper coding method has been developed in order to avoid increased memory requirements and computational overload in the genetic algorithm. How hitchhikers toward equilibrium appear against kidnappers is clarified.
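
    The overall scheme, minimizing a lattice action with a genetic algorithm, can be sketched with a toy action over phase variables; the GA operators and parameters here are generic choices, not the paper's coding method.

        import numpy as np

        def action(phi):                             # toy "action": coupled phases, minimum 0
            return np.sum(1 - np.cos(np.diff(phi))) + np.sum(1 - np.cos(phi))

        rng = np.random.default_rng(3)
        pop = rng.uniform(-np.pi, np.pi, (40, 16))   # population of field configurations

        for gen in range(200):
            fit = np.array([action(p) for p in pop])
            parents = pop[np.argsort(fit)[:20]]      # truncation selection
            kids = []
            for _ in range(20):
                a, b = parents[rng.integers(20, size=2)]
                cut = rng.integers(1, 16)
                child = np.concatenate([a[:cut], b[cut:]])                 # one-point crossover
                child += rng.normal(0, 0.05, 16) * (rng.random(16) < 0.1)  # sparse mutation
                kids.append(child)
            pop = np.vstack([parents, kids])
        print(min(action(p) for p in pop))           # best action, should approach 0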

  9. Vector-Quantization using Information Theoretic Concepts

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue; Hegde, Anant; Erdogmus, Deniz

    2005-01-01

    The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self Organizing Map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as the before mentioned can be derived. Unlike SOM and LBG this algorithm has a clear physical interpretation and relies on minimization of a well defined cost-function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact...

  10. From the social learning theory to a social learning algorithm for global optimization

    OpenAIRE

    Gong, Yue-Jiao; Zhang, Jun; Li, Yun

    2014-01-01

    Traditionally, the Evolutionary Computation (EC) paradigm is inspired by Darwinian evolution or the swarm intelligence of animals. Bandura's Social Learning Theory pointed out that the social learning behavior of humans indicates a high level of intelligence in nature. We found that such intelligence of human society can be implemented by numerical computing and be utilized in computational algorithms for solving optimization problems. In this paper, we design a novel and generic optimization...

  11. Encoding color information for visual tracking: Algorithms and benchmark.

    Science.gov (United States)

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

  12. Applying Information Processing Theory to Supervision: An Initial Exploration

    Science.gov (United States)

    Tangen, Jodi L.; Borders, L. DiAnne

    2017-01-01

    Although clinical supervision is an educational endeavor (Borders & Brown, [Borders, L. D., 2005]), many scholars neglect theories of learning in working with supervisees. The authors describe 1 learning theory--information processing theory (Atkinson & Shiffrin, 1968, 1971; Schunk, 2016)--and the ways its associated interventions may…

  13. Affect Theory and Autoethnography in Ordinary Information Systems

    DEFF Research Database (Denmark)

    Bødker, Mads; Chamberlain, Alan

    2016-01-01

    This paper uses philosophical theories of affect as a lens for exploring autoethnographic renderings of everyday experience with information technology. Affect theories, in the paper, denote a broad trend in post-humanistic philosophy that explores sensation and feeling as emergent and relational...

  14. Contributions to Persistence Theory

    Directory of Open Access Journals (Sweden)

    Du Dong

    2014-12-01

    Persistence theory discussed in this paper is an application of algebraic topology (Morse Theory [29]) to Data Analysis, more precisely to the qualitative understanding of point cloud data, or PCD for short. PCD can be geometrized as a filtration of simplicial complexes (Vietoris-Rips complexes [25] [36]) and the homology changes of these complexes provide qualitative information about the data. Bar codes describe the changes in homology with coefficients in a fixed field. When the coefficient field is ℤ2, the calculation of bar codes is done by the ELZ algorithm (named after H. Edelsbrunner, D. Letscher, and A. Zomorodian [20]). When the coefficient field is ℝ, we propose an algorithm based on the Hodge decomposition [17]. With Dan Burghelea and Tamal K. Dey we developed a persistence theory which involves level sets, discussed in Section 4. We introduce and discuss new computable invariants, the "relevant level persistence numbers" and the "positive and negative bar codes", and explain how they are related to the bar codes for level persistence. We provide enhancements and modifications of the ELZ algorithm to calculate such invariants and illustrate them by examples.

  15. A Rolling Element Bearing Fault Diagnosis Approach Based on Multifractal Theory and Gray Relation Theory.

    Science.gov (United States)

    Li, Jingchao; Cao, Yunpeng; Ying, Yulong; Li, Shuying

    2016-01-01

    Bearing failure is one of the dominant causes of failure and breakdowns in rotating machinery, leading to huge economic losses. Aiming at the nonstationary and nonlinear characteristics of bearing vibration signals, as well as the complexity of the distribution of condition-indicating information in the signals, a novel rolling element bearing fault diagnosis method based on multifractal theory and gray relation theory was proposed in this paper. First, a generalized multifractal dimension algorithm was developed to extract the characteristic vectors of fault features from the bearing vibration signals, which can offer more meaningful and distinguishing information reflecting different bearing health statuses than a conventional single fractal dimension. After feature extraction by multifractal dimensions, an adaptive gray relation algorithm was applied to implement automated bearing fault pattern recognition. The experimental results show that the proposed method can identify various bearing fault types as well as severities effectively and accurately.
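
    A box-counting estimate of the generalized (Renyi) dimensions D_q, the kind of feature vector such a method builds on, can be sketched as follows; the random signal is a stand-in for a measured vibration signal.

        import numpy as np

        def generalized_dimension(signal, q, scales=(4, 8, 16, 32, 64)):
            """Box-counting estimate of the generalized dimension D_q of the
            measure induced by |signal| over its support."""
            measure = np.abs(signal) / np.abs(signal).sum()
            logs, logeps = [], []
            for n in scales:                         # n boxes of size eps = 1/n
                p = np.array([b.sum() for b in np.array_split(measure, n)])
                p = p[p > 0]
                if q == 1:
                    logs.append(np.sum(p * np.log(p)))            # information dimension
                else:
                    logs.append(np.log(np.sum(p ** q)) / (q - 1))
                logeps.append(np.log(1.0 / n))
            return np.polyfit(logeps, logs, 1)[0]    # slope over the scaling region

        sig = np.random.randn(4096)
        print([round(generalized_dimension(sig, q), 2) for q in (0, 2, 5)])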

  16. An Agent-Based Framework for E-Commerce Information Retrieval Management Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2009-01-01

    The paper addresses the issue of improving retrieval performance for retrieval from document collections that exist on the Internet. It also proposes a solution that uses the benefits of agent technology and genetic algorithms in the management of information retrieval. The most important paradigms of information retrieval are reviewed, with the goal of making the advantages of the genetic algorithm-based one more evident. Within the paper, a genetic algorithm that can be used for the proposed solution is detailed, and a comparison between the proposed dynamic and static solutions is made. In the end, new future directions are outlined based on the elements presented in this paper. The future results look very encouraging.

  17. Quantum information theory and quantum statistics

    International Nuclear Information System (INIS)

    Petz, D.

    2008-01-01

    Based on lectures given by the author, this book focuses on providing reliable introductory explanations of key concepts of quantum information theory and quantum statistics - rather than on results. The mathematically rigorous presentation is supported by numerous examples and exercises and by an appendix summarizing the relevant aspects of linear analysis. Assuming that the reader is familiar with the content of standard undergraduate courses in quantum mechanics, probability theory, linear algebra and functional analysis, the book addresses graduate students of mathematics and physics as well as theoretical and mathematical physicists. Conceived as a primer to bridge the gap between statistical physics and quantum information, a field to which the author has contributed significantly himself, it emphasizes concepts and thorough discussions of the fundamental notions to prepare the reader for deeper studies, not least through the selection of well chosen exercises. (orig.)

  18. Certainty and Uncertainty in Quantum Information Processing

    OpenAIRE

    Rieffel, Eleanor G.

    2007-01-01

    This survey, aimed at information processing researchers, highlights intriguing but lesser known results, corrects misconceptions, and suggests research areas. Themes include: certainty in quantum algorithms; the "fewer worlds" theory of quantum mechanics; quantum learning; probability theory versus quantum mechanics.

  19. Should the model for risk-informed regulation be game theory rather than decision theory?

    Science.gov (United States)

    Bier, Vicki M; Lin, Shi-Woei

    2013-02-01

    Risk analysts frequently view the regulation of risks as being largely a matter of decision theory. According to this view, risk analysis methods provide information on the likelihood and severity of various possible outcomes; this information should then be assessed using a decision-theoretic approach (such as cost/benefit analysis) to determine whether the risks are acceptable, and whether additional regulation is warranted. However, this view ignores the fact that in many industries (particularly industries that are technologically sophisticated and employ specialized risk and safety experts), risk analyses may be done by regulated firms, not by the regulator. Moreover, those firms may have more knowledge about the levels of safety at their own facilities than the regulator does. This creates a situation in which the regulated firm has both the opportunity-and often also the motive-to provide inaccurate (in particular, favorably biased) risk information to the regulator, and hence the regulator has reason to doubt the accuracy of the risk information provided by regulated parties. Researchers have argued that decision theory is capable of dealing with many such strategic interactions as well as game theory can. This is especially true in two-player, two-stage games in which the follower has a unique best strategy in response to the leader's strategy, as appears to be the case in the situation analyzed in this article. However, even in such cases, we agree with Cox that game-theoretic methods and concepts can still be useful. In particular, the tools of mechanism design, and especially the revelation principle, can simplify the analysis of such games because the revelation principle provides rigorous assurance that it is sufficient to analyze only games in which licensees truthfully report their risk levels, making the problem more manageable. Without that, it would generally be necessary to consider much more complicated forms of strategic behavior (including

  20. Actor-network Theory and cartography of controversies in Information Science

    OpenAIRE

    LOURENÇO, Ramon Fernandes; TOMAÉL, Maria Inês

    2018-01-01

    Abstract The present study aims to discuss the interactions between the Actor-network Theory and the Cartography of Controversies method in Information Science research. A literature review was conducted on books, scholarly articles, and other sources addressing the Actor-network Theory and the Cartography of Controversies. Understanding the theoretical assumptions that guide the Actor-network Theory allows examining important aspects of Information Science research, seeking to identif...

  1. The Philosophy of Information as an Underlying and Unifying Theory of Information Science

    Science.gov (United States)

    Tomic, Taeda

    2010-01-01

    Introduction: Philosophical analyses of the theoretical principles underlying its sub-domains reveal the philosophy of information as an underlying meta-theory of information science. Method: Conceptual research on the knowledge sub-domains in information science and philosophy and analysis of their mutual connection. Analysis: Similarities between…

  2. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology develop rapidly, problems arise in archival storage and communication of image data. This paper reports on a new information preserving data compression algorithm for computed tomographic (CT) images. This algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data, obtained with the above process, are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34. and 4.40 respectively. (author)
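
    Steps 2, 4 and 5 can be sketched directly; in this hedged illustration a simple value shift stands in for the newly designed code of step 3, and a random array stands in for a CT slice.

        import numpy as np

        def compress_ct(image, body_threshold=-900):
            """Sketch: mask air (1), row differences (2), bit planes (4), run lengths (5)."""
            img = image.copy()
            img[img < body_threshold] = body_threshold     # 1. drop pixels outside the body
            diff = np.diff(img, axis=1, prepend=0)         # 2. differences along matrix lines
            codes = (diff - diff.min()).astype(np.uint16)  # shift stands in for step 3's code
            planes = [(codes >> b) & 1 for b in range(16)] # 4. decompose into bit planes
            runs = []
            for plane in planes:                           # 5. run-length encode transitions
                flat = plane.ravel()
                change = np.flatnonzero(np.diff(flat)) + 1
                runs.append(np.diff(np.concatenate([[0], change, [flat.size]])))
            return runs

        ct = np.random.randint(-1000, 1000, (64, 64))      # stand-in for a CT slice (H.U.)
        runs = compress_ct(ct)
        print(sum(r.size for r in runs), "runs for", ct.size * 16, "plane bits")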

  3. Probability and information theory, with applications to radar

    CERN Document Server

    Woodward, P M; Higinbotham, W

    1964-01-01

    Electronics and Instrumentation, Second Edition, Volume 3: Probability and Information Theory with Applications to Radar provides information pertinent to the development of research carried out in electronics and applied physics. This book presents the established mathematical techniques that provide the code in which so much of the mathematical theory of electronics and radar is expressed. Organized into eight chapters, this edition begins with an overview of the geometry of probability distributions, in which moments play a significant role. This text then examines the mathematical methods in

  4. Information Theoretic Characterization of Physical Theories with Projective State Space

    Science.gov (United States)

    Zaopo, Marco

    2015-08-01

    Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over the complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information theoretic requirements: every non-completely-mixed state of a system is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more mixed a state of the system is, the less information can be stored in the system using that state as a logical value. We show that in all theories satisfying the above assumptions the shape of the cone of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of the complex numbers. If the field of numbers is trivial and contains only one element, we obtain classical theory. This result shows that the superposition principle is quite common among probabilistic theories, while its absence gives evidence of either classical theory or an implausible theory.

  5. Quantum information theory mathematical foundation

    CERN Document Server

    Hayashi, Masahito

    2017-01-01

    This graduate textbook provides a unified view of quantum information theory. Clearly explaining the necessary mathematical basis, it merges key topics from both information-theoretic and quantum-mechanical viewpoints and provides lucid explanations of the basic results. Thanks to this unified approach, it makes accessible such advanced topics in quantum communication as quantum teleportation, superdense coding, quantum state transmission (quantum error-correction) and quantum encryption. Since the publication of the preceding book Quantum Information: An Introduction, there have been tremendous strides in the field of quantum information. In particular, the following topics – all of which are addressed here – have seen major advances: quantum state discrimination, quantum channel capacity, bipartite and multipartite entanglement, security analysis on quantum communication, reverse Shannon theorem and uncertainty relation. With regard to the analysis of quantum security, the present book employs an impro...

  6. Cuckoo search and firefly algorithm theory and applications

    CERN Document Server

    2014-01-01

    Nature-inspired algorithms such as cuckoo search and firefly algorithm have become popular and widely used in recent years in many applications. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments of cuckoo search and firefly algorithm and their diverse applications. This book will review both theoretical studies and applications with detailed algorithm analysis, implementation and case studies so that readers can benefit most from this book.  Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, travelling salesman problem, neural network, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web service, shape optimization, and others.   This book can serve as an ideal reference for both graduates and researchers in computer scienc...

  7. Graphical Model Theory for Wireless Sensor Networks

    International Nuclear Information System (INIS)

    Davis, William B.

    2002-01-01

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm

  8. Entropy and information causality in general probabilistic theories

    International Nuclear Information System (INIS)

    Barnum, Howard; Leifer, Matthew; Spekkens, Robert; Barrett, Jonathan; Clark, Lisa Orloff; Stepanik, Nicholas; Wilce, Alex; Wilke, Robin

    2010-01-01

    We investigate the concept of entropy in probabilistic theories more general than quantum mechanics, with particular reference to the notion of information causality (IC) recently proposed by Pawlowski et al (2009 arXiv:0905.2292). We consider two entropic quantities, which we term measurement and mixing entropy. In the context of classical and quantum theory, these coincide, being given by the Shannon and von Neumann entropies, respectively; in general, however, they are very different. In particular, while measurement entropy is easily seen to be concave, mixing entropy need not be. In fact, as we show, mixing entropy is not concave whenever the state space is a non-simplicial polytope. Thus, the condition that measurement and mixing entropies coincide is a strong constraint on possible theories. We call theories with this property monoentropic. Measurement entropy is subadditive, but not in general strongly subadditive. Equivalently, if we define the mutual information between two systems A and B by the usual formula I(A: B)=H(A)+H(B)-H(AB), where H denotes the measurement entropy and AB is a non-signaling composite of A and B, then it can happen that I(A:BC)< I(A:B). This is relevant to IC in the sense of Pawlowski et al: we show that any monoentropic non-signaling theory in which measurement entropy is strongly subadditive, and also satisfies a version of the Holevo bound, is informationally causal, and on the other hand we observe that Popescu-Rohrlich boxes, which violate IC, also violate strong subadditivity. We also explore the interplay between measurement and mixing entropy and various natural conditions on theories that arise in quantum axiomatics.

  9. Edge Detection Algorithm Based on Fuzzy Logic Theory for a Local Vision System of Robocup Humanoid League

    Directory of Open Access Journals (Sweden)

    Andrea K. Perez-Hernandez

    2013-06-01

    In this paper we show the development of an algorithm to perform edge extraction based on fuzzy logic theory. This method makes it possible to recognize landmarks on the game field for the Humanoid League of RoboCup. The proposed algorithm describes the creation of a fuzzy inference system that permits evaluating the relationship between image pixels, finding variations in the grey levels of related neighboring pixels. Subsequently, it shows an implementation of the Otsu method to binarize the image obtained from the fuzzy process, generating an image containing only the extracted edges, and the algorithm is validated with Humanoid League images. Finally, we analyze the obtained results, which show good performance of the algorithm, considering that this proposal takes only 35% more processing time than traditional methods, while the extracted edges are 52% less susceptible to noise.
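
    The Otsu step can be sketched independently of the fuzzy front end: it picks the threshold that maximizes the between-class variance of the grey-level histogram. The random edge map below is a stand-in for the fuzzy inference output.

        import numpy as np

        def otsu_threshold(gray):
            """Otsu's method: threshold maximizing the between-class variance."""
            hist = np.bincount(gray.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            omega = np.cumsum(p)                  # class-0 probability per level
            mu = np.cumsum(p * np.arange(256))    # cumulative mean
            mu_t = mu[-1]
            with np.errstate(divide='ignore', invalid='ignore'):
                sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
            return int(np.nanargmax(sigma_b))

        fuzzy_edges = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in edge map
        t = otsu_threshold(fuzzy_edges)
        binary = fuzzy_edges > t                  # image containing only extracted edges
        print(t, binary.mean())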

  10. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    Science.gov (United States)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b (sup i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
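
    The Galerkin projection preprocessing step can be sketched as follows: the retained basis V supplies the best initial guess within its span before GMRES is started; the random orthonormal basis here stands in for an actual recycled Krylov subspace.

        import numpy as np
        from scipy.sparse.linalg import gmres

        rng = np.random.default_rng(0)
        n = 200
        A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1   # fixed system matrix

        V, _ = np.linalg.qr(rng.standard_normal((n, 10)))       # stand-in recycled subspace

        for i in range(3):                        # sequence of right-hand sides b^(i)
            b = rng.standard_normal(n)
            y = np.linalg.solve(V.T @ A @ V, V.T @ b)           # Galerkin projection
            x0 = V @ y                            # best initial guess within span(V)
            x, info = gmres(A, b, x0=x0)          # GMRES starts from the projected guess
            print(i, info, np.linalg.norm(A @ x - b))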

  11. Exploring a Theory Describing the Physics of Information Systems, Characterizing the Phenomena of Complex Information Systems

    National Research Council Canada - National Science Library

    Harmon, Scott

    2001-01-01

    This project accomplished all of its objectives: document a theory of information physics, conduct a workshop on planning experiments to test this theory, and design experiments that validate this theory...

  12. Generation Expansion Planning in pool market: A hybrid modified game theory and improved genetic algorithm

    International Nuclear Information System (INIS)

    Shayanfar, H.A.; Lahiji, A. Saliminia; Aghaei, J.; Rabiee, A.

    2009-01-01

    Unlike under the traditional policy, the Generation Expansion Planning (GEP) problem in a competitive framework is complicated. Under the new policy, each Generation Company (GENCO) decides to invest in such a way that it obtains as much profit as possible. This paper presents a new hybrid algorithm to determine GEP in a pool market. The proposed algorithm is divided into two programming levels: master and slave. At the master level, a Modified Game Theory (MGT) is proposed to evaluate the contrast of the GENCOs by the Independent System Operator (ISO). At the slave level, an Improved Genetic Algorithm (IGA) method is used to find the best solution for each GENCO's investment decision-making. The validity of the proposed method is examined in a case study including three GENCOs with multiple types of power plants. The results show that the presented method is both satisfactory and consistent with expectations. (author)

  13. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    Science.gov (United States)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical

  14. Mathematics Education as a Proving-Ground for Information-Processing Theories.

    Science.gov (United States)

    Greer, Brian, Ed.; Verschaffel, Lieven, Ed.

    1990-01-01

    Five papers discuss the current and potential contributions of information-processing theory to our understanding of mathematical thinking as those contributions affect the practice of mathematics education. It is concluded that information-processing theories need to be supplemented in various ways to more adequately reflect the complexity of…

  15. Crossover Improvement for the Genetic Algorithm in Information Retrieval.

    Science.gov (United States)

    Vrajitoru, Dana

    1998-01-01

    In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system find, in a huge document collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…

  16. Filtration Algorithms of Untrustworthy Analogous Information in APCS at TPP and NPP

    Directory of Open Access Journals (Sweden)

    V. I. Nazarov

    2012-01-01

    Full Text Available The paper considers filtration algorithms for untrustworthy analogous information in APCS at TPP and NPP that make it possible to verify the credibility of information transmitted through communication channels in the form of continuously changing signals in real time.

  17. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    Science.gov (United States)

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. Shortest path search is one of their most demanding applications. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path, applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm, are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
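    For reference, the classical baseline being modified here fits in a few lines; a standard binary-heap Dijkstra (textbook version, not the paper's reduced-graph variant):

    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest distances from `source` over {u: [(v, weight), ...]}."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry, already improved
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))
    ```

    The reduced-graph idea trades preprocessing for a smaller search space while, as the authors show, preserving path optimality.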

  18. Glowworm swarm optimization theory, algorithms, and applications

    CERN Document Server

    Kaipa, Krishnanand N

    2017-01-01

    This book provides a comprehensive account of the glowworm swarm optimization (GSO) algorithm, including details of the underlying ideas, theoretical foundations, algorithm development, various applications, and MATLAB programs for the basic GSO algorithm. It also discusses several research problems at different levels of sophistication that can be attempted by interested researchers. The generality of the GSO algorithm is evident in its application to diverse problems ranging from optimization to robotics. Examples include computation of multiple optima, annual crop planning, cooperative exploration, distributed search, multiple source localization, contaminant boundary mapping, wireless sensor networks, clustering, knapsack, numerical integration, solving fixed point equations, solving systems of nonlinear equations, and engineering design optimization. The book is a valuable resource for researchers as well as graduate and undergraduate students in the area of swarm intelligence and computational intellige...
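    As a flavor of the method, here is a deliberately simplified one-dimensional sketch of the two core GSO steps, the luciferin update and the move toward a brighter neighbor (parameter names and values are illustrative; the full algorithm in the book also adapts the neighborhood range):

    ```python
    import random

    def gso_step(positions, luciferin, objective, rho=0.4, gamma=0.6,
                 radius=1.0, step=0.03):
        """One simplified glowworm swarm optimization iteration in 1-D."""
        # Luciferin update: decay plus reinforcement by current fitness.
        luciferin = [(1 - rho) * l + gamma * objective(x)
                     for l, x in zip(luciferin, positions)]
        new_positions = []
        for i, x in enumerate(positions):
            # Neighbors: within `radius` and brighter than glowworm i.
            nbrs = [j for j in range(len(positions))
                    if j != i and abs(positions[j] - x) < radius
                    and luciferin[j] > luciferin[i]]
            if nbrs:
                j = random.choice(nbrs)  # simplified probabilistic selection
                x += step * (1 if positions[j] > x else -1)
            new_positions.append(x)
        return new_positions, luciferin
    ```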

  19. Optimized Bayesian dynamic advising theory and algorithms

    CERN Document Server

    Karny, Miroslav

    2006-01-01

    Written by one of the world's leading groups in the area of Bayesian identification, control, and decision making, this book provides the theoretical and algorithmic basis of optimized probabilistic advising. Starting from abstract ideas and formulations, and culminating in detailed algorithms, the book comprises a unified treatment of an important problem of the design of advisory systems supporting supervisors of complex processes. It introduces the theoretical and algorithmic basis of the developed advising, relying on a novel and powerful combination of black-box modelling with dynamic mixture models

  20. Recoverability in quantum information theory

    Science.gov (United States)

    Wilde, Mark

    The fact that the quantum relative entropy is non-increasing with respect to quantum physical evolutions lies at the core of many optimality theorems in quantum information theory and has applications in other areas of physics. In this work, we establish improvements of this entropy inequality in the form of physically meaningful remainder terms. One of the main results can be summarized informally as follows: if the decrease in quantum relative entropy between two quantum states after a quantum physical evolution is relatively small, then it is possible to perform a recovery operation, such that one can perfectly recover one state while approximately recovering the other. This can be interpreted as quantifying how well one can reverse a quantum physical evolution. Our proof method is elementary, relying on the method of complex interpolation, basic linear algebra, and the recently introduced Renyi generalization of a relative entropy difference. The theorem has a number of applications in quantum information theory, which have to do with providing physically meaningful improvements to many known entropy inequalities. This is based on arXiv:1505.04661, now accepted for publication in Proceedings of the Royal Society A. I acknowledge support from startup funds from the Department of Physics and Astronomy at LSU, the NSF under Award No. CCF-1350397, and the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019.
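    In symbols, the abstract's main statement refines the monotonicity of relative entropy,

    $$D(\rho\|\sigma) \;\ge\; D\bigl(\mathcal{N}(\rho)\,\|\,\mathcal{N}(\sigma)\bigr),$$

    by a remainder term of the schematic form (our paraphrase; see arXiv:1505.04661 for the precise statement)

    $$D(\rho\|\sigma) - D\bigl(\mathcal{N}(\rho)\,\|\,\mathcal{N}(\sigma)\bigr) \;\ge\; -2\log F\bigl(\rho,\ \mathcal{R}(\mathcal{N}(\rho))\bigr),$$

    where $\mathcal{R}$ is a recovery channel with $\mathcal{R}(\mathcal{N}(\sigma)) = \sigma$ and $F$ denotes fidelity: if the entropy decrease is small, the fidelity term forces $\mathcal{R}$ to approximately recover $\rho$ as well.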

  1. A novel gene network inference algorithm using predictive minimum description length approach.

    Science.gov (United States)

    Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang

    2010-05-28

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold which defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and that eliminates the need for a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the
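    As a toy illustration of the MI-thresholding step (the PMDL-driven threshold selection is the paper's contribution and is not reproduced here; the histogram binning is our assumption):

    ```python
    import numpy as np

    def mutual_information(x, y, bins=8):
        """Histogram estimate of MI between two expression profiles."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0  # avoid log(0) terms
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

    def infer_edges(data, threshold):
        """Keep gene pairs whose MI exceeds the chosen threshold."""
        n = data.shape[0]  # rows = genes, columns = time points
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if mutual_information(data[i], data[j]) > threshold]
    ```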

  2. USING INFORMATION THEORY TO DEFINE A SUSTAINABILITY INDEX

    Science.gov (United States)

    Information theory has many applications in Ecology and Environmental science, such as a biodiversity indicator, as a measure of evolution, a measure of distance from thermodynamic equilibrium, and as a measure of system organization. Fisher Information, in particular, provides a...

  3. Response to Patrick Love's "Informal Theory": A Rejoinder

    Science.gov (United States)

    Evans, Nancy J.; Guido, Florence M.

    2012-01-01

    This rejoinder to Patrick Love's article, "Informal Theory: The Ignored Link in Theory-to-Practice," which appears earlier in this issue of the "Journal of College Student Development", was written at the invitation of the Editor. In the critique, we point out the weaknesses of many of Love's arguments and propositions. We provide an alternative…

  4. An Effective Tri-Clustering Algorithm Combining Expression Data with Gene Regulation Information

    Directory of Open Access Journals (Sweden)

    Ao Li

    2009-04-01

    Full Text Available Motivation: Bi-clustering algorithms aim to identify sets of genes sharing similar expression patterns across a subset of conditions. However, direct interpretation or prediction of gene regulatory mechanisms may be difficult as only gene expression data is used. Information about gene regulators may also be available, most commonly about which transcription factors may bind to the promoter region and thus control the expression level of a gene. Thus a method to integrate gene expression and gene regulation information is desirable for clustering and analysis. Methods: By incorporating gene regulatory information with gene expression data, we define regulated expression values (REV) as indicators of how a gene is regulated by a specific factor. Existing bi-clustering methods are extended to a three-dimensional data space by developing a heuristic TRI-Clustering algorithm. An additional approach named the Automatic Boundary Searching algorithm (ABS) is introduced to automatically determine the boundary threshold. Results: Results based on incorporating ChIP-chip data representing transcription factor-gene interactions show that the algorithms are efficient and robust for detecting tri-clusters. Detailed analysis of the tri-cluster extracted from yeast sporulation REV data shows that genes in this cluster exhibited significant differences during the middle and late stages. The implicated regulatory network was then reconstructed for further study of the defined regulatory mechanisms. Topological and statistical analysis of this network demonstrated evidence of significant changes of TF activities during the different stages of yeast sporulation, and suggested this approach might be a general way to study regulatory networks undergoing transformations.

  5. Structural information theory and visual form

    NARCIS (Netherlands)

    Leeuwenberg, E.L.J.; Kaernbach, C.; Schroeger, E.; Mueller, H.

    2003-01-01

    The paper attends to basic characteristics of visual form as approached by Structural information theory, or SIT, (Leeuwenberg, Van der Helm and Van Lier). The introduction provides a global survey of this approach. The main part of the paper focuses on three characteristics of SIT. Each one is made

  6. Consensus algorithm in smart grid and communication networks

    Science.gov (United States)

    Alfagee, Husain Abdulaziz

    On a daily basis, consensus theory attracts more and more research from different areas of interest, applying its techniques to solve technical problems in a way that is faster, more reliable, and even more precise than ever before. Power system networks are one of the fields in which consensus theory is employed extensively; the use of the consensus algorithm to solve the Economic Dispatch and Load Restoration Problems is a good example. Instead of a conventional central controller, some researchers have explored algorithms that solve the above-mentioned problems in a distributed manner using the consensus algorithm, based on calculation (i.e., non-estimation) methods for updating the information consensus matrix. Building on this approach of solving these types of problems in a distributed fashion with the consensus algorithm, we have implemented a new, advanced consensus algorithm. It is based on adaptive estimation techniques, such as the Gradient Algorithm and the Recursive Least Square Algorithm, to solve the same problems. This advanced work was tested on different case studies that had formerly been explored, as seen in references 5, 7, and 18. Three and five generators, or agents, with different topologies correspond to the Economic Dispatch Problem, and the IEEE 16-Bus power system corresponds to the Load Restoration Problem. In all the cases we have studied, the results met our expectations with extreme accuracy and completely matched the results of the previous researchers. There is little question that this research proves the capability and dependability of using the consensus algorithm, based on estimation methods such as the Gradient Algorithm and the Recursive Least Square Algorithm, to solve such power problems.
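    The information-consensus update underlying such schemes can be sketched as follows (uniform weights; the thesis's gradient/RLS-based estimation is not reproduced):

    ```python
    import numpy as np

    def consensus(values, neighbors, epsilon=0.1, iters=200):
        """Plain average-consensus: each agent nudges toward its neighbors."""
        x = np.array(values, dtype=float)
        for _ in range(iters):
            x_new = x.copy()
            for i, nbrs in neighbors.items():
                x_new[i] += epsilon * sum(x[j] - x[i] for j in nbrs)
            x = x_new
        return x  # entries approach the global average on a connected graph

    # Three agents on a line graph agree on the average of 1, 2 and 6.
    print(consensus([1, 2, 6], {0: [1], 1: [0, 2], 2: [1]}))
    ```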

  7. Information dynamics algorithm for detecting communities in networks

    Science.gov (United States)

    Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro

    2012-11-01

    The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well performing algorithms by incorporating elements of the domain application fields, i.e. domain-inspired approaches. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster algorithm (MCL) [4] by considering networks' nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity of detecting overlapping communities, the capability of identifying communities from an individual point of view, and the fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
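    For orientation, the unmodified MCL iteration that the authors start from alternates expansion and inflation on a column-stochastic matrix (standard parameter choices, our sketch):

    ```python
    import numpy as np

    def mcl(adjacency, expansion=2, inflation=2.0, iters=50):
        """Basic Markov Cluster algorithm on a NumPy adjacency matrix."""
        M = adjacency + np.eye(len(adjacency))        # add self-loops
        M = M / M.sum(axis=0)                         # column-stochastic
        for _ in range(iters):
            M = np.linalg.matrix_power(M, expansion)  # expansion: spread flow
            M = M ** inflation                        # inflation: sharpen flow
            M = M / M.sum(axis=0)
        return M  # nonzero rows indicate cluster attractors
    ```

    The paper's agent-based variant replaces this purely algebraic flow with node-level decisions and a memory (oblivion) factor.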

  8. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler's algorithm, Kraitchik's, and variants of Pollard's algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
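    The deterministic baseline, Pollard's rho, is itself only a few lines (a standard textbook version with Floyd cycle detection, not the authors' exact implementation; n must be composite):

    ```python
    from math import gcd

    def pollards_rho(n, c=1):
        """Return a nontrivial factor of composite n."""
        if n % 2 == 0:
            return 2
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n        # tortoise: one step
            y = (y * y + c) % n        # hare: two steps
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        return d if d != n else pollards_rho(n, c + 1)  # retry on failure

    print(pollards_rho(8051))  # 8051 = 83 * 97
    ```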

  9. Learning-based traffic signal control algorithms with neighborhood information sharing: An application for sustainable mobility

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhu, Feng [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering; Ukkusuri, Satish V. [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering

    2017-10-04

    Here, this research applies an R-Markov Average Reward Technique based reinforcement learning (RL) algorithm, namely RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variant RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q-learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from test networks show significant reductions in emissions (CO, CO2, NOx, VOC, PM10) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
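    RMART belongs to the average-reward family of RL methods, which track a running reward rate instead of discounting future rewards. A generic tabular update of this type (a Schwartz-style R-learning sketch under our assumptions, not ORNL's exact RMART rule) looks like:

    ```python
    def r_learning_update(R, rho, s, a, reward, s_next, alpha=0.1, beta=0.01):
        """One tabular average-reward update; R is {state: {action: value}}."""
        best_next = max(R[s_next].values())
        # Relative value: immediate reward minus the average reward rate rho.
        R[s][a] += alpha * (reward - rho + best_next - R[s][a])
        if R[s][a] >= max(R[s].values()):  # adjust rho only on greedy actions
            rho += beta * (reward + best_next - max(R[s].values()) - rho)
        return rho
    ```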

  10. Theory of affine projection algorithms for adaptive filtering

    CERN Document Server

    Ozeki, Kazuhiko

    2016-01-01

    This book focuses on theoretical aspects of the affine projection algorithm (APA) for adaptive filtering. The APA is a natural generalization of the classical, normalized least-mean-squares (NLMS) algorithm. The book first explains how the APA evolved from the NLMS algorithm, where an affine projection view is emphasized. By looking at those adaptation algorithms from such a geometrical point of view, we can find many of the important properties of the APA, e.g., the improvement of the convergence rate over the NLMS algorithm especially for correlated input signals. After the birth of the APA in the mid-1980s, similar algorithms were put forward by other researchers independently from different perspectives. This book shows that they are variants of the APA, forming a family of APAs. Then it surveys research on the convergence behavior of the APA, where statistical analyses play important roles. It also reviews developments of techniques to reduce the computational complexity of the APA, which are important f...
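    In its standard form, the APA update analyzed in the book reads (notation ours):

    $$\mathbf{w}_{k+1} = \mathbf{w}_k + \mu\, X_k \left(X_k^{T} X_k + \delta I\right)^{-1} \mathbf{e}_k, \qquad \mathbf{e}_k = \mathbf{d}_k - X_k^{T}\mathbf{w}_k,$$

    where $X_k$ stacks the $p$ most recent input vectors, $\mu$ is the step size and $\delta$ a small regularizer; $p = 1$ recovers NLMS, and larger $p$ buys faster convergence on correlated inputs at higher computational cost.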

  11. A general rough-surface inversion algorithm: Theory and application to SAR data

    Science.gov (United States)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.
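    A cost function of the type described, with the stochastic nature of the parameters entering as a prior term, can be sketched as (symbols ours):

    $$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \left[\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\right]^{T} C_y^{-1}\left[\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\right] + \left[\boldsymbol{\theta}-\boldsymbol{\theta}_0\right]^{T} C_{\theta}^{-1}\left[\boldsymbol{\theta}-\boldsymbol{\theta}_0\right],$$

    where $\mathbf{f}$ is the forward scattering model (here the SPM), $C_y$ the data covariance, and $(\boldsymbol{\theta}_0, C_{\theta})$ the prior mean and covariance of the surface parameters; the Newton-type iterations discussed in the text minimize this cost.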

  12. A new efficient algorithm for computing the imprecise reliability of monotone systems

    International Nuclear Information System (INIS)

    Utkin, Lev V.

    2004-01-01

    Reliability analysis of complex systems with partial information about the reliability of components and different conditions of component independence may be carried out by means of imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, the application of imprecise probabilities to reliability analysis runs into the complexity of the optimization problems which have to be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose the optimization problems is proposed in the paper. This algorithm allows us to practically implement reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components, and under conditions of component independence or a lack of information about independence. A numerical example illustrates the algorithm.

  13. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  14. An algorithm for high order strong coupling expansions: The mass gap in 3d pure Z2 lattice gauge theory

    International Nuclear Information System (INIS)

    Decker, K.; Hamburg Univ.

    1985-12-01

    An efficient description of all clusters contributing to the strong coupling expansion of the mass gap in three-dimensional pure $Z_2$ lattice gauge theory is presented. This description is correct to all orders in the strong coupling expansion and is chosen in such a way that it remains valid in four dimensions for gauge group $Z_2$. Relying on this description, an algorithm has been constructed which generates and processes all the contributing graphs to the exact strong coupling expansion of the mass gap in the three-dimensional model in a fully automatic fashion. A major component of this algorithm can also be used to generate exact strong coupling expansions for the free energy $\log Z$. The algorithm is correct to any order; thus the order of these expansions is only limited by the available computing power. The presentation of the algorithm is such that it can serve as a guideline for the construction of a generalized one which would also generate exact strong coupling expansions for the masses of low-lying excited states of four-dimensional pure Yang-Mills theories. (orig.)

  15. A THEORY OF MAXIMIZING SENSORY INFORMATION

    NARCIS (Netherlands)

    Hateren, J.H. van

    1992-01-01

    A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power

  16. Information theory and stochastics for multiscale nonlinear systems

    CERN Document Server

    Majda, Andrew J; Grote, Marcus J

    2005-01-01

    This book introduces mathematicians to the fascinating emerging mathematical interplay between ideas from stochastics and information theory and important practical issues in studying complex multiscale nonlinear systems. It emphasizes the serendipity between modern applied mathematics and applications where rigorous analysis, the development of qualitative and/or asymptotic models, and numerical modeling all interact to explain complex phenomena. After a brief introduction to the emerging issues in multiscale modeling, the book has three main chapters. The first chapter is an introduction to information theory with novel applications to statistical mechanics, predictability, and Jupiter's Red Spot for geophysical flows. The second chapter discusses new mathematical issues regarding fluctuation-dissipation theorems for complex nonlinear systems including information flow, various approximations, and illustrates applications to various mathematical models. The third chapter discusses stochastic modeling of com...

  17. Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.

    Science.gov (United States)

    Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo

    2016-01-01

    In this paper, we present an asynchronous algorithm to estimate an unknown parameter under an unreliable network which allows new sensors to join and old sensors to leave, and which can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm can avoid interference among messages and effectively reduce the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that our proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm.

  18. Algorithmic support for the System Wide Information Management concept

    OpenAIRE

    2016-01-01

    The theoretical problems of computer support for the "System Wide Information Management" concept, which was proposed by experts of the International Civil Aviation Organization, are discussed. Within the framework of its provisions, certain new requirements for all initial stages of air traffic management preceding direct aircraft control are formulated. Algorithmic instruments for ensuring the conflict-free character of a summary plan for the use of airspace during the plan's implementation are ...

  19. Geometrical identification of quantum and information theories

    International Nuclear Information System (INIS)

    Caianiello, E.R.

    1983-01-01

    The interrelation of quantum and information theories is investigated on the basis of the concept of cross-entropy. It is assumed that "complex information geometry" may serve as a tool for "technological transfer" from one research field to another not directly connected with the first. It is pointed out that the "infinitesimal distance" $ds^2$ and the "infinitesimal cross-entropy" $dH_c$ coincide

  20. FAST-PT II: an algorithm to calculate convolution integrals of general tensor quantities in cosmological perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.; Hirata, Christopher M., E-mail: fang.307@osu.edu, E-mail: blazek@berkeley.edu, E-mail: mcewen.24@osu.edu, E-mail: hirata.10@osu.edu [Center for Cosmology and AstroParticle Physics, Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus OH 43210 (United States)

    2017-02-01

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.
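    The expensive objects are 1-loop mode-coupling integrals of the schematic form (standard perturbation theory notation, shown for the scalar case):

    $$P_{22}(k) = 2\int \frac{d^3q}{(2\pi)^3}\,\left[F_2(\mathbf{q}, \mathbf{k}-\mathbf{q})\right]^2 P(q)\,P(|\mathbf{k}-\mathbf{q}|),$$

    which FAST-PT reduces to FFT-sized cost by expanding the coupling kernel in a basis that decouples the convolution; the present paper generalizes the kernel expansion to tensor quantities that depend on the observer's orientation.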

  1. Modeling Routinization in Games: An Information Theory Approach

    DEFF Research Database (Denmark)

    Wallner, Simon; Pichlmair, Martin; Hecher, Michael

    2015-01-01

    Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented...

  2. Combinatorial optimization theory and algorithms

    CERN Document Server

    Korte, Bernhard

    2018-01-01

    This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It is based on numerous courses on combinatorial optimization and specialized topics, mostly at graduate level. This book reviews the fundamentals, covers the classical topics (paths, flows, matching, matroids, NP-completeness, approximation algorithms) in detail, and proceeds to advanced and recent topics, some of which have not appeared in a textbook before. Throughout, it contains complete but concise proofs, and also provides numerous exercises and references. This sixth edition has again been updated, revised, and significantly extended. Among other additions, there are new sections on shallow-light trees, submodular function maximization, smoothed analysis of the knapsack problem, the (ln 4+ɛ)-approximation for Steiner trees, and the VPN theorem. Thus, this book continues to represent the state of the art of combinatorial opti...

  3. Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.

    Science.gov (United States)

    Khennak, Ilyes; Drias, Habiba

    2017-02-01

    With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of several backgrounds are now using Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, and the resulting use of unclear keywords makes their search queries imprecise. The use of these ambiguous and vague queries to describe the patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is Query Expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the on-line medical information database, show that the proposed approach is more effective and efficient compared to the baseline.
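    The generic bat-algorithm move that such an approach adapts can be sketched as follows (Yang-style frequency/velocity updates in a continuous space; the paper's encoding of candidate expanded queries is not reproduced):

    ```python
    import random

    def bat_step(bats, velocities, best, fitness, fmin=0.0, fmax=1.0):
        """One simplified bat-algorithm move toward the current best bat."""
        new_bats, new_vel = [], []
        for x, v in zip(bats, velocities):
            f = fmin + (fmax - fmin) * random.random()  # random frequency
            v = [vi + (xi - bi) * f for vi, xi, bi in zip(v, x, best)]
            x = [xi + vi for xi, vi in zip(x, v)]
            new_bats.append(x)
            new_vel.append(v)
        best = max(new_bats + [best], key=fitness)      # greedy elitism
        return new_bats, new_vel, best
    ```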

  4. Spacecraft TT&C and information transmission theory and technologies

    CERN Document Server

    Liu, Jiaxing

    2015-01-01

    Spacecraft TT&C and Information Transmission Theory and Technologies introduces the basic theory of spacecraft TT&C (telemetry, tracking and command) and information transmission. Combining TT&C and information transmission, the book presents several technologies for continuous wave radar, including measurements of range, range rate and angle, analog and digital information transmission, telecommand, telemetry, remote sensing and spread spectrum TT&C. For special problems occurring in the channels for TT&C and information transmission, the book presents radio propagation features and their impact on orbit measurement accuracy, the effects caused by rain attenuation, atmospheric attenuation and the multi-path effect, and polarization composition technology. This book can benefit researchers and engineers in the field of spacecraft TT&C and communication systems. Liu Jiaxing is a professor at The 10th Institute of China Electronics Technology Group Corporation.

  5. Towards an Information Theory of Complex Networks

    CERN Document Server

    Dehmer, Matthias; Mehler, Alexander

    2011-01-01

    For over a decade, complex networks have steadily grown as an important tool across a broad array of academic disciplines, with applications ranging from physics to social media. A tightly organized collection of carefully-selected papers on the subject, Towards an Information Theory of Complex Networks: Statistical Methods and Applications presents theoretical and practical results about information-theoretic and statistical models of complex networks in the natural sciences and humanities. The book's major goal is to advocate and promote a combination of graph-theoretic, information-theoreti

  6. Anticipated detection of favorable periods for wind energy production by means of information theory

    Science.gov (United States)

    Vogel, Eugenio; Saravia, Gonzalo; Kobe, Sigismund; Schumann, Rolf; Schuster, Rolf

    Managing the electric power produced by different sources requires mixing the different response times they present. Thus, for instance, coal burning presents large time lags until operational conditions are reached, while hydroelectric generation can react within seconds or a few minutes to reach the desired productivity. Wind energy production (WEP) can be instantaneously fed to the network to save fuels with low thermal inertia (gas burning, for instance), but this source presents sudden variations within a few hours. We report here for the first time a method based on information theory to handle WEP. This method has been successful in detecting dynamical changes in magnetic transitions and variations of stock markets. An algorithm called wlzip, based on information recognition, is used to recognize the information content of a time series. We make use of publicly available energy data in Germany to simulate real applications. After a calibration process, the system can recognize directly from the WEP data the onset of favorable periods of a desired strength. Optimization can lead to a few hours of anticipation, which is enough to control the mixture of WEP with other energy sources, thus saving fuels.

  7. Automated Physico-Chemical Cell Model Development through Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is the optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g., cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  8. Finding an information concept suited for a universal theory of information.

    Science.gov (United States)

    Brier, Søren

    2015-12-01

    The view argued in this article is that if we want to define a universal concept of information covering subjective experiential and meaningful cognition - as well as intersubjective meaningful communication in nature, technology, society and life worlds - then the main problem is to decide which epistemological, ontological and philosophy of science framework the concept of information should be based on and integrated in. All the ontological attempts to create objective concepts of information result in concepts that cannot encompass the meaning and experience of embodied living and social systems. There is no conclusive evidence that the core of reality across nature, culture, life and mind is purely mathematical, logical or of a computational nature. Therefore the core of the information concept should not be based only on pure logical or mathematical rationality. We need to include interpretation, signification and meaning construction in our transdisciplinary framework for information as a basic aspect of reality, alongside the physical, chemical and molecular biological. Dretske defines information as the content of new, true, meaningful, and understandable knowledge. According to this widely held definition, information in a transdisciplinary theory cannot be 'objective', but has to be relativized in relation to the receiver's knowledge, as also proposed by Floridi. It is difficult to produce a quantitative statement independently of a qualitative analysis based on some sort of relation to the human condition as a semiotic animal. I therefore suggest instead building information theories based on semiotics, starting from the basic relations of meaningful cognition and communication in embodied living systems. I agree with Peircean biosemiotics that all information must be part of real relational sign-processes manifesting as tokens.

  9. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  10. Advanced algorithms for information science

    International Nuclear Information System (INIS)

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-01-01

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression

  11. Advanced algorithms for information science

    Energy Technology Data Exchange (ETDEWEB)

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

  12. The Foundation Role for Theories of Agency in Understanding Information Systems Design

    Directory of Open Access Journals (Sweden)

    Robert Johnston

    2002-11-01

    Full Text Available In this paper we argue that theories of agency form a foundation upon which we can build a deeper understanding of information systems design. We do so by first recognising that information systems are part of purposeful sociotechnical systems and that consequently theories of agency may help in understanding them. We then present two alternative theories of agency (deliberative and situational), mainly drawn from the robotics and artificial intelligence disciplines, and in doing so we note that existing information system design methods, and ontological studies of those methods, implicitly adhere to the deliberative theory of agency. We also note that while there are advantages in specific circumstances from utilising the situated theory of agency in designing complex systems, because of their differing ontological commitments such systems would be difficult to analyse and evaluate using ontologies currently used in information systems. We then provide evidence that such situational information systems can indeed exist by giving a specific example (the Kanban system), which has emerged from manufacturing practice. We conclude that information systems are likely to benefit from creating design approaches supporting the production of situational systems.

  13. The informationally-complete quantum theory

    OpenAIRE

    Chen, Zeng-Bing

    2014-01-01

    Quantum mechanics is a cornerstone of our current understanding of nature and extremely successful in describing physics covering a huge range of scales. However, its interpretation remains controversial since the early days of quantum mechanics. What does a quantum state really mean? Is there any way out of the so-called quantum measurement problem? Here we present an informationally-complete quantum theory (ICQT) and the trinary property of nature to beat the above problems. We assume that ...

  14. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. Starting from the traits mentioned above and drawing on parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequency concept and search-space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and the different information structures designed, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.

  15. Optimization of multicast optical networks with genetic algorithm

    Science.gov (United States)

    Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng

    2007-11-01

    In this letter, aiming to obtain the best multicast performance of an optical network in which video-conference information is carried on a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problems of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies, obtained by employing a novel numerical solution designed around an effective genetic algorithm. The result shows that the optimal solutions found with the genetic algorithm are in accordance with those of the traditional fictitious-play method.
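    A minimal genetic-algorithm skeleton of the kind such studies build on (generic bit-string GA; the paper's encoding of switching strategies is not reproduced):

    ```python
    import random

    def genetic_algorithm(fitness, length, pop_size=40, generations=100,
                          crossover_rate=0.8, mutation_rate=0.02):
        """Evolve bit strings that maximize `fitness`."""
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            pop = [row[:] for row in scored[:2]]        # elitism: keep two best
            while len(pop) < pop_size:
                a, b = random.sample(scored[:pop_size // 2], 2)  # fit parents
                if random.random() < crossover_rate:
                    cut = random.randrange(1, length)   # single-point crossover
                    child = a[:cut] + b[cut:]
                else:
                    child = a[:]
                child = [bit ^ (random.random() < mutation_rate) for bit in child]
                pop.append(child)
        return max(pop, key=fitness)

    print(genetic_algorithm(sum, length=20))  # toy run: maximize number of ones
    ```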

  16. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is mostly based on statistical image information. One aspect neglected by all these methods is that much fuzzy information is contained in these images. A new fuzzy matching algorithm based on fuzzy similarity for navigation is presented in this paper. Because fuzzy theory can describe well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to produce good performance results. Experimental results using the matching algorithm based on fuzzy information also demonstrate its reliability and practicality.

  17. Towards integrating control and information theories from information-theoretic measures to control performance limitations

    CERN Document Server

    Fang, Song; Ishii, Hideaki

    2017-01-01

    This book investigates the performance limitation issues in networked feedback systems. The fact that networked feedback systems consist of control and communication devices and systems calls for the integration of control theory and information theory. The primary contributions of this book lie in two aspects: the newly-proposed information-theoretic measures and the newly-discovered control performance limitations. We first propose a number of information notions to facilitate the analysis. Using those notions, classes of performance limitations of networked feedback systems, as well as state estimation systems, are then investigated. In general, the book presents a unique, cohesive treatment of performance limitation issues of networked feedback systems via an information-theoretic approach. This book is believed to be the first to treat the aforementioned subjects systematically and in a unified manner, offering a unique perspective differing from existing books.

  18. Towards an Information Retrieval Theory of Everything

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Lammerink, J.M.W.; Katoen, Joost P.; Kok, J.N.; van de Pol, Jan Cornelis; Raamsdonk, F.

    2009-01-01

    I present three well-known probabilistic models of information retrieval in tutorial style: The binary independence probabilistic model, the language modeling approach, and Google's page rank. Although all three models are based on probability theory, they are very different in nature. Each model
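    Of the three, PageRank is the quickest to sketch; a bare power-iteration version (damping factor 0.85 is the conventional choice):

    ```python
    def pagerank(links, damping=0.85, iters=50):
        """Power iteration on the random-surfer chain; links: {node: [out, ...]}."""
        nodes = list(links)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {n: (1 - damping) / len(nodes) for n in nodes}
            for n, outs in links.items():
                if outs:
                    for m in outs:
                        new[m] += damping * rank[n] / len(outs)
                else:  # dangling node: spread its mass uniformly
                    for m in nodes:
                        new[m] += damping * rank[n] / len(nodes)
            rank = new
        return rank

    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
    ```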

  19. An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2017-10-01

    Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient and avoid information overload through an interactive personalized recommendation system using a hybrid algorithm model. The proposed model first uses various recommendation algorithms to get a list of original recommendation results. Combined with the customer's feedback in an interactive manner, it then establishes the weights of the corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results to obtain the final recommended products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of the experimental study on a Taobao online dress shop clearly show that the proposed method increases consumer coverage, consumer-discovery accuracy and recommendation recall in data mining. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and solves the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.

  20. Understanding family health information seeking: a test of the theory of motivated information management.

    Science.gov (United States)

    Hovick, Shelly R

    2014-01-01

    Although a family health history can be used to assess disease risk and increase health prevention behaviors, research suggests that few people have collected family health information. Guided by the Theory of Motivated Information Management, this study seeks to understand the barriers to and facilitators of interpersonal information seeking about family health history. Individuals who were engaged to be married (N = 306) were surveyed online and in person to understand how factors such as uncertainty, expectations for an information search, efficacy, and anxiety influence decisions and strategies for obtaining family health histories. The results supported the Theory of Motivated Information Management by demonstrating that individuals who experienced uncertainty discrepancies regarding family heath history had greater intention to seek information from family members when anxiety was low, outcome expectancy was high, and communication efficacy was positive. Although raising uncertainty about family health history may be an effective tool for health communicators to increase communication among family members, low-anxiety situations may be optimal for information seeking. Health communication messages must also build confidence in people's ability to communicate with family to obtain the needed health information.

  1. Comparing cosmic web classifiers using information theory

    Energy Technology Data Exchange (ETDEWEB)

    Leclercq, Florent [Institute of Cosmology and Gravitation (ICG), University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX (United Kingdom); Lavaux, Guilhem; Wandelt, Benjamin [Institut d' Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris (France); Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: lavaux@iap.fr, E-mail: j.jasche@tum.de, E-mail: wandelt@iap.fr [Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  3. Computational invariant theory

    CERN Document Server

    Derksen, Harm

    2015-01-01

    This book is about the computational aspects of invariant theory. Of central interest is the question how the invariant ring of a given group action can be calculated. Algorithms for this purpose form the main pillars around which the book is built. There are two introductory chapters, one on Gröbner basis methods and one on the basic concepts of invariant theory, which prepare the ground for the algorithms. Then algorithms for computing invariants of finite and reductive groups are discussed. Particular emphasis lies on interrelations between structural properties of invariant rings and computational methods. Finally, the book contains a chapter on applications of invariant theory, covering fields as disparate as graph theory, coding theory, dynamical systems, and computer vision. The book is intended for postgraduate students as well as researchers in geometry, computer algebra, and, of course, invariant theory. The text is enriched with numerous explicit examples which illustrate the theory and should be ...

  4. Information systems theory

    CERN Document Server

    Dwivedi, Yogesh K; Schneberger, Scott L

    2011-01-01

    The overall mission of this book is to provide a comprehensive understanding and coverage of the various theories and models used in IS research. Specifically, it aims to focus on the following key objectives: To describe the various theories and models applicable to studying IS/IT management issues. To outline and describe, for each of the various theories and models, independent and dependent constructs, reference discipline/originating area, originating author(s), seminal articles, level of analysis (i.e. firm, individual, industry) and links with other theories. To provide a critical review...

  5. Consensus based on learning game theory with a UAV rendezvous application

    Directory of Open Access Journals (Sweden)

    Zhongjie Lin

    2015-02-01

    Multi-agent cooperation problems are becoming increasingly attractive in both civilian and military applications. In multi-agent cooperation problems, different network topologies determine different modes of cooperation between agents. A centralized system directly controls the operation of each agent with information flowing from a single centre, while in a distributed system, agents operate separately under certain communication protocols. In this paper, a systematic distributed optimization approach is established based on a learning game algorithm. The convergence of the algorithm is proven within the game theory framework. Two typical consensus problems are analyzed with the proposed algorithm. The contributions of this work are threefold. First, the designed algorithm inherits the properties of learning game theory for problem simplification and proof of convergence. Second, the behaviour of learning endows the algorithm with robustness and autonomy. Third, with the proposed algorithm, the consensus problems are analyzed from a novel perspective.
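
    As a minimal sketch of the consensus setting (using the classic averaging protocol as a stand-in, since the paper's learning-game update is not given in the abstract):

```python
import numpy as np

# Stand-in consensus sketch (not the paper's learning-game algorithm):
# each agent repeatedly moves its state towards the average of its
# neighbours' states until all agree, e.g. on a rendezvous point.
def consensus(states, adjacency, steps=200, alpha=0.3):
    states = np.asarray(states, dtype=float)
    for _ in range(steps):
        for i in range(len(states)):
            nbrs = np.nonzero(adjacency[i])[0]
            if nbrs.size:
                states[i] += alpha * (states[nbrs].mean() - states[i])
    return states

# Three UAVs on a line graph converging to a common rendezvous position.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(consensus([0.0, 5.0, 10.0], A))   # -> three (nearly) equal values
```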

  6. Genetic Algorithm and Graph Theory Based Matrix Factorization Method for Online Friend Recommendation

    Directory of Open Access Journals (Sweden)

    Qu Li

    2014-01-01

    Online friend recommendation is a fast-developing topic in web mining. In this paper, we used SVD matrix factorization to model the user and item feature vectors and stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold-start problem and data sparsity, we used a KNN model to influence the user feature vectors. At the same time, we used graph theory to partition communities with fairly low time and space complexity. Moreover, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
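
    A hedged sketch of the SGD-trained matrix factorization described above; the rank, learning rate, and regularization below are illustrative guesses, not the paper's settings.

```python
import numpy as np

# Matrix factorization trained by stochastic gradient descent:
# approximate rating r(u, i) by the dot product of user and item vectors.
def train_mf(ratings, n_users, n_items, rank=10, lr=0.01, reg=0.05, epochs=20):
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, rank))   # user feature vectors
    Q = rng.normal(scale=0.1, size=(n_items, rank))   # item feature vectors
    for _ in range(epochs):
        for u, i, r in ratings:                       # (user, item, rating)
            err = r - P[u] @ Q[i]
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * pu)      # gradient step on user
            Q[i] += lr * (err * pu - reg * Q[i])      # gradient step on item
    return P, Q

P, Q = train_mf([(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0)], n_users=2, n_items=2)
print(P[0] @ Q[0])   # predicted rating for user 0, item 0
```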

  7. The use of information theory in evolutionary biology.

    Science.gov (United States)

    Adami, Christoph

    2012-05-01

    Information is a key concept in evolutionary biology. Information stored in a biological organism's genome is used to generate the organism and to maintain and control it. Information is also that which evolves. When a population adapts to a local environment, information about this environment is fixed in a representative genome. However, when an environment changes, information can be lost. At the same time, information is processed by animal brains to survive in complex environments, and the capacity for information processing also evolves. Here, I review applications of information theory to the evolution of proteins and to the evolution of information processing in simulated agents that adapt to perform a complex task. © 2012 New York Academy of Sciences.
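
    One standard calculation from this literature is the per-site information stored in a protein alignment column, information ≈ H_max − H_observed; a minimal sketch (the column below is made up):

```python
import numpy as np
from collections import Counter

# Per-site information in an alignment column: I = log2(|alphabet|) - H(site).
# A fully conserved site carries close to log2(20) ≈ 4.32 bits about the environment.
def site_information(column, alphabet_size=20):
    counts = np.array(list(Counter(column).values()), dtype=float)
    p = counts / counts.sum()
    h = -(p * np.log2(p)).sum()          # observed per-site entropy
    return np.log2(alphabet_size) - h

print(site_information("LLLLIILLLV"))    # a fairly conserved column
```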

  8. A Modified Spatiotemporal Fusion Algorithm Using Phenological Information for Predicting Reflectance of Paddy Rice in Southern China

    Directory of Open Access Journals (Sweden)

    Mengxue Liu

    2018-05-01

    Satellite data for studying surface dynamics in heterogeneous landscapes are often missing due to frequent cloud contamination, low temporal resolution, and technological difficulties in developing satellites. A modified spatiotemporal fusion algorithm for predicting the reflectance of paddy rice is presented in this paper. The algorithm uses phenological information extracted from a Moderate Resolution Imaging Spectroradiometer (MODIS) enhanced vegetation index time series to improve the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM). The algorithm is tested with satellite data on Yueyang City, China. The main contribution of the modified algorithm is the selection of similar neighborhood pixels by using phenological information to improve accuracy. Results show that the modified algorithm performs better than ESTARFM in visual inspection and quantitative metrics, especially for paddy rice. This modified algorithm provides not only new ideas for the improvement of spatiotemporal data fusion methods, but also technical support for the generation of remote sensing data with high spatial and temporal resolution.
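
    A sketch of the phenology-based similar-pixel selection highlighted above, assuming EVI time series are available for a window of pixels; the window size, similarity measure (correlation), and neighbour count are our assumptions.

```python
import numpy as np

# Keep the neighbours whose EVI time series best matches the centre pixel's;
# these would then feed the ESTARFM-style weighting (not shown here).
def similar_pixels(evi_window, center_rc, top_n=8):
    rows, cols, _ = evi_window.shape
    center = evi_window[center_rc]
    scores = {}
    for r in range(rows):
        for c in range(cols):
            if (r, c) != center_rc:
                scores[(r, c)] = np.corrcoef(evi_window[r, c], center)[0, 1]
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

window = np.random.rand(5, 5, 12)      # 5x5 neighbourhood, 12 EVI dates
print(similar_pixels(window, (2, 2)))
```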

  9. Information filtering via weighted heat conduction algorithm

    Science.gov (United States)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the MovieLens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity can reach 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation studies, as it may be an important factor affecting personalized recommendation performance.
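
    A sketch of the basic heat-conduction step on a user-object bipartite network; the optional weight matrix W stands in for the WHC edge weighting, whose exact form is not specified in the abstract.

```python
import numpy as np

# Heat conduction: place "heat" on the target user's collected objects,
# average it onto users, then average back onto objects to score them.
def heat_conduction(A, user, W=None):
    W = A.astype(float) if W is None else W
    f = A[user].astype(float)                        # heat on collected objects
    k_user = W.sum(axis=1); k_obj = W.sum(axis=0)
    h = (W @ f) / np.maximum(k_user, 1e-12)          # average onto users
    scores = (W.T @ h) / np.maximum(k_obj, 1e-12)    # average back onto objects
    scores[A[user] > 0] = -np.inf                    # skip already-owned items
    return scores

A = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
print(heat_conduction(A, user=0))
```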

  10. Diagnostic information system dynamics in the evaluation of machine learning algorithms for the supervision of energy efficiency of district heating-supplied buildings

    International Nuclear Information System (INIS)

    Kiluk, Sebastian

    2017-01-01

    supplied from the same district heating network. The authors analyzed the evolution of a series of information systems originating from the same knowledge discovery algorithm applied to a sequence of energy consumption-related data. Specifically, the rough sets theory was applied to describe the knowledge base and measure the uncertainty of machine learning predictions of current classification based on a past knowledge base. Fluctuations of diagnostic class membership were identified and provided for the differentiation between returning and novel fault detections, thus introducing the qualities of information system uncertainty and its sustainability. The usability of the new method was demonstrated in the comparison of results for exemplary data mining algorithms implemented on real data from over one thousand buildings.
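
    A minimal rough-sets sketch of the lower and upper approximations used to measure classification uncertainty; the buildings, indiscernibility blocks, and diagnostic class below are hypothetical.

```python
# Rough-set lower/upper approximations of a diagnostic class; the boundary
# (upper minus lower) measures the uncertainty of the classification.
def approximations(blocks, target):
    lower = {x for b in blocks if set(b) <= target for x in b}
    upper = {x for b in blocks if set(b) & target for x in b}
    return lower, upper

# Buildings grouped into indiscernibility blocks by identical attributes:
blocks = [["b1", "b2"], ["b3"], ["b4", "b5"]]
faulty = {"b1", "b2", "b4"}                 # hypothetical diagnostic class
low, up = approximations(blocks, faulty)
print(low, up)                              # {'b1','b2'} vs {'b1','b2','b4','b5'}
```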

  11. Information theory and its application to optical communication

    NARCIS (Netherlands)

    Willems, F.M.J.

    2017-01-01

    The lecture focusses on the foundations of communication which were developed within the field of information theory. Enumerative shaping techniques and the so-called square-root transform will be discussed in detail.

  12. Information carriers and (reading them through) information theory in quantum chemistry.

    Science.gov (United States)

    Geerlings, Paul; Borgoo, Alex

    2011-01-21

    This Perspective discusses the reduction of the electronic wave function via the second-order reduced density matrix to the electron density ρ(r), which is the key ingredient in density functional theory (DFT), as a basic carrier of information. Simplifying further, the 1-normalized density function turns out to contain essentially the same information as ρ(r) and is even of preferred use as an information carrier when discussing the periodic properties along Mendeleev's table, where essentially the valence electrons are at stake. The Kullback-Leibler information deficiency turns out to be the most interesting choice to obtain information on the differences in ρ(r) or σ(r) between two systems. To put it otherwise: when looking for the construction of a functional F_{AB} = F[ζ_A(r), ζ_B(r)] for extracting differences in information from an information carrier ζ(r) (i.e., ρ(r) or σ(r)) for two systems A and B, the Kullback-Leibler information measure ΔS is a particularly adequate choice. Examples are given, varying from atoms to molecules and molecular interactions. Quantum similarity of atoms indicates that the shape-function-based KL information deficiency is the most appropriate tool to retrieve periodicity in the Periodic Table. The dissimilarity of enantiomers, for which different information measures are presented at global and local (i.e., molecular and atomic) levels, leads to an extension of Mezey's holographic density theorem and shows numerical evidence that in a chiral molecule the whole molecule is pervaded by chirality. Finally, Kullback-Leibler information profiles are discussed for intra- and intermolecular proton-transfer reactions and a simple S_N2 reaction, indicating that the theoretical information profile can be used as a companion to the energy-based Hammond postulate to discuss the early or late transition-state character of a reaction. All in all, this Perspective's answer is positive to the question of whether an even simpler carrier of
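
    A compact numerical sketch of the Kullback-Leibler information deficiency ΔS = ∫ ρ_A(r) ln[ρ_A(r)/ρ_B(r)] dr on a radial grid; the two densities below are toy exponentials, not actual atomic densities.

```python
import numpy as np

# Kullback-Leibler deficiency between two normalized 1-D densities,
# integrated on a uniform grid with a simple rectangle rule.
def kl_deficiency(rho_a, rho_b, dr, eps=1e-30):
    return float(np.sum(rho_a * np.log((rho_a + eps) / (rho_b + eps))) * dr)

r = np.linspace(1e-3, 20, 2000)
dr = r[1] - r[0]
rho_a = np.exp(-2 * r); rho_a /= rho_a.sum() * dr   # toy density A
rho_b = np.exp(-r);     rho_b /= rho_b.sum() * dr   # toy density B
print(kl_deficiency(rho_a, rho_b, dr))
```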

  13. Brain activity and cognition: a connection from thermodynamics and information theory.

    Science.gov (United States)

    Collell, Guillem; Fauquet, Jordi

    2015-01-01

    The connection between brain and mind is an important scientific and philosophical question that we are still far from completely understanding. A crucial point to our work is noticing that thermodynamics provides a convenient framework to model brain activity, whereas cognition can be modeled in information-theoretical terms. In fact, several models have been proposed so far from both approaches. A second critical remark is the existence of deep theoretical connections between thermodynamics and information theory. In fact, some well-known authors claim that the laws of thermodynamics are nothing but principles in information theory. Unlike in physics or chemistry, a formalization of the relationship between information and energy is currently lacking in neuroscience. In this paper we propose a framework to connect physical brain and cognitive models by means of the theoretical connections between information theory and thermodynamics. Ultimately, this article aims at providing further insight on the formal relationship between cognition and neural activity.

  15. Nonequilibrium thermodynamics and information theory: basic concepts and relaxing dynamics

    International Nuclear Information System (INIS)

    Altaner, Bernhard

    2017-01-01

    Thermodynamics is based on the notions of energy and entropy. While energy is the elementary quantity governing physical dynamics, entropy is the fundamental concept in information theory. In this work, starting from first principles, we give a detailed didactic account of the relations between energy and entropy and thus physics and information theory. We show that thermodynamic process inequalities, like the second law, are equivalent to the requirement that an effective description for physical dynamics is strongly relaxing. From the perspective of information theory, strongly relaxing dynamics govern the irreversible convergence of a statistical ensemble towards the maximally non-committal probability distribution that is compatible with thermodynamic equilibrium parameters. In particular, Markov processes that converge to a thermodynamic equilibrium state are strongly relaxing. Our framework generalizes previous results to arbitrary open and driven systems, yielding novel thermodynamic bounds for idealized and real processes. (paper)
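
    The "maximally non-committal" distribution mentioned above is the maximum-entropy distribution under the given constraints; for a fixed mean energy it takes the Boltzmann form p_i ∝ exp(−βE_i). A minimal sketch (the energy levels and target mean are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

# Maximize Shannon entropy subject to a fixed mean energy: the solution is
# the Boltzmann distribution; the multiplier beta is found by root-finding.
def maxent_boltzmann(energies, mean_energy):
    E = np.asarray(energies, dtype=float)
    def gap(beta):
        w = np.exp(-beta * (E - E.min()))
        return (w @ E) / w.sum() - mean_energy
    beta = brentq(gap, -50.0, 50.0)            # bracket assumed wide enough
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta

p, beta = maxent_boltzmann([0.0, 1.0, 2.0], mean_energy=0.5)
print(p, beta)
```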

  16. Information structures in economics studies in the theory of markets with imperfect information

    CERN Document Server

    Nermuth, Manfred

    1982-01-01

    This book is intended as a contribution to the theory of markets with imperfect information. The subject being nearly limitless, only certain selected topics are discussed. These are outlined in the Introduction (Ch. 0). The remainder of the book is divided into three parts. All results of economic significance are contained in Parts II & III. Part I introduces the main tools for the analysis, in particular the concept of an information structure. Although most of the material presented in Part I is not original, it is hoped that the detailed and self-contained exposition will help the reader to understand not only the following pages, but also the existing technical and variegated literature on markets with imperfect information. The mathematical prerequisites needed, but not explained in the text, rarely go beyond elementary calculus and probability theory. Whenever more advanced concepts are used, I have made an effort to give an intuitive explanation as well, so that the argument can also be followed o...

  17. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  18. Analytical implications of using practice theory in workplace information literacy research

    DEFF Research Database (Denmark)

    Moring, Camilla Elisabeth; Lloyd, Annemaree

    2013-01-01

    Introduction: This paper considers practice theory and the analytical implications of using this theoretical approach in information literacy research. More precisely, the aim of the paper is to discuss the translation of practice-theoretical assumptions into strategies that frame the analytical focus and interest when researching workplace information literacy. Two practice-theoretical perspectives are selected, one by Theodore Schatzki and one by Etienne Wenger, and their general commonalities and differences are analysed and discussed. Analysis: The two practice theories and their main ideas of what constitutes practices, how practices frame social life and the central concepts used to explain this, are presented. Then the application of the theories within workplace information literacy research is briefly explored. Results and Conclusion: The two theoretical perspectives share some...

  19. Hand and goods judgment algorithm based on depth information

    Science.gov (United States)

    Li, Mingzhu; Zhang, Jinsong; Yan, Dan; Wang, Qin; Zhang, Ruiqi; Han, Jing

    2016-03-01

    A tablet computer with a depth camera and a color camera is mounted on a traditional shopping cart, and the two cameras capture the interior of the cart. In shopping cart monitoring, it is very important to determine whether the customer's hand moving into or out of the shopping cart is holding goods. This paper establishes a basic framework for judging an empty hand. It includes a hand extraction process based on depth information; a skin color model built with WPCA (Weighted Principal Component Analysis); an algorithm for judging handheld products based on motion and skin color information; and a statistical process. The first step ensures the integrity of the hand information and effectively avoids the influence of sleeves and other debris; the second step accurately extracts skin color and eliminates similar-color interference, is little affected by lighting, and has the advantages of fast computation and high efficiency; and the third step greatly reduces noise interference and improves accuracy.
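
    A sketch of the depth-gated first step, with a crude RGB skin rule standing in for the paper's WPCA skin-color model; the depth limits and the color thresholds are assumptions.

```python
import numpy as np

# Keep only pixels within the cart volume (by depth), then test a simple
# skin rule on them; the intersection approximates the visible hand region.
def hand_mask(depth_mm, near=400, far=900):
    return (depth_mm > near) & (depth_mm < far)

def skin_mask(rgb):
    r = rgb[..., 0].astype(int); g = rgb[..., 1].astype(int); b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)   # crude RGB rule

depth = np.random.randint(200, 1500, (120, 160))
rgb = np.random.randint(0, 256, (120, 160, 3))
hand = hand_mask(depth) & skin_mask(rgb)
print(hand.sum(), "candidate hand pixels")
```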

  20. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  1. Algorithms, Interfaces, and the Circulation of Information: Interrogating the Epistemological Challenges of Facebook

    Directory of Open Access Journals (Sweden)

    Jannick Schou

    2016-05-01

    As social and political life increasingly takes place on social network sites, new epistemological questions have emerged. How can information disseminated through new media be understood and disentangled? How can potential hidden agendas or sources be identified? And what mechanisms govern what and how information is presented to the user? By drawing on existing research on the algorithms and interfaces underlying social network sites, this paper provides a discussion of Facebook and the epistemological challenges, potentials, and questions raised by the platform. The paper specifically discusses the ways in which interfaces shape how information can be accessed and processed by different kinds of users as well as the role of algorithms in pre-selecting what appears as representable information. A key argument of the paper is that Facebook, as a complex socio-technical network of human and non-human actors, has profound epistemological implications for how information can be accessed, understood, and circulated. In this sense, the user’s potential acquisition of information is shaped and conditioned by the technological structure of the platform. Building on these arguments, the paper suggests that these new epistemological challenges deserve more scholarly attention, as they hold wide implications for both researchers and users.

  2. An Experimental Evaluation of the DQ-DHT Algorithm in a Grid Information Service

    Science.gov (United States)

    Papadakis, Harris; Trunfio, Paolo; Talia, Domenico; Fragopoulou, Paraskevi

    DQ-DHT is a resource discovery algorithm that combines the Dynamic Querying (DQ) technique used in unstructured peer-to-peer networks with an algorithm for efficient broadcast over a Distributed Hash Table (DHT). Similarly to DQ, DQ-DHT dynamically controls the query propagation on the basis of the desired number of results and the popularity of the resource to be located. Differently from DQ, DQ-DHT exploits the structural properties of a DHT to avoid message duplications, thus reducing the amount of network traffic generated by each query. The goal of this paper is to evaluate experimentally the amount of traffic generated by DQ-DHT compared to the DQ algorithm in a Grid infrastructure. A prototype of a Grid information service, which can use both DQ and DQ-DHT as resource discovery algorithm, has been implemented and deployed on the Grid'5000 infrastructure for evaluation. The experimental results presented in this paper show that DQ-DHT significantly reduces the amount of network traffic generated during the discovery process compared to the original DQ algorithm.

  3. Multimedia information retrieval theory and techniques

    CERN Document Server

    Raieli, Roberto

    2013-01-01

    Novel processing and searching tools for the management of new multimedia documents have been developed. Multimedia Information Retrieval (MMIR) is an organic system made up of Text Retrieval (TR), Visual Retrieval (VR), Video Retrieval (VDR), and Audio Retrieval (AR) systems. So that each type of digital document may be analysed and searched by the elements of language appropriate to its nature, search criteria must be extended. Such an approach is known as Content-Based Information Retrieval (CBIR), and is the core of MMIR. This novel content-based concept of information handling needs to be integrated with more traditional semantics. Multimedia Information Retrieval focuses on the tools of processing and searching applicable to the content-based management of new multimedia documents. Translated from Italian by Giles Smith, the book is divided into two parts. Part one discusses MMIR and related theories, and puts forward new methodologies; part two reviews various experimental and operating MMIR systems, a...

  4. Information theoretic analysis of edge detection in visual communication

    Science.gov (United States)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theoretic system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise, etc., that define the image gathering system. An edge detection algorithm is regarded as high-performing only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there is no common tool that can be used to evaluate the performance of the different algorithms and to guide the selection of the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, allowing us to compare different edge detection operators in a common environment.

  5. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express neurons or brain cortices based on biology and graph theory, and we develop an intra-modular network with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  6. Information theory perspective on network robustness

    International Nuclear Information System (INIS)

    Schieber, Tiago A.; Carpi, Laura; Frery, Alejandro C.; Rosso, Osvaldo A.; Pardalos, Panos M.; Ravetti, Martín G.

    2016-01-01

    A crucial challenge in network theory is the study of the robustness of a network facing a sequence of failures. In this work, we propose a dynamical definition of network robustness based on information theory that considers measurements of the structural changes caused by failures of the network's components. Failures are defined here as a temporal process given by a sequence. Robustness is then evaluated by measuring dissimilarities between topologies after each time step of the sequence, providing dynamical information about the topological damage. We thoroughly analyze the efficiency of the method in capturing small perturbations by considering different probability distributions on networks. In particular, we find that distributions based on distances are more consistent in capturing network structural deviations, as they better reflect the consequences of the failures. Theoretical examples and real networks are used to study the performance of this methodology. - Highlights: • A novel methodology to measure the robustness of a network to component failure or targeted attacks is proposed. • The use of the network's distance PDF allows a precise analysis. • The method provides a dynamic robustness profile showing the response of the topology to each failure event. • The measure is capable of detecting the network's critical elements.
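
    In the spirit of the distance-distribution measure described above, a sketch that quantifies topological damage as the Jensen-Shannon divergence between a network's shortest-path-length distributions before and after a failure; the graph, the cap on distances, and the divergence choice are ours.

```python
import numpy as np
import networkx as nx

# Empirical distribution of shortest-path lengths, capped at max_d.
def distance_pdf(G, max_d):
    counts = np.zeros(max_d + 1)
    for _, dists in nx.shortest_path_length(G):
        for d in dists.values():
            if d > 0:
                counts[min(d, max_d)] += 1
    return counts / counts.sum()

def js_divergence(p, q, eps=1e-12):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

G = nx.erdos_renyi_graph(60, 0.1, seed=1)
H = G.copy(); H.remove_node(0)               # one failure event
print(js_divergence(distance_pdf(G, 10), distance_pdf(H, 10)))
```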

  7. An information theory of image gathering

    Science.gov (United States)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.

  8. Cultural-Historical Activity Theory and Domain Analysis: Metatheoretical Implications for Information Science

    Science.gov (United States)

    Wang, Lin

    2013-01-01

    Background: Cultural-historical activity theory is an important theory in modern psychology. In recent years, it has drawn more attention from related disciplines including information science. Argument: This paper argues that activity theory and domain analysis which uses the theory as one of its bases could bring about some important…

  9. An improved algorithm for information hiding based on features of Arabic text: A Unicode approach

    Directory of Open Access Journals (Sweden)

    A.A. Mohamed

    2014-07-01

    Steganography is the art of hiding secret information in a cover medium so that other individuals fail to realize its existence. Due to the lack of data redundancy in text files in comparison with other carrier files, text steganography is a difficult problem to solve. In this paper, we propose a new, promising steganographic algorithm for Arabic text based on features of Arabic script. The focus is on a more secure algorithm and a high carrier capacity. Our extensive experiments using the proposed algorithm resulted in a high capacity of the carrier media; the embedding capacity ratio of the proposed algorithm is high. In addition, our algorithm can resist traditional attacking methods since it keeps the changes in the carrier text to a minimum.
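
    One well-known Arabic-text feature used for steganography is kashida (U+0640) insertion after connectable letters; the sketch below uses that technique for illustration, since the paper's exact feature set is not specified in the abstract.

```python
# Kashida-insertion text steganography sketch: the presence or absence of
# an elongation character after a connectable letter encodes one bit.
KASHIDA = "\u0640"
CONNECTABLE = set("بتثجحخسشصضطظعغفقكلمنهي")

def embed_bits(cover, bits):
    out, i = [], 0
    for ch in cover:
        out.append(ch)
        if ch in CONNECTABLE and i < len(bits):
            if bits[i] == "1":
                out.append(KASHIDA)      # kashida present encodes a 1
            i += 1
    return "".join(out)

stego = embed_bits("السلام عليكم", "101")
print(stego)
```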

  10. Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life

    KAUST Repository

    Zenil, Hector

    2018-02-18

    We demonstrate how to apply and exploit the concept of algorithmic information dynamics in the characterization and classification of dynamic and persistent patterns, motifs and colliding particles in, without loss of generality, Conway's Game of Life (GoL) cellular automaton as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced are an alternative to computable measures such as entropy and compression algorithms, which are often insensitive to small changes and to features of a non-statistical nature in the study of evolving complex systems and their emergent structures.

  11. Improving Accuracy of Dempster-Shafer Theory Based Anomaly Detection Systems

    Directory of Open Access Journals (Sweden)

    Ling Zou

    2014-07-01

    While the Dempster-Shafer theory of evidence has been widely used in anomaly detection, some issues remain. Dempster-Shafer theory trusts all pieces of evidence equally, which does not hold in a distributed-sensor anomaly detection system (ADS). Moreover, pieces of evidence are sometimes dependent on each other, which can lead to false alerts. We propose improvements incorporating two algorithms. A feature selection algorithm employs Gaussian graphical models to discover correlations among candidate features; a group of suitable ADSs is selected to detect, and the detection results are sent to the fusion engine. Information gain is applied to set a weight for every feature in the weight estimation algorithm. A weighted Dempster-Shafer combination of the detection results achieves better accuracy. We evaluate our detection prototype through a set of experiments conducted with the standard benchmark Wisconsin Breast Cancer Dataset and real Internet traffic. Evaluations on the Wisconsin Breast Cancer Dataset show that our prototype can find the correlations among nine features and improve the detection rate without affecting the false positive rate. Evaluations on Internet traffic show that the weight estimation algorithm can improve the detection performance significantly.
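
    The core fusion step is Dempster's rule of combination; a minimal sketch over the frame {Anomalous, Normal} (the masses are made up, and the abstract's per-sensor weighting would discount each mass before combining, which is not shown):

```python
# Dempster's rule: multiply masses of intersecting focal elements and
# renormalize by 1 - K, where K is the total conflicting mass.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two detectors reporting belief over {A: anomalous, N: normal}:
m_ads1 = {frozenset({"A"}): 0.7, frozenset({"A", "N"}): 0.3}
m_ads2 = {frozenset({"A"}): 0.6, frozenset({"N"}): 0.1, frozenset({"A", "N"}): 0.3}
print(dempster_combine(m_ads1, m_ads2))
```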

  12. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (≈3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
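
    A sketch of how a grade term enters a driving-intensity estimate: each segment adds (or refunds) roughly m·g·Δh of potential energy on top of a flat-road term; the constants below are typical values, not the thesis's calibrated model.

```python
# Energy estimate for one road segment, folding in elevation change:
# the grade term m*g*dh is positive uphill and negative downhill.
def segment_energy_wh(mass_kg, dist_m, elev_gain_m, flat_wh_per_m=0.15):
    g = 9.81
    grade_j = mass_kg * g * elev_gain_m       # potential-energy term in joules
    return flat_wh_per_m * dist_m + grade_j / 3600.0

# 1 km segment climbing 40 m with a 1600 kg PHEV:
print(segment_energy_wh(1600, 1000, 40))      # vs. ~150 Wh on the flat-road model
```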

  13. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  14. Finding an Information Concept Suited for a Universal Theory of Information

    DEFF Research Database (Denmark)

    Brier, Søren

    2015-01-01

    There is no conclusive evidence that the core of reality across nature, culture, life and mind is purely either mathematical, logical or of a computational nature. Therefore the core of the information concept should not be based only on pure logical or mathematical rationality. We need to include interpretation... A definition of information in a transdisciplinary theory cannot be ‘objective’, but has to be relativized in relation to the receiver's knowledge, as also proposed by Floridi. It is difficult to produce a quantitative statement independently of a qualitative analysis based on some sort of relation to the human...

  15. Information theory of open fragmenting systems

    International Nuclear Information System (INIS)

    Gulminelli, F.; Juillet, O.; Chomaz, Ph.; Ison, M. J.; Dorso, C. O.

    2007-01-01

    An information theory description of finite systems explicitly evolving in time is presented. We impose a MaxEnt variational principle on the Shannon entropy at a given time while the constraints are set at a former time. The resulting density matrix contains explicit time odd components in the form of collective flows. As a specific application we consider the dynamics of the expansion in connection with heavy ion experiments. Lattice gas and classical molecular dynamics simulations are shown

  16. Turing’s algorithmic lens: From computability to complexity theory

    Directory of Open Access Journals (Sweden)

    Díaz, Josep

    2013-12-01

    The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in non-deterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing’s way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is. (Translated from the Spanish:) The decidability question, that is, whether it is possible to demonstrate computationally that a mathematical expression is true or false, was posed by Hilbert and remained open until Turing answered it in the negative. With the undecidability of mathematics established, efforts in theoretical computer science centred on the study of the computational complexity of decidable problems. In this article we present a brief introduction to the classes P (problems solvable in polynomial time) and NP (problems solvable non-deterministically in polynomial time), while setting out the difficulty of establishing whether P = NP and the consequences that would follow if the two classes of problems were equal. This question has implications not only in the fields of computer science, mathematics and physics, but also for biology, sociology and economics. The seminal idea behind the study of computational complexity is a direct consequence of the way Turing approached problems in different domains through...

  17. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter; Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald

    2014-01-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335–1353; Mach. Learn. 66 (2007) 209–242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of the parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  19. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  20. Immersive Algorithms: Better Visualization with Less Information

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li

    2017-01-01

    Visualizing algorithms through drawings, slideshow presentations, animations, videos, and software tools is a key technique to enhance and support student learning. A typical visualization of an algorithm shows the data and then performs computations on the data. For instance, a standard visualization...

  2. Using institutional theory with sensemaking theory: a case study of information system implementation in healthcare

    DEFF Research Database (Denmark)

    Jensen, Tina Blegind; Kjærgaard, Annemette; Svejvig, Per

    2009-01-01

    Institutional theory has proven to be a central analytical perspective for investigating the role of social and historical structures in information systems (IS) implementation. However, it does not explicitly account for how organisational actors make sense of and enact technologies in their local context. We address this limitation by exploring the potential of using institutional theory with sensemaking theory to study IS implementation in organisations. We argue that each theoretical perspective has its own explanatory power and that a combination of the two facilitates a much richer interpretation of IS implementation by linking macro- and micro-levels of analysis. To illustrate this, we report from an empirical study of the implementation of an Electronic Patient Record (EPR) system in a clinical setting. Using key constructs from the two theories, our findings address the phenomenon...

  3. Search algorithms, hidden labour and information control

    Directory of Open Access Journals (Sweden)

    Paško Bilić

    2016-06-01

    The paper examines some of the processes of the closely knit relationship between Google’s ideologies of neutrality and objectivity and its global market dominance. The construction of neutrality is an important element sustaining the company’s economic position and is reflected in constant updates, estimates, and changes to the utility and relevance of search results. Providing a purely technical solution to these issues proves increasingly difficult without a human hand in steering algorithmic solutions. Search relevance fluctuates and shifts through continuous tinkering and tweaking of the search algorithm. The company also uses third parties to hire human raters to perform quality assessments of algorithmic updates and adaptations in linguistically and culturally diverse global markets. The adaptation process contradicts the technical foundations of the company and calculations based on the initial PageRank algorithm. Annual market reports, Google’s Search Quality Rating Guidelines, and reports from media specialising in the search engine optimisation business are analysed. The Search Quality Rating Guidelines document provides a rare glimpse into the internal architecture of search algorithms and the notions of utility and relevance, which are presented and structured as neutral and objective. Intertwined layers of ideology, hidden labour of human raters, advertising revenues, market dominance, and control are discussed throughout the paper.

  4. Planting contemporary practice theory in the garden of information science

    NARCIS (Netherlands)

    Huizing, A.; Cavanagh, M.

    2011-01-01

    Introduction. The purpose of this paper is to introduce to information science in a coherent fashion the core premises of contemporary practice theory, and thus to engage the information research community in further debate and discussion. Method. Contemporary practice-based approaches are

  5. Biologically inspired information theory: Adaptation through construction of external reality models by living systems.

    Science.gov (United States)

    Nakajima, Toshiyuki

    2015-12-01

    Higher animals act in the world using their external reality models to cope with the uncertain environment. Organisms that have not developed such information-processing organs may also have external reality models built in the form of their biochemical, physiological, and behavioral structures, acquired by natural selection through successful models constructed internally. Organisms subject to illusions would fail to survive in the material universe. How can organisms, or living systems in general, determine the external reality from within? This paper starts with a phenomenological model, in which the self constitutes a reality model developed through the mental processing of phenomena. Then, the it-from-bit concept is formalized using a simple mathematical model. For this formalization, my previous work on an algorithmic process is employed to constitute symbols referring to the external reality, called the inverse causality, with additional improvements to the previous work. Finally, as an extension of this model, the cognizers system model is employed to describe the self as one of many material entities in a world, each of which acts as a subject by responding to the surrounding entities. This model is used to propose a conceptual framework of information theory that can deal with both the qualitative (semantic) and quantitative aspects of the information involved in biological processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. An algorithm for gluinos on the lattice

    International Nuclear Information System (INIS)

    Montvay, I.

    1995-10-01

    Lüscher's local bosonic algorithm for Monte Carlo simulations of quantum field theories with fermions is applied to the simulation of a possibly supersymmetric Yang-Mills theory with a Majorana fermion in the adjoint representation. Combined with a correction step in a two-step polynomial approximation scheme, the obtained algorithm seems to be promising and could be competitive with more conventional algorithms based on discretized classical ("molecular dynamics") equations of motion. The application of the considered polynomial approximation scheme to optimized hopping parameter expansions is also discussed. (orig.)

  7. Grounded theory for radiotherapy practitioners: Informing clinical practice

    International Nuclear Information System (INIS)

    Walsh, N.A.

    2010-01-01

    Radiotherapy practitioners may be best placed to undertake qualitative research within the context of cancer, due to specialist knowledge of radiation treatment and sensitivity to radiotherapy patient's needs. The grounded theory approach to data collection and analysis is a unique method of identifying a theory directly based on data collected within a clinical context. Research for radiotherapy practitioners is integral to role expansion within the government's directive for evidence-based practice. Due to the paucity of information on qualitative research undertaken by radiotherapy radiographers, this article aims to assess the potential impact of qualitative research on radiotherapy patient and service outcomes.

  8. Product-oriented design theory for digital information services: A literature review.

    NARCIS (Netherlands)

    Wijnhoven, Alphonsus B.J.M.; Kraaijenbrink, Jeroen

    2008-01-01

    Purpose – The purpose of this paper is to give a structured literature review, design concepts, and research propositions related to a product-oriented design theory for information services. Information services facilitate the exchange of information goods with or without transforming these goods.

  9. Critical theory as an approach to the ethics of information security.

    Science.gov (United States)

    Stahl, Bernd Carsten; Doherty, Neil F; Shaw, Mark; Janicke, Helge

    2014-09-01

    Information security can be of high moral value. It can equally be used for immoral purposes and have undesirable consequences. In this paper we suggest that critical theory can facilitate a better understanding of possible ethical issues and can provide support when finding ways of addressing them. The paper argues that critical theory has intrinsic links to ethics and that it is possible to identify concepts frequently used in critical theory to pinpoint ethical concerns. Using the example of UK electronic medical records the paper demonstrates that a critical lens can highlight issues that traditional ethical theories tend to overlook. These are often linked to collective issues such as social and organisational structures, which philosophical ethics with its typical focus on the individual does not tend to emphasise. The paper suggests that this insight can help in developing ways of researching and innovating responsibly in the area of information security.

  10. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favored choices for the diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact-contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed, and the mean CT number of water improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact-contaminated CT images.
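
    A sketch of the model-image step: cluster the CT values into a few tissue classes with a simple k-means and replace each pixel by its class mean. The spatial-information term and the projection-domain blending described above are omitted here, and the cluster count is an assumption.

```python
import numpy as np

# 1-D k-means on pixel intensities: assign each pixel to its nearest
# class center, update centers, and rebuild the piecewise-constant model.
def model_image(img, k=3, iters=20):
    flat = img.reshape(-1).astype(float)
    centers = np.percentile(flat, np.linspace(5, 95, k))   # percentile init
    for _ in range(iters):
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = flat[labels == j].mean()
    return centers[labels].reshape(img.shape)

phantom = np.random.normal(0, 30, (64, 64))   # toy "water" image in HU
model = model_image(phantom, k=2)
```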

  11. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  12. A demand response modeling for residential consumers in smart grid environment using game theory based energy scheduling algorithm

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-06-01

    In this paper, a demand response modeling scheme is proposed for residential consumers using a game theory algorithm, the Generalized Tit for Tat (GTFT) Dominant Game based Energy Scheduler. The methodology is established as a workflow domain model between the utility and the user within the smart grid framework. It exhibits an algorithm which schedules load usage by creating several possible tariffs for consumers such that demand is never raised. This can be done both individually and among multiple users of a community. The uniqueness of the proposed demand response is that the tariff is calculated for all hours, and the load during the peak hours that can be rescheduled is shifted based on the Peak Average Ratio. To demonstrate the viability of the approach, simulation results for a general case of three domestic consumers are modeled and extended to a comparative performance evaluation against other algorithms, and the inference is analyzed.
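
    A sketch of Peak-Average-Ratio (PAR) driven load shifting, in which shiftable peak-hour load moves to the cheapest hours; the tariff, load profile, and stopping rule below are illustrative, not the GTFT scheduler itself.

```python
# Move shiftable load out of the peak hour into the cheapest hours, capped
# so the peak is not pushed below the average; return the new PAR.
def shift_loads(load, shiftable, tariff):
    load = list(load)
    peak = max(range(len(load)), key=lambda h: load[h])
    for h in sorted(range(len(load)), key=lambda h: tariff[h]):
        if shiftable[peak] <= 0:
            break
        if h == peak:
            continue
        move = min(shiftable[peak], load[peak] - sum(load) / len(load))
        load[peak] -= move; load[h] += move; shiftable[peak] -= move
    return load, max(load) / (sum(load) / len(load))

hours = [2.0, 2.5, 6.0, 3.0]              # kWh per 6-hour block
new_load, par = shift_loads(hours, [0, 0, 2.0, 0], tariff=[0.08, 0.10, 0.30, 0.12])
print(new_load, par)
```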

  13. A trust evaluation algorithm for wireless sensor networks based on node behaviors and D-S evidence theory.

    Science.gov (United States)

    Feng, Renjian; Xu, Xiaofeng; Zhou, Xiang; Wan, Jiangwen

    2011-01-01

    For wireless sensor networks (WSNs), many factors, such as mutual interference of wireless links, battlefield applications, and nodes exposed to the environment without good physical protection, make the sensor nodes more vulnerable to attack and compromise. In order to address this network security problem, a novel trust evaluation algorithm defined as NBBTE (Node Behavioral Strategies Banding Belief Theory of the Trust Evaluation Algorithm) is proposed, which integrates the approach of node behavioral strategies and modified evidence theory. According to the behaviors of sensor nodes, a variety of trust factors and coefficients related to the network application are established to obtain direct and indirect trust values through a weighted average of trust factors. Meanwhile, the fuzzy set method is applied to form the basic input vector of evidence. On this basis, the evidence difference between the indirect and direct trust values is calculated and fed into the revised D-S evidence combination rule to synthesize the integrated trust value of each node. The simulation results show that NBBTE can effectively identify malicious nodes and reflects the characteristic that trust is 'hard to acquire and easy to lose'. Furthermore, the proposed scheme has an outstanding advantage in terms of illustrating the real contribution of different nodes to trust evaluation.
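
    For reference, a minimal sketch of the classical Dempster-Shafer combination rule for two mass functions over a small frame of discernment; the paper's revised rule differs in how it handles conflict, so this only illustrates the baseline operation, and the trust masses below are stand-in values.

```python
# Sketch of the classical Dempster-Shafer combination rule for two
# basic-probability-assignment (mass) functions. Focal elements are
# frozensets over the frame of discernment; conflict is renormalized.
from itertools import product

def ds_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb  # mass falling on the empty set
    k = 1.0 - conflict           # classical renormalization factor
    return {s: v / k for s, v in combined.items()}

T, U = frozenset({'trusted'}), frozenset({'untrusted'})
both = T | U                     # total ignorance
m_direct = {T: 0.6, U: 0.1, both: 0.3}
m_indirect = {T: 0.5, U: 0.2, both: 0.3}
print(ds_combine(m_direct, m_indirect))
```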

  14. AUTHENTICATION ALGORITHM FOR PARTICIPANTS OF INFORMATION INTEROPERABILITY IN PROCESS OF OPERATING SYSTEM REMOTE LOADING ON THIN CLIENT

    Directory of Open Access Journals (Sweden)

    Y. A. Gatchin

    2016-05-01

    Full Text Available Subject of Research. This paper presents a solution to the authentication problem for all components of information interoperability in the process of operating system network loading on a thin client from a terminal server. System Definition. In the proposed solution, the operating system integrity check is performed by a hardware-software module, including a USB token with protected memory for secure storage of cryptographic keys and a loader. The key requirement for the solution is mutual authentication of four participants: terminal server, thin client, token and user. We have created two algorithms for the problem solution. The first of the designed algorithms compares the encrypted one-time password (random number) with the reference value stored in the memory of the token and updates this number in case of successful authentication. The second algorithm uses the public and private keys of the token and the server. As a result of cryptographic transformation, participants are authenticated and a secure channel is formed between the token, thin client and terminal server. Main Results. Additional research was carried out to find out if the designed algorithms meet the necessary requirements. Criteria used included applicability in a multi-access terminal system architecture, potential threats evaluation and overall system security. According to the analysis results, it is recommended to use the algorithm based on PKI due to its high scalability and usability. A high level of data security is achieved through asymmetric cryptography, with the guarantee that participants' private keys are never sent during the authentication process. Practical Relevance. The designed PKI-based algorithm allows solving the problem with cryptographic algorithms compliant with the state standard, even in the absence of such a standard for asymmetric cryptography. Thus, it can be applied in State Information Systems with increased information security requirements.
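
    A minimal sketch of the first scheme's one-time-password check, assuming an HMAC-based keyed transform in place of the paper's unspecified state-standard cipher; the class name and key handling are illustrative.

```python
# Sketch of the one-time-password scheme: the token stores a reference
# random number, the authenticating side proves knowledge of the shared
# key by returning a keyed digest of it, and the reference is rotated
# on success. HMAC-SHA256 stands in for the unspecified standard cipher.
import hmac, hashlib, secrets

class Token:
    def __init__(self, shared_key: bytes):
        self._key = shared_key                     # protected token memory
        self.reference = secrets.token_bytes(16)   # current one-time value

    def verify_and_rotate(self, response: bytes) -> bool:
        expected = hmac.new(self._key, self.reference,
                            hashlib.sha256).digest()
        if hmac.compare_digest(expected, response):
            self.reference = secrets.token_bytes(16)  # rotate on success
            return True
        return False

key = secrets.token_bytes(32)
token = Token(key)
challenge = token.reference  # transmitted to the authenticating side
answer = hmac.new(key, challenge, hashlib.sha256).digest()
assert token.verify_and_rotate(answer)
```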

  15. Prolegomena to a theory of nuclear information exchange

    International Nuclear Information System (INIS)

    Van Nuffelen, Dominique

    1997-01-01

    From the researcher's point of view, communication with agricultural populations in case of radiological emergency cannot be anything other than the application of a theory of nuclear information exchange among social groups. Consequently, it is essential to work out such a theory, the prolegomena of which are exposed in this paper. It describes an experiment conducted at the 'Service de protection contre les radiations ionisantes' - Belgium (SPRI), and proposes a survey of the scientific knowledge on this matter. The available empirical and theoretical data allow formulating pragmatic recommendations, the principal one being the necessity of creating, in a normal radiological situation, a number of message scenarios adapted to the agricultural populations. The author points out that, in order to be perfectly adapted, these scenarios must be negotiated between the emitter and the receiver. If this condition is satisfied, the information in case of nuclear emergency will really be an exchange of knowledge between experts and the agricultural population, i.e. a 'communication'

  16. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

    Science.gov (United States)

    Impelluso, Thomas J

    2003-06-01

    An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength; a common form of this relation is noted below. The code predicts the same general density distributions and local orthotropy as observed in reality.
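
    The cellular-solids relation invoked here is commonly written as a power law; the exact exponent used by the paper is not stated, so the form below (after Gibson and Ashby) is an assumption:

```latex
% Gibson-Ashby style scaling for cellular solids: effective modulus E
% relative to the solid-phase modulus E_s scales with relative density.
E(\rho) = E_s \left( \frac{\rho}{\rho_s} \right)^{n},
\qquad n \approx 2 \ \text{for open-cell foams}
```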

  17. Approach to estimation of level of information security at enterprise based on genetic algorithm

    Science.gov (United States)

    V, Stepanov L.; V, Parinov A.; P, Korotkikh L.; S, Koltsov A.

    2018-05-01

    In the article, a way of formalizing different types of information security threats and vulnerabilities of an enterprise information system is considered. Given the complexity of ensuring information security for any newly organized system, integrated concepts and decisions in the sphere of information security are expedient. One such approach is the method of a genetic algorithm. For enterprises in any field of activity, the question of a complex estimation of the level of security of information systems, taking into account the quantitative and qualitative factors characterizing the components of information security, is relevant.
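
    A minimal sketch of how a genetic algorithm might search for factor weights in such an aggregate security score; the fitness function, encoding, expert ratings and reference level are illustrative assumptions, not the authors' formulation.

```python
# Sketch: a genetic algorithm evolving weights for a composite security
# score. The expert ratings and the fitness target are stand-in data.
import random

FACTORS = 5                           # e.g. threats, vulnerabilities, ...
ratings = [0.8, 0.4, 0.6, 0.9, 0.5]   # assumed expert ratings per factor
target = 0.7                          # assumed reference security level

def fitness(w):
    score = sum(wi * ri for wi, ri in zip(w, ratings)) / sum(w)
    return -abs(score - target)       # closer to the reference is better

def evolve(pop_size=40, gens=100, mut=0.1):
    pop = [[random.random() for _ in range(FACTORS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, FACTORS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:             # per-child mutation
                child[random.randrange(FACTORS)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```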

  18. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, and more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  19. Nonequilibrium thermodynamics and information theory: basic concepts and relaxing dynamics

    Science.gov (United States)

    Altaner, Bernhard

    2017-11-01

    Thermodynamics is based on the notions of energy and entropy. While energy is the elementary quantity governing physical dynamics, entropy is the fundamental concept in information theory. In this work, starting from first principles, we give a detailed didactic account of the relations between energy and entropy, and thus between physics and information theory. We show that thermodynamic process inequalities, like the second law, are equivalent to the requirement that an effective description for physical dynamics is strongly relaxing. From the perspective of information theory, strongly relaxing dynamics govern the irreversible convergence of a statistical ensemble towards the maximally non-committal probability distribution that is compatible with thermodynamic equilibrium parameters. In particular, Markov processes that converge to a thermodynamic equilibrium state are strongly relaxing. Our framework generalizes previous results to arbitrary open and driven systems, yielding novel thermodynamic bounds for idealized and real processes. This article is part of a J. Phys. A issue featuring invited work from the best early-career researchers working within the scope of the journal, part of the Journal of Physics series' 50th anniversary celebrations in 2017; Bernhard Altaner was selected by the Editorial Board of J. Phys. A as an Emerging Talent.

  20. Hesitant fuzzy sets theory

    CERN Document Server

    Xu, Zeshui

    2014-01-01

    This book provides the readers with a thorough and systematic introduction to hesitant fuzzy theory. It presents the most recent research results and advanced methods in the field. These include: hesitant fuzzy aggregation techniques, hesitant fuzzy preference relations, hesitant fuzzy measures, hesitant fuzzy clustering algorithms and hesitant fuzzy multi-attribute decision making methods. Since its introduction by Torra and Narukawa in 2009, hesitant fuzzy sets have become more and more popular and have been used for a wide range of applications, from decision-making problems to cluster analysis, from medical diagnosis to personnel appraisal and information retrieval. This book offers a comprehensive report on the state-of-the-art in hesitant fuzzy sets theory and applications, aiming at becoming a reference guide for both researchers and practitioners in the area of fuzzy mathematics and other applied research fields (e.g. operations research, information science, management science and engineering) chara...

  1. A Refined Self-Tuning Filter-Based Instantaneous Power Theory Algorithm for Indirect Current Controlled Three-Level Inverter-Based Shunt Active Power Filters under Non-sinusoidal Source Voltage Conditions

    Directory of Open Access Journals (Sweden)

    Yap Hoon

    2017-02-01

    Full Text Available In this paper, a refined reference current generation algorithm based on instantaneous power (pq) theory is proposed for operation of an indirect current controlled (ICC) three-level neutral-point diode clamped (NPC) inverter-based shunt active power filter (SAPF) under non-sinusoidal source voltage conditions. The SAPF is recognized as one of the most effective solutions to current harmonics due to its flexibility in dealing with various power system conditions. As for its controller, pq theory has widely been applied to generate the desired reference current due to its simple implementation features. However, the conventional dependency on a self-tuning filter (STF) in generating the reference current has significantly limited the mitigation performance of the SAPF. Besides, the conventional STF-based pq theory algorithm still possesses needless features which increase computational complexity. Furthermore, the conventional algorithm is mostly designed to suit operation of direct current controlled (DCC) SAPFs, which are incapable of handling switching ripple problems, thereby leading to inefficient mitigation performance. Therefore, three main improvements are performed which include replacement of the STF with a mathematical-based fundamental real power identifier, removal of redundant features, and generation of a sinusoidal reference current. To validate the effectiveness and feasibility of the proposed algorithm, simulation work in MATLAB-Simulink and laboratory tests utilizing a TMS320F28335 digital signal processor (DSP) are performed. Both simulation and experimental findings demonstrate the superiority of the proposed algorithm over the conventional algorithm.
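
    For orientation, a minimal sketch of the baseline pq-theory computation (Clarke transform followed by instantaneous real and imaginary power); this is the textbook form only, and the paper's refined fundamental-power identifier is not reproduced here.

```python
# Sketch of the baseline instantaneous power (pq) computation: Clarke
# (abc -> alpha-beta) transform, then instantaneous real power p and
# imaginary power q, under a common sign convention.
import numpy as np

# Power-invariant Clarke transform matrix (abc -> alpha-beta).
CLARKE = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0],
])

def instantaneous_pq(v_abc, i_abc):
    v_ab = CLARKE @ v_abc   # shape (2, N): v_alpha, v_beta
    i_ab = CLARKE @ i_abc
    p = v_ab[0] * i_ab[0] + v_ab[1] * i_ab[1]  # instantaneous real power
    q = v_ab[0] * i_ab[1] - v_ab[1] * i_ab[0]  # instantaneous imaginary power
    return p, q

t = np.linspace(0.0, 0.04, 800)
w = 2.0 * np.pi * 50.0
v = np.stack([np.sin(w * t + k * 2.0 * np.pi / 3.0) for k in (0, -1, 1)])
i = np.stack([np.sin(w * t + k * 2.0 * np.pi / 3.0 - 0.3) for k in (0, -1, 1)])
print(instantaneous_pq(v, i)[0].mean())
```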

  2. Decrease of Fisher information and the information geometry of evolution equations for quantum mechanical probability amplitudes.

    Science.gov (United States)

    Cafaro, Carlo; Alsing, Paul M

    2018-04-01

    The relevance of the concept of Fisher information is increasing in both statistical physics and quantum computing. From a statistical mechanical standpoint, the application of Fisher information in the kinetic theory of gases is characterized by its decrease along the solutions of the Boltzmann equation for Maxwellian molecules in the two-dimensional case. From a quantum mechanical standpoint, the output state in Grover's quantum search algorithm follows a geodesic path obtained from the Fubini-Study metric on the manifold of Hilbert-space rays. Additionally, Grover's algorithm is specified by constant Fisher information. In this paper, we present an information geometric characterization of the oscillatory or monotonic behavior of statistically parametrized squared probability amplitudes originating from special functional forms of the Fisher information function: constant, exponential decay, and power-law decay. Furthermore, for each case, we compute both the computational speed and the availability loss of the corresponding physical processes by exploiting a convenient Riemannian geometrization of useful thermodynamical concepts. Finally, we briefly comment on the possibility of using the proposed methods of information geometry to help identify a suitable trade-off between speed and thermodynamic efficiency in quantum search algorithms.
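
    For reference, the Fisher information of a statistically parametrized family p(x|θ), the central quantity in such analyses, takes the standard textbook form (not a result of the paper):

```latex
% Fisher information of a statistically parametrized family p(x | theta).
\mathcal{F}(\theta) = \int p(x \mid \theta)
  \left( \frac{\partial \ln p(x \mid \theta)}{\partial \theta} \right)^{2} dx
```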

  4. Informational and linguistic analysis of large genomic sequence collections via efficient Hadoop cluster algorithms.

    Science.gov (United States)

    Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele

    2018-06-01

    Information theoretic and compositional/linguistic analysis of genomes have a central role in bioinformatics, even more so since the associated methodologies are becoming very valuable also for epigenomic and meta-genomic studies. The kernel of those methods is based on the collection of k-mer statistics, i.e. how many times each k-mer in {A,C,G,T}^k occurs in a DNA sequence. Although this problem is computationally very simple and efficiently solvable on a conventional computer, the sheer amount of data now available in applications demands resort to parallel and distributed computing. Indeed, algorithms of this type have been developed to collect k-mer statistics in the realm of genome assembly. However, they are so specialized to this domain that they do not extend easily to the computation of informational and linguistic indices, concurrently on sets of genomes. Following the approach, well established in many disciplines and increasingly successful in bioinformatics, of resorting to MapReduce and Hadoop to deal with 'Big Data' problems, we present KCH, the first set of MapReduce algorithms able to perform concurrently informational and linguistic analysis of large collections of genomic sequences on a Hadoop cluster. The benchmarking of KCH that we provide indicates that it is quite effective and versatile. It is also competitive with respect to the parallel and distributed algorithms highly specialized to k-mer statistics collection for genome assembly problems. In conclusion, KCH is a much needed addition to the growing number of algorithms and tools that use MapReduce for bioinformatics core applications. The software, including instructions for running it over Amazon AWS, as well as the datasets, are available at http://www.di-srv.unisa.it/KCH. umberto.ferraro@uniroma1.it. Supplementary data are available at Bioinformatics online.
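
    The underlying kernel is easy to state on a single machine; a minimal sketch of k-mer counting and one derived informational index (the empirical Shannon entropy of the k-mer distribution), which a KCH-style MapReduce system distributes across a cluster:

```python
# Sketch of the single-machine kernel that KCH-style systems distribute:
# count k-mer occurrences, then derive an informational index (here the
# empirical Shannon entropy of the k-mer distribution, in bits).
from collections import Counter
from math import log2

def kmer_counts(seq, k):
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1)
                   if set(seq[i:i + k]) <= set('ACGT'))

def kmer_entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

counts = kmer_counts('ACGTACGTGGTTACGT', k=3)
print(counts.most_common(3), round(kmer_entropy(counts), 3))
```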

  5. Graph theory

    CERN Document Server

    Gould, Ronald

    2012-01-01

    This introduction to graph theory focuses on well-established topics, covering primary techniques and including both algorithmic and theoretical problems. The algorithms are presented with a minimum of advanced data structures and programming details. This thoroughly corrected 1988 edition provides insights to computer scientists as well as advanced undergraduates and graduate students of topology, algebra, and matrix theory. Fundamental concepts and notation and elementary properties and operations are the first subjects, followed by examinations of paths and searching, trees, and networks. S

  6. Integrating soil information into canopy sensor algorithms for improved corn nitrogen rate recommendation

    Science.gov (United States)

    Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...

  7. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow
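
    For reference, the majorization relation invoked here is the standard one (not specific to this paper): a probability vector x is majorized by y, written x ≺ y, when the sorted partial sums satisfy

```latex
% x is majorized by y (x \prec y): with components sorted in decreasing
% order, the partial sums of y dominate those of x, with equal totals.
\sum_{i=1}^{k} x_i^{\downarrow} \le \sum_{i=1}^{k} y_i^{\downarrow}
\quad (k = 1, \dots, d-1), \qquad
\sum_{i=1}^{d} x_i^{\downarrow} = \sum_{i=1}^{d} y_i^{\downarrow} = 1
```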

  8. Using Information Theory to Assess the Communicative Capacity of Circulating MicroRNA

    OpenAIRE

    Finn, Nnenna A.; Searles, Charles D.

    2013-01-01

    The discovery of extracellular microRNAs (miRNAs) and their transport modalities (i.e. microparticles, exosomes, proteins and lipoproteins) has sparked theories regarding their role in intercellular communication. Here, we assessed the information transfer capacity of different miRNA transport modalities in human serum by utilizing basic principles of information theory. Zipf Statistics were calculated for each of the miRNA transport modalities identified in human serum. Our analyses revealed...

  9. Information theory, animal communication, and the search for extraterrestrial intelligence

    Science.gov (United States)

    Doyle, Laurance R.; McCowan, Brenda; Johnston, Simon; Hanser, Sean F.

    2011-02-01

    We present ongoing research in the application of information theory to animal communication systems with the goal of developing additional detectors and estimators for possible extraterrestrial intelligent signals. Regardless of the species, for intelligence (i.e., complex knowledge) to be transmitted certain rules of information theory must still be obeyed. We demonstrate some preliminary results of applying information theory to socially complex marine mammal species (bottlenose dolphins and humpback whales) as well as arboreal squirrel monkeys, because they almost exclusively rely on vocal signals for their communications, producing signals which can be readily characterized by signal analysis. Metrics such as Zipf's Law and higher-order information-entropic structure are emerging as indicators of the communicative complexity characteristic of an "intelligent message" content within these animals' signals, perhaps not surprising given these species' social complexity. In addition to human languages, for comparison we also apply these metrics to pulsar signals—perhaps (arguably) the most "organized" of stellar systems—as an example of astrophysical systems that would have to be distinguished from an extraterrestrial intelligence message by such information theoretic filters. We also look at a message transmitted from Earth (Arecibo Observatory) that contains a lot of meaning but little information in the mathematical sense we define it here. We conclude that the study of non-human communication systems on our own planet can make a valuable contribution to the detection of extraterrestrial intelligence by providing quantitative general measures of communicative complexity. Studying the complex communication systems of other intelligent species on our own planet may also be one of the best ways to deprovincialize our thinking about extraterrestrial communication systems in general.
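
    A minimal sketch of the Zipf's-law diagnostic mentioned here: rank the signal's symbol frequencies and fit the log-log slope, with values near -1 characteristic of human-language-like structure; the toy symbol stream is an illustrative assumption.

```python
# Sketch of a Zipf's-law diagnostic: sort symbol frequencies by rank and
# fit the slope of log(frequency) vs. log(rank). Human languages tend to
# show a slope near -1; the toy "signal" below is a stand-in.
import numpy as np
from collections import Counter

def zipf_slope(symbols):
    freqs = np.array(sorted(Counter(symbols).values(), reverse=True),
                     dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope

signal = list('abababcabdabacbaabcadaeabaabbcab')  # stand-in symbol stream
print(round(zipf_slope(signal), 2))
```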

  10. An application of information theory to stochastic classical gravitational fields

    Science.gov (United States)

    Angulo, J.; Angulo, J. C.; Angulo, J. M.

    2018-06-01

    The objective of this study lies in the incorporation of concepts developed in information theory (entropy, complexity, etc.) with the aim of quantifying the variation of the uncertainty associated with a stochastic physical system resident in a spatiotemporal region. As an example of application, a relativistic classical gravitational field has been considered, with a stochastic behavior resulting from the effect induced by one or several external perturbation sources. One of the key concepts of the study is the covariance kernel between two points within the chosen region. Using this concept and appropriate criteria, a methodology is proposed to evaluate the change of uncertainty at a given spatiotemporal point, based on available information and efficiently applying the diverse methods that information theory provides. For illustration, a stochastic version of the Einstein equation with an added Gaussian Langevin term is analyzed.

  11. Informal Risk Perceptions and Formal Theory

    International Nuclear Information System (INIS)

    Cayford, Jerry

    2001-01-01

    Economists have argued persuasively that our goals are wider than just risk minimization, and that they include a prudent weighing of costs and benefits. This economic line of thought recognizes that our policy goals are complex. As we widen the range of goals we are willing to entertain, though, we need to check that the methods we customarily employ are appropriate for the tasks to which we customarily apply them. This paper examines some economic methods of risk assessment, in light of the question of what our policy goals are and should be. Once the question of goals is open, more complexities than just cost intrude: what the public wants and why begs to be addressed. This leads us to the controversial issue of public risk perceptions. We have now examined a number of procedures that experts use to make public policy decisions. Behind all these issues is always the question of social welfare: what actions can we take, what policies should we embrace, to make the world a better place? In many cases, the public and the experts disagree about what the right choice is. In the first section, we saw a possible defense of the experts based on democratic theory: the people's participation, and even their will, can be legitimately set aside in the pursuit of their true interests. If this defense is to work, a great deal of weight rests on the question of the people's interests and the competence and integrity of the experts' pursuit of it. But at the same time, social preferences are ill-defined, and so are not good candidates for rational actor theory. Both the prescriptive legitimacy claim and the very workings of formal theory we have seen to depend on informal, qualitative, political judgments. Unfortunately, we have also seen a steady pattern of expert reliance on technical procedures even when they were manifestly unsuited to the task. The experts seem so intent on excluding informal thought that they would prefer even a bad quantitative process to a qualitative

  12. Informal Risk Perceptions and Formal Theory

    Energy Technology Data Exchange (ETDEWEB)

    Cayford, Jerry [Resources for the Future, Washington, DC (United States)

    2001-07-01

    Economists have argued persuasively that our goals are wider than just risk minimization, and that they include a prudent weighing of costs and benefits. This economic line of thought recognizes that our policy goals are complex. As we widen the range of goals we are willing to entertain, though, we need to check that the methods we customarily employ are appropriate for the tasks to which we customarily apply them. This paper examines some economic methods of risk assessment, in light of the question of what our policy goals are and should be. Once the question of goals is open, more complexities than just cost intrude: what the public wants and why begs to be addressed. This leads us to the controversial issue of public risk perceptions. We have now examined a number of procedures that experts use to make public policy decisions. Behind all these issues is always the question of social welfare: what actions can we take, what policies should we embrace, to make the world a better place? In many cases, the public and the experts disagree about what the right choice is. In the first section, we saw a possible defense of the experts based on democratic theory: the people's participation, and even their will, can be legitimately set aside in the pursuit of their true interests. If this defense is to work, a great deal of weight rests on the question of the people's interests and the competence and integrity of the experts' pursuit of it. But at the same time, social preferences are ill-defined, and so are not good candidates for rational actor theory. Both the prescriptive legitimacy claim and the very workings of formal theory we have seen to depend on informal, qualitative, political judgments. Unfortunately, we have also seen a steady pattern of expert reliance on technical procedures even when they were manifestly unsuited to the task. The experts seem so intent on excluding informal thought that they would prefer even a bad quantitative process to

  13. Actor Network Theory Approach and its Application in Investigating Agricultural Climate Information System

    Directory of Open Access Journals (Sweden)

    Maryam Sharifzadeh

    2013-03-01

    Full Text Available Actor network theory, as a qualitative approach to studying complex social factors and processes of socio-technical interaction, provides new concepts and ideas to understand the socio-technical nature of information systems. From the actor network theory viewpoint, an agricultural climate information system is a network consisting of actors, actions and information related processes (production, transformation, storage, retrieval, integration, diffusion and utilization, control and management), and system mechanisms (interfaces and networks). Analysis of such systems embodies the identification of the basic components and structure of the system (nodes - the different sources of information production, extension, and users), and the understanding of how successfully the system works (interaction and links), in order to promote climate knowledge content and improve system performance to reach agricultural development. The present research attempted to introduce actor network theory as a research framework based on a network view of agricultural climate information systems.

  14. Information Processing Theories and the Education of the Gifted.

    Science.gov (United States)

    Rawl, Ruth K.; O'Tuel, Frances S.

    1983-01-01

    The basic assumptions of information processing theories in cognitive psychology are reviewed, and the application of this approach to problem solving in gifted education is considered. Specific implications are cited on problem selection and instruction giving. (CL)

  15. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

    Science.gov (United States)

    D'Onofrio, David J; Abel, David L; Johnson, Donald E

    2012-03-14

    The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  16. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems

    Directory of Open Access Journals (Sweden)

    D'Onofrio David J

    2012-03-01

    Full Text Available Abstract The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  17. Application of fuzzy C-Means Algorithm for Determining Field of Interest in Information System Study STTH Medan

    Science.gov (United States)

    Rahman Syahputra, Edy; Agustina Dalimunthe, Yulia; Irvan

    2017-12-01

    Many students are confused when choosing their own field of specialization, and ultimately choose areas that do not match their competencies for a variety of reasons, such as simply following a friend or picking from many options without knowing whether they have the competencies the chosen field of interest requires. This research aims to apply a clustering method with the Fuzzy C-Means algorithm to classify students into the chosen interest field. The Fuzzy C-Means algorithm is one of the easiest and most often used algorithms in data grouping techniques because it makes efficient estimates and does not require many parameters; its update steps are sketched below. Several studies have concluded that the Fuzzy C-Means algorithm can be used to group data based on certain attributes. In this research, the Fuzzy C-Means algorithm is used to classify student data based on the grades of core subjects for the selection of the specialization field. This study also tested the accuracy of the Fuzzy C-Means algorithm in the determination of the interest area. The study was conducted on the STT-Harapan Medan Information System Study program, and the object of research is the grades of all 2012 students of the STT-Harapan Medan Information System Study Program. From this research, it is expected to obtain the specialization field according to the students' ability, based on the prerequisite subject grades.
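
    A minimal sketch of the standard Fuzzy C-Means iteration under the usual Euclidean formulation; the cluster count, fuzzifier and the per-student grade data are illustrative assumptions, not values from the study.

```python
# Sketch of the standard Fuzzy C-Means iteration: alternate between
# updating cluster centers (membership-weighted means) and memberships
# (inverse-distance rule with fuzzifier m). Data here are stand-ins.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))  # memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

scores = np.array([[80, 70], [85, 75], [60, 90], [55, 88], [90, 60]],
                  dtype=float)  # assumed per-student core-subject grades
centers, U = fuzzy_c_means(scores, c=2)
print(U.argmax(axis=1))         # hard assignment to an interest field
```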

  18. Imperishable Networks: Complexity Theory and Communication Networking-Bridging the Gap Between Algorithmic Information Theory and Communication Networking

    National Research Council Canada - National Science Library

    Bush, Stephen

    2003-01-01

    ... other. Our goal has been to reduce the requirement and dependence upon detailed a priori information about known attacks and detect novel attacks by computing vulnerability and detecting anomalous behavior...

  19. Theory of Neural Information Processing Systems

    International Nuclear Information System (INIS)

    Galla, Tobias

    2006-01-01

    It is difficult not to be amazed by the ability of the human brain to process, to structure and to memorize information. Even by the toughest standards the behaviour of this network of about 10^11 neurons qualifies as complex, and both the scientific community and the public take great interest in the growing field of neuroscience. The scientific endeavour to learn more about the function of the brain as an information processing system is here a truly interdisciplinary one, with important contributions from biology, computer science, physics, engineering and mathematics as the authors quite rightly point out in the introduction of their book. The role of the theoretical disciplines here is to provide mathematical models of information processing systems and the tools to study them. These models and tools are at the centre of the material covered in the book by Coolen, Kuehn and Sollich. The book is divided into five parts, providing basic introductory material on neural network models as well as the details of advanced techniques to study them. A mathematical appendix complements the main text. The range of topics is extremely broad, still the presentation is concise and the book well arranged. To stress the breadth of the book let me just mention a few keywords here: the material ranges from the basics of perceptrons and recurrent network architectures to more advanced aspects such as Bayesian learning and support vector machines; Shannon's theory of information and the definition of entropy are discussed, and a chapter on Amari's information geometry is not missing either. Finally the statistical mechanics chapters cover Gardner theory and the replica analysis of the Hopfield model, not without being preceded by a brief introduction of the basic concepts of equilibrium statistical physics. The book also contains a part on effective theories of the macroscopic dynamics of neural networks. Many dynamical aspects of neural networks are usually hard to find in the

  20. Nonlinear closed-loop control theory

    International Nuclear Information System (INIS)

    Perez, R.B.; Otaduy, P.J.; Abdalla, M.

    1992-01-01

    Traditionally, the control of nuclear power plants has been implemented by the use of proportional-integral (PI) control systems. PI controllers are both simple and, within their calibration range, highly reliable. However, PIs provide little performance information that could be used to diagnose out-of-range events or the nature of unanticipated transients that may occur in the plant. To go beyond the PI controller, new control algorithms must deal with the physical system nonlinearities and with the reality of uncertain dynamics terms in the mathematical model. The tool to develop a new kind of control algorithm is provided by optimal control theory. In this theory, a norm is minimized which incorporates, by means of Lagrange multipliers, the constraint that the model equations should be satisfied at all times. Optimal control algorithms consist of two sets of coupled equations: (1) the model equations, integrated forward in time; and (2) the equations for the Lagrange multipliers (adjoints), integrated backwards in time; the structure of this system is sketched below. There are two challenges: dealing with large sets of coupled nonlinear equations and with a two-point boundary value problem that must be solved iteratively. In this paper, the rigorous conversion of the two-point boundary value problem into an initial value problem is presented. In addition, the incorporation into the control algorithm of 'real world' constraints such as sensors and actuators, dynamic response functions and time lags introduced by the digitalization of analog signals is presented. (Author)
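
    The two-point structure described here can be written compactly; the system below is a generic statement of the first-order optimality conditions, not the paper's specific plant model:

```latex
% Generic optimal-control optimality system: state x, adjoint \lambda,
% control u, Hamiltonian H = L + \lambda^{T} f. The state runs forward
% from x(0); the adjoint runs backward from its terminal condition.
\dot{x} = f(x, u), \quad x(0) = x_0
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x},
\quad \lambda(T) = \frac{\partial \phi}{\partial x}\Big|_{t=T}
\qquad
\frac{\partial H}{\partial u} = 0
```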

  1. Two- and three-dimensional nonlocal density functional theory for inhomogeneous fluids. 1. Algorithms and parallelization

    International Nuclear Information System (INIS)

    Frink, L.J.D.; Salinger, A.G.

    2000-01-01

    Fluids adsorbed near surfaces, near macromolecules, and in porous materials are inhomogeneous, exhibiting spatially varying density distributions. This inhomogeneity in the fluid plays an important role in controlling a wide variety of complex physical phenomena including wetting, self-assembly, corrosion, and molecular recognition. One of the key methods for studying the properties of inhomogeneous fluids in simple geometries has been density functional theory (DFT). However, there has been a conspicuous lack of calculations in complex two- and three-dimensional geometries. The computational difficulty arises from the need to perform nested integrals that are due to nonlocal terms in the free energy functional. These integral equations are expensive both in evaluation time and in memory requirements; however, the expense can be mitigated by intelligent algorithms and the use of parallel computers. This paper details efforts to develop efficient numerical algorithms so that nonlocal DFT calculations can be performed in complex geometries requiring two or three dimensions. The success of this implementation will enable the study of solvation effects at heterogeneous surfaces, in zeolites, in solvated (bio)polymers, and in colloidal suspensions

  2. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergent speed and searching precision. Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  3. Theory of information warfare: basic framework, methodology and conceptual apparatus

    Directory of Open Access Journals (Sweden)

    Олександр Васильович Курбан

    2015-11-01

    Full Text Available A comprehensive theoretical study is conducted to determine the basic provisions of the modern theory of information warfare in on-line social networks. Three basic blocks, which systematize the theoretical and methodological basis of the topic, are established: information-psychological warfare, off-line social networks, and on-line social networks. According to these three blocks, theoretical concepts are defined and a methodological substantiation of information processes within information warfare in on-line social networks is formed

  4. Autonomous intelligent vehicles theory, algorithms, and implementation

    CERN Document Server

    Cheng, Hong

    2011-01-01

    Here is the latest on intelligent vehicles, covering object and obstacle detection and recognition and vehicle motion control. Includes a navigation approach using global views; introduces algorithms for lateral and longitudinal motion control and more.

  5. Final Summary: Genre Theory in Information Studies

    DEFF Research Database (Denmark)

    Andersen, Jack

    2015-01-01

    Purpose This chapter offers a re-description of knowledge organization in light of genre and activity theory. Knowledge organization needs a new description in order to account for those activities and practices constituting and causing concrete knowledge organization activity. Genre and activity... informing and shaping concrete forms of knowledge organization activity. With this, we are able to understand how knowledge organization activity also contributes to constructing genre and activity systems, and not only aids them.

  6. Year 7 Students, Information Literacy, and Transfer: A Grounded Theory

    Science.gov (United States)

    Herring, James E.

    2011-01-01

    This study examined the views of year 7 students, teacher librarians, and teachers in three state secondary schools in rural New South Wales, Australia, on information literacy and transfer. The aims of the study included the development of a grounded theory in relation to information literacy and transfer in these schools. The study's perspective…

  7. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children presenting to primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these, for presence or absence and terminology, with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF, and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  8. A Trust Evaluation Algorithm for Wireless Sensor Networks Based on Node Behaviors and D-S Evidence Theory

    Directory of Open Access Journals (Sweden)

    Jiangwen Wan

    2011-01-01

    Full Text Available For wireless sensor networks (WSNs), many factors, such as mutual interference of wireless links, battlefield applications and nodes exposed to the environment without good physical protection, make the sensor nodes more vulnerable to attack and compromise. In order to address this network security problem, a novel trust evaluation algorithm defined as NBBTE (Node Behavioral Strategies Banding Belief Theory of the Trust Evaluation Algorithm) is proposed, which integrates the approach of node behavioral strategies and modified evidence theory. According to the behaviors of sensor nodes, a variety of trust factors and coefficients related to the network application are established to obtain direct and indirect trust values through a weighted average of trust factors. Meanwhile, the fuzzy set method is applied to form the basic input vector of evidence. On this basis, the evidence difference between the indirect and direct trust values is calculated and fed into the revised D-S evidence combination rule to finally synthesize the integrated trust value of nodes. The simulation results show that NBBTE can effectively identify malicious nodes and reflects the characteristic that trust is ‘hard to acquire and easy to lose’. Furthermore, the proposed scheme has an outstanding advantage in terms of illustrating the real contribution of different nodes to trust evaluation.

  9. Observational information for f(T) theories and dark torsion

    Energy Technology Data Exchange (ETDEWEB)

    Bengochea, Gabriel R., E-mail: gabriel@iafe.uba.a [Instituto de Astronomia y Fisica del Espacio (IAFE), CC 67, Suc. 28, 1428 Buenos Aires (Argentina)

    2011-01-17

    In the present work we analyze and compare the information coming from different observational data sets in the context of a class of f(T) theories. We perform a joint analysis with measurements of the most recent type Ia supernovae (SNe Ia), Baryon Acoustic Oscillations (BAO), Cosmic Microwave Background radiation (CMB), Gamma-Ray Burst data (GRBs) and Hubble parameter observations (OHD) to constrain the only new parameter these theories have. It is shown that when the new combined BAO/CMB parameter is used to put constraints, the result is different from previous works. We also show that when we include Observational Hubble Data (OHD) the simpler ΛCDM model is excluded at the one sigma level, leading the effective equation of state of these theories to be of phantom type. Also, analyzing a tension criterion for SNe Ia and other observational sets, we obtain more consistent and better suited data sets to work with these theories.

  10. Quantum entanglement in non-local games, graph parameters and zero-error information theory

    NARCIS (Netherlands)

    Scarpa, G.

    2013-01-01

    We study quantum entanglement and some of its applications in graph theory and zero-error information theory. In Chapter 1 we introduce entanglement and other fundamental concepts of quantum theory. In Chapter 2 we address the question of how much quantum correlations generated by entanglement can

  11. Study of On-Ramp PI Controller Based on Dual-Group QPSO with Different Well Centers Algorithm

    Directory of Open Access Journals (Sweden)

    Tao Wu

    2015-01-01

    Full Text Available A novel quantum-behaved particle swarm optimization (QPSO) algorithm, the dual-group QPSO with different well centers (DWC-QPSO) algorithm, is proposed by constructing master-slave subswarms. The new algorithm was applied to the parameter optimization of an on-ramp traffic PI controller, combined with nonlinear feedback theory. By exploiting the critical information contained in the search space and the results of the basic QPSO algorithm, this algorithm avoids the rapid disappearance of swarm diversity and enhances the global searching ability through collaboration between subswarms. Experimental results on an on-ramp traffic control simulation show that DWC-QPSO can be well applied to the study of on-ramp traffic PI controllers, and the comparison results illustrate that DWC-QPSO outperforms other evolutionary algorithms with enhancement in both adaptability and stability.

  12. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of the K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to the urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Owing to the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which clearly demonstrates the superiority of the algorithm over conventional ones; a baseline K-shortest-paths routine is sketched below for contrast. Not only does the algorithm exhibit better parallelism, it also prevents the premature phenomenon that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.
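
    For contrast with the immune-inspired approach, a minimal baseline K-shortest-paths search on a weighted digraph using a priority queue (in this simple variant, paths may revisit nodes); the toy network is an illustrative assumption.

```python
# Baseline K-shortest-paths search with a priority queue: repeatedly pop
# the cheapest partial path; the first k pops of the target give the k
# shortest paths. Simple variant; no immune-inspired memory involved.
import heapq

def k_shortest_paths(graph, src, dst, k=3):
    heap, found = [(0.0, [src])], []
    counts = {v: 0 for v in graph}
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        counts[node] += 1
        if node == dst:
            found.append((cost, path))
            continue
        if counts[node] > k:   # no need to expand a node more than k times
            continue
        for nxt, w in graph[node]:
            heapq.heappush(heap, (cost + w, path + [nxt]))
    return found

net = {'A': [('B', 1.0), ('C', 2.5)], 'B': [('C', 1.0), ('D', 4.0)],
       'C': [('D', 1.0)], 'D': []}
for cost, path in k_shortest_paths(net, 'A', 'D'):
    print(cost, '->'.join(path))
```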

  13. Hybrid iterative phase retrieval algorithm based on fusion of intensity information in three defocused planes.

    Science.gov (United States)

    Zeng, Fa; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan

    2007-10-01

    The study of phase retrieval is quite meaningful because of its wide applications in many domains, such as adaptive optics, detection of laser quality, precise measurement of optical surfaces, and so on. Here a hybrid iterative phase retrieval algorithm is proposed, based on fusion of the intensity information in three defocused planes. First, the conjugate gradient algorithm is adapted to achieve a coarse solution of the phase distribution in the input plane; then the iterative angular spectrum method is applied in succession for a better retrieval result. This algorithm is still applicable even when the exact shape and size of the aperture in the input plane are unknown. Moreover, the algorithm always exhibits good convergence, i.e., the retrieved results are insensitive to the chosen positions of the three defocused planes and to the initial guess of the complex amplitude in the input plane, which has been proved by both simulations and further experiments.
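
    A minimal sketch of the angular spectrum propagation step that such iterative schemes alternate with amplitude constraints in each defocused plane; the wavelength, pixel pitch and defocus distance below are illustrative assumptions.

```python
# Sketch of angular spectrum propagation, the forward model inside
# iterative phase retrieval between defocused planes. Wavelength, pixel
# pitch and defocus distance are illustrative assumptions.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 256
aperture = np.zeros((n, n))
aperture[96:160, 96:160] = 1.0           # square aperture test field
defocused = angular_spectrum(aperture, wavelength=633e-9,
                             pitch=5e-6, z=10e-3)
print(np.abs(defocused).max())
```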

  14. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  15. Text Mining Applications and Theory

    CERN Document Server

    Berry, Michael W

    2010-01-01

    Text Mining: Applications and Theory presents the state-of-the-art algorithms for text mining from both the academic and industrial perspectives.  The contributors span several countries and scientific domains: universities, industrial corporations, and government laboratories, and demonstrate the use of techniques from machine learning, knowledge discovery, natural language processing and information retrieval to design computational models for automated text analysis and mining. This volume demonstrates how advancements in the fields of applied mathematics, computer science, machine learning

  16. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
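
    For orientation, a minimal baseline EM iteration for a two-component univariate Gaussian mixture; the partitioned extension described above decomposes such steps into smaller self-contained EM runs, which is not reproduced here, and the data are simulated stand-ins.

```python
# Baseline EM for a two-component univariate Gaussian mixture: the
# E-step computes responsibilities, the M-step re-estimates weights,
# means and variances in closed form. The data below are simulated.
import numpy as np

def em_gmm(x, iters=50):
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates from weighted sufficient statistics.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gmm(data))
```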

  17. Star pattern recognition algorithm aided by inertial information

    Science.gov (United States)

    Liu, Bao; Wang, Ke-dong; Zhang, Chao

    2011-08-01

    Star pattern recognition is one of the key problems of celestial navigation. Traditional star pattern recognition approaches, such as the triangle algorithm and the star angular distance algorithm, are all-sky matching methods whose recognition speed is slow and whose recognition success rate is not high. Therefore, the real-time performance and reliability of the CNS (Celestial Navigation System) are reduced to some extent, especially for maneuvering spacecraft. However, if the direction of the camera optical axis can be estimated by other navigation systems such as an INS (Inertial Navigation System), star pattern recognition can be carried out in the vicinity of the estimated direction of the optical axis. The benefits of the INS-aided star pattern recognition algorithm include at least improved matching speed and an improved success rate. In this paper, the direction of the camera optical axis, the local matching sky, and the projection of stars on the image plane are first estimated with the aid of the INS. Then, the local star catalog for star pattern recognition is established dynamically in real time. The star images extracted in the camera plane are matched in the local sky; the underlying angular metric is noted below. Compared to traditional all-sky star pattern recognition algorithms, the memory required to store the star catalog is reduced significantly. Finally, the INS-aided star pattern recognition algorithm is validated by simulations. The simulation results show that the algorithm's computation time is reduced sharply and its matching success rate is improved greatly.
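
    The star angular distance on which both the traditional and the aided matching rely is standard geometry, not specific to this paper:

```latex
% Angular distance between two stars with unit direction vectors u_i and
% u_j, measured in the camera frame and compared against catalog values.
\theta_{ij} = \arccos\left( \mathbf{u}_i \cdot \mathbf{u}_j \right)
```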

  18. Cognition to Collaboration: User-Centric Approach and Information Behaviour Theories/Models

    Directory of Open Access Journals (Sweden)

    Alperen M Aydin

    2016-12-01

    Full Text Available Aim/Purpose: The objective of this paper is to review the vast literature of user-centric in-formation science and inform about the emerging themes in information behaviour science. Background:\tThe paradigmatic shift from system-centric to user-centric approach facilitates research on the cognitive and individual information processing. Various information behaviour theories/models emerged. Methodology: Recent information behaviour theories and models are presented. Features, strengths and weaknesses of the models are discussed through the analysis of the information behaviour literature. Contribution: This paper sheds light onto the weaknesses in earlier information behaviour models and stresses (and advocates the need for research on social information behaviour. Findings: Prominent information behaviour models deal with individual information behaviour. People live in a social world and sort out most of their daily or work problems in groups. However, only seven papers discuss social information behaviour (Scopus search. Recommendations for Practitioners\t: ICT tools used for inter-organisational sharing should be redesigned for effective information-sharing during disaster/emergency times. Recommendation for Researchers: There are scarce sources on social side of the information behaviour, however, most of the work tasks are carried out in groups/teams. Impact on Society: In dynamic work contexts like disaster management and health care settings, collaborative information-sharing may result in decreasing the losses. Future Research: A fieldwork will be conducted in disaster management context investigating the inter-organisational information-sharing.

  19. The application of foraging theory to the information searching behaviour of general practitioners.

    Science.gov (United States)

    Dwairy, Mai; Dowell, Anthony C; Stahl, Jean-Claude

    2011-08-23

    General Practitioners (GPs) employ strategies to identify and retrieve medical evidence for clinical decision making which take workload and time constraints into account. Optimal Foraging Theory (OFT), initially developed to study animal foraging for food, is used to explore the information searching behaviour of General Practitioners. This study is the first to apply foraging theory within this context. Study objectives were: 1. To identify the sequence and steps deployed in identifying and retrieving evidence for clinical decision making. 2. To utilise Optimal Foraging Theory to assess the effectiveness and efficiency of General Practitioner information searching. GPs from the Wellington region of New Zealand were asked to document in a pre-formatted logbook the steps and outcomes of an information search linked to their clinical decision making, and to fill in a questionnaire about their personal, practice and information-searching backgrounds. A total of 115/155 eligible GPs returned a background questionnaire, and 71 completed their information search logbook. GPs spent an average of 17.7 minutes addressing their search for clinical information. Their preferred information sources were discussions with colleagues (38% of sources) and books (22%). These were the two most profitable information foraging sources (15.9 min and 9.5 min search time per answer, compared to 34.3 minutes in databases). GPs nearly always accessed another source when unsuccessful (95% after 1st source), and frequently when successful (43% after 2nd source). Use of multiple sources accounted for 41% of searches, and increased search success from 70% to 89%. By consulting in foraging terms the most 'profitable' sources of information (colleagues, books), rapidly switching sources when unsuccessful, and frequently double checking, GPs achieve an efficient trade-off between maximizing search success and information reliability, and minimizing searching time. As predicted by foraging theory, GPs

  20. The use of anatomical information for molecular image reconstruction algorithms: Attenuation/scatter correction, motion compensation, and noise reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)

    2016-03-15

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
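
    As one concrete example of how anatomical side information can enter a reconstruction, a quadratic smoothing penalty can be weighted so that it does not smooth across anatomical boundaries. The 1-D sketch below shows one common weighting scheme; the exponential form and the parameter beta are illustrative assumptions, not a specific method from this review.

```python
import numpy as np

def edge_preserving_weights(anat, beta=1.0):
    """Neighbour weights for a quadratic penalty sum w_jk (x_j - x_k)^2 that
    are small across anatomical boundaries (one common way of injecting
    MRI/CT side information into PET/SPECT regularisation)."""
    diff = np.abs(np.diff(anat))
    return np.exp(-beta * diff ** 2)

anat = np.array([0., 0., 0., 5., 5., 5.])          # toy 1-D anatomy with an edge
print(np.round(edge_preserving_weights(anat), 3))  # weight ~0 at the boundary
```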

  1. Finding Commonalities: Social Information Processing and Domain Theory in the Study of Aggression

    Science.gov (United States)

    Nucci, Larry

    2004-01-01

    The Arsenio and Lemerise (this issue) proposal integrating social information processing (SIP) and domain theory to study children's aggression is evaluated from a domain theory perspective. Basic tenets of domain theory rendering it compatible with SIP are discussed as well as points of divergence. Focus is directed to the proposition that…

  2. Information richness in construction projects: A critical social theory

    NARCIS (Netherlands)

    Adriaanse, Adriaan Maria; Voordijk, Johannes T.; Greenwood, David

    2002-01-01

    Two important factors influencing the communication in construction projects are the interests of the people involved and the language spoken by the people involved. The objective of the paper is to analyse these factors by using recent insights in information richness theory. The critical

  3. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
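
    Although the abstract is truncated, a typical information theory-based metric of this kind is the mutual information between simulated and observed flows, estimated from a joint histogram. The sketch below is a minimal illustration under that assumption; the bin count and test data are arbitrary.

```python
import numpy as np

def mutual_information(sim, obs, bins=10):
    """Histogram-based mutual information between simulated and observed
    flows; one possible information-theoretic performance metric."""
    joint, _, _ = np.histogram2d(sim, obs, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)       # marginal of simulated flows
    py = p.sum(axis=0, keepdims=True)       # marginal of observed flows
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, 1000)             # synthetic "measured" streamflow
sim = obs + rng.normal(0, 0.5, 1000)        # a model run with noise
print(f"I(sim; obs) = {mutual_information(sim, obs):.3f} bits")
```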

  4. Pangenesis as a source of new genetic information. The history of a now disproven theory.

    Science.gov (United States)

    Bergman, Gerald

    2006-01-01

    Evolution is based on natural selection of existing biological phenotypic traits. Natural selection can only eliminate traits. It cannot create new ones, requiring a theory to explain the origin of new genetic information. The theory of pangenesis was a major attempt to explain the source of new genetic information required to produce phenotypic variety. This theory, advocated by Darwin as the main source of genetic variety, has now been empirically disproved. It is currently a theory mainly of interest to science historians.

  5. Quantum information theory. Mathematical foundation. 2. ed.

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Masahito [Nagoya Univ. (Japan). Graduate School of Mathematics

    2017-07-01

    This graduate textbook provides a unified view of quantum information theory. Clearly explaining the necessary mathematical basis, it merges key topics from both information-theoretic and quantum-mechanical viewpoints and provides lucid explanations of the basic results. Thanks to this unified approach, it makes accessible such advanced topics in quantum communication as quantum teleportation, superdense coding, quantum state transmission (quantum error-correction) and quantum encryption. Since the publication of the preceding book Quantum Information: An Introduction, there have been tremendous strides in the field of quantum information. In particular, the following topics - all of which are addressed here - have seen major advances: quantum state discrimination, quantum channel capacity, bipartite and multipartite entanglement, security analysis on quantum communication, the reverse Shannon theorem and the uncertainty relation. With regard to the analysis of quantum security, the present book employs an improved method for the evaluation of leaked information and identifies a remarkable relation between quantum security and quantum coherence. Taken together, these two improvements allow a better analysis of quantum state transmission. In addition, various types of the newly discovered uncertainty relation are explained. Presenting a wealth of new developments, the book introduces readers to the latest advances and challenges in quantum information. To aid in understanding, each chapter is accompanied by a set of exercises and solutions.

  6. Quantum information theory. Mathematical foundation. 2. ed.

    International Nuclear Information System (INIS)

    Hayashi, Masahito

    2017-01-01

    This graduate textbook provides a unified view of quantum information theory. Clearly explaining the necessary mathematical basis, it merges key topics from both information-theoretic and quantum-mechanical viewpoints and provides lucid explanations of the basic results. Thanks to this unified approach, it makes accessible such advanced topics in quantum communication as quantum teleportation, superdense coding, quantum state transmission (quantum error-correction) and quantum encryption. Since the publication of the preceding book Quantum Information: An Introduction, there have been tremendous strides in the field of quantum information. In particular, the following topics - all of which are addressed here - have seen major advances: quantum state discrimination, quantum channel capacity, bipartite and multipartite entanglement, security analysis on quantum communication, the reverse Shannon theorem and the uncertainty relation. With regard to the analysis of quantum security, the present book employs an improved method for the evaluation of leaked information and identifies a remarkable relation between quantum security and quantum coherence. Taken together, these two improvements allow a better analysis of quantum state transmission. In addition, various types of the newly discovered uncertainty relation are explained. Presenting a wealth of new developments, the book introduces readers to the latest advances and challenges in quantum information. To aid in understanding, each chapter is accompanied by a set of exercises and solutions.

  7. Research on Kalman Filtering Algorithm for Deformation Information Series of Similar Single-Difference Model

    Institute of Scientific and Technical Information of China (English)

    LÜ Wei-cai; XU Shao-quan

    2004-01-01

    When the similar single-difference methodology (SSDM) is used to solve for the deformation values of the monitoring points, the resulting deformation information series is sometimes unstable. In order to overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
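
    A minimal version of such a filter treats the deformation series as a random walk observed in noise. The sketch below shows the standard scalar predict/update cycle; the noise variances q and r are illustrative values, not those of the paper.

```python
import numpy as np

def kalman_smooth_series(z, q=1e-4, r=1e-2):
    """One-dimensional random-walk Kalman filter for a deformation
    information series z; q and r are process/measurement variances."""
    x, p = z[0], 1.0                 # initial state and variance
    out = []
    for zk in z:
        p = p + q                    # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with measurement zk
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true_def = np.cumsum(rng.normal(0, 0.05, 200))        # slowly drifting deformation
z = true_def + rng.normal(0, 0.1, 200)                # noisy observed series
print(np.round(kalman_smooth_series(z)[:5], 3))
```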

  8. Defining information need in health - assimilating complex theories derived from information science.

    Science.gov (United States)

    Ormandy, Paula

    2011-03-01

    Key policy drivers worldwide include optimizing patients' roles in managing their care; focusing services around patients' needs and preferences; and providing information to support patients' contributions and choices. The term information need penetrates many policy documents. Information need is espoused as the foundation from which to develop patient-centred or patient-led services. Yet there is no clear definition as to what the term means or how patients' information needs inform and shape information provision and patient care. The assimilation of complex theories originating from information science has much to offer considerations of patient information need within the context of health care. Health-related research often focuses on the content of information patients prefer, not why they need information. This paper extends and applies knowledge of information behaviour to considerations of information need in health, exposing a working definition for patient information need that reiterates the importance of considering the patient's goals and understanding the patient's context/situation. A patient information need is defined as 'recognition that their knowledge is inadequate to satisfy a goal, within the context/situation that they find themselves at a specific point in time'. This typifies the key concepts of national/international health policy, the centrality and importance of the patient. The proposed definition of patient information need provides a conceptual framework to guide health-care practitioners on what to consider and why when meeting the information needs of patients in practice. This creates a solid foundation from which to inform future research.

  9. Online dating in Japan: a test of social information processing theory.

    Science.gov (United States)

    Farrer, James; Gavin, Jeff

    2009-08-01

    This study examines the experiences of past and present members of a popular Japanese online dating site in order to explore the extent to which Western-based theories of computer-mediated communication (CMC) and the development of online relationships are relevant to the Japanese online dating experience. Specifically, it examines whether social information processing theory (SIPT) is applicable to Japanese online dating interactions, and how and to what extent Japanese daters overcome the limitations of CMC through the use of contextual and other cues. Thirty-six current members and 27 former members of Match.com Japan completed an online survey. Using issue-based procedures for grounded theory analysis, we found strong support for SIPT. Japanese online daters adapt their efforts to present and acquire social information using the cues that the online dating platform provides, although many of these cues are specific to Japanese social context.

  10. A re-examination of information seeking behaviour in the context of activity theory

    Directory of Open Access Journals (Sweden)

    Wilson T.D.

    2006-01-01

    Full Text Available Introduction. Activity theory, developed in the USSR as a Marxist alternative to Western psychology, has been applied widely in educational studies and increasingly in human-computer interaction research. Argument. The key elements of activity theory - Motivation, Goal, Activity, Tools, Object, Outcome, Rules, Community and Division of labour - are all directly applicable to the conduct of information behaviour research. An activity-theoretical approach to information behaviour research would provide a sound basis for elaborating contextual issues and for discovering organizational and other contradictions that affect information behaviour. It may be used to aid the design and analysis of investigations. Elaboration. The basic ideas of activity theory are outlined and an attempt is made to harmonize different perspectives. A contrast is made between an activity system perspective and an activity process perspective, and a diagrammatic representation of the process perspective is offered. Conclusion. Activity theory is not a predictive theory but a conceptual framework within which different theoretical perspectives may be employed. Typically, it is suggested that several methods of data collection should be employed and that the time frame for investigation should be long enough for the full range of contextual issues to emerge. Activity theory offers not only a useful conceptual framework, but also a coherent terminology to be shared by researchers, and a rapidly developing body of literature in associated disciplines.

  11. An Emerging Theory for Evidence Based Information Literacy Instruction in School Libraries, Part 1: Building a Foundation

    Directory of Open Access Journals (Sweden)

    Carol A. Gordon

    2009-06-01

    Full Text Available Objective – Part I of this paper aims to create a framework for an emerging theory of evidence based information literacy instruction. In order to ground this framework in existing theory, a holistic perspective views inquiry as a learning process that synthesizes information searching and knowledge building. An interdisciplinary approach is taken to relate user-centric information behavior theory and constructivist learning theory that supports this synthesis. The substantive theories that emerge serve as a springboard for emerging theory. A second objective of this paper is to define evidence based information literacy instruction by assessing the suitability of performance based assessment and action research as tools of evidence based practice.Methods – An historical review of research grounded in user-centered information behavior theory and constructivist learning theory establishes a body of existing substantive theory that supports emerging theory for evidence based information literacy instruction within an information-to-knowledge approach. A focused review of the literature presents supporting research for an evidence based pedagogy that is performance assessment based, i.e., information users are immersed in real-world tasks that include formative assessments. An analysis of the meaning of action research in terms of its purpose and methodology establishes its suitability for structuring an evidence based pedagogy. Supporting research tests a training model for school librarians and educators which integrates performance based assessment, as well as action research. Results – Findings of an historical analysis of information behavior theory and constructivist teaching practices, and a literature review that explores teaching models for evidence based information literacy instruction, point to two elements of evidence based information literacy instruction: the micro level of information searching behavior and the macro level of

  12. Information Architecture without Internal Theory: An Inductive Design Process.

    Science.gov (United States)

    Haverty, Marsha

    2002-01-01

    Suggests that information architecture design is primarily an inductive process, partly because it lacks internal theory and partly because it is an activity that supports emergent phenomena (user experiences) from basic design components. Suggests a resemblance to Constructive Induction, a design process that locates the best representational…

  13. Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets

    Directory of Open Access Journals (Sweden)

    Shen Lu

    2013-04-01

    Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but of high dimensionality. Probability theory tells us that, in order to discover knowledge from a set of data, we must have a sufficient number of samples; otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm called Multi-SOM. Multi-SOM builds a number of small self-organizing maps instead of just one big map, and Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, map size becomes less of a consideration, and errors tend to average out. In our experiments on microarray datasets, which are high-dimensional data composed of genetics-related information, the precision of Multi-SOMs is 10.58% greater than that of SOMs, and its recall is 11.07% greater. Thus, the Multi-SOM algorithm is practical.
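
    The sketch below illustrates the Multi-SOM idea with a deliberately tiny 1-D SOM: several small maps are trained independently, and a sample is described by its best-matching unit on each map. The Bayesian decision rule among similar neurons is not reproduced; map size, learning schedule and data are illustrative assumptions.

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr=0.5, seed=0):
    """Minimal 1-D SOM: competitive learning with a shrinking
    neighbourhood; returns the learned codebook vectors."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(epochs):
        sigma = max(1.0 * (1 - t / epochs), 0.1)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * (1 - t / epochs) * h[:, None] * (x - w)
    return w

# Multi-SOM flavour: several small maps instead of one big one; the final
# decision about a sample is made across each map's best-matching unit.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, .3, (20, 5)), rng.normal(3, .3, (20, 5))])
maps = [train_som(data, seed=s) for s in range(3)]
x = data[0]
bmus = [int(np.argmin(((w - x) ** 2).sum(axis=1))) for w in maps]
print("best-matching unit on each map:", bmus)
```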

  14. A Novel Dynamic Algorithm for IT Outsourcing Risk Assessment Based on Transaction Cost Theory

    Directory of Open Access Journals (Sweden)

    Guodong Cong

    2015-01-01

    Full Text Available With the great risk exposed in IT outsourcing, how to assess IT outsourcing risk becomes a critical issue. However, most approaches to date fall short of the particular complexity of IT outsourcing risk, whether through subjective bias, inaccuracy, or inefficiency. This paper proposes a dynamic algorithm for risk assessment. It first puts forward an extended three-layer transferring mechanism (risk factors, risks, and risk consequences) based on transaction cost theory (TCT) as the framework of risk analysis, which bridges the interconnection of components in the three layers with preset transferring probabilities and impacts. Then, it establishes an equation group between risk factors and risk consequences, which makes the 'attribution' more precise by tracking the specific sources that lead to a certain loss. Namely, in each phase of the outsourcing lifecycle, both the likelihood and the loss of each risk factor, and those of each risk, are acquired by solving the equation group with real data of risk consequences collected. In this 'reverse' way, risk assessment becomes a responsive and interactive process driven by real data instead of subjective estimation, which improves accuracy and alleviates bias in risk assessment. A numerical case proves the effectiveness of the algorithm compared with approaches put forward in other references.

  15. Quantum information and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Reimpell, Michael

    2008-07-01

    This thesis is concerned with convex optimization problems in quantum information theory. It features an iterative algorithm for optimal quantum error correcting codes, a postprocessing method for incomplete tomography data, a method to estimate the amount of entanglement in witness experiments, and it gives necessary and sufficient criteria for the existence of retrodiction strategies for a generalized mean king problem. (orig.)

  16. Quantum information and convex optimization

    International Nuclear Information System (INIS)

    Reimpell, Michael

    2008-01-01

    This thesis is concerned with convex optimization problems in quantum information theory. It features an iterative algorithm for optimal quantum error correcting codes, a postprocessing method for incomplete tomography data, a method to estimate the amount of entanglement in witness experiments, and it gives necessary and sufficient criteria for the existence of retrodiction strategies for a generalized mean king problem. (orig.)

  17. The Algorithm of Link Prediction on Social Network

    Directory of Open Access Journals (Sweden)

    Liyan Dong

    2013-01-01

    Full Text Available At present, most link prediction algorithms are based on the similarity between two entities, and social network topology information is one of the main sources used to design the similarity function between entities. However, existing link prediction algorithms do not exploit network topology information sufficiently. To address the shortcomings of traditional link prediction algorithms, we propose two improved algorithms: the CNGF algorithm based on local information and the KatzGF algorithm based on global network information. To address the fact that social networks are not static, we also provide a link prediction algorithm based on the multiple-attribute information of nodes. Finally, we verified these algorithms on the DBLP data set, and the experimental results show that the performance of the improved algorithms is superior to that of traditional link prediction algorithms.
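
    The abstract does not give the CNGF scoring function, so the sketch below illustrates the general local-information approach it builds on: scoring each candidate link by its common neighbours, here with a degree-based weight so that rare neighbours count for more. The graph and the weighting are illustrative assumptions.

```python
import itertools

def common_neighbor_scores(adj):
    """Score each non-edge by its common neighbours; CNGF-style variants
    additionally weight each common neighbour by local graph features
    (the exact CNGF weighting is not given in the abstract)."""
    scores = {}
    for u, v in itertools.combinations(adj, 2):
        if v not in adj[u]:
            common = adj[u] & adj[v]
            # degree-weighted variant: rarer neighbours count for more
            scores[(u, v)] = sum(1.0 / len(adj[w]) for w in common)
    return sorted(scores.items(), key=lambda kv: -kv[1])

adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b", "d"},
       "d": {"b", "c"}}
print(common_neighbor_scores(adj))   # predicts the (a, d) link first
```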

  18. Some ideas for learning CP-theories

    OpenAIRE

    Fierens, Daan

    2008-01-01

    Causal Probabilistic logic (CP-logic) is a language for describing complex probabilistic processes. In this talk we consider the problem of learning CP-theories from data. We briefly discuss three possible approaches. First, we review the existing algorithm by Meert et al. Second, we show how simple CP-theories can be learned by using the learning algorithm for Logical Bayesian Networks and converting the result into a CP-theory. Third, we argue that for learning more complex CP-theories, an ...

  19. Learning algorithms and automatic processing of languages; Algorithmes a apprentissage et traitement automatique des langues

    Energy Technology Data Exchange (ETDEWEB)

    Fluhr, Christian Yves Andre

    1977-06-15

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to the automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer. He outlines the specific role of learning in various manifestations of intelligence. Then, based on Markov algorithm theory, the author discusses the notion of a learning algorithm. Two main types of learning algorithms are then addressed: firstly, a sanction-based algorithm of the 'algorithm-teacher dialogue' type, which aims at learning how to solve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system, which automatically structures semantic data obtained from a set of texts in order to be able to respond, by reference, to any question on the content of these texts.

  20. Information content of ozone retrieval algorithms

    Science.gov (United States)

    Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.

    1989-01-01

    The algorithms that were used for production processing by the major suppliers of ozone data are characterized to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.

  1. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Science.gov (United States)

    Aryanti, Aryanti; Mekongga, Ikhthison

    2018-02-01

    Data security and confidentiality is one of the most important aspects of information systems at the moment. One way to secure data is cryptography. In this study, a data security system was developed by implementing the Rivest-Shamir-Adleman (RSA) and Vigenere cipher cryptographic algorithms. The research was done by combining the RSA and Vigenere cipher algorithms and applying them to document files in Word, Excel, and PDF formats. The application, built with PHP and MySQL, covers both encryption and decryption of data. On the transmitting side, data is encrypted by RSA cryptographic calculations using the public key, followed by the Vigenere cipher algorithm, which also uses the public key. On the receiving side, decryption first applies the Vigenere cipher algorithm, still using the public key, and then the RSA algorithm using the private key. Test results show that the system can encrypt, decrypt, and transmit files. Tests performed on encryption and decryption of files of different sizes show that file size affects both processes: the larger the file, the longer the encryption and decryption take.
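
    A toy version of the described pipeline, with textbook RSA on tiny primes (insecure, for illustration only) and a byte-wise Vigenere pass, might look like the following; the ordering of the two ciphers and all parameters are illustrative assumptions, and the paper's PHP/MySQL application is not reproduced.

```python
# Toy sketch of combining a Vigenere pass with byte-wise textbook RSA.
def vigenere(data: bytes, key: bytes, sign: int) -> bytes:
    """Shift each byte by the key (sign=+1 encrypts, sign=-1 decrypts)."""
    return bytes((b + sign * key[i % len(key)]) % 256
                 for i, b in enumerate(data))

p, q, e = 61, 53, 17                 # toy primes and public exponent
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

key = b"SECRET"
plaintext = b"algorithmic information"

step1 = vigenere(plaintext, key, +1)            # Vigenere with shared key
cipher = [pow(b, e, n) for b in step1]          # RSA encrypt byte-by-byte

step2 = bytes(pow(c, d, n) for c in cipher)     # RSA decrypt (private key)
recovered = vigenere(step2, key, -1)            # undo the Vigenere pass
assert recovered == plaintext
print(recovered)
```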

  2. Quantum information theory with Gaussian systems

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, O.

    2006-04-06

    This thesis applies ideas and concepts from quantum information theory to systems of continuous variables such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

  3. Quantum information theory with Gaussian systems

    International Nuclear Information System (INIS)

    Krueger, O.

    2006-01-01

    This thesis applies ideas and concepts from quantum information theory to systems of continuous variables such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

  4. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    By constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
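
    A minimal sketch of maximum-entropy (soft) clustering: memberships are Gibbs distributions over squared distances, with a temperature T controlling softness; as T approaches 0 the assignments harden towards C-means. The update scheme and parameters below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def maxent_cluster(X, k=2, T=0.5, iters=50, seed=0):
    """Soft clustering with maximum-entropy memberships:
    p(j|x) is proportional to exp(-||x - c_j||^2 / T)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # (n, k)
        P = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)      # stable exp
        P /= P.sum(1, keepdims=True)                          # memberships
        C = (P.T @ X) / P.sum(0)[:, None]                     # re-estimate centres
    return C, P

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, .2, (30, 2)), rng.normal(2, .2, (30, 2))])
C, P = maxent_cluster(X)
print(np.round(C, 2))        # one centre near (0,0), one near (2,2)
```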

  5. NLSE: Parameter-Based Inversion Algorithm

    Science.gov (United States)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
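
    The heart of such an inversion is the Gauss-Newton iteration: linearize the forward model, solve a linear least-squares problem for the parameter step, and repeat. The sketch below applies it to a toy exponential model standing in for the far more elaborate eddy-current forward model of NLSE; the model and starting values are illustrative assumptions.

```python
import numpy as np

def gauss_newton(f, jac, theta, y, iters=20):
    """Generic Gauss-Newton for nonlinear least squares: minimise
    ||y - f(theta)||^2 by solving J dtheta = r at each step."""
    for _ in range(iters):
        r = y - f(theta)                                 # residual
        J = jac(theta)                                   # Jacobian of f
        dtheta, *_ = np.linalg.lstsq(J, r, rcond=None)   # linearised step
        theta = theta + dtheta
    return theta

# Toy forward model y = a * exp(b * t).
t = np.linspace(0, 1, 50)
f = lambda th: th[0] * np.exp(th[1] * t)
jac = lambda th: np.column_stack([np.exp(th[1] * t),
                                  th[0] * t * np.exp(th[1] * t)])
y = 2.0 * np.exp(-1.5 * t)
print(gauss_newton(f, jac, np.array([1.0, 0.0]), y))   # approaches [2.0, -1.5]
```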

  6. Robust consensus algorithm for multi-agent systems with exogenous disturbances under convergence conditions

    Science.gov (United States)

    Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong

    2014-09-01

    In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved while the disturbances are simultaneously restrained. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of our proposed rational and individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable of restraining exogenous disturbances than those in the existing literature.
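
    For orientation, the disturbance-free core of such a protocol is the standard Laplacian consensus update; the H∞ design in the paper chooses the gains to attenuate the disturbance w, whereas the sketch below simply fixes a small step size and injects a bounded w. The graph and values are illustrative assumptions.

```python
import numpy as np

def consensus_step(x, A, eps=0.1, w=None):
    """One synchronous consensus update x_i += eps * sum_j a_ij (x_j - x_i),
    optionally perturbed by a bounded exogenous disturbance w."""
    L = np.diag(A.sum(1)) - A                      # graph Laplacian
    return x - eps * (L @ x) + (w if w is not None else 0)

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)               # ring of 4 agents
x = np.array([1.0, 3.0, -2.0, 4.0])
rng = np.random.default_rng(0)
for _ in range(100):
    x = consensus_step(x, A, w=0.001 * rng.standard_normal(4))
print(np.round(x, 2))                             # states cluster near agreement
```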

  7. Feminist Praxis, Critical Theory and Informal Hierarchies

    Directory of Open Access Journals (Sweden)

    Eva Giraud

    2015-05-01

    Full Text Available This article draws on my experiences teaching across two undergraduate media modules in a UK research-intensive institution to explore tactics for combatting both institutional and informal hierarchies within university teaching contexts. Building on Sara Motta's (2012) exploration of implementing critical pedagogic principles at postgraduate level in an elite university context, I discuss additional tactics for combatting these hierarchies in undergraduate settings, which were developed by transferring insights derived from informal workshops led by the University of Nottingham's Feminism and Teaching network into the classroom. This discussion is framed in relation to the concepts of "cyborg pedagogies" and "political semiotics of articulation," derived from the work of Donna Haraway, in order to theorize how these tactics can engender productive relationships between radical pedagogies and critical theory.

  8. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    Science.gov (United States)

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
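
    One reading of the two-phase idea is sketched below: states are first coarsely partitioned by backward depth (BFS distance to an accepting state over reversed transitions) and then refined by transition signatures. The paper's hash-table refinement is replaced here by plain dictionary signatures, and the DFA is an illustrative toy.

```python
from collections import deque

def backward_depths(delta, accepting):
    """Shortest number of transitions from each state to an accepting state,
    found by BFS on reversed edges (one reading of 'backward depth')."""
    rev = {}
    for (s, a), t in delta.items():
        rev.setdefault(t, set()).add(s)
    depth = {s: 0 for s in accepting}
    queue = deque(accepting)
    while queue:
        t = queue.popleft()
        for s in rev.get(t, ()):
            if s not in depth:
                depth[s] = depth[t] + 1
                queue.append(s)
    return depth

def minimize(states, alphabet, delta, accepting):
    """Coarse partition by (acceptance, backward depth), then Moore-style
    refinement until the number of blocks stops growing."""
    depth = backward_depths(delta, accepting)
    block = {s: (s in accepting, depth.get(s, -1)) for s in states}
    while True:
        sig = {s: (block[s],) + tuple(block[delta[(s, a)]] for a in alphabet)
               for s in states}
        if len(set(sig.values())) == len(set(block.values())):
            return block                 # stable: equal labels = merged states
        block = sig

states = {0, 1, 2, 3}
alphabet = {"a"}
delta = {(0, "a"): 1, (1, "a"): 3, (2, "a"): 3, (3, "a"): 3}
blocks = minimize(states, alphabet, delta, {3})
print(blocks[1] == blocks[2])            # states 1 and 2 are equivalent: True
```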

  9. Unified treatment algorithm for the management of crotaline snakebite in the United States: results of an evidence-informed consensus workshop

    Directory of Open Access Journals (Sweden)

    Kerns William P

    2011-02-01

    Full Text Available Abstract Background Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods A geographically diverse panel of experts was convened for the purpose of deriving an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance about clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions Clinical manifestations and ideal treatment of crotaline snakebite differ greatly, and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes.

  10. Genetic algorithms: Theory and applications in the safety domain

    International Nuclear Information System (INIS)

    Marseguerra, M.; Zio, E.

    2001-01-01

    This work illustrates the fundamentals underlying optimization by genetic algorithms. All the steps of the procedure are sketched in detail for both the traditional breeding algorithm and more sophisticated breeding procedures. The necessity of affine transforming the fitness function (the object of the optimization) is discussed in detail, together with the transformation itself. Procedures for the inducement of species and niches are also presented. The theoretical aspects of the work are corroborated by a demonstration of the potential of genetic algorithm optimization procedures on three different case studies. The first case study deals with the design of the pressure stages of a natural gas pipeline system; the second treats a reliability allocation problem in system configuration design; the last concerns the selection of maintenance and repair strategies for the logistic management of a risky plant. (author)
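
    The affine fitness transformation mentioned above is commonly implemented as linear scaling that preserves the mean fitness while capping the best individual's selection pressure at a chosen multiple of the mean. A sketch under that assumption (the classic Goldberg-style scaling, not necessarily the authors' exact transformation):

```python
import numpy as np

def scaled_fitness(raw, c=2.0):
    """Affine (linear) fitness scaling f' = a*f + b chosen so that the mean
    fitness is preserved and the best individual receives c times the mean
    selection pressure; negative scaled values are clipped to zero."""
    f_avg, f_max = raw.mean(), raw.max()
    if f_max == f_avg:                       # degenerate: all equal
        return np.ones_like(raw)
    a = (c - 1.0) * f_avg / (f_max - f_avg)
    b = f_avg * (1.0 - a)
    return np.maximum(a * raw + b, 0.0)

raw = np.array([1.0, 1.1, 1.2, 4.0])         # one super-fit individual
print(np.round(scaled_fitness(raw), 2))      # pressure capped at ~2x the mean
```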

  11. Using theories of behaviour change to inform interventions for addictive behaviours.

    Science.gov (United States)

    Webb, Thomas L; Sniehotta, Falko F; Michie, Susan

    2010-11-01

    This paper reviews a set of theories of behaviour change that are used outside the field of addiction and considers their relevance for this field. Ten theories are reviewed in terms of (i) the main tenets of each theory, (ii) the implications of the theory for promoting change in addictive behaviours and (iii) studies in the field of addiction that have used the theory. An augmented feedback loop model based on Control Theory is used to organize the theories and to show how different interventions might achieve behaviour change. Briefly, each theory provided the following recommendations for intervention: Control Theory: prompt behavioural monitoring, Goal-Setting Theory: set specific and challenging goals, Model of Action Phases: form 'implementation intentions', Strength Model of Self-Control: bolster self-control resources, Social Cognition Models (Protection Motivation Theory, Theory of Planned Behaviour, Health Belief Model): modify relevant cognitions, Elaboration Likelihood Model: consider targets' motivation and ability to process information, Prototype Willingness Model: change perceptions of the prototypical person who engages in behaviour and Social Cognitive Theory: modify self-efficacy. There are a range of theories in the field of behaviour change that can be applied usefully to addiction, each one pointing to a different set of modifiable determinants and/or behaviour change techniques. Studies reporting interventions should describe theoretical basis, behaviour change techniques and mode of delivery accurately so that effective interventions can be understood and replicated.

  12. Towards a Robuster Interpretive Parsing: learning from overt forms in Optimality Theory

    NARCIS (Netherlands)

    Biró, T.

    2013-01-01

    The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this

  13. Testing components of Rothbard’s theory with the current information system

    Directory of Open Access Journals (Sweden)

    Aurelian Virgil BĂLUŢĂ

    2016-03-01

    Full Text Available The concept of aggression against the property rights of individuals generates a series of developments that allow solutions and options for the problems and dilemmas of today's economy: the dynamics of the tax system, focusing attention on shaping the budget with macro-economic calculations, the protection of competition, and customs policy in the modern era. Confidence in theory in general, and in economic theory especially, is based on the logical and methodological validation of scientific reasoning and on moral aspects. Transforming theory into a means of changing society can only happen when the theory is experimentally validated. Economic theory needs confirmation from specialized disciplines such as statistics and accounting. It is possible and necessary for the advantages of radical liberal thinking to be reflected in every company's bookkeeping and in public statistics. As an example, the paper presents the way some components of Rothbard's theory are reflected in the accounting and statistics information system.

  14. The application of foraging theory to the information searching behaviour of general practitioners

    Directory of Open Access Journals (Sweden)

    Dowell Anthony C

    2011-08-01

    Full Text Available Abstract Background General Practitioners (GPs) employ strategies to identify and retrieve medical evidence for clinical decision making which take workload and time constraints into account. Optimal Foraging Theory (OFT), initially developed to study animal foraging for food, is used to explore the information searching behaviour of General Practitioners. This study is the first to apply foraging theory within this context. Study objectives were: 1. To identify the sequence and steps deployed in identifying and retrieving evidence for clinical decision making. 2. To utilise Optimal Foraging Theory to assess the effectiveness and efficiency of General Practitioner information searching. Methods GPs from the Wellington region of New Zealand were asked to document in a pre-formatted logbook the steps and outcomes of an information search linked to their clinical decision making, and to fill in a questionnaire about their personal, practice and information-searching backgrounds. Results A total of 115/155 eligible GPs returned a background questionnaire, and 71 completed their information search logbook. GPs spent an average of 17.7 minutes addressing their search for clinical information. Their preferred information sources were discussions with colleagues (38% of sources) and books (22%). These were the two most profitable information foraging sources (15.9 min and 9.5 min search time per answer, compared to 34.3 minutes in databases). GPs nearly always accessed another source when unsuccessful (95% after 1st source), and frequently when successful (43% after 2nd source). Use of multiple sources accounted for 41% of searches, and increased search success from 70% to 89%. Conclusions By consulting in foraging terms the most 'profitable' sources of information (colleagues, books), rapidly switching sources when unsuccessful, and frequently double checking, GPs achieve an efficient trade-off between maximizing search success and information reliability, and

  15. A finite state, finite memory minimum principle, part 2. [a discussion of game theory, signaling, stochastic processes, and control theory

    Science.gov (United States)

    Sandell, N. R., Jr.; Athans, M.

    1975-01-01

    The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.

  16. New approaches in mathematical biology: Information theory and molecular machines

    International Nuclear Information System (INIS)

    Schneider, T.

    1995-01-01

    My research uses classical information theory to study genetic systems. Information theory was founded by Claude Shannon in the 1940s and has had an enormous impact on communications engineering and computer science. Shannon found a way to measure information. This measure can be used to precisely characterize the sequence conservation at nucleic-acid binding sites. The resulting methods, by completely replacing the use of 'consensus sequences', provide better models for molecular biologists. An excess of conservation led us to do experimental work on bacteriophage T7 promoters and the F plasmid IncD repeats. The wonderful fidelity of telephone communications and compact disk (CD) music can be traced directly to Shannon's channel capacity theorem. When rederived for molecular biology, this theorem explains the surprising precision of many molecular events. Through connections with the Second Law of Thermodynamics and Maxwell's Demon, this approach also has implications for the development of technology at the molecular level. Discussions of these topics are held on the internet news group bionet.info-theo. (author). (Abstract only)
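
    The conservation measure described here is, per aligned position, R = log2(4) - H, where H is the Shannon entropy of the base frequencies at that position. A minimal sketch (the small-sample correction is omitted, and the example sites are toy data):

```python
import math

def information_content(column):
    """Shannon information of one aligned binding-site position:
    R = log2(4) - H(column), in bits."""
    n = len(column)
    H = 0.0
    for base in "ACGT":
        p = column.count(base) / n
        if p > 0:
            H -= p * math.log2(p)
    return 2.0 - H

sites = ["TATAAT", "TATGAT", "TACAAT", "TATAAT"]   # toy aligned sites
for i, col in enumerate(zip(*sites)):
    print(i, f"{information_content(list(col)):.2f} bits")
```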

  17. Automatic bounding estimation in modified NLMS algorithm

    International Nuclear Information System (INIS)

    Shahtalebi, K.; Doost-Hoseini, A.M.

    2002-01-01

    The modified normalized least mean square (NLMS) algorithm, which is a sign form of NLMS based on set-membership (SM) theory in the class of optimal bounding ellipsoid (OBE) algorithms, requires a priori knowledge of error bounds, which is unavailable in most applications. For a special but popular case of measurement noise, a simple algorithm has been proposed. Simulation examples compare the performance of the proposed algorithm with that of the modified NLMS algorithm.
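
    For context, the sketch below shows the plain NLMS update that these set-membership variants build on; an SM/OBE-style algorithm would additionally skip the update whenever the error is already within the (estimated) bound. The filter length, step size and test signals are illustrative assumptions.

```python
import numpy as np

def nlms(x, d, taps=4, mu=0.5, eps=1e-6):
    """Standard NLMS adaptive filter identifying d from input x."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # most recent samples first
        e = d[n] - w @ u                    # a priori error
        w += mu * e * u / (u @ u + eps)     # normalised step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.8, -0.4, 0.2, 0.1])         # unknown system to identify
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(nlms(x, d), 2))              # close to h
```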

  18. Hybrid attribute-based recommender system for learning material using genetic algorithm and a multidimensional information model

    Directory of Open Access Journals (Sweden)

    Mojtaba Salehi

    2013-03-01

    Full Text Available In recent years, the explosion of learning materials in web-based educational systems has made it difficult for learners to locate appropriate materials. A personalized recommendation is an enabling mechanism to overcome the information overload that occurs in these new learning environments and to deliver suitable materials to learners. Since users express their opinions based on specific attributes of items, this paper proposes a hybrid recommender system for learning materials based on their attributes, to improve the accuracy and quality of recommendation. The presented system has two main modules: an explicit attribute-based recommender and an implicit attribute-based recommender. In the first module, the weights of implicit or latent attributes of materials for a learner are treated as chromosomes in a genetic algorithm, which optimizes the weights according to historical ratings. Recommendations are then generated by a Nearest Neighborhood Algorithm (NNA) using the optimized weight vectors of implicit attributes, which represent the opinions of learners. In the second module, a preference matrix (PM) is introduced that models the interests of a learner based on explicit attributes of learning materials in a multidimensional information model. A new similarity measure between PMs is then introduced, and recommendations are generated by the NNA. The experimental results show that our proposed method outperforms current algorithms on accuracy measures and can alleviate problems such as cold-start and sparsity.
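
    The final recommendation step can be pictured as a weighted nearest-neighbour search in attribute space, where the weights are the GA-optimized chromosome. A minimal sketch under that assumption, with made-up weights and attribute vectors:

```python
import numpy as np

def recommend(weights, learner_vec, items, k=2):
    """Nearest-neighbour recommendation with per-attribute weights (in the
    system above those weights would be the GA-optimised chromosome)."""
    d = ((items - learner_vec) ** 2 * weights).sum(axis=1)
    return np.argsort(d)[:k]

weights = np.array([0.7, 0.1, 0.2])          # GA output (illustrative)
learner = np.array([0.9, 0.2, 0.5])          # learner's attribute profile
items = np.array([[0.8, 0.1, 0.6],
                  [0.1, 0.9, 0.2],
                  [0.95, 0.3, 0.4]])
print(recommend(weights, learner, items))    # indices of the top-2 materials
```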

  19. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
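
    The binomial idea can be sketched as follows: if a random peptide would match any given peak with probability p, then the probability of matching at least k of n peaks by chance is a binomial tail, and its negative log is a natural match score. The sketch below ignores the intensity weighting that ProVerB adds; all numbers are illustrative.

```python
from math import comb, log10

def binomial_match_score(n_peaks, k_matched, p_random):
    """-log10 of the binomial tail P(X >= k): the chance of matching at
    least k of n spectrum peaks by luck, given per-peak match
    probability p_random."""
    tail = sum(comb(n_peaks, i) * p_random**i * (1 - p_random)**(n_peaks - i)
               for i in range(k_matched, n_peaks + 1))
    return -log10(tail)

# 12 of 40 peaks matched when a random match has 5% probability:
print(f"score = {binomial_match_score(40, 12, 0.05):.1f}")
```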

  20. Medical image registration by combining global and local information: a chain-type diffeomorphic demons algorithm

    International Nuclear Information System (INIS)

    Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong

    2013-01-01

    The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm are derived from image gradients, with the deformation acting to decrease the intensity dissimilarity between images. However, methods that use differences in image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so differences in gradient magnitude images can be regarded as more reliable and robust against these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a natural choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm that combines the differences in both image intensity and gradient magnitude for medical image registration. Previous work has shown that the classic demons algorithm can be considered an approximation of a second-order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces derived from the gradients of the image and of the gradient magnitude image. We show that, in controlled experiments, this advantage is confirmed and yields fast convergence. (paper)
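
    For reference, the classic intensity-based demons force that the chain-type method extends is u = (F - M)∇M / (|∇M|² + (F - M)²). The sketch below computes it on a toy 2-D pair; the additional gradient-magnitude term of the proposed method is not reproduced.

```python
import numpy as np

def demons_force(fixed, moving):
    """Classic demons force field for registering `moving` to `fixed`:
    u = (F - M) * grad(M) / (|grad(M)|^2 + (F - M)^2)."""
    gy, gx = np.gradient(moving)
    diff = fixed - moving
    denom = gx**2 + gy**2 + diff**2
    with np.errstate(divide="ignore", invalid="ignore"):
        ux = np.where(denom > 1e-9, diff * gx / denom, 0.0)
        uy = np.where(denom > 1e-9, diff * gy / denom, 0.0)
    return ux, uy

fixed = np.zeros((32, 32)); fixed[8:24, 8:24] = 1.0
moving = np.roll(fixed, 2, axis=1)            # shifted copy to register
ux, uy = demons_force(fixed, moving)
print(float(np.abs(ux).max()))                # non-zero forces at the edges
```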

  1. Quantum: information theory: technological challenge; Computacion Cuantica: un reto tecnologico

    Energy Technology Data Exchange (ETDEWEB)

    Calixto, M.

    2001-07-01

    The new Quantum Information Theory augurs powerful machines that obey the entangled logic of the subatomic world. Parallelism, entanglement, teleportation, no-cloning and quantum cryptography are typical peculiarities of this novel way of understanding computation. (Author) 24 refs.

  2. An improved LSB steganography algorithm

    Science.gov (United States)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the LSB algorithm is easily detected, with high accuracy, by χ² and RS steganalysis. We improved the LSB algorithm by reselecting the embedding locations and modifying the embedding method, combining a sub-affine transformation with matrix coding; on this basis, a new LSB algorithm is proposed. Experimental results show that the improved algorithm can effectively resist χ² and RS steganalysis.
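
    For contrast with the improved scheme, plain sequential LSB embedding (the easily detected baseline) is only a few lines; the sub-affine permutation of embedding locations and the matrix coding are not reproduced here. Pixel values and message are illustrative.

```python
def lsb_embed(pixels, message_bits):
    """Plain LSB embedding in sequential pixels: overwrite each pixel's
    least significant bit with one message bit."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def lsb_extract(pixels, n_bits):
    """Read the least significant bit of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]     # toy 8-bit grey values
bits = [1, 0, 1, 1, 0]
stego = lsb_embed(pixels, bits)
assert lsb_extract(stego, len(bits)) == bits
print(stego)
```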

  3. Comparison of Predictive Contract Mechanisms from an Information Theory Perspective

    OpenAIRE

    Zhang, Xin; Ward, Tomas; McLoone, Seamus

    2012-01-01

    Inconsistency arises across a Distributed Virtual Environment due to network latency induced by state changes communications. Predictive Contract Mechanisms (PCMs) combat this problem through reducing the amount of messages transmitted in return for perceptually tolerable inconsistency. To date there are no methods to quantify the efficiency of PCMs in communicating this reduced state information. This article presents an approach derived from concepts in information theory for a dee...

  4. Russian and Chinese Information Warfare: Theory and Practice

    Science.gov (United States)

    2004-06-01

    [Slide excerpts on information-weapon concepts: integral neurolinguistic programming (placing essential programs into the conscious or subconscious mind; subconscious suggestions that modify human behaviour); generators of special rays; optical systems; neurolinguistic programming; computer psychotechnology; the mass media; audiovisual effects; special effects.]

  5. Informal Theory: The Ignored Link in Theory-to-Practice

    Science.gov (United States)

    Love, Patrick

    2012-01-01

    Applying theory to practice in student affairs is dominated by the assumption that formal theory is directly applied to practice. Among the problems with this assumption is that many practitioners believe they must choose between their lived experiences and formal theory, and that graduate students are taught that their experience "does not…

  6. Intuitive theories of information: beliefs about the value of redundancy.

    Science.gov (United States)

    Soll, J B

    1999-03-01

    In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized.

  7. Testing a Fourier Accelerated Hybrid Monte Carlo Algorithm

    OpenAIRE

    Catterall, S.; Karamov, S.

    2001-01-01

    We describe a Fourier Accelerated Hybrid Monte Carlo algorithm suitable for dynamical fermion simulations of non-gauge models. We test the algorithm in supersymmetric quantum mechanics viewed as a one-dimensional Euclidean lattice field theory. We find dramatic reductions in the autocorrelation time of the algorithm in comparison to standard HMC.

  8. Matching theory

    CERN Document Server

    Plummer, MD

    1986-01-01

    This study of matching theory deals with bipartite matching, network flows, and presents fundamental results for the non-bipartite case. It goes on to study elementary bipartite graphs and elementary graphs in general. Further discussed are 2-matchings, general matching problems as linear programs, the Edmonds Matching Algorithm (and other algorithmic approaches), f-factors and vertex packing.

  9. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Full Text Available Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian model come from the statistical information of the best individuals, obtained by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains the convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in the PID controller optimization of a PMSM and compared with classical PID and GA.
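
    A minimal Gaussian EDA with elitism, in the spirit of (but simpler than) FEGEDA: fit a Gaussian to the best individuals, resample the population from it, and carry the elite over unchanged. The fast learning rule for the Gaussian parameters is not reproduced; population sizes and the sphere objective are illustrative assumptions.

```python
import numpy as np

def gaussian_eda(f, dim=5, pop=100, elite=10, iters=60, seed=0):
    """Minimal Gaussian EDA with elitism, minimising f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    for _ in range(iters):
        order = np.argsort([f(x) for x in X])
        best = X[order[:elite]]
        mu, sigma = best.mean(0), best.std(0) + 1e-6   # fit Gaussian model
        X = rng.normal(mu, sigma, (pop, dim))          # resample population
        X[:elite] = best                               # elitism
    return X[0], f(X[0])

sphere = lambda x: float((x ** 2).sum())
x_best, f_best = gaussian_eda(sphere)
print(f"f(x*) = {f_best:.2e}")     # converges towards the minimum at 0
```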

  10. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    The estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and the parameters of the Gaussian are estimated from the statistical information of the best individuals by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains the convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the learning of the probability model during evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results demonstrate the capability of FEGEDA, especially on higher-dimensional problems, where it performs better than several other algorithms and EDAs. Finally, FEGEDA is applied to PID controller optimization for a PMSM and compared with classical PID tuning and a GA.
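
    As an illustration of the general scheme (not the paper's exact FEGEDA update rules), the following sketch implements a Gaussian EDA with elitism on a standard sphere benchmark; the incremental learning rate standing in for the paper's fast learning rule is an assumption.

```python
import numpy as np

# Minimal sketch of a Gaussian EDA with elitism. Each generation:
# select the best individuals, update a Gaussian model from their
# statistics with a learning rate (a stand-in for the paper's "fast
# learning rule"), resample, and keep the best-so-far individual.

def sphere(x):                       # benchmark objective (minimize)
    return np.sum(x ** 2, axis=-1)

def gaussian_eda(f, dim, pop=100, gens=200, frac=0.3, lr=0.8, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
    elite = rng.normal(mu, sigma)
    for _ in range(gens):
        X = rng.normal(mu, sigma, size=(pop, dim))
        X[0] = elite                              # elitism
        fit = f(X)
        best = X[np.argsort(fit)[: int(frac * pop)]]
        elite = X[np.argmin(fit)]
        # incremental ("fast") learning of the model parameters
        mu = (1 - lr) * mu + lr * best.mean(axis=0)
        sigma = (1 - lr) * sigma + lr * best.std(axis=0) + 1e-12
    return elite, f(elite)

x_best, f_best = gaussian_eda(sphere, dim=10)
print(f_best)                        # should be close to 0
```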

  11. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^{4/3}.

  12. Theory and Algorithms for Global/Local Design Optimization

    National Research Council Canada - National Science Library

    Haftka, Raphael T

    2004-01-01

    ... the component and overall design as well as on exploration of global optimization algorithms. In the former category, heuristic decomposition was followed with proof that it solves the original problem...

  13. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compression and sampling of image signals simultaneously. In imaging procedures based on compressed sensing theory, not only is the required storage space reduced, but the demand on detector resolution is also greatly relaxed. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information well. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify its stability, and typical reconstruction algorithms are compared in the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is found by the alternating direction method. Experimental results show that the proposed reconstruction algorithm has great advantages over traditional classical TV-based algorithms and can recover the target image quickly and accurately even at low measurement rates.
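
    For illustration, the sketch below reconstructs a one-dimensional piecewise-constant signal from random measurements with a smoothed-TV penalty minimized by plain gradient descent. This is a simpler stand-in for the augmented Lagrangian / alternating direction method described in the abstract; problem sizes and the regularization weight are arbitrary.

```python
import numpy as np

# Illustrative sketch: 1D compressive sensing with a smoothed total
# variation (TV) penalty, minimizing 0.5*||Ax - b||^2 + lam*TV_eps(x)
# by plain gradient descent. The paper uses an augmented Lagrangian /
# alternating direction solver; this simpler variant shows the idea.

rng = np.random.default_rng(1)
n, m_meas, lam, eps_tv = 200, 80, 0.05, 1e-6

x_true = np.zeros(n)                      # piecewise-constant test signal
x_true[50:100], x_true[130:160] = 1.0, -0.5
A = rng.standard_normal((m_meas, n)) / np.sqrt(m_meas)
b = A @ x_true                            # compressive measurements

def tv_grad(x):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps_tv)       # gradient of sum sqrt(d^2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

x = np.zeros(n)
for _ in range(5000):
    g = A.T @ (A @ x - b) + lam * tv_grad(x)
    x -= 0.1 * g                          # fixed small step size

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # rel. error
```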

  14. Linear programming algorithms and applications

    CERN Document Server

    Vajda, S

    1981-01-01

    This text is based on a course of about 16 hours of lectures to students of mathematics, statistics, and/or operational research. It is intended to introduce readers to the very wide range of applicability of linear programming, covering problems of management, administration, transportation and a number of other uses which are mentioned in their context. The emphasis is on numerical algorithms, which are illustrated by examples of such modest size that the solutions can be obtained using pen and paper. It is clear that these methods, if applied to larger problems, can also be carried out on automatic (electronic) computers. Commercially available computer packages are, in fact, mainly based on algorithms explained in this book. The author is convinced that the user of these algorithms ought to be knowledgeable about the underlying theory. Therefore this volume is not merely addressed to the practitioner, but also to the mathematician who is interested in relatively new developments in algebraic theory and in...

  15. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  16. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Directory of Open Access Journals (Sweden)

    Aryanti Aryanti

    2018-01-01

    Full Text Available Data security and confidentiality are among the most important aspects of information systems today. One way to secure data is through cryptography. In this study, a data security system was developed by implementing the Rivest, Shamir, Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research combined the RSA and Vigenere Cipher algorithms to protect document files in Word, Excel, and PDF formats. The application covers both encryption and decryption of data and was built with PHP and MySQL. On the sending side, data are encrypted by RSA using the public key, followed by the Vigenere Cipher algorithm, which also uses the public key. On the receiving side, decryption proceeds with the Vigenere Cipher algorithm, still using the public key, and then with the RSA algorithm using the private key. Test results show that the system can encrypt, decrypt, and transmit files. Tests on files of different sizes show that file size affects the duration of encryption and decryption: the larger the file, the longer both processes take.
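
    The layered scheme described above can be sketched with textbook RSA and a byte-wise Vigenere pass, as below. The tiny primes and per-byte RSA are for illustration only and are completely insecure; the key handling is simplified and does not reproduce the paper's exact protocol.

```python
# Toy sketch of a layered Vigenere + textbook RSA scheme in the spirit
# of the paper. Tiny demo primes and byte-wise RSA: NOT secure.

def vigenere(data: bytes, key: bytes, decrypt: bool = False) -> bytes:
    s = -1 if decrypt else 1
    return bytes((b + s * key[i % len(key)]) % 256
                 for i, b in enumerate(data))

p, q = 61, 53                        # demo primes
n_mod, phi = p * q, (p - 1) * (q - 1)
e = 17                               # public exponent
d = pow(e, -1, phi)                  # private exponent (Python 3.8+)

def rsa_encrypt(data: bytes) -> list:
    return [pow(b, e, n_mod) for b in data]          # public key (e, n)

def rsa_decrypt(blocks: list) -> bytes:
    return bytes(pow(c, d, n_mod) for c in blocks)   # private key (d, n)

msg = b"confidential document"
key = b"SECRET"

ciphertext = rsa_encrypt(vigenere(msg, key))          # sender side
recovered = vigenere(rsa_decrypt(ciphertext), key, decrypt=True)
assert recovered == msg
```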

  17. Frame Works: Using Metaphor in Theory and Practice in Information Literacy

    Science.gov (United States)

    Holliday, Wendy

    2017-01-01

    The ACRL Framework for Information Literacy in Higher Education generated a large amount of discourse during its development and adoption. All of this discourse is rich in metaphoric language that can be used as a tool for critical reflection on teaching and learning, information literacy, and the nature and role of theory in the practice of…

  18. Set Theory Correlation Free Algorithm for HRRR Target Tracking

    National Research Council Canada - National Science Library

    Blasch, Erik

    1999-01-01

    .... Recently a few fusionists, including Mahler [1] and Mori [2], are using a set theory approach for a unified data fusion theory, which is a correlation-free paradigm [3]. This paper uses the set theory approach...

  19. The future (and past) of quantum theory after the Higgs boson: a quantum-informational viewpoint.

    Science.gov (United States)

    Plotnitsky, Arkady

    2016-05-28

    Taking as its point of departure the discovery of the Higgs boson, this article considers quantum theory, including quantum field theory, which predicted the Higgs boson, through the combined perspective of quantum information theory and the idea of technology, while also adopting a non-realist interpretation, in 'the spirit of Copenhagen', of quantum theory and quantum phenomena themselves. The article argues that the 'events' in question in fundamental physics, such as the discovery of the Higgs boson (a particularly complex and dramatic, but not essentially different, case), are made possible by the joint workings of three technologies: experimental technology, mathematical technology and, more recently, digital computer technology. The article will consider the role of and the relationships among these technologies, focusing on experimental and mathematical technologies, in quantum mechanics (QM), quantum field theory (QFT) and finite-dimensional quantum theory, with which quantum information theory has been primarily concerned thus far. It will do so, in part, by reassessing the history of quantum theory, beginning with Heisenberg's discovery of QM, in quantum-informational and technological terms. This history, the article argues, is defined by the discoveries of increasingly complex configurations of observed phenomena and the emergence of the increasingly complex mathematical formalism accounting for these phenomena, culminating in the standard model of elementary-particle physics, defining the current state of QFT. © 2016 The Author(s).

  20. The application of information theory for the research of aging and aging-related diseases.

    Science.gov (United States)

    Blokh, David; Stambler, Ilia

    2017-10-01

    This article reviews the application of information-theoretical analysis, employing measures of entropy and mutual information, for the study of aging and aging-related diseases. The research of aging and aging-related diseases is particularly suitable for the application of information theory methods, as aging processes and related diseases are multi-parametric, with continuous parameters coexisting alongside discrete parameters, and with the relations between the parameters being as a rule non-linear. Information theory provides unique analytical capabilities for the solution of such problems, with unique advantages over common linear biostatistics. Among the age-related diseases, information theory has been used in the study of neurodegenerative diseases (particularly using EEG time series for diagnosis and prediction), cancer (particularly for establishing individual and combined cancer biomarkers), diabetes (mainly utilizing mutual information to characterize the diseased and aging states), and heart disease (mainly for the analysis of heart rate variability). Few works have employed information theory for the analysis of general aging processes and frailty, as underlying determinants and possible early preclinical diagnostic measures for aging-related diseases. Generally, the use of information-theoretical analysis permits not only establishing the (non-linear) correlations between diagnostic or therapeutic parameters of interest, but may also provide a theoretical insight into the nature of aging and related diseases by establishing the measures of variability, adaptation, regulation or homeostasis, within a system of interest. It may be hoped that the increased use of such measures in research may considerably increase diagnostic and therapeutic capabilities and the fundamental theoretical mathematical understanding of aging and disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
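
    The kind of mutual-information analysis reviewed here can be sketched on synthetic data: a hypothetical continuous biomarker is quartile-discretized and its mutual information with a binary disease state is estimated from empirical frequencies. All names and data below are illustrative.

```python
import numpy as np

# Generic sketch of the kind of analysis the review describes: the
# mutual information between a continuous parameter (a hypothetical
# biomarker) and a discrete disease state, estimated after quartile
# discretization. All data below are synthetic.

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    # I(X;Y) = H(X) + H(Y) - H(X,Y); encode each (x, y) pair as a label
    pair = x * (y.max() + 1) + y
    return entropy(x) + entropy(y) - entropy(pair)

rng = np.random.default_rng(0)
n = 1000
disease = rng.integers(0, 2, n)                       # 0 healthy, 1 ill
biomarker = rng.normal(loc=1.5 * disease, scale=1.0)  # shifted when ill
x_disc = np.digitize(biomarker, np.quantile(biomarker, [0.25, 0.5, 0.75]))

print(f"I(biomarker; disease) = "
      f"{mutual_information(x_disc, disease):.3f} bits")
```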

  1. Modified retrieval algorithm for three types of precipitation distribution using x-band synthetic aperture radar

    Science.gov (United States)

    Xie, Yanan; Zhou, Mingliang; Pan, Dengke

    2017-10-01

    The forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation observed with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is determined by the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. The key difference from the model-oriented statistical and Volterra integration (MOSVI) algorithm is that the M-M algorithm retrieves the near-surface rainfall rate with a modified regression empirical algorithm rather than a linear regression formula. Half of the empirical parameters in the weighted integral are eliminated, and a smaller average relative error is obtained for rainfall rates below 100 mm/h. The algorithm proposed in this paper can therefore provide high-precision rainfall information.

  2. Local versus nonlocal information in quantum-information theory: Formalism and phenomena

    International Nuclear Information System (INIS)

    Horodecki, Michal; Horodecki, Ryszard; Synak-Radtke, Barbara; Horodecki, Pawel; Oppenheim, Jonathan; Sen, Aditi; Sen, Ujjwal

    2005-01-01

    In spite of many results in quantum information theory, the complex nature of compound systems is far from clear. In general the information is a mixture of local and nonlocal ('quantum') information. It is important from both pragmatic and theoretical points of view to know the relationships between the two components. To make this point more clear, we develop and investigate the quantum-information processing paradigm in which parties sharing a multipartite state distill local information. The amount of information which is lost because the parties must use a classical communication channel is the deficit. This scheme can be viewed as complementary to the notion of distilling entanglement. After reviewing the paradigm in detail, we show that the upper bound for the deficit is given by the relative entropy distance to so-called pseudoclassically correlated states; the lower bound is the relative entropy of entanglement. This implies, in particular, that any entangled state is informationally nonlocal - i.e., has nonzero deficit. We also apply the paradigm to defining the thermodynamical cost of erasing entanglement. We show the cost is bounded from below by relative entropy of entanglement. We demonstrate the existence of several other nonlocal phenomena which can be found using the paradigm of local information. For example, we prove the existence of a form of nonlocality without entanglement and with distinguishability. We analyze the deficit for several classes of multipartite pure states and obtain that in contrast to the GHZ state, the Aharonov state is extremely nonlocal. We also show that there do not exist states for which the deficit is strictly equal to the whole informational content (bound local information). We discuss the relation of the paradigm with measures of classical correlations introduced earlier. It is also proved that in the one-way scenario, the deficit is additive for Bell diagonal states. We then discuss complementary features of

  3. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    Science.gov (United States)

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
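
    For reference, the plain normalized least-mean-square identification rule that the paper's fuzzy modification builds on can be sketched as follows; the FIR model order, step size, and test signal are arbitrary choices, not the paper's liquid level process.

```python
import numpy as np

# Baseline sketch: plain normalized least-mean-square (NLMS) on-line
# identification of a linear FIR model, the update rule the paper's
# fuzzy-modified algorithm builds on.

def nlms_identify(x, d, n_taps=8, mu=0.5, eps=1e-6):
    """Adapt w so that w @ [x[k], ..., x[k-n_taps+1]] tracks d[k]."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        e = d[k] - w @ u                    # a-priori prediction error
        w += mu * e * u / (eps + u @ u)     # normalized step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
true_w = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(len(x))

print(np.round(nlms_identify(x, d), 2))     # should approach true_w
```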

  4. A Pumping Algorithm for Ergodic Stochastic Mean Payoff Games with Perfect Information

    Science.gov (United States)

    Boros, Endre; Elbassioni, Khaled; Gurvich, Vladimir; Makino, Kazuhisa

    In this paper, we consider two-person zero-sum stochastic mean payoff games with perfect information, or BWR-games, given by a digraph G = (V = V_B ∪ V_W ∪ V_R, E), with local rewards r: E → R, and three types of vertices: black V_B, white V_W, and random V_R. The game is played by two players, White and Black: When the play is at a white (black) vertex v, White (Black) selects an outgoing arc (v,u). When the play is at a random vertex v, a vertex u is picked with the given probability p(v,u). In all cases, Black pays White the value r(v,u). The play continues forever, and White aims to maximize (Black aims to minimize) the limiting mean (that is, average) payoff. It was recently shown in [7] that BWR-games are polynomially equivalent with the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games (SSGs), stochastic parity games, and Markov decision processes. In this paper, we give a new algorithm for solving BWR-games in the ergodic case, that is, when the optimal values do not depend on the initial position. Our algorithm solves a BWR-game by reducing it, using a potential transformation, to a canonical form in which the optimal strategies of both players and the value for every initial position are obvious, since a locally optimal move in it is optimal in the whole game. We show that this algorithm is pseudo-polynomial when the number of random nodes is constant. We also provide an almost matching lower bound on its running time, and show that this bound holds for a wider class of algorithms. Let us add that the general (non-ergodic) case is at least as hard as SSGs, for which no pseudo-polynomial algorithm is known.

  5. Properties of the numerical algorithms for problems of quantum information technologies: Benefits of deep analysis

    Science.gov (United States)

    Chernyavskiy, Andrey; Khamitov, Kamil; Teplov, Alexey; Voevodin, Vadim; Voevodin, Vladimir

    2016-10-01

    In recent years, quantum information technologies (QIT) have developed rapidly, yet their implementation still faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts: theoretical and experimental. The theoretical part covers features such as sequential and parallel complexity, macro structure, and a visual information graph. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability, and a set of more specific dynamic characteristics of the implementation. This approach allowed us to identify bottlenecks and generate ideas for efficiency improvement.
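
    The most critical kernel mentioned above, applying a one-qubit gate to a many-qubit state vector, reduces to a bulk update of pairs of amplitudes. A minimal sequential reference version (the baseline from which such parallel analyses start) might look like this:

```python
import numpy as np

# Sequential reference kernel: apply a one-qubit unitary U to qubit t
# of an n-qubit state vector of length 2**n. Reshaping exposes the
# target qubit as a separate axis so U acts on amplitude pairs in bulk.
# Convention here: qubit t is counted from the most significant axis.

def apply_one_qubit_gate(state, U, t, n):
    psi = state.reshape(2 ** t, 2, 2 ** (n - t - 1))
    return np.einsum('ab,ibj->iaj', U, psi).reshape(-1)

n = 10
rng = np.random.default_rng(0)
state = rng.standard_normal(2 ** n) + 1j * rng.standard_normal(2 ** n)
state /= np.linalg.norm(state)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
out = apply_one_qubit_gate(state, H, t=3, n=n)
print(np.linalg.norm(out))                       # unitarity: still 1.0
```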

  6. Tenth workshop on the algorithmic foundations of robotics (WAFR)

    CERN Document Server

    Lozano-Perez, Tomas; Roy, Nicholas; Rus, Daniela; Algorithmic foundations of robotics X

    2013-01-01

    Algorithms are a fundamental component of robotic systems. Robot algorithms process inputs from sensors that provide noisy and partial data, build geometric and physical models of the world, plan high- and low-level actions at different time horizons, and execute these actions on actuators with limited precision. The design and analysis of robot algorithms raise a unique combination of questions from many fields, including control theory, computational geometry and topology, geometrical and physical modeling, reasoning under uncertainty, probabilistic algorithms, game theory, and theoretical computer science. The Workshop on Algorithmic Foundations of Robotics (WAFR) is a single-track meeting of leading researchers in the field of robot algorithms. Since its inception in 1994, WAFR has been held every other year, and has provided one of the premier venues for the publication of some of the field's most important and lasting contributions. This book contains the proceedings of the tenth WAFR, held on June 13-15, 201...

  7. Cognition and biology: perspectives from information theory.

    Science.gov (United States)

    Wallace, Rodrick

    2014-02-01

    The intimate relation between biology and cognition can be formally examined through statistical models constrained by the asymptotic limit theorems of communication theory, augmented by methods from statistical mechanics and nonequilibrium thermodynamics. Cognition, often involving submodules that act as information sources, is ubiquitous across the living state. Less metabolic free energy is consumed by permitting crosstalk between biological information sources than by isolating them, leading to evolutionary exaptations that assemble shifting, tunable cognitive arrays at multiple scales and levels of organization to meet dynamic patterns of threat and opportunity. Cognition is thus necessary for life, but it is not sufficient: An organism represents a highly patterned outcome of path-dependent blind variation, selection, interaction, and chance extinction in the context of an adequate flow of free energy and an environment fit for development. Complex, interacting cognitive processes within an organism both record and instantiate those evolutionary and developmental trajectories.

  8. Novel theory of the human brain: information-commutation basis of architecture and principles of operation

    Directory of Open Access Journals (Sweden)

    Bryukhovetskiy AS

    2015-02-01

    Full Text Available Andrey S Bryukhovetskiy Center for Biomedical Technologies, Federal Research and Clinical Center for Specialized Types of Medical Assistance and Medical Technologies of the Federal Medical Biological Agency, NeuroVita Clinic of Interventional and Restorative Neurology and Therapy, Moscow, Russia Abstract: Based on the methodology of the informational approach and research of the genome, proteome, and complete transcriptome profiles of different cells in the nervous tissue of the human brain, the author proposes a new theory of information-commutation organization and architecture of the human brain which is an alternative to the conventional systemic connective morphofunctional paradigm of the brain framework. Informational principles of brain operation are defined: the modular principle, holographic principle, principle of systematicity of vertical commutative connection and complexity of horizontal commutative connection, regulatory principle, relay principle, modulation principle, “illumination” principle, principle of personalized memory and intellect, and principle of low energy consumption. The author demonstrates that the cortex functions only as a switchboard and router of information, while information is processed outside the nervous tissue of the brain in the intermeningeal space. The main structural element of information-commutation in the brain is not the neuron, but information-commutation modules that are subdivided into receiver modules, transmitter modules, and subscriber modules, forming a vertical architecture of nervous tissue in the brain as information lines and information channels, and a horizontal architecture as central, intermediate, and peripheral information-commutation platforms. Information in information-commutation modules is transferred by means of the carriers that are characteristic to the specific information level from inductome to genome, transcriptome, proteome, metabolome, secretome, and magnetome

  9. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
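
    For contrast, the naive computation that the alpha-shape method replaces can be written directly: a morphological closing (dilation followed by erosion) of a profile with a disk structuring function, using reflective padding at the ends. The sketch below assumes unit-spaced samples with heights in the same units as the ball radius; the test profile and radius are arbitrary.

```python
import numpy as np

# Naive O(n*w) morphological closing of a 1D profile with a disk
# (ball) structuring function: dilation followed by erosion, with
# reflective padding at the ends of the open profile.

def ball(radius):
    u = np.arange(-int(radius), int(radius) + 1)
    return np.sqrt(radius ** 2 - u ** 2)

def dilate(f, b):
    w = len(b) // 2
    fp = np.pad(f, w, mode='reflect')
    return np.array([np.max(fp[i:i + len(b)] + b) for i in range(len(f))])

def erode(f, b):
    w = len(b) // 2
    fp = np.pad(f, w, mode='reflect')
    return np.array([np.min(fp[i:i + len(b)] - b) for i in range(len(f))])

def closing(f, radius):                 # upper envelope of the profile
    b = ball(radius)
    return erode(dilate(f, b), b)

rng = np.random.default_rng(0)
n = 500
profile = 5.0 * np.sin(np.arange(n) / 30.0) + 0.2 * rng.standard_normal(n)
envelope = closing(profile, radius=15)
print(np.all(envelope >= profile - 1e-9))   # closing lies on or above f
```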

  10. Preservation of information in Fourier theory based deconvolved nuclear spectra

    International Nuclear Information System (INIS)

    Madan, V.K.; Gopalakrishnan, K.R.; Sharma, R.C.; Rattan, S.S.

    1995-01-01

    Nuclear spectroscopy is extremely useful in internal radiation dosimetry for the estimation of body burden due to gamma emitters. Analysis of nuclear spectra is concerned with the extraction of qualitative and quantitative information embedded in the spectra. A spectral deconvolution method based on Fourier theory is probably the simplest method of deconvolving nuclear spectra. It is proved mathematically that the deconvolution method preserves the qualitative information. It is shown by using simulated spectra and an observed gamma ray spectrum that the method preserves the quantitative information. This may provide a novel approach to information extraction from a deconvolved spectrum. The paper discusses the methodology, mathematical analysis, and the results obtained by deconvolving spectra. (author). 6 refs., 2 tabs
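
    The simplest form of such Fourier-theory deconvolution can be sketched as follows: divide the transform of the observed spectrum by that of the detector response, with a small regularization term to tame noise-dominated frequencies. The Gaussian response and synthetic peaks are illustrative, not the paper's data.

```python
import numpy as np

# Sketch of the simplest Fourier-domain deconvolution of a spectrum:
# divide by the transform of the detector response, with a small
# regularizer (Wiener-style) to control noise amplification.
# Response width, peak positions, and amplitudes are illustrative.

n = 512
ch = np.arange(n)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

true = 100 * gaussian(ch, 200, 2) + 60 * gaussian(ch, 230, 2)  # two peaks
resp = gaussian(ch, n // 2, 6)
resp /= resp.sum()                                  # normalized response

F_resp = np.fft.fft(np.fft.ifftshift(resp))         # centered kernel
observed = np.real(np.fft.ifft(np.fft.fft(true) * F_resp))

reg = 1e-3 * np.max(np.abs(F_resp))
F_obs = np.fft.fft(observed)
deconv = np.real(np.fft.ifft(F_obs * np.conj(F_resp) /
                             (np.abs(F_resp) ** 2 + reg ** 2)))

print(int(np.argmax(deconv)))   # peak position (qualitative info) ~ 200
```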

  11. Optimized combination model and algorithm of parking guidance information configuration

    Directory of Open Access Journals (Sweden)

    Tian Ye

    2011-01-01

    Full Text Available Abstract Operators of parking guidance and information (PGI) systems often have difficulty providing the best car park availability information to drivers in periods of high demand. A new PGI configuration model based on an optimized combination method was proposed by analyzing parking choice behavior. This article first describes a parking choice behavioral model incorporating drivers' perceptions of waiting times at car parks based on PGI signs. This model was used to predict the influence of PGI signs on the overall performance of the traffic system. Relationships were then developed for estimating the arrival rates at car parks based on driver characteristics, car park attributes, and the car park availability information displayed on PGI signs. A mathematical program was formulated to determine the optimal PGI sign configuration that minimizes total travel time. A genetic algorithm was used to identify solutions that significantly reduced queue lengths and total travel time compared with existing practices. These procedures were applied to an existing PGI system operating in Deqing Town and Xiuning City, where significant reductions in the total travel time of parking vehicles were achieved once the PGI system was configured accordingly. This would reduce traffic congestion and lead to various environmental benefits.

  12. On divergence of finite measures and their applicability in statistics and information theory

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; Stummer, W.

    2009-01-01

    Roč. 44, č. 2 (2009), s. 169-187 ISSN 0233-1888 R&D Projects: GA MŠk(CZ) 1M0572; GA ČR(CZ) GA102/07/1131 Institutional research plan: CEZ:AV0Z10750506 Keywords : Local and global divergences of finite measures * Divergences of sigma-finite measures * Statistical censoring * Pinsker's inequality, Ornstein's distance * Differential power entropies Subject RIV: BD - Theory of Information Impact factor: 0.759, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/vajda-on divergence of finite measures and their applicability in statistics and information theory.pdf

  13. EDITORIAL: Quantum control theory for coherence and information dynamics Quantum control theory for coherence and information dynamics

    Science.gov (United States)

    Viola, Lorenza; Tannor, David

    2011-08-01

    Precisely characterizing and controlling the dynamics of realistic open quantum systems has emerged in recent years as a key challenge across contemporary quantum sciences and technologies, with implications ranging from physics, chemistry and applied mathematics to quantum information processing (QIP) and quantum engineering. Quantum control theory aims to provide both a general dynamical-system framework and a constructive toolbox to meet this challenge. The purpose of this special issue of Journal of Physics B: Atomic, Molecular and Optical Physics is to present a state-of-the-art account of recent advances and current trends in the field, as reflected in two international meetings that were held on the subject over the last summer and which motivated in part the compilation of this volume—the Topical Group: Frontiers in Open Quantum Systems and Quantum Control Theory, held at the Institute for Theoretical Atomic, Molecular and Optical Physics (ITAMP) in Cambridge, Massachusetts (USA), from 1-14 August 2010, and the Safed Workshop on Quantum Decoherence and Thermodynamics Control, held in Safed (Israel), from 22-27 August 2010. Initial developments in quantum control theory date back to (at least) the early 1980s, and have been largely inspired by the well-established mathematical framework for classical dynamical systems. As the above-mentioned meetings made clear, and as the burgeoning body of literature on the subject testifies, quantum control has grown since then well beyond its original boundaries, and has by now evolved into a highly cross-disciplinary field which, while still fast-moving, is also entering a new phase of maturity, sophistication, and integration. Two trends deserve special attention: on the one hand, a growing emphasis on control tasks and methodologies that are specifically motivated by QIP, in addition and in parallel to applications in more traditional areas where quantum coherence is nevertheless vital (such as, for instance

  14. Algorithmic Algebraic Combinatorics and Gröbner Bases

    CERN Document Server

    Klin, Mikhail; Jurisic, Aleksandar

    2009-01-01

    This collection of tutorial and research papers introduces readers to diverse areas of modern pure and applied algebraic combinatorics and finite geometries with a special emphasis on algorithmic aspects and the use of the theory of Gröbner bases. Topics covered include coherent configurations, association schemes, permutation groups, Latin squares, the Jacobian conjecture, mathematical chemistry, extremal combinatorics, coding theory, designs, etc. Special attention is paid to the description of innovative practical algorithms and their implementation in software packages such as GAP and MAGM

  15. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty , Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach known as 'Deterministic Annealing', and is reminiscent of the 'Deterministic Boltzmann Machine'. The algorithm is less time consuming in comparison with its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  16. Shape reconstruction from apparent contours theory and algorithms

    CERN Document Server

    Bellettini, Giovanni; Paolini, Maurizio

    2015-01-01

    Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.

  17. Interest in and reactions to genetic risk information: The role of implicit theories and self-affirmation.

    Science.gov (United States)

    Taber, Jennifer M; Klein, William M P; Persky, Susan; Ferrer, Rebecca A; Kaufman, Annette R; Thai, Chan L; Harris, Peter R

    2017-10-01

    Implicit theories reflect core assumptions about whether human attributes are malleable or fixed: Incremental theorists believe a characteristic is malleable whereas entity theorists believe it is fixed. People with entity theories about health may be less likely to engage in risk-mitigating behavior. Spontaneous self-affirmation (e.g., reflecting on one's values when threatened) may lessen defensiveness and unhealthy behaviors associated with fixed beliefs, and reduce the likelihood of responding to health risk information with fixed beliefs. Across two studies conducted in the US from 2012 to 2015, we investigated how self-affirmation and implicit theories about health and body weight were linked to engagement with genetic risk information. In Study 1, participants in a genome sequencing trial (n = 511) completed cross-sectional assessments of implicit theories, self-affirmation, and intentions to learn, share, and use genetic information. In Study 2, overweight women (n = 197) were randomized to receive genetic or behavioral explanations for weight; participants completed surveys assessing implicit theories, self-affirmation, self-efficacy, motivation, and intentions. Fixed beliefs about weight were infrequently endorsed across studies (10.8-15.2%). In Study 1, participants with stronger fixed theories were less interested in learning and using genetic risk information about medically actionable disease; these associations were weaker among participants higher in self-affirmation. In Study 2, among participants given behavioral explanations for weight, stronger fixed theories about weight were associated with lower motivation and intentions to eat a healthy diet. Among participants given genetic explanations, being higher in self-affirmation was associated with less fixed beliefs. Stronger health-related fixed theories may decrease the likelihood of benefiting from genetic information, but less so for people who self-affirm. Published by Elsevier Ltd.

  18. Risk-informed decision making in the nuclear industry: Application and effectiveness comparison of different genetic algorithm techniques

    International Nuclear Information System (INIS)

    Gjorgiev, Blaže; Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Multi-objective optimization of STI based on risk-informed decision making. ► Four different genetic algorithm (GA) techniques are used as the optimization tool. ► Advantages/disadvantages among the four different GAs applied are emphasized. - Abstract: The risk-informed decision making (RIDM) process, where insights gained from the probabilistic safety assessment are contemplated together with other engineering insights, is gaining ever-increasing attention in the process industries. Increasing safety system availability by applying RIDM is one of the prime goals for the authorities operating nuclear power plants. Additionally, equipment ageing is gradually becoming a major concern in the process industries and especially in the nuclear industry, since more and more safety-related components are approaching or are already in their wear-out phase. A significant difficulty regarding the consideration of ageing effects on equipment (un)availability is the immense uncertainty associated with the available equipment ageing data. This paper presents an approach for safety system unavailability reduction by optimizing the related test and maintenance schedule suggested by the technical specifications in the nuclear industry. Given the RIDM philosophy, two additional insights, i.e. ageing data uncertainty and test and maintenance costs, are considered along with unavailability insights gained from the probabilistic safety assessment for a selected standard safety system. In that sense, an approach for multi-objective optimization of the equipment surveillance test interval is proposed herein. Three different objective functions, related to each of the three different insights discussed above, comprise the multi-objective nature of the optimization process. The genetic algorithm technique is utilized as an optimization tool. Four different types of genetic algorithms are utilized and consequently a comparative analysis is conducted given the

  19. The Use of Ideas of Information Theory for Studying “Language” and Intelligence in Ants

    Directory of Open Access Journals (Sweden)

    Zhanna Reznikova

    2009-11-01

    Full Text Available In this review we integrate the results of a long-term experimental study on ant “language” and intelligence which was fully based on fundamental ideas of information theory, such as the Shannon entropy, the Kolmogorov complexity, and Shannon's equation connecting the length of a message (l) and its frequency (p), i.e., l = –log p, for rational communication systems. This approach enabled us to obtain the following important results on ants' communication and intelligence: (i) to reveal “distant homing” in ants, that is, their ability to transfer information about remote events; (ii) to estimate the rate of information transmission; (iii) to reveal that ants are able to grasp regularities and to use them for “compression” of information; (iv) to reveal that ants are able to transfer to each other information about the number of objects; (v) to discover that ants can add and subtract small numbers. The obtained results show that information theory is not only an excellent mathematical theory, but that many of its results may be considered as laws of Nature.
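
    The quoted relation l = –log p can be made concrete with a toy ensemble of route messages; the patterns and frequencies below are invented for illustration.

```python
import math

# Tiny numeric illustration of l = -log2 p: in an efficient
# communication system, frequent messages get short codes.
# The turn-pattern frequencies below are invented.

patterns = {"LLLLLL": 0.5, "RRRRRR": 0.25, "LRLRLR": 0.125, "RLLRRL": 0.125}

for msg, p in patterns.items():
    print(f"{msg}: p = {p:<6} optimal length = {-math.log2(p):.0f} bits")

# Shannon entropy = average optimal length over the message ensemble
H = -sum(p * math.log2(p) for p in patterns.values())
print(f"entropy = {H:.3f} bits per message")
```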

  20. The Fuzzy MCDM Algorithms for the M&A Due Diligence

    Directory of Open Access Journals (Sweden)

    Chung-Tsen Tsao

    2008-04-01

    Full Text Available An M&A due diligence is the process in which one of the parties to the transaction undertakes to investigate the other in order to judge whether to go forward with the transaction on the terms proposed. It encompasses missions in three phases: searching for and preliminarily screening potential candidates, evaluating the candidates and deciding on the target, and assisting the after-transaction integration. This work suggests using a Fuzzy Multiple Criteria Decision Making (Fuzzy MCDM) approach and develops detailed algorithms to carry out the second-phase task. The MCDM approach facilitates the analysis and integration of information from different aspects and criteria, and the theory of fuzzy sets can accommodate qualitative information in addition to quantitative information. In the developed algorithms the evaluators' subjective judgments are expressed in linguistic terms, which reflect human intuitive thought better than quantitative scores. These linguistic judgments are transformed into fuzzy numbers and subsequently synthesized with quantitative financial figures. The candidates can be ranked after defuzzification. The acquiring firm can then work out a more specific study, including pricing and costing, of the priority candidates so as to decide on the target.

  1. Power Load Prediction Based on Fractal Theory

    OpenAIRE

    Jian-Kai, Liang; Cattani, Carlo; Wan-Qing, Song

    2015-01-01

    The basic theories of load forecasting on the power system are summarized. Fractal theory, which is a new algorithm applied to load forecasting, is introduced. Based on the fractal dimension and fractal interpolation function theories, the correlation algorithms are applied to the model of short-term load forecasting. According to the process of load forecasting, the steps of every process are designed, including load data preprocessing, similar day selecting, short-term load forecasting, and...

  2. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modelling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting the restoration process into an optimization problem for a functional involving a fidelity term on the image data plus a regularization term. Infrared image restoration with the TV-L1 model exploits the remote sensing data fully and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for real-time infrared remote sensing systems. The massive computation over the image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the quality of the restored image against the input image is presented. Experimental results show that the TV-L1 filter can restore a varying background image reasonably and that its performance meets the requirements of real-time image processing.

  3. Alice and Bob meet Banach the interface of asymptotic geometric analysis and quantum information theory

    CERN Document Server

    Aubrun, Guillaume

    2017-01-01

    The quest to build a quantum computer is arguably one of the major scientific and technological challenges of the twenty-first century, and quantum information theory (QIT) provides the mathematical framework for that quest. Over the last dozen or so years, it has become clear that quantum information theory is closely linked to geometric functional analysis (Banach space theory, operator spaces, high-dimensional probability), a field also known as asymptotic geometric analysis (AGA). In a nutshell, asymptotic geometric analysis investigates quantitative properties of convex sets, or other geometric structures, and their approximate symmetries as the dimension becomes large. This makes it especially relevant to quantum theory, where systems consisting of just a few particles naturally lead to models whose dimension is in the thousands, or even in the billions. Alice and Bob Meet Banach is aimed at multiple audiences connected through their interest in the interface of QIT and AGA: at quantum information resea...

  4. Validation of neural spike sorting algorithms without ground-truth information.

    Science.gov (United States)

    Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F

    2016-05-01

    The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
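
    In the spirit of the stability metrics described (though not the authors' implementation), two runs of a sorter on the same events can be compared by optimally matching units between runs and scoring per-unit agreement:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of a stability-style comparison: each sorter run is summarized
# as a label per detected event (events assumed aligned across runs);
# units are matched between runs and scored with a Dice-style overlap.

def unit_stability(labels_a, labels_b):
    units_a, units_b = np.unique(labels_a), np.unique(labels_b)
    # confusion matrix: events shared by every pair of units
    C = np.array([[np.logical_and(labels_a == a, labels_b == b).sum()
                   for b in units_b] for a in units_a])
    row, col = linear_sum_assignment(-C)          # best unit matching
    scores = {}
    for i, j in zip(row, col):
        denom = (labels_a == units_a[i]).sum() + (labels_b == units_b[j]).sum()
        scores[units_a[i]] = 2 * C[i, j] / denom  # Dice-style agreement
    return scores

rng = np.random.default_rng(0)
run1 = rng.integers(0, 4, 1000)                   # 4 "units", 1000 events
run2 = run1.copy()
flip = rng.random(1000) < 0.05                    # 5% relabeled events
run2[flip] = rng.integers(0, 4, flip.sum())

print(unit_stability(run1, run2))                 # ~0.95 per unit
```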

  5. Bistatic SAR/ISAR/FSR geometry, signal models and imaging algorithms

    CERN Document Server

    Lazarov, Andon Dimitrov

    2013-01-01

    Bistatic radar consists of a radar system which comprises a transmitter and receiver separated by a distance comparable to the expected target distance. This book provides a general theoretical description of such bistatic technology in the context of synthetic aperture, inverse synthetic aperture and forward scattering radars, from the point of view of analytical geometry and of signal formation and processing theory. Signal formation and image reconstruction algorithms are developed with the application of highly informative linear frequency and phase code modulation techniques

  6. Surrogate Marker Evaluation from an Information Theory Perspective

    OpenAIRE

    Alonso Abad, Ariel; Molenberghs, Geert

    2006-01-01

    The last 20 years have seen lots of work in the area of surrogate marker validation, partly devoted to frame the evaluation in a multitrial framework, leading to definitions in terms of the quality of trial- and individual-level association between a potential surrogate and a true endpoint (Buyse et al., 2000, Biostatistics 1, 49–67). A drawback is that different settings have led to different measures at the individual level. Here, we use information theory to create a unified framework, lea...

  7. Information flow, causality, and the classical theory of tachyons

    International Nuclear Information System (INIS)

    Basano, L.

    1977-01-01

    Causal paradoxes arising in the tachyon theory have been systematically solved by using the reinterpretation principle as a consequence of which cause and effect no longer retain an absolute meaning. However, even in the tachyon theory, a cause is always seen to chronologically precede its effect, but this is obtained at the price of allowing cause and effect to be interchanged when required. A recent result has shown that this interchangeability of cause and effect must not be unlimited if heavy paradoxes are to be avoided. This partial recovery of the classical concept of causality has been expressed by the conjecture that transcendent tachyons cannot be absorbed by a tachyon detector. In this paper the directional properties of the flow of information between two observers in relative motion and its consequences on the logical self-consistency of the theory of superluminal particles are analyzed. It is shown that the above conjecture does not provide a satisfactory solution to the problem because it implies that tachyons of any speed cannot be intercepted by the same detector. (author)

  8. Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing

    Science.gov (United States)

    Schrauben, Julie E.

    2010-01-01

    LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which processing of information occurs. These processes are dependent upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…

  9. Information Theory for Gabor Feature Selection for Face Recognition

    Directory of Open Access Journals (Sweden)

    Shen Linlin

    2006-01-01

    Full Text Available A discriminative and robust feature—kernel enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundreds of features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unified different Gabor filter definitions and proposed a training sample generation algorithm to reduce the effects caused by unbalanced number of samples available in different classes.

  10. Information Theory for Gabor Feature Selection for Face Recognition

    Science.gov (United States)

    Shen, Linlin; Bai, Li

    2006-12-01

    A discriminative and robust feature—kernel enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundreds of features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unified different Gabor filter definitions and proposed a training sample generation algorithm to reduce the effects caused by unbalanced number of samples available in different classes.
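
    The selection principle used here, informative but nonredundant features, can be sketched with a greedy criterion that rewards mutual information with the class label and penalizes mutual information with already-chosen features; generic random features stand in for Gabor responses, and all data below are synthetic.

```python
import numpy as np

# Greedy mutual-information feature selection sketch: pick features
# with high MI with the class and low MI with already-selected
# features (an mRMR-style criterion). Features here are generic
# discretized random variables, not actual Gabor responses.

def entropy(v):
    _, c = np.unique(v, return_counts=True)
    p = c / c.sum()
    return -np.sum(p * np.log2(p))

def mi(a, b):
    pair = a * (b.max() + 1) + b        # encode the joint outcome
    return entropy(a) + entropy(b) - entropy(pair)

def select_features(X_disc, y, k):
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(X_disc.shape[1]):
            if j in chosen:
                continue
            relevance = mi(X_disc[:, j], y)
            redundancy = (np.mean([mi(X_disc[:, j], X_disc[:, s])
                                   for s in chosen]) if chosen else 0.0)
            if relevance - redundancy > best_score:
                best, best_score = j, relevance - redundancy
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = rng.standard_normal((500, 20))
X[:, 3] += 2.0 * y                      # informative feature
X[:, 7] = X[:, 3] + 0.1 * rng.standard_normal(500)   # redundant copy
X_disc = np.digitize(X, np.quantile(X, [0.25, 0.5, 0.75])).astype(int)

print(select_features(X_disc, y, k=3))  # redundant twin is penalized
```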

  11. Quantum Gravity, Information Theory and the CMB

    Science.gov (United States)

    Kempf, Achim

    2018-04-01

    We review connections between the metric of spacetime and the quantum fluctuations of fields. We start with the finding that the spacetime metric can be expressed entirely in terms of the 2-point correlator of the fluctuations of quantum fields. We then discuss the open question whether the knowledge of only the spectra of the quantum fluctuations of fields also suffices to determine the spacetime metric. This question is of interest because spectra are geometric invariants and their quantization would, therefore, have the benefit of not requiring the modding out of diffeomorphisms. Further, we discuss the fact that spacetime at the Planck scale need not necessarily be either discrete or continuous. Instead, results from information theory show that spacetime may be simultaneously discrete and continuous in the same way that information can. Finally, we review the recent finding that a covariant natural ultraviolet cutoff at the Planck scale implies a signature in the cosmic microwave background (CMB) that may become observable.

  12. COMBINATION OF GENETIC ALGORITHM AND DEMPSTER-SHAFER THEORY OF EVIDENCE FOR LAND COVER CLASSIFICATION USING INTEGRATION OF SAR AND OPTICAL SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    H. T. Chu

    2012-07-01

    Full Text Available The integration of different kinds of remotely sensed data, in particular Synthetic Aperture Radar (SAR) and optical satellite imagery, is considered a promising approach for land cover classification because of the complementary properties of each data source. However, the challenges are: how to fully exploit the capabilities of these multiple data sources, which combined datasets should be used, and which data processing and classification techniques are most appropriate in order to achieve the best results. In this paper an approach, in which a feature selection (FS) method based on a Genetic Algorithm (GA) is used synergistically with a combination of multiple classifiers based on the Dempster-Shafer Theory of Evidence, is proposed and evaluated for classifying land cover features in New South Wales, Australia. Multi-date SAR data, including ALOS/PALSAR and ENVISAT/ASAR, and optical (Landsat 5 TM+) images were used for this study. Textural information was also derived and integrated with the original images. Various combined datasets were generated for classification. Three classifiers, namely Artificial Neural Network (ANN), Support Vector Machines (SVMs) and Self-Organizing Map (SOM), were employed. Firstly, feature selection using the GA was applied for each classifier and dataset to determine the optimal input features and parameters. Then the results of the three classifiers on particular datasets were combined using the Dempster-Shafer Theory of Evidence. Results of this study demonstrate the advantages of the proposed method for land cover mapping using complex datasets. It is revealed that the use of a GA in conjunction with the Dempster-Shafer Theory of Evidence can significantly improve the classification accuracy. Furthermore, integration of SAR and optical data often outperforms single-type datasets.

  13. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers. All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices. Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p…

  14. Biased Monte Carlo algorithms on unitary groups

    International Nuclear Information System (INIS)

    Creutz, M.; Gausterer, H.; Sanielevici, S.

    1989-01-01

    We introduce a general updating scheme for the simulation of physical systems defined on unitary groups, which eliminates the systematic errors due to inexact exponentiation of algebra elements. The essence is to work directly with group elements for the stochastic noise. Particular cases of the scheme include the algorithm of Metropolis et al., overrelaxation algorithms, and globally corrected Langevin and hybrid algorithms. The latter are studied numerically for the case of SU(3) theory.
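
    The idea of drawing the stochastic noise directly from the group can be illustrated in a few lines. The Python sketch below runs a Metropolis update for a single SU(2) link, using the closed-form SU(2) exponential so the proposal is exactly unitary; the one-link action and step size are illustrative choices, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        sigma = [np.array([[0, 1], [1, 0]], complex),
                 np.array([[0, -1j], [1j, 0]], complex),
                 np.array([[1, 0], [0, -1]], complex)]

        def random_su2_near_identity(eps):
            """Exact group element exp(i*eps*n.sigma) for a random axis n."""
            n = rng.normal(size=3)
            n /= np.linalg.norm(n)
            A = sum(ni * si for ni, si in zip(n, sigma))
            return np.cos(eps) * np.eye(2) + 1j * np.sin(eps) * A

        def action(U, beta=2.0):
            return -beta * U.trace().real / 2.0  # toy one-link action

        U = np.eye(2, dtype=complex)
        for _ in range(1000):
            Up = random_su2_near_identity(0.3) @ U   # group-element proposal
            if rng.random() < np.exp(action(U) - action(Up)):
                U = Up                               # Metropolis accept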

  15. Information-preserving structures: A general framework for quantum zero-error information

    International Nuclear Information System (INIS)

    Blume-Kohout, Robin; Ng, Hui Khoon; Poulin, David; Viola, Lorenza

    2010-01-01

    Quantum systems carry information. Quantum theory supports at least two distinct kinds of information (classical and quantum), and a variety of different ways to encode and preserve information in physical systems. A system's ability to carry information is constrained and defined by the noise in its dynamics. This paper introduces an operational framework, using information-preserving structures, to classify all the kinds of information that can be perfectly (i.e., with zero error) preserved by quantum dynamics. We prove that every perfectly preserved code has the same structure as a matrix algebra, and that preserved information can always be corrected. We also classify distinct operational criteria for preservation (e.g., 'noiseless', 'unitarily correctable') and introduce two natural criteria for measurement-stabilized and unconditionally preserved codes. Finally, for several of these operational criteria, we present efficient (polynomial in the state-space dimension) algorithms to find all of a channel's information-preserving structures.
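
    The simplest of these structures, a channel's fixed-point set (which carries unconditionally preserved information), can be found in polynomial time from the eigenvalue-1 eigenspace of the channel's superoperator. The Python sketch below does this for a single-qubit dephasing channel; the channel and the vec convention (row-stacking, matching numpy's reshape) are choices made for the example, not the paper's algorithm.

        import numpy as np

        def superoperator(kraus):
            """Matrix of rho -> sum_k K rho K^dag acting on the row-stacked rho."""
            return sum(np.kron(K, K.conj()) for K in kraus)

        # dephasing channel: coherences shrink, populations are untouched
        p = 0.3
        K0 = np.sqrt(1 - p) * np.eye(2)
        K1 = np.sqrt(p) * np.diag([1.0, -1.0])
        S = superoperator([K0, K1])

        evals, evecs = np.linalg.eig(S)
        fixed = evecs[:, np.isclose(evals, 1.0)]
        # two fixed directions, spanning the diagonal (classical) states
        print(fixed.shape[1], "fixed-point directions")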

  16. A new stochastic algorithm for inversion of dust aerosol size distribution

    Science.gov (United States)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm for inverting the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main constituents of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can successfully recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
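
    The search itself is a standard ABC loop, sketched below in Python for a two-parameter log-normal fit. The forward model here is a smooth stand-in, not the Mie/Lambert-Beer computation of the paper, and the bounds, colony size and abandonment limit are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        lo, hi = np.array([0.05, 1.1]), np.array([2.0, 3.0])  # (r_median, sigma_g)

        def forward_extinction(params, wl):
            r_m, sigma_g = params  # placeholder smooth model, NOT Mie theory
            return np.exp(-(np.log(wl / r_m) / np.log(sigma_g)) ** 2)

        wl = np.linspace(0.4, 1.0, 6)                  # measurement wavelengths
        measured = forward_extinction([0.5, 1.8], wl)  # synthetic "truth"
        cost = lambda p: np.sum((forward_extinction(p, wl) - measured) ** 2)

        n_food, limit = 20, 30
        foods = rng.uniform(lo, hi, size=(n_food, 2))
        costs = np.array([cost(p) for p in foods])
        trials = np.zeros(n_food, dtype=int)

        for _ in range(500):
            fitness = 1.0 / (1.0 + costs)  # frozen per sweep, for brevity
            for phase in range(2):         # employed bees, then onlookers
                idx = (np.arange(n_food) if phase == 0 else
                       rng.choice(n_food, size=n_food, p=fitness / fitness.sum()))
                for i in idx:
                    k = rng.integers(n_food)  # random partner source
                    cand = np.clip(foods[i] + rng.uniform(-1, 1, 2)
                                   * (foods[i] - foods[k]), lo, hi)
                    c = cost(cand)
                    if c < costs[i]:
                        foods[i], costs[i], trials[i] = cand, c, 0
                    else:
                        trials[i] += 1
            worn = trials > limit          # scouts replace exhausted sources
            foods[worn] = rng.uniform(lo, hi, size=(int(worn.sum()), 2))
            costs[worn] = [cost(p) for p in foods[worn]]
            trials[worn] = 0

        print("recovered parameters:", foods[np.argmin(costs)])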

  17. Making a difference: incorporating theories of autonomy into models of informed consent.

    Science.gov (United States)

    Delany, C

    2008-09-01

    Obtaining patients' informed consent is an ethical and legal obligation in healthcare practice. Whilst the law provides prescriptive rules and guidelines, ethical theories of autonomy provide moral foundations. Models of practice of consent have been developed in the bioethical literature to assist in understanding and integrating the ethical theory of autonomy and legal obligations into the clinical process of obtaining a patient's informed consent to treatment. The aim here is to review four models of consent and analyse the way each model incorporates the ethical meaning of autonomy and how, as a consequence, each might change the actual communicative process of obtaining informed consent within clinical contexts. An iceberg framework of consent is used to conceptualise how ethical theories of autonomy are positioned beneath, and underpin, the above-surface, visible clinical communication, including the associated legal guidelines and ethical rules. Each model of consent is critically reviewed from the perspective of how it might shape the process of informed consent. All four models would alter this process. Two models provide structure and guidelines for the content and timing of obtaining patients' consent. The other two models rely on an attitudinal shift in clinicians; they provide ideas for consent by focusing on the underlying values, attitudes and meanings associated with the ethical concept of autonomy. The paper concludes that models of practice that explicitly incorporate the underlying ethical meaning of autonomy as their basis provide less prescriptive, but theoretically richer, guidance for healthcare communicative practices.

  18. A Synthetic Fusion Rule for Salient Region Detection under the Framework of DS-Evidence Theory

    Directory of Open Access Journals (Sweden)

    Naeem Ayoub

    2018-05-01

    Saliency detection is one of the most valuable research topics in computer vision. It focuses on detecting the most significant objects/regions in an image and reduces the computational cost of extracting the desired information from salient regions. Local saliency detection and common-pattern-discovery schemes have been used actively to address saliency detection problems. In this paper, we propose a bottom-up saliency fusion method built on Dempster-Shafer (DS) evidence theory, which allows evidence from different sources to be combined into a degree of belief that accounts for all the available evidence. Firstly, we calculate saliency maps from different algorithms based on pixel-level, patch-level and region-level methods. Secondly, we fuse the pixels based on foreground and background information under the framework of DS evidence theory. Fusing saliency cues through DS evidence theory yields better results for saliency prediction. Experiments are conducted on four publicly available datasets (MSRA, ECSSD, DUT-OMRON and PASCAL-S). Our saliency detection method performs well and shows prominent results compared with state-of-the-art algorithms.
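
    For a two-class frame {salient, background}, the per-pixel combination has a closed form that vectorizes over whole maps. The Python sketch below fuses two normalized saliency maps, reserving a fixed mass for ignorance; the 0.2 discount and the random test maps are illustrative choices, not values from the paper.

        import numpy as np

        def to_masses(sal, ignorance=0.2):
            s = (1.0 - ignorance) * sal          # mass on {salient}
            b = (1.0 - ignorance) * (1.0 - sal)  # mass on {background}
            t = np.full_like(sal, ignorance)     # mass on the whole frame
            return s, b, t

        def fuse(map1, map2):
            s1, b1, t1 = to_masses(map1)
            s2, b2, t2 = to_masses(map2)
            conflict = s1 * b2 + b1 * s2         # mass on empty intersections
            return (s1 * s2 + s1 * t2 + t1 * s2) / (1.0 - conflict)

        map1 = np.random.default_rng(3).random((4, 4))  # stand-ins for two
        map2 = np.random.default_rng(4).random((4, 4))  # algorithms' maps
        print(fuse(map1, map2))  # fused belief that each pixel is salient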

  19. Fast algorithms for chiral fermions in 2 dimensions

    Directory of Open Access Journals (Sweden)

    Hyka (Xhako) Dafina

    2018-01-01

    In lattice QCD simulations, the lattice formulation of the theory should be chiral so that symmetry breaking happens dynamically, from the interactions. To guarantee this symmetry on the lattice, one uses overlap or domain wall fermions. On the other hand, the high computational cost of lattice QCD simulations with overlap or domain wall fermions remains a major obstacle to research in the field of elementary particles. We have developed the preconditioned GMRESR algorithm as a fast inverting algorithm for chiral fermions in U(1) lattice gauge theory, using the geometric multigrid idea along the extra dimension. The main result of this work is that preconditioned GMRESR accelerates convergence 2 to 12 times relative to other optimal algorithms (SHUMR) for different coupling constants on a 32x32 lattice. We also tested it on the larger 64x64 lattice; the simulation results show that our algorithm is again faster than SHUMR. This is a very promising indication that the algorithm can also be adapted to 4 dimensions.
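
    A minimal preconditioned-Krylov sketch conveys the structure of such a solver. The SciPy fragment below applies GMRES with a one-sweep Jacobi preconditioner to a toy 2D lattice Laplacian; the operator merely stands in for a chiral fermion matrix, and the paper's multigrid-along-the-extra-dimension preconditioner is far stronger than the diagonal inverse used here.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 32  # a 32x32 lattice, as in the paper
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
        b = np.ones(n * n)

        # crude stand-in preconditioner: one Jacobi sweep (diagonal inverse)
        d = A.diagonal()
        M = spla.LinearOperator(A.shape, matvec=lambda r: r / d, dtype=float)

        x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
        print("converged" if info == 0 else f"stopped with info={info}")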

  20. Galerkin algorithm for multidimensional plasma simulation codes. Informal report

    International Nuclear Information System (INIS)

    Godfrey, B.B.

    1979-03-01

    A Galerkin finite element differencing scheme has been developed for a computer simulation of plasmas. The new difference equations identically satisfy an equation of continuity. Thus, the usual current correction procedure, involving inversion of Poisson's equation, is unnecessary. The algorithm is free of many numerical Cherenkov instabilities. This differencing scheme has been implemented in CCUBE, an already existing relativistic, electromagnetic, two-dimensional PIC code in arbitrary separable, orthogonal coordinates. The separability constraint is eliminated by the new algorithm. The new version of CCUBE exhibits good stability and accuracy with reduced computer memory and time requirements. Details of the algorithm and its implementation are presented.
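
    The continuity property can be checked numerically with a generic charge-conserving deposition fragment. The 1D Python sketch below (an Esirkepov-style prefix-sum construction, not the CCUBE algorithm itself) builds the face currents directly from the change in a particle's shape weights, so the discrete continuity equation holds to machine precision; the grid size, time step and particle state are illustrative.

        import numpy as np

        nx, dx, dt, q = 16, 1.0, 0.4, 1.0

        def cic_weights(x):
            """Linear (cloud-in-cell) weights of one particle on the grid."""
            w = np.zeros(nx)
            i = int(np.floor(x / dx))
            f = x / dx - i
            w[i % nx] = 1.0 - f
            w[(i + 1) % nx] = f
            return w

        x_old, v = 5.3, 0.9            # interior particle, moves < 1 cell
        x_new = x_old + v * dt

        s0, s1 = cic_weights(x_old), cic_weights(x_new)
        rho0, rho1 = q * s0 / dx, q * s1 / dx

        # face currents from the prefix sum of the weight change; J[-1/2] = 0
        # is valid here because no charge crosses the left boundary
        J = -q / dt * np.cumsum(s1 - s0)

        # (rho1 - rho0)/dt + (J[i+1/2] - J[i-1/2])/dx = 0, identically
        residual = (rho1 - rho0) / dt + np.diff(J, prepend=0.0) / dx
        print("max continuity residual:", np.abs(residual).max())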