WorldWideScience

Sample records for information theoretical methods

  1. Information-theoretic thresholds from the cavity method

    CERN Document Server

    Coja-Oghlan, Amin; Perkins, Will; Zdeborova, Lenka

    2016-01-01

    Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we establish a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. The proofs provide a conceptual underpinning of the replica symmetric variant of the cavity method, and we expect that the approach will find many future applications.

  2. Information-Theoretic Methods for Identifying Relationships among Climate Variables

    CERN Document Server

    Knuth, Kevin H; Rossow, William B

    2014-01-01

    Information-theoretic quantities, such as entropy, are used to quantify the amount of information a given variable provides. Entropies can be used together to compute the mutual information, which quantifies the amount of information two variables share. However, accurately estimating these quantities from data is extremely challenging. We have developed a set of computational techniques that allow one to accurately compute marginal and joint entropies. These algorithms are probabilistic in nature and thus provide information on the uncertainty in our estimates, which enables us to establish the statistical significance of our findings. We demonstrate these methods by identifying relations between cloud data from the International Satellite Cloud Climatology Project (ISCCP) and data from other sources, such as equatorial Pacific sea surface temperatures (SST).
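
    As a rough orientation (not the authors' probabilistic estimators, whose uncertainty quantification is the paper's point), here is a minimal plug-in histogram sketch of the quantities named; the variable names, bin count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def plugin_mutual_information(x, y, bins=16):
    """Histogram ("plug-in") estimates of H(X), H(Y) and I(X;Y) in bits.

    A crude stand-in for the paper's probabilistic estimators: real data
    calls for bias correction and uncertainty quantification.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint probability table
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                         # avoid log(0)
    h_x = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    h_xy = -np.sum(pxy[nz] * np.log2(pxy[nz]))
    return h_x, h_y, h_x + h_y - h_xy    # I(X;Y) = H(X) + H(Y) - H(X,Y)

rng = np.random.default_rng(0)
sst = rng.normal(size=5000)                 # stand-in for SST anomalies
cloud = 0.7 * sst + rng.normal(size=5000)   # correlated "cloud" variable
print(plugin_mutual_information(sst, cloud))
```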

  3. Reverse Engineering Cellular Networks with Information Theoretic Methods

    Directory of Open Access Journals (Sweden)

    Julio R. Banga

    2013-05-01

    Full Text Available Building mathematical models of cellular networks lies at the core of systems biology. It involves, among other tasks, the reconstruction of the structure of interactions between molecular components, which is known as network inference or reverse engineering. Information theory can help in the goal of extracting as much information as possible from the available data. A large number of methods founded on these concepts have been proposed in the literature, not only in biology journals, but in a wide range of areas. Their critical comparison is difficult due to the different focuses and the adoption of different terminologies. Here we attempt to review some of the existing information theoretic methodologies for network inference, and clarify their differences. While some of these methods have achieved notable success, many challenges remain, among which we can mention dealing with incomplete measurements, noisy data, counterintuitive behaviour emerging from nonlinear relations or feedback loops, and the computational burden of dealing with large data sets.

  4. Information-Theoretic Methods for Identifying Relationships among Climate Variables

    Data.gov (United States)

    National Aeronautics and Space Administration — Information-theoretic quantities, such as entropy, are used to quantify the amount of information a given variable provides. Entropies can be used together to...

  5. Information-theoretic methods for estimating complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved to be very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur...

  6. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.

  7. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Science.gov (United States)

    Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W

    2015-04-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
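
    A minimal sketch of the Poisson log-likelihood of an LNP model, the quantity the paper shows MID effectively maximizes; the exponential nonlinearity, the filter, and the simulated data are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def lnp_log_likelihood(stim, spikes, w, dt=0.01, nonlin=np.exp):
    """Poisson log-likelihood of an LNP model (up to spike-history constants).

    rate_t = f(w . x_t);  log L = sum_t [ y_t * log(rate_t * dt) - rate_t * dt ].
    Per the paper, maximizing this over w is what MID effectively does when
    the nonlinearity f is left non-parametric.
    """
    rate = nonlin(stim @ w)          # linear filter, then pointwise nonlinearity
    return np.sum(spikes * np.log(rate * dt) - rate * dt)

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))               # toy stimulus, 8 dimensions
w_true = rng.normal(size=8)
y = rng.poisson(np.exp(X @ w_true) * 0.01)   # simulated LNP spike counts
print(lnp_log_likelihood(X, y, w_true))
```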

  8. Consensus theoretic classification methods

    Science.gov (United States)

    Benediktsson, Jon A.; Swain, Philip H.

    1992-01-01

    Consensus theory is adopted as a means of classifying geographic data from multiple sources. The foundations and usefulness of different consensus theoretic methods are discussed in conjunction with pattern recognition. Weight selections for different data sources are considered and modeling of non-Gaussian data is investigated. The application of consensus theory in pattern recognition is tested on two data sets: 1) multisource remote sensing and geographic data and 2) very-high-dimensional remote sensing data. The results obtained using consensus theoretic methods are found to compare favorably with those obtained using well-known pattern recognition methods. The consensus theoretic methods can be applied in cases where the Gaussian maximum likelihood method cannot. Also, the consensus theoretic methods are computationally less demanding than the Gaussian maximum likelihood method and provide a means for weighting data sources differently.
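
    For orientation, a minimal sketch of two standard consensus rules often discussed in this setting, the linear and logarithmic opinion pools with per-source weights; the sources, posteriors, and weights are illustrative assumptions, and the paper's weight-selection schemes are not reproduced.

```python
import numpy as np

def linear_opinion_pool(posteriors, weights):
    """Weighted arithmetic mean of per-source class posteriors."""
    return np.average(posteriors, axis=0, weights=weights)

def log_opinion_pool(posteriors, weights):
    """Weighted geometric mean (renormalized); vetoes classes any source rules out."""
    logp = np.sum(np.array(weights)[:, None] * np.log(posteriors), axis=0)
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Two data sources, three classes; the first source deemed twice as reliable.
posteriors = np.array([[0.6, 0.3, 0.1],    # e.g. multispectral source
                       [0.4, 0.2, 0.4]])   # e.g. topographic source
for pool in (linear_opinion_pool, log_opinion_pool):
    print(pool.__name__, pool(posteriors, weights=[2.0, 1.0]))
```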

  9. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach.

    Science.gov (United States)

    Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-12-28

    A theoretically informed Monte Carlo method is proposed for the simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.

  10. On image segmentation using information theoretic criteria

    CERN Document Server

    Aue, Alexander (DOI: 10.1214/11-AOS925)

    2012-01-01

    Image segmentation is a long-studied and important problem in image processing. Different solutions have been proposed, many of which follow the information theoretic paradigm. While these information theoretic segmentation methods often produce excellent empirical results, their theoretical properties are still largely unknown. The main goal of this paper is to conduct a rigorous theoretical study into the statistical consistency properties of such methods. To be more specific, this paper investigates whether these methods can accurately recover the true number of segments together with their true boundaries in the image as the number of pixels tends to infinity. Our theoretical results show that both the Bayesian information criterion (BIC) and the minimum description length (MDL) principle can be applied to derive statistically consistent segmentation methods, while the same is not true for the Akaike information criterion (AIC). Numerical experiments were conducted to illustrate and support our theoretical fin...
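
    For reference, the criteria at issue in their generic penalized-likelihood forms (a sketch; the paper's segmentation-specific statements differ in detail):

```latex
% Generic penalized-likelihood criteria for a model with p parameters
% fitted to n observations, with maximized likelihood \hat{L}:
\mathrm{AIC} = -2\ln\hat{L} + 2p, \qquad
\mathrm{BIC} = -2\ln\hat{L} + p\ln n, \qquad
\mathrm{MDL} \approx -\ln\hat{L} + \tfrac{p}{2}\ln n.
% The \ln n penalty grows with the number of pixels, which is what lets
% BIC/MDL recover the true number of segments consistently, while AIC's
% fixed penalty leaves a nonvanishing probability of overestimating it.
```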

  11. Information theoretic preattentive saliency

    DEFF Research Database (Denmark)

    Loog, Marco

    2011-01-01

    ...-driven density estimation. Given the feature descriptors or filter bank that one wants to use to describe the image content at every position, we provide a closed-form expression for the associated saliency at that location. This makes explicit that what is considered salient depends on how, i.e., by means of which features, image information is described. We illustrate our result by determining a few specific saliency maps based on particular choices of features. One of them makes the link with the mapping underlying well-known Harris interest points, a result recently obtained in isolation...

  12. Robust recognition via information theoretic learning

    CERN Document Server

    He, Ran; Yuan, Xiaotong; Wang, Liang

    2014-01-01

    This Springer Brief presents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip...

  13. Information theoretical methods to deconvolute genetic regulatory networks applied to thyroid neoplasms

    Science.gov (United States)

    Hernández-Lemus, Enrique; Velázquez-Fernández, David; Estrada-Gil, Jesús K.; Silva-Zolezzi, Irma; Herrera-Hernández, Miguel F.; Jiménez-Sánchez, Gerardo

    2009-12-01

    Most common pathologies in humans are not caused by the mutation of a single gene; rather, they are complex diseases that arise due to the dynamic interaction of many genes and environmental factors. This plethora of interacting genes generates a complexity landscape that masks the real effects associated with the disease. To construct dynamic maps of gene interactions (also called genetic regulatory networks) we need to understand the interplay between thousands of genes. Several issues arise in the analysis of experimental data related to gene function: on the one hand, the nature of measurement processes generates highly noisy signals; on the other hand, there are far more variables involved (number of genes and interactions among them) than experimental samples. Another source of complexity is the highly nonlinear character of the underlying biochemical dynamics. To overcome some of these limitations, we generated an optimized method based on the implementation of a Maximum Entropy Formalism (MaxEnt) to deconvolute a genetic regulatory network based on the most probable meta-distribution of gene-gene interactions. We tested the methodology using experimental data for Papillary Thyroid Cancer (PTC) and Thyroid Goiter tissue samples. The optimal MaxEnt regulatory network was obtained from a pool of 25,593,993 different probability distributions. The group of observed interactions was validated by several (mostly in silico) means and sources. For the associated Papillary Thyroid Cancer Gene Regulatory Network (PTC-GRN) the majority of the nodes (genes) have very few links (interactions) whereas a small number of nodes are highly connected. PTC-GRN is also characterized by high clustering coefficients and network heterogeneity. These properties have been recognized as characteristic of topological robustness, and they have been widely described in relation to biological networks. A number of biological validity outcomes are discussed with regard to both the...

  14. Theoretical information reuse and integration

    CERN Document Server

    Rubin, Stuart

    2016-01-01

    Information Reuse and Integration addresses the efficient extension and creation of knowledge through the exploitation of Kolmogorov complexity in the extraction and application of domain symmetry. Knowledge that seems to be novel can more often than not be recast as the image of a sequence of transformations which yield symmetric knowledge. When the size of those transformations and/or the length of that sequence of transforms exceeds the size of the image, then that image is said to be novel or random. It may also be that the new knowledge is random in that no sequence of transforms producing it exists, or at least none is known. The nine chapters comprising this volume incorporate symmetry, reuse, and integration as overt operational procedures or as operations built into the formal representations of data and operators employed. Either way, the aforementioned theoretical underpinnings of information reuse and integration are supported.

  15. Information-Theoretic Analysis for the Difficulty of Extracting Hidden Information

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei-ming; LI Shi-qu; CAO Jia; LIU Jiu-fen

    2005-01-01

    The difficulty of extracting hidden information, which is essentially a kind of secrecy, is analyzed by information-theoretic methods. The relations between key rate, message rate, hiding capacity and the difficulty of extraction are studied in terms of the unicity distance of the stego-key, and the theoretical conclusions are used to analyze an actual extraction attack on Least Significant Bit (LSB) steganographic algorithms.
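
    The record does not reproduce the formulas; for orientation, Shannon's unicity distance, which the paper adapts to the stego-key setting, has the standard form (a sketch in conventional notation):

```latex
% Shannon's unicity distance: the number of observed symbols beyond which
% the key is information-theoretically determined. H(K) is the key entropy
% and D the per-symbol redundancy of the message source over alphabet A:
n_0 \approx \frac{H(K)}{D}, \qquad D = \log|\mathcal{A}| - H_{\mathrm{source}}.
```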

  16. Unorthodox theoretical methods

    Energy Technology Data Exchange (ETDEWEB)

    Nedd, Sean [Iowa State Univ., Ames, IA (United States)]

    2012-01-01

    The use of the ReaxFF force field to correlate with NMR mobilities of amine catalytic substituents on a mesoporous silica nanosphere surface is considered. The interfacing of the ReaxFF force field within the Surface Integrated Molecular Orbital/Molecular Mechanics (SIMOMM) method, in order to replicate earlier published SIMOMM data and to compare with the ReaxFF data, is discussed. The development of a new correlation consistent Composite Approach (ccCA) is presented, which incorporates the completely renormalized coupled cluster method with singles, doubles and non-iterative triples corrections towards the determination of heats of formation and reaction pathways which contain biradical species.

  17. Hash Functions and Information Theoretic Security

    Science.gov (United States)

    Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.

  18. Hash functions and information theoretic security

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Knudsen, Lars Ramkilde; Naderi, Majid; Thomsen, Søren S.

    2009-01-01

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.

  19. Toward a Theoretical Framework for Information Science

    OpenAIRE

    Amanda Spink

    2000-01-01

    Information Science is beginning to develop a theoretical framework for the modeling of users’ interactions with information retrieval (IR) technologies within the more holistic context of human information behavior (Spink, 1998b). This paper addresses the following questions: (1) What is the nature of Information Science? and (2) What theoretical framework and model is most appropriate for Information Science? This paper proposes a theoretical framework for Information Science based on an explication of the processes of human information coordinating behavior and information feedback that facilitate the relationship between human information behavior and human interaction with information retrieval (IR) technologies (Web, digital libraries, etc.).

  20. Toward a Theoretical Framework for Information Science

    Directory of Open Access Journals (Sweden)

    Amanda Spink

    2000-01-01

    Full Text Available Information Science is beginning to develop a theoretical framework for the modeling of users’ interactions with information retrieval (IR) technologies within the more holistic context of human information behavior (Spink, 1998b). This paper addresses the following questions: (1) What is the nature of Information Science? and (2) What theoretical framework and model is most appropriate for Information Science? This paper proposes a theoretical framework for Information Science based on an explication of the processes of human information coordinating behavior and information feedback that facilitate the relationship between human information behavior and human interaction with information retrieval (IR) technologies (Web, digital libraries, etc.).

  1. Novel Method for Calculating a Nonsubjective Informative Prior for a Bayesian Model in Toxicology Screening: A Theoretical Framework.

    Science.gov (United States)

    Woldegebriel, Michael

    2015-11-17

    In toxicology screening (forensic, food-safety), due to several analytical errors (e.g., retention time shift, lack of repeatability in m/z scans, etc.), the ability to confidently identify/confirm a compound remains a challenge. Due to these uncertainties, a probabilistic approach is currently preferred. However, if a probabilistic approach is followed, the only statistical method that is capable of estimating the probability of whether the compound of interest (COI) is present/absent in a given sample is Bayesian statistics. Bayes' theorem can combine prior information (prior probability) with data (likelihood) to give an optimal probability (posterior probability) reflecting the presence/absence of the COI. In this work, a novel method for calculating an informative prior probability for a Bayesian model in targeted toxicology screening is introduced. In contrast to earlier proposals making use of literature citation rates and the prior knowledge of the analyst, this method presents a thorough and nonsubjective approach. The formulation approaches the probability calculation as a clustering and random draw problem that incorporates a few analytical method parameters, meticulously estimated to reflect the sensitivity and specificity of the system. The practicality of the method has been demonstrated and validated using real data and simulated analytical techniques.
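
    The posterior referred to is the standard two-hypothesis form of Bayes' theorem (a sketch; the paper's contribution is the nonsubjective construction of the prior, not this identity):

```latex
% Two-hypothesis Bayes posterior for the compound of interest (COI):
P(\mathrm{present} \mid \mathrm{data}) =
  \frac{P(\mathrm{data} \mid \mathrm{present})\, P(\mathrm{present})}
       {P(\mathrm{data} \mid \mathrm{present})\, P(\mathrm{present})
        + P(\mathrm{data} \mid \mathrm{absent})\, P(\mathrm{absent})}.
```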

  2. Information-theoretic metamodel of organizational evolution

    Science.gov (United States)

    Sepulveda, Alfredo

    2011-12-01

    Social organizations are abstractly modeled by holarchies---self-similar connected networks---and intelligent complex adaptive multiagent systems---large networks of autonomous reasoning agents interacting via scaled processes. However, little is known of how information shapes evolution in such organizations, a gap that can lead to misleading analytics. The research problem addressed in this study was the ineffective manner in which classical model-predict-control methods used in business analytics attempt to define organization evolution. The purpose of the study was to construct an effective metamodel for organization evolution based on a proposed complex adaptive structure---the info-holarchy. Theoretical foundations of this study were holarchies, complex adaptive systems, evolutionary theory, and quantum mechanics, among other recently developed physical and information theories. Research questions addressed how information evolution patterns gleaned from the study's inductive metamodel more aptly explained volatility in organizations. In this study, a hybrid grounded theory based on abstract inductive extensions of information theories was utilized as the research methodology. An overarching heuristic metamodel was framed from the theoretical analysis of the properties of these extension theories and applied to business, neural, and computational entities. This metamodel resulted in the synthesis of a metaphor for, and generalization of, organization evolution, serving as the recommended and appropriate analytical tool to view business dynamics for future applications. This study may manifest positive social change through a fundamental understanding of complexity in business from general information theories, resulting in more effective management.

  3. The information-theoretic turn

    Directory of Open Access Journals (Sweden)

    Blevins James P.

    2013-01-01

    Full Text Available Over the past decade, information theory has been applied to the analysis of a successively broader range of morphological phenomena. Interestingly, this tradition has arisen independently of the linguistic applications of information theory dating from the 1950s. Instead, the point of origin for current work lies in a series of studies of morphological processing in which Kostić and associates develop a statistical notion of ‘morphological information’ based on ‘uncertainty’ and ‘uncertainty reduction’. From these initial studies, analyses based on statistical notions of information have been applied to general problems of morphological description and typological classification, leading to a formal rehabilitation of the complex system perspective of traditional WP models.

  4. Qualitative methods in theoretical physics

    CERN Document Server

    Maslov, Dmitrii

    2017-01-01

    This book comprises a set of tools which allow researchers and students to arrive at a qualitatively correct answer without undertaking lengthy calculations. In general, Qualitative Methods in Theoretical Physics is about combining approximate mathematical methods with fundamental principles of physics: conservation laws and symmetries. Readers will learn how to simplify problems, how to estimate results, and how to apply symmetry arguments and conduct dimensional analysis. A comprehensive problem set is included. The book will appeal to a wide range of students and researchers.

  5. Information-Theoretic Active SOM for Improving Generalization Performance

    Directory of Open Access Journals (Sweden)

    Ryotaro Kamimura

    2016-10-01

    Full Text Available In this paper, we introduce a new type of information-theoretic method called “information-theoretic active SOM”, based on the self-organizing map (SOM), for training multi-layered neural networks. The SOM is one of the most important techniques in unsupervised learning. However, SOM knowledge is sometimes ambiguous and cannot be easily interpreted. Thus, we introduce the information-theoretic method to produce clearer and more interpretable representations. The present method extends this information-theoretic approach into supervised learning. The main contribution can be summarized in three points. First, it is shown that the clear representations produced by the information-theoretic method can be effective in training supervised learning. Second, the method is sufficiently simple in that it has two separate components, namely, an information maximization component and an error minimization component. Usually, the two components are mixed in one framework, and it is difficult to compromise between them. In addition, the knowledge obtained by this information-theoretic SOM can be used to address the shortage of labeled data, because the information maximization component is unsupervised and can process all input data, with and without labels. The method was applied to well-known image segmentation datasets. Experimental results showed that clear weights were produced and generalization performance was improved by using the information-theoretic SOM. In addition, the final results were stable, almost independent of the parameter values.

  6. Information theoretic approach for accounting classification

    CERN Document Server

    Ribeiro, E M S

    2014-01-01

    In this paper we consider an information theoretic approach for the accounting classification process. We propose a matrix formalism and an algorithm for the calculation of information theoretic measures associated to accounting classification. The formalism may be useful for further generalizations and computer-based implementation. Information theoretic measures, mutual information and symmetric uncertainty, were evaluated for daily transactions recorded in the chart of accounts of a small company during two years. Variation in the information measures due to the aggregation of data in the process of accounting classification is observed. In particular, the symmetric uncertainty seems to be a useful parameter for comparing companies over time or in different sectors, or different accounting choices and standards.
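
    A minimal sketch of the two measures named, computed from a contingency table; the toy table and its reading as transaction types versus account classes are illustrative assumptions, not the paper's data or matrix formalism.

```python
import numpy as np

def symmetric_uncertainty(joint_counts):
    """Mutual information and symmetric uncertainty from a contingency table.

    SU = 2 I(X;Y) / (H(X) + H(Y)) normalizes MI into [0, 1], which is what
    allows comparisons across companies or accounting standards.
    """
    pxy = joint_counts / joint_counts.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    mi = H(px) + H(py) - H(pxy.ravel())
    return mi, 2.0 * mi / (H(px) + H(py))

# Toy contingency table: rows = transaction types, cols = account classes.
counts = np.array([[30, 5, 0], [2, 40, 8], [0, 6, 25]], dtype=float)
print(symmetric_uncertainty(counts))
```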

  7. Local, nonlocal quantumness and information theoretic measures

    Science.gov (United States)

    Agrawal, Pankaj; Sazim, Sk; Chakrabarty, Indranil; Pati, Arun K.

    2016-08-01

    It has been suggested that there may exist quantum correlations that go beyond entanglement. The existence of such correlations can be revealed by information theoretic quantities such as quantum discord, but not by the conventional measures of entanglement. We argue that a state displays quantumness that can be of local or nonlocal origin. Information theoretic measures characterize not only the nonlocal quantumness, but also the local quantumness, such as the “local superposition”. This can be a reason why such measures are nonzero when there is no entanglement. We consider a generalized version of the Werner state to demonstrate the interplay of local quantumness, nonlocal quantumness and classical mixedness of a state.

  8. Information-Theoretic Perspectives on Geophysical Models

    Science.gov (United States)

    Nearing, Grey

    2016-04-01

    ...practice of science (except by Gong et al., 2013, whose fundamental insight is the basis for this talk), and here I offer two examples of practical methods that scientists might use to approximately measure ontological information. I place this practical discussion in the context of several recent and high-profile experiments that have found that simple out-of-sample statistical models typically (vastly) outperform our most sophisticated terrestrial hydrology models. I offer some perspective on several open questions about how to use these findings to improve our models and understanding of these systems.
    References cited in the abstract:
    Cartwright, N. (1983) How the Laws of Physics Lie. New York, NY: Cambridge University Press.
    Clark, M. P., Kavetski, D. and Fenicia, F. (2011) 'Pursuing the method of multiple working hypotheses for hydrological modeling', Water Resources Research, 47(9).
    Cover, T. M. and Thomas, J. A. (1991) Elements of Information Theory. New York, NY: Wiley-Interscience.
    Cox, R. T. (1946) 'Probability, frequency and reasonable expectation', American Journal of Physics, 14, pp. 1-13.
    Csiszár, I. (1972) 'A Class of Measures of Informativity of Observation Channels', Periodica Mathematica Hungarica, 2(1), pp. 191-213.
    Davies, P. C. W. (1990) 'Why is the physical world so comprehensible', Complexity, Entropy and the Physics of Information, pp. 61-70.
    Gong, W., Gupta, H. V., Yang, D., Sricharan, K. and Hero, A. O. (2013) 'Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach', Water Resources Research, 49(4), pp. 2253-2273.
    Jaynes, E. T. (2003) Probability Theory: The Logic of Science. New York, NY: Cambridge University Press.
    Nearing, G. S. and Gupta, H. V. (2015) 'The quantity and quality of information in hydrologic models', Water Resources Research, 51(1), pp. 524-538.
    Popper, K. R. (2002) The Logic of Scientific Discovery. New York: Routledge.
    Van Horn, K. S. (2003) 'Constructing a logic of plausible inference: a guide to Cox's theorem...

  9. Managing Logistics Information System: Theoretical Underpinning

    Directory of Open Access Journals (Sweden)

    Adebambo Somuyiwa

    2010-05-01

    Full Text Available The research sought to explain the theoretical background of articulating an effective logistics information system in any industrial outfit. This is predicated on the fact that logistics is “the process of strategically managing the acquisition, movement and storage of materials, parts and finished inventory (and the related information flows) through the organisation and its marketing channel in such a way that current and future profitability is maximized through the cost-effective fulfillment of orders”. However, the theoretical basis of this information has not been fully understood within the context of the logistics system, such that it would give a pointer to how the inherent costs could be managed or saved, as well as how supplier-customer collaborative relationships could be enhanced. It is in the light of this that the paper attempts to give theoretical considerations, through a descriptive methodology approach, on how the basic objectives of logistics can be achieved and its ultimate goal realized. The study concluded that emphasis should be placed on the attributes of logistics information systems, and particularly on information cost, in order to enhance logistics efficiency and effectiveness.

  10. Quantum probabilities: an information-theoretic interpretation

    CERN Document Server

    Bub, Jeffrey

    2010-01-01

    This Chapter develops a realist information-theoretic interpretation of the nonclassical features of quantum probabilities. On this view, what is fundamental in the transition from classical to quantum physics is the recognition that information in the physical sense has new structural features, just as the transition from classical to relativistic physics rests on the recognition that space-time is structurally different than we thought. Hilbert space, the event space of quantum systems, is interpreted as a kinematic (i.e., pre-dynamic) framework for an indeterministic physics, in the sense that the geometric structure of Hilbert space imposes objective probabilistic or information-theoretic constraints on correlations between events, just as the geometric structure of Minkowski space in special relativity imposes spatio-temporal kinematic constraints on events. The interpretation of quantum probabilities is more subjectivist in spirit than other discussions in this book (e.g., the chapter by Timpson)...

  11. Measuring observability by generalized information theoretic quantities

    Institute of Scientific and Technical Information of China (English)

    Badong CHEN; Jinchun HU; Hongbo LI; Zengqi SUN

    2008-01-01

    A normalized measure is established to provide quantitative information about the degree of observability for discrete-time, stochastically autonomous systems. This measure is based on generalized information theoretic quantities (generalized entropy, mutual information) of the system state and the observations, where the system state can be a discrete or a continuous random vector. Some important properties are presented. For the linear case, an explicit formula for the degree of observability is derived, and the equivalence between the proposed measure and the traditional rank condition is proved. The curves for the degree of observability are depicted in a simple example.

  12. Data, Methods, and Theoretical Implications

    Science.gov (United States)

    Hannagan, Rebecca J.; Schneider, Monica C.; Greenlee, Jill S.

    2012-01-01

    Within the subfields of political psychology and the study of gender, the introduction of new data collection efforts, methodologies, and theoretical approaches are transforming our understandings of these two fields and the places at which they intersect. In this article we present an overview of the research that was presented at a National…

  13. An information theoretic approach to pedigree reconstruction.

    Science.gov (United States)

    Almudevar, Anthony

    2016-02-01

    Network structure is a dominant feature of many biological systems, both at the cellular level and within natural populations. Advances in genotype and gene expression screening made over the last few decades have permitted the reconstruction of these networks. However, resolution to a single model estimate will generally not be possible, leaving open the question of the appropriate method of formal statistical inference. The nonstandard structure of the problem precludes most traditional statistical methodologies. Alternatively, a Bayesian approach provides a natural methodology for formal inference. Construction of a posterior density on the space of network structures allows formal inference regarding features of network structure using specific marginal posterior distributions. An information theoretic approach to this problem is described, based on the Minimum Description Length (MDL) principle. This leads to a Bayesian inference model based on the information content of data rather than on more commonly used probabilistic models. The approach is applied to the problem of pedigree reconstruction based on genotypic data. Using this application, it is shown how the MDL approach is able to provide a truly objective control for model complexity. A two-cohort model is used for a simulation study. The MDL approach is compared to COLONY-2, a well-known pedigree reconstruction application. The study highlights the problem of genotyping error modeling. COLONY-2 requires prior error rate estimates, and its accuracy proves to be highly sensitive to these estimates. In contrast, the MDL approach does not require prior error rate estimates, and is able to accurately adjust for genotyping error across the range of models considered. Copyright © 2015 Elsevier Inc. All rights reserved.
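
    The MDL principle invoked here can be summarized by the generic two-part code length (a sketch; the paper's coding scheme for pedigrees is not reproduced):

```latex
% Generic two-part MDL score for a candidate pedigree M given genotype data D:
\hat{M} \;=\; \arg\min_{M}\; \big[\, L(M) \;+\; L(D \mid M) \,\big],
% where L(M) is the code length (in bits) of the pedigree itself and
% L(D | M) the code length of the genotypes given that pedigree; model
% complexity is thus penalized without a separate tuning parameter,
% which is the "truly objective control" the abstract refers to.
```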

  14. Systems information management: graph theoretical approach

    NARCIS (Netherlands)

    Temel, T.

    2006-01-01

    This study proposes a new method for characterising the underlying information structure of a multi-sector system. A complete characterisation is accomplished by identifying information gaps and cause-effect information pathways in the system, and formulating critical testable hypotheses. Graph-theoretical...

  15. Text-Independent Speaker Verification Based on Information Theoretic Learning

    Directory of Open Access Journals (Sweden)

    Sheeraz Memon

    2011-07-01

    Full Text Available In this paper VQ (Vector Quantization) based on information theoretic learning is investigated for the task of text-independent speaker verification. A novel VQ method based on IT (Information Theoretic) principles is used for the task of speaker verification and compared with two classical VQ approaches: the K-means algorithm and the LBG (Linde-Buzo-Gray) algorithm. The paper provides a theoretical background of the vector quantization techniques, which is followed by experimental results illustrating their performance. The results demonstrated that ITVQ (Information Theoretic Vector Quantization) provided the best performance in terms of classification rates, EER (Equal Error Rate) and MSE (Mean Squared Error) compared to the K-means and LBG algorithms. The outstanding performance of the ITVQ algorithm can be attributed to the fact that the IT criteria used by this algorithm provide superior matching between the distribution of the original data vectors and the codewords.
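
    As a point of reference for the classical baselines, a minimal K-means-style vector quantizer; the ITVQ objective replaces this squared-error distortion with an information-theoretic match between data and codeword distributions, which is not reproduced here. Data shapes and parameters are illustrative assumptions.

```python
import numpy as np

def vq_kmeans(X, k, iters=50, seed=0):
    """Plain K-means vector quantization: returns codebook and MSE distortion."""
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest codeword
        d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # move each codeword to the centroid of its assigned vectors
        for j in range(k):
            if np.any(assign == j):
                codebook[j] = X[assign == j].mean(axis=0)
    mse = np.mean((X - codebook[assign]) ** 2)
    return codebook, mse

features = np.random.default_rng(2).normal(size=(1000, 12))  # stand-in speaker features
_, distortion = vq_kmeans(features, k=16)
print(distortion)
```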

  16. Towards general information theoretical representations of database problems

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, C.

    1997-06-01

    General database systems are described within the General Systems Theoretical (GST) framework. In this context, traditional information theoretical (statistical) and general information theoretical (fuzzy measure and set theoretical, possibilistic, and random set theoretical) representations are derived. A preliminary formal framework is introduced.

  17. Information Ergonomics: A theoretical approach and practical experience in transportation

    CERN Document Server

    Sandl, Peter

    2012-01-01

    The variety and increasing availability of hypermedia information systems, which are used in stationary applications like operators’ consoles as well as mobile systems, e.g. driver information and navigation systems in automobiles, form a foundation for the mediatization of society. From the human engineering point of view, this development and the ensuing increased importance of information systems for economic and private needs require careful deliberation of the derivation and application of ergonomics methods, particularly in the field of information systems. This book consists of two closely intertwined parts. The first, theoretical part defines the concept of an information system, followed by an explanation of action regulation as well as cognitive theories to describe man information system interaction. A comprehensive description of information ergonomics concludes the theoretical approach. In the second, practically oriented part of this book, authors from industry as well as from academic institu...

  18. Information-theoretical noninvasive damage detection in bridge structures

    Science.gov (United States)

    Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik

    2016-11-01

    Damage detection of mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in upstate New York, we study noninvasive damage detection using information-theoretical methods. Several findings are in order. First, the time series data, which represent accelerations measured at the sensors, more closely follow a Laplace distribution than a normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts of the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after damaging the bridge.
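
    The Laplace observation yields simple closed-form estimators; a sketch of the differential-entropy piece under a fitted Laplace model (the paper's pairwise mutual-information estimator is more involved and is not reproduced; the toy data are an assumption):

```python
import numpy as np

def laplace_entropy(x):
    """Differential entropy (nats) under a fitted Laplace model.

    For Laplace(mu, b), h = 1 + ln(2b); the MLE of b is the mean
    absolute deviation about the median.
    """
    b = np.mean(np.abs(x - np.median(x)))
    return 1.0 + np.log(2.0 * b)

accel = np.random.default_rng(3).laplace(scale=0.5, size=10000)  # toy sensor data
print(laplace_entropy(accel))   # close to 1 + ln(2 * 0.5) = 1.0
```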

  19. Information Theoretic cutting of a cake

    CERN Document Server

    Delgosha, Payam

    2012-01-01

    Cutting a cake is a metaphor for the problem of dividing a resource (cake) among several agents. The problem becomes non-trivial when the agents have different valuations for different parts of the cake (i.e. one agent may like chocolate while the other may like cream). A fair division of the cake is one that takes into account the individual valuations of agents and partitions the cake based on some fairness criterion. Fair division may be accomplished in a distributed or centralized way. Due to its natural and practical appeal, it has been a subject of study in economics under the topic of "Fair Division". To the best of our knowledge the role of partial information in fair division has not been studied so far from an information theoretic perspective. In this paper we study two important algorithms in fair division, namely "divide and choose" and "adjusted winner" for the case of two agents. We quantify the benefit of negotiation in the divide and choose algorithm, and its use in tricking the adjusted winner a...
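
    For readers unfamiliar with the first algorithm, a minimal divide-and-choose sketch over a discretized cake; representing valuations as slice arrays and the specific numbers are illustrative assumptions, and the paper's information-theoretic analysis of negotiation is not reproduced.

```python
import numpy as np

def divide_and_choose(v1, v2):
    """v1, v2: each agent's (nonnegative) value per cake slice.

    Agent 1 cuts at the point that best halves the cake by her own
    valuation; agent 2 takes whichever side he values more.
    Returns ((owner_of_left, owner_of_right), cut_index).
    """
    c1 = np.cumsum(v1) / np.sum(v1)
    cut = int(np.argmin(np.abs(c1 - 0.5))) + 1   # agent 1's (near-)halving cut
    left2 = np.sum(v2[:cut]) / np.sum(v2)
    if left2 >= 0.5:                             # agent 2 prefers the left side
        return ("agent2", "agent1"), cut
    return ("agent1", "agent2"), cut

v_choc = np.array([3, 3, 2, 2, 2, 4.0])   # values the chocolate end
v_cream = np.array([1, 1, 1, 1, 6, 6.0])  # values the cream end
print(divide_and_choose(v_choc, v_cream))
```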

  20. Identifying Cover Songs Using Information-Theoretic Measures of Similarity

    OpenAIRE

    Foster, Peter; Dixon, Simon; Klapuri, Anssi

    2014-01-01

    This paper investigates methods for quantifying similarity between audio signals, specifically for the task of cover song detection. We consider an information-theoretic approach, where we compute pairwise measures of predictability between time series. We compare discrete-valued approaches operating on quantised audio features to continuous-valued approaches. In the discrete case, we propose a method for computing the normalised compression distance, where we account for correlation betw...
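
    One standard instance of such a compression-based measure is the normalised compression distance (NCD); a minimal sketch using zlib over quantised feature strings. The byte encoding and toy sequences are illustrative assumptions; the paper also treats continuous-valued alternatives.

```python
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: small for related sequences, near 1 otherwise."""
    cx, cy, cxy = (len(zlib.compress(s, 9)) for s in (x, y, x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy quantised feature strings (hypothetical byte encoding of audio features).
original = bytes([3, 5, 7, 3, 5, 7, 9, 3] * 40)
variant = original[37:] + original[:37]          # same material, shifted in time
unrelated = bytes(random.Random(4).randrange(12) for _ in range(len(original)))
print(ncd(original, variant), ncd(original, unrelated))
```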

  1. Information theoretic regularization in diffuse optical tomography.

    Science.gov (United States)

    Panagiotou, Christos; Somayajula, Sangeetha; Gibson, Adam P; Schweiger, Martin; Leahy, Richard M; Arridge, Simon R

    2009-05-01

    Diffuse optical tomography (DOT) retrieves the spatially distributed optical characteristics of a medium from external measurements. Recovering the parameters of interest involves solving a nonlinear and highly ill-posed inverse problem. This paper examines the possibility of regularizing DOT via the introduction of a priori information from alternative high-resolution anatomical modalities, using the information theory concepts of mutual information (MI) and joint entropy (JE). Such functionals evaluate the similarity between the reconstructed optical image and the prior image while bypassing the multimodality barrier manifested as the incommensurate relation between the gray value representations of corresponding anatomical features in the two modalities. By introducing structural information, we aim to improve the spatial resolution and quantitative accuracy of the solution. We provide a thorough explanation of the theory from an imaging perspective, accompanied by preliminary results using numerical simulations. In addition we compare the performance of MI and JE. Finally, we have adopted a method for fast marginal entropy evaluation and optimization by modifying the objective function and extending it to the JE case. We demonstrate its use on an image reconstruction framework and show significant computational savings.

  2. Image Information Mining System Evaluation Using Information-Theoretic Measures

    Directory of Open Access Journals (Sweden)

    Mihai Datcu

    2005-08-01

    Full Text Available During the last decade, the exponential increase of multimedia and remote sensing image archives, the fast expansion of the world wide web, and the high diversity of users have yielded concepts and systems for successful content-based image retrieval and image information mining. Image data information systems require both database and visual capabilities, but there is a gap between these systems. Database systems usually do not deal with multidimensional pictorial structures and vision systems do not provide database query functions. Given these points, the evaluation of content-based image retrieval systems has become a focus of research interest. One can find several system evaluation approaches in the literature; however, only a few of them go beyond precision-recall graphs and allow a detailed evaluation of an interactive image retrieval system. Apart from the existing evaluation methodologies, we aim at the overall validation of our knowledge-driven content-based image information mining system. In this paper, an evaluation approach is demonstrated that is based on information-theoretic quantities to determine the information flow between system levels of different semantic abstraction and to analyze human-computer interactions.

  3. Image Information Mining System Evaluation Using Information-Theoretic Measures

    Science.gov (United States)

    Daschiel, Herbert; Datcu, Mihai

    2005-12-01

    During the last decade, the exponential increase of multimedia and remote sensing image archives, the fast expansion of the world wide web, and the high diversity of users have yielded concepts and systems for successful content-based image retrieval and image information mining. Image data information systems require both database and visual capabilities, but there is a gap between these systems. Database systems usually do not deal with multidimensional pictorial structures and vision systems do not provide database query functions. Given these points, the evaluation of content-based image retrieval systems has become a focus of research interest. One can find several system evaluation approaches in the literature; however, only a few of them go beyond precision-recall graphs and allow a detailed evaluation of an interactive image retrieval system. Apart from the existing evaluation methodologies, we aim at the overall validation of our knowledge-driven content-based image information mining system. In this paper, an evaluation approach is demonstrated that is based on information-theoretic quantities to determine the information flow between system levels of different semantic abstraction and to analyze human-computer interactions.

  4. An Information Theoretic Analysis of Decision in Computer Chess

    CERN Document Server

    Godescu, Alexandru

    2011-01-01

    The basis of the method proposed in this article is the idea that information is one of the most important factors in strategic decisions, including decisions in computer chess and other strategy games. The model proposed in this article and the algorithm described are based on the idea of an information-theoretic basis for decisions in strategy games. The model generalizes and provides a mathematical justification for one of the most popular search algorithms used in leading computer chess programs, the fractional ply scheme. However, despite its success in leading computer chess applications, little has been published about this method until now. The article grounds this method in the axioms of information theory, then derives the principles used in programming the search and describes mathematically the form of the coefficients. One of the most important parameters of the fractional ply search is derived from fundamental principles. Until now this coefficient has usually been handcrafted...

  5. Information-theoretic analysis of electronic and printed document authentication

    Science.gov (United States)

    Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Villan, Renato; Topak, Emre; Vila Forcén, José Emilio; Deguillaume, Frederic; Rytsar, Yuriy; Pun, Thierry

    2006-02-01

    In this paper we consider the problem of document authentication in electronic and printed forms. We formulate this problem from an information-theoretic perspective and present the joint source-channel coding theorems showing the performance limits in such protocols. We analyze the security of document authentication methods and present the optimal attacking strategies with corresponding complexity estimates that, contrary to existing studies, crucially rely on the information leaked by the authentication protocol. Finally, we present the results of experimental validation of the developed concept that justifies the practical efficiency of the elaborated framework.

  6. Landscape habitat diversity: An information theoretic measure

    Energy Technology Data Exchange (ETDEWEB)

    Loehle, C. [Argonne National Lab., IL (United States)]; Wein, G. [Memphis State Univ., TN (United States). Dept. of Biology]

    1994-06-01

    Biotic diversity is a topic of increasing concern, but current tools for quantifying diversity at the landscape level are inadequate. A new index is proposed. Beginning with a classified raster image of a landscape, each habitat type is assigned a value based on an ordination axis distance. The change in value from one patch to the next depends on how similar the two patches are. An information measure d_I is used to evaluate deviation from uniformity of the ordination values at different scales. Different areas can be compared if habitat values are based on the same ordination scale. This new method provides a powerful tool for both displaying and calculating landscape habitat diversity.

  7. Blind Spectrum Sensing by Information Theoretic Criteria for Cognitive Radios

    CERN Document Server

    Wang, Rui

    2010-01-01

    Spectrum sensing is a fundamental and critical issue for opportunistic spectrum access in cognitive radio networks. Among the many spectrum sensing methods, the information theoretic criteria (ITC) based method is a promising blind method which can reliably detect the primary users while requiring little prior information. In this paper, we provide an intensive treatment of the ITC sensing method. To this end, we first introduce a new over-determined channel model constructed by applying multiple antennas or over sampling at the secondary user in order to make the ITC applicable. Then, a simplified ITC sensing algorithm is introduced, which needs to compute and compare only two decision values. Compared with the original ITC sensing algorithm, the simplified algorithm significantly reduces the computational complexity without losing any performance. Applying the recent advances in random matrix theory, we then derive closed-form expressions to tightly approximate both the probability of false alarm and probab...
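
    A sketch of the generic eigenvalue-based ITC detector, in the spirit of Wax and Kailath's classic source-enumeration estimator; the paper's simplified two-decision variant and its closed-form approximations are not reproduced. Antenna count and the toy signal are illustrative assumptions.

```python
import numpy as np

def itc_num_sources(eigs, n_samples, rule="mdl"):
    """Estimate signal count from sample-covariance eigenvalues via AIC/MDL.

    eigs: eigenvalues of the p x p sample covariance. A sensing decision
    is then, e.g., "primary user present" iff the estimate exceeds zero.
    """
    eigs = np.sort(eigs)[::-1]
    p, scores = len(eigs), []
    for k in range(p):
        tail = eigs[k:]
        # log of (geometric mean / arithmetic mean) of the p-k smallest eigenvalues
        log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
        fit = -n_samples * (p - k) * log_ratio
        pen = k * (2 * p - k) if rule == "aic" else 0.5 * k * (2 * p - k) * np.log(n_samples)
        scores.append(fit + pen)
    return int(np.argmin(scores))

rng = np.random.default_rng(5)
X = rng.normal(size=(4, 500)) * 0.1              # 4 antennas, noise only
X[0] += np.sin(np.linspace(0, 60, 500))          # add one primary-user signal
eigvals = np.linalg.eigvalsh(X @ X.T / 500)
print(itc_num_sources(eigvals, 500))             # expect 1
```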

  8. A group theoretic approach to quantum information

    CERN Document Server

    Hayashi, Masahito

    2017-01-01

    This textbook is the first one addressing quantum information from the viewpoint of group symmetry. Quantum systems have a group symmetrical structure, and this structure makes it possible to handle quantum information processing systematically. However, no other textbook focuses on group symmetry for quantum information, although many textbooks on group representations exist. After the mathematical preparation of quantum information, this book discusses quantum entanglement and its quantification by using group symmetry. Group symmetry drastically simplifies the calculation of several entanglement measures, although such calculations are usually very difficult to handle. This book treats optimal information processes including quantum state estimation, quantum state cloning, estimation of group actions, quantum channels, etc. Usually it is very difficult to derive the optimal quantum information processes without the asymptotic setting of these topics. However, group symmetry allows one to derive these optimal solu...

  9. Theoretical Implications for Inform and Influence Activities

    Science.gov (United States)

    2013-05-23

    [Extraction residue from the document's front matter; no abstract recovered. Recoverable items: an acronym list (CEMA: Cyber Electromagnetic Activities; CA: Civil Affairs; DOD: Department of Defense; DSB: Defense Science Board; IIA: Inform and Influence Activities; IO), a list of figures including "Figure 3. CEMA Lines of Effort", and list items noting the incorporation of capabilities into IIA and cyber electromagnetic activities (CEMA) and a name change from PSYOP to military information support activities (MISO).]

  10. An information theoretic characterisation of auditory encoding.

    Directory of Open Access Journals (Sweden)

    Tobias Overath

    2007-10-01

    Full Text Available The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content.

  11. An information theoretic approach for privacy metrics

    Directory of Open Access Journals (Sweden)

    Michele Bezzi

    2010-12-01

    Full Text Available Organizations often need to release microdata without revealing sensitive information. To this scope, data are anonymized and, to assess the quality of the process, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics are able to capture different aspects of the disclosure risk, imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a common framework able to express all these privacy conditions. Previous studies proposed the notion of mutual information to measure the different kinds of disclosure risks and the utility, but, since mutual information is an average quantity, it is not able to completely express these conditions on single records. We introduce here the notion of one-symbol information (i.e., the contribution to mutual information by a single record) that allows us to express and compare the disclosure risk metrics. In addition, we obtain a relation between the risk values t and l, which can be used for parameter setting. We also show, by numerical experiments, how l-diversity and t-closeness can be represented in terms of two different, but equally acceptable, conditions on the information gain.
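
    The decomposition underlying the proposal can be written in standard notation as follows (a sketch; the paper's exact definitions and conditioning may differ):

```latex
% Per-record ("one-symbol") contribution of a quasi-identifier value x to
% the mutual information with the sensitive attribute Y:
i(x) \;=\; \sum_{y} p(y \mid x)\,\log\frac{p(y \mid x)}{p(y)},
\qquad
I(X;Y) \;=\; \sum_{x} p(x)\, i(x).
% Bounding i(x) for every record is stronger than bounding the average
% I(X;Y), which is what lets per-record risk conditions be expressed.
```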

  12. Information-theoretic evaluation for computational biomedical ontologies

    CERN Document Server

    Clark, Wyatt Travis

    2014-01-01

    The development of effective methods for the prediction of ontological annotations is an important goal in computational biology, yet evaluating their performance is difficult due to problems caused by the structure of biomedical ontologies and incomplete annotations of genes. This work proposes an information-theoretic framework to evaluate the performance of computational protein function prediction. A Bayesian network, structured according to the underlying ontology, is used to model the prior probability of a protein's function. The concepts of misinformation and remaining uncertainty are introduced within this framework…

  13. Information Theoretic Authentication and Secrecy Codes in the Splitting Model

    CERN Document Server

    Huber, Michael

    2011-01-01

    In the splitting model, information theoretic authentication codes allow non-deterministic encoding, that is, several messages can be used to communicate a particular plaintext. Certain applications require that the aspect of secrecy should hold simultaneously. Ogata-Kurosawa-Stinson-Saido (2004) constructed optimal splitting authentication codes achieving perfect secrecy for the special case when the number of keys equals the number of messages. In this paper, we establish a construction method for optimal splitting authentication codes with perfect secrecy in the more general case when the number of keys may differ from the number of messages. To the best of our knowledge, this is the first result of this type.

  14. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools to model reduction for systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is carried out by proposing parametrized coarse-grained dynamics and finding the optimal parameter set, for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force-matching methods to non-equilibrium systems. A multiplicative-noise example reveals the importance of the diffusion coefficient in the optimization problem.

  15. An Information-Theoretic Privacy Criterion for Query Forgery in Information Retrieval

    CERN Document Server

    Rebollo-Monedero, David; Forné, Jordi

    2011-01-01

    In previous work, we presented a novel information-theoretic privacy criterion for query forgery in the domain of information retrieval. Our criterion measured privacy risk as a divergence between the user's and the population's query distribution, and contemplated the entropy of the user's distribution as a particular case. In this work, we make a twofold contribution. First, we thoroughly interpret and justify the privacy metric proposed in our previous work, elaborating on the intimate connection between the celebrated method of entropy maximization and the use of entropies and divergences as measures of privacy. Secondly, we attempt to bridge the gap between the privacy and the information-theoretic communities by substantially adapting some technicalities of our original work to reach a wider audience, not intimately familiar with information theory and the method of types.
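    A minimal sketch of the criterion (toy distributions, invented for illustration; not code from the paper): privacy risk measured as the KL divergence between the user's and the population's query distributions:

```python
import numpy as np

def kl_divergence_bits(p, q):
    # D(p || q) in bits; assumes q[i] > 0 wherever p[i] > 0
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

user = [0.70, 0.20, 0.10, 0.00]        # user's distribution over query topics
population = [0.25, 0.25, 0.25, 0.25]  # population's distribution
print(kl_divergence_bits(user, population))  # larger divergence = higher risk
```

    Forging queries drawn from the population's distribution drives the user's observed distribution toward the population's, reducing the divergence and hence the measured risk.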

  16. Network Complexity Measures. An Information-Theoretic Approach.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    2015-04-01

    Quantitative graph analysis by using structural indices has been intricate in the sense that it often remains unclear which structural graph measure is the most suitable one, see [1, 12, 13]. In general, quantitative graph analysis deals with quantifying structural information of networks by using a measurement approach [5]. A special problem thereof is to characterize a graph quantitatively, that is, to determine a measure that captures structural features of a network meaningfully. Various classical structural graph measures have been used to tackle this problem [13]. A fruitful approach using information-theoretic [21] and statistical methods is to quantify the structural information content of a graph [1, 8, 18]. In this note, we sketch some classical information measures. We also briefly address the question of what kind of measures capture structural information uniquely. This relates to determining the discrimination power (also called uniqueness) of a graph measure, that is, the ability of the measure to discriminate non-isomorphic graphs structurally. [1] D. Bonchev. Information Theoretic Indices for Characterization of Chemical Structures. Research Studies Press, Chichester, 1983. [5] M. Dehmer and F. Emmert-Streib. Quantitative Graph Theory. Theory and Applications. CRC Press, 2014. [8] M. Dehmer, M. Grabner, and K. Varmuza. Information indices with high discriminative power for graphs. PLoS ONE, 7:e31214, 2012. [12] F. Emmert-Streib and M. Dehmer. Exploring statistical and population aspects of network complexity. PLoS ONE, 7:e34523, 2012. [13] F. Harary. Graph Theory. Addison Wesley Publishing Company, Reading, MA, 1969. [18] A. Mowshowitz. Entropy and the complexity of graphs I: An index of the relative complexity of a graph. Bull. Math. Biophys., 30:175–204, 1968. [21] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.
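    A concrete toy member of this family of measures (a simple degree-based entropy, not one of the orbit-based indices cited above) can be computed as follows; the graphs here are invented examples:

```python
import math
from collections import Counter

# Shannon entropy of a graph's degree distribution: each vertex v gets
# probability deg(v) / (2 * |E|); a perfectly regular graph attains the
# maximum value log2(n).
def degree_entropy(edges, n_vertices):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg.values())
    probs = [deg[v] / total for v in range(n_vertices) if deg[v] > 0]
    return -sum(p * math.log2(p) for p in probs)

star = [(0, i) for i in range(1, 5)]           # star graph K_{1,4}
cycle = [(i, (i + 1) % 5) for i in range(5)]   # cycle C_5
print(degree_entropy(star, 5))   # 2.0 bits
print(degree_entropy(cycle, 5))  # ~2.32 bits: uniform degrees maximize it
```

    Note that such a simple index has limited discrimination power: all regular graphs on n vertices share the same value, which is exactly the uniqueness problem discussed above.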

  17. Information theoretic learning Renyi's entropy and Kernel perspectives

    CERN Document Server

    Principe, Jose C

    2010-01-01

    This book presents the first cohesive treatment of Information Theoretic Learning (ITL) algorithms to adapt linear or nonlinear learning machines in both supervised and unsupervised paradigms. ITL is a framework where the conventional concepts of second-order statistics (covariance, L2 distances, correlation functions) are substituted by scalars and functions with information theoretic underpinnings: respectively entropy, mutual information and correntropy. ITL quantifies the stochastic structure of the data beyond second-order statistics for improved performance, without using full-blown Bayesian…
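    One of the ITL quantities named above, correntropy, admits a one-line sample estimator; the sketch below (toy signals, Gaussian kernel without the usual normalization constant) illustrates it:

```python
import numpy as np

# Sample correntropy V(X, Y) = mean_i exp(-(x_i - y_i)^2 / (2 * sigma^2)):
# a similarity measure that, unlike correlation, involves all even moments
# of the difference X - Y through the kernel expansion.
def correntropy(x, y, sigma=1.0):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.mean(np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
print(correntropy(x, x))                          # 1.0 for identical signals
print(correntropy(x, x + rng.normal(size=1000)))  # < 1 once noise is added
```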

  18. PET Image Reconstruction Using Information Theoretic Anatomical Priors

    Science.gov (United States)

    Somayajula, Sangeetha; Panagiotou, Christos; Rangarajan, Anand; Li, Quanzheng; Arridge, Simon R.

    2011-01-01

    We describe a nonparametric framework for incorporating information from co-registered anatomical images into positron emission tomographic (PET) image reconstruction through priors based on information theoretic similarity measures. We compare and evaluate the use of mutual information (MI) and joint entropy (JE) between feature vectors extracted from the anatomical and PET images as priors in PET reconstruction. Scale-space theory provides a framework for the analysis of images at different levels of detail, and we use this approach to define feature vectors that emphasize prominent boundaries in the anatomical and functional images, and attach less importance to detail and noise that is less likely to be correlated in the two images. Through simulations that model the best-case scenario of perfect agreement between the anatomical and functional images, and a more realistic situation with a real magnetic resonance image and a PET phantom that has partial volumes and a smooth variation of intensities, we evaluate the performance of MI and JE based priors in comparison to a Gaussian quadratic prior, which does not use any anatomical information. We also apply this method to clinical brain scan data using 18F-fallypride, a tracer that binds to dopamine receptors and therefore localizes mainly in the striatum. We present an efficient method of computing these priors and their derivatives based on fast Fourier transforms that reduce the complexity of their convolution-like expressions. Our results indicate that while sensitive to initialization and choice of hyperparameters, information theoretic priors can reconstruct images with higher contrast and superior quantitation than quadratic priors. PMID:20851790
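    The basic quantity behind the MI prior can be illustrated with a joint-histogram estimate between two images (a schematic sketch with a synthetic phantom, not the authors' implementation):

```python
import numpy as np

# Mutual information between two images, estimated from the joint histogram
# of their intensities: I(A;B) = sum p(a,b) * log2( p(a,b) / (p(a) p(b)) ).
def image_mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px * py)[mask])))

rng = np.random.default_rng(0)
anatomical = rng.random((64, 64))
pet_like = 0.8 * anatomical + 0.2 * rng.random((64, 64))  # correlated phantom
print(image_mutual_information(anatomical, pet_like))
```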

  19. Wireless Information-Theoretic Security in an Outdoor Topology with Obstacles: Theoretical Analysis and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Dagiuklas Tasos

    2011-01-01

    This paper presents a Wireless Information-Theoretic Security (WITS) scheme, which has recently been introduced as a robust physical-layer-based security solution, especially for infrastructureless networks. An autonomic network of moving users was implemented via 802.11n nodes of an ad hoc network for an outdoor topology with obstacles. Obstructed-Line-of-Sight (OLOS) and Non-Line-of-Sight (NLOS) propagation scenarios were examined. Low-speed user movement was considered, so that Doppler spread could be discarded. A transmitter and a legitimate receiver exchanged information in the presence of a moving eavesdropper. Average Signal-to-Noise Ratio (SNR) values were acquired for both the main and the wiretap channel, and the Probability of Nonzero Secrecy Capacity was calculated based on a theoretical formula. Experimental results validate the theoretical findings, stressing the importance of user location and mobility schemes on the robustness of Wireless Information-Theoretic Security, and call for further theoretical analysis.
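    As a hedged sketch of the kind of calculation described (the paper's exact formula is not quoted here), the probability of nonzero secrecy capacity for quasi-static Rayleigh fading is commonly given (e.g. Barros and Rodrigues, 2006) as gamma_M / (gamma_M + gamma_W), with gamma_M and gamma_W the average linear-scale SNRs of the main and wiretap channels; the SNR values below are invented:

```python
# Probability of nonzero secrecy capacity under Rayleigh fading,
# P(Cs > 0) = gamma_M / (gamma_M + gamma_W), from average SNRs in dB.
def prob_nonzero_secrecy(gamma_m_db: float, gamma_w_db: float) -> float:
    gamma_m = 10.0 ** (gamma_m_db / 10.0)  # dB -> linear scale
    gamma_w = 10.0 ** (gamma_w_db / 10.0)
    return gamma_m / (gamma_m + gamma_w)

# Main channel 10 dB stronger than the eavesdropper's: P(Cs > 0) ~ 0.91.
print(prob_nonzero_secrecy(20.0, 10.0))
```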

  20. Content-based Image Retrieval by Information Theoretic Measure

    Directory of Open Access Journals (Sweden)

    Madasu Hanmandlu

    2011-09-01

    Content-based image retrieval focuses on intuitive and efficient methods for retrieving images from databases based on the content of the images. A new entropy function that serves as a measure of information content in an image, termed 'an information theoretic measure', is devised in this paper. Among the various query paradigms, 'query by example' (QBE) is adopted to set a query image for retrieval from a large image database. In this paper, colour and texture features are extracted using the new entropy function, and the dominant colour is considered as a visual feature for a particular set of images. Thus colour and texture features constitute the two-dimensional feature vector for indexing the images. The low dimensionality of the feature vector speeds up the atomic query. Indices in a large database system help retrieve the images relevant to the query image without looking at every image in the database. The entropy values of colour and texture and the dominant colour are considered for measuring the similarity. The utility of the proposed image retrieval system based on the information theoretic measures is demonstrated on a benchmark dataset. Defence Science Journal, 2011, 61(5), pp. 415-430, DOI: http://dx.doi.org/10.14429/dsj.61.1177

  1. One-dimensional barcode reading: an information theoretic approach.

    Science.gov (United States)

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-10

    In the convergence context of identification technology and information-data transmission, the barcode has found its place as the simplest and most pervasive solution for new uses, especially within mobile commerce, bringing new life to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
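    A much simpler toy model than the paper's camera/optics channel conveys the idea: treat each barcode module as passing through a binary symmetric channel whose flip probability grows with blur, and track the average mutual information per module (all numbers illustrative):

```python
import math

# Mutual information of a binary symmetric channel with uniform input:
# I = 1 - H(p), where H is the binary entropy of the flip probability p.
def bsc_mutual_information(p: float) -> float:
    if p in (0.0, 1.0):
        return 1.0  # deterministic channel, 1 bit per module
    h = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
    return 1.0 - h

for p in (0.01, 0.05, 0.20):  # e.g. increasing defocus blur
    print(f"flip prob {p:.2f}: {bsc_mutual_information(p):.3f} bits/module")
```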

  2. Decision-Theoretic Methods in Simulation Optimization

    Science.gov (United States)

    2014-09-24

    [Abstract not available. The record excerpt consists of report-documentation-page boilerplate and progress-report fragments: Frazier visited Los Alamos National Lab (LANL), hosted by Frank Alexander, in January 2013, where he discussed the use of simulation optimization methods for…; he met with Alexander, Turab Lookman, and others from LANL at the Materials Informatics Workshop at the Santa Fe Institute in April 2013. In February 2014, Frazier…]

  3. What is "system": the information-theoretic arguments

    CERN Document Server

    Dugic, M

    2006-01-01

    The problem of "what is 'system'?" is in the very foundations of modern quantum mechanics. Here, we point out the interest in this topic in the information-theoretic context. E.g., we point out the possibility to manipulate a pair of mutually non-interacting, non-entangled systems to employ entanglement of the newly defined '(sub)systems' consisting the one and the same composite system. Given the different divisions of a composite system into "subsystems", the Hamiltonian of the system may perform in general non-equivalent quantum computations. Redefinition of "subsystems" of a composite system may be regarded as a method for avoiding decoherence in the quantum hardware. In principle, all the notions refer to a composite system as simple as the hydrogen atom.

  4. Applications of theoretical methods in atmospheric science

    DEFF Research Database (Denmark)

    Johnson, Matthew Stanley; Goodsite, Michael E.

    2008-01-01

    Theoretical chemistry involves explaining chemical phenomena using natural laws. The primary tool of theoretical chemistry is quantum chemistry, and the field may be divided into electronic structure calculations, reaction dynamics and statistical mechanics. These three all play a role in addressing…

  5. Information-Theoretic Secure Verifiable Secret Sharing over RSA Modulus

    Institute of Scientific and Technical Information of China (English)

    QIU Gang; WANG Hong; WEI Shimin; XIAO Guozhen

    2006-01-01

    The well-known non-interactive and information-theoretically secure verifiable secret sharing scheme presented by Pedersen is defined over a large prime. In this paper, we construct a novel non-interactive and information-theoretic verifiable secret sharing scheme over an RSA (Rivest, Shamir, Adleman) modulus and give a rigorous security proof. It is shown how to distribute a secret among a group such that any set of k parties gets no information about the secret. The presented scheme applies generally to constructions of secure distributed multiplication and threshold or forward-secure signature protocols.
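    For background, the k-threshold primitive that such schemes build on can be sketched with plain Shamir sharing over a prime field (this is not the paper's RSA-modulus construction, just the basic reconstruct-with-k, learn-nothing-with-fewer idea):

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

# Split `secret` into n shares so that any k of them reconstruct it:
# shares are points on a random degree-(k-1) polynomial with f(0) = secret.
def make_shares(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

# Lagrange interpolation at x = 0 recovers f(0) = secret from k shares.
def reconstruct(shares):
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover the secret
```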

  6. Introduction of an information-theoretic method to predict recovery rates of active compounds for Bayesian in silico screening: theory and screening trials.

    Science.gov (United States)

    Vogt, Martin; Bajorath, Jürgen

    2007-01-01

    We present the first method to predict compound recovery rates from descriptor statistics. A log-odds function is designed that models probability distributions of descriptor values of active and inactive molecules in chemical space and is used to determine the likelihood that database compounds exhibit a specific activity. The divergence of the probability models for active and inactive compounds is applied to evaluate the ability of the log-odds likelihood function to recover active compounds from a background database. The divergence measure, which is closely related to the Kullback-Leibler distance, is strongly correlated with recovery rates of Bayesian virtual screening calculations. It has thus been possible to predict compound recovery rates for different activity classes. Prior to practical virtual screening trials, one can also estimate how likely it would be to recover active compounds from a given screening database.
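    A toy sketch of the underlying quantity (invented descriptor values; the paper's own divergence measure is only said to be closely related to the Kullback-Leibler distance): model one descriptor for actives and for the database as 1D Gaussians and compute their divergence:

```python
import math

# Closed-form KL divergence between two univariate Gaussians, in nats:
# D( N(mu_p, sd_p^2) || N(mu_q, sd_q^2) ).
def kl_gauss(mu_p, sd_p, mu_q, sd_q):
    return (math.log(sd_q / sd_p)
            + (sd_p**2 + (mu_p - mu_q)**2) / (2.0 * sd_q**2)
            - 0.5)

# e.g. a logP-like descriptor: actives centered at 3.5, database at 2.0;
# a larger divergence suggests actives are easier to recover by ranking.
print(kl_gauss(3.5, 0.8, 2.0, 1.5))
```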

  7. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are…
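    The core machinery is compact enough to sketch (log-likelihoods below are invented for illustration): AIC from a maximized log-likelihood, and Akaike weights for ranking candidate models:

```python
import math

# AIC = -2 * ln(L_max) + 2k, where k is the number of estimated parameters.
def aic(log_likelihood: float, k_params: int) -> float:
    return -2.0 * log_likelihood + 2.0 * k_params

models = {"M1": (-102.3, 2), "M2": (-100.9, 3), "M3": (-100.8, 5)}
aics = {name: aic(ll, k) for name, (ll, k) in models.items()}
best = min(aics.values())
# Akaike weights: w_i proportional to exp(-(AIC_i - AIC_min) / 2).
raw = {name: math.exp(-(a - best) / 2.0) for name, a in aics.items()}
z = sum(raw.values())
for name, a in sorted(aics.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {a:.1f}, Akaike weight = {raw[name] / z:.3f}")
```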

  8. Recent theoretical progress on an information geometrodynamical approach to chaos

    CERN Document Server

    Cafaro, Carlo

    2008-01-01

    In this paper, we report our latest research on a novel theoretical information-geometric framework suitable to characterize chaotic dynamical behavior of arbitrary complex systems on curved statistical manifolds. Specifically, an information-geometric analogue of the Zurek-Paz quantum chaos criterion of linear entropy growth and an information-geometric characterization of chaotic (integrable) energy level statistics of a quantum antiferromagnetic Ising spin chain in a tilted (transverse) external magnetic field are presented.

  9. Set-theoretic methods in control

    CERN Document Server

    Blanchini, Franco

    2015-01-01

    The second edition of this monograph describes the set-theoretic approach for the control and analysis of dynamic systems, from both a theoretical and a practical standpoint. This approach is linked to fundamental control problems, such as Lyapunov stability analysis and stabilization, optimal control, control under constraints, persistent disturbance rejection, and uncertain systems analysis and synthesis. Completely self-contained, this book provides a solid foundation of mathematical techniques and applications, extensive references to the relevant literature, and numerous avenues for further theoretical study. All the material from the first edition has been updated to reflect the most recent developments in the field, and a new chapter on switching systems has been added. Each chapter contains examples, case studies, and exercises to allow for a better understanding of theoretical concepts by practical application. The mathematical language is kept to the minimum level necessary for an adequate…

  10. A quantum information theoretic analysis of three flavor neutrino oscillations

    CERN Document Server

    Banerjee, Subhashish; Srikanth, R; Hiesmayr, Beatrix C

    2015-01-01

    Correlations exhibited by neutrino oscillations are studied via quantum information theoretic quantities. We show that the strongest type of entanglement, genuine multipartite entanglement, is persistent in the flavour-changing states. We prove the existence of Bell-type nonlocal features, in both their absolute and genuine avatars. Finally, we show that a measure of nonclassicality, dissension, which is a generalization of quantum discord to the tripartite case, is nonzero for almost the entire range of time in the evolution of an initial electron neutrino. Via these quantum information theoretic quantities, capturing different aspects of quantum correlations, we elucidate the differences between the flavour types, shedding light on the quantum-information-theoretic aspects of the weak force.

  11. A Theoretical Paradigm of Information Retrieval in Information Science and Computer Science

    Directory of Open Access Journals (Sweden)

    M. S. Saleem Basha

    2012-09-01

    This paper describes the theoretical paradigms of information retrieval in information science and computer science, and constructs a theoretical framework of information retrieval from three perspectives: user, information, and technology. It evaluates the research priorities of the two disciplines and the cross-domain aspects of information retrieval theory. Finally, it points out the status and development trends of information retrieval theory in information science and computer science, and provides directions for exploration in information retrieval theory.

  12. Theoretical development of information science: A brief history

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    This paper presents a brief history of information science (IS) as viewed by the author. The term 'information science' goes back to 1955 and evolved in the aftermath of Claude Shannon's 'information theory' (1948), which also inspired research into problems in fields of library science… and documentation. These subjects were a main focus of what became established as 'information science', which from 1964 onwards was often termed 'library and information science' (LIS). However, the usefulness of Shannon's information theory as the theoretical foundation of the field has been challenged. Among… Today information science is very fragmented, but a growing number of researchers find that the problems in the field should be related to theories of knowledge and understood from a social and cultural perspective, thereby re-establishing connections with ideas such as social epistemology, which may…

  13. Role of information theoretic uncertainty relations in quantum theory

    Energy Technology Data Exchange (ETDEWEB)

    Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz [FNSPE, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1 (Czech Republic); ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin (Germany); Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk [Department of Physics and Astronomy, University of Sussex, Falmer, Brighton, BN1 9QH (United Kingdom); Joo, Jaewoo, E-mail: j.joo@surrey.ac.uk [Advanced Technology Institute and Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom)

    2015-04-15

    Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
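    For reference, the prototypical Shannon-entropy ITUR that results of this kind refine is the Maassen-Uffink relation, stated below in LaTeX (a standard textbook statement, not a formula quoted from this paper):

```latex
% Maassen-Uffink entropic uncertainty relation for observables A, B
% with eigenbases {|a_i>}, {|b_j>}:
H(A) + H(B) \;\ge\; -2\log_2 c ,
\qquad c = \max_{i,j}\,\bigl|\langle a_i \mid b_j \rangle\bigr| ,
% and its Renyi-entropy generalization, for 1/alpha + 1/beta = 2:
H_\alpha(A) + H_\beta(B) \;\ge\; -2\log_2 c .
```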

  14. Characterizing quantum theory in terms of information-theoretic constraints

    CERN Document Server

    Clifton, R; Halvorson, H; Clifton, Rob; Bub, Jeffrey; Halvorson, Hans

    2003-01-01

    We show that three fundamental information-theoretic constraints--the impossibility of superluminal information transfer between two physical systems by performing measurements on one of them, the impossibility of broadcasting the information contained in an unknown physical state, and the impossibility of unconditionally secure bit commitment--suffice to entail that the observables and state space of a physical theory are quantum-mechanical. We demonstrate the converse derivation in part, and consider the implications of alternative answers to a remaining open question about nonlocality and bit commitment.

  15. Almost Free Modules Set-Theoretic Methods

    CERN Document Server

    Eklof, PC

    1990-01-01

    This is an extended treatment of the set-theoretic techniques which have transformed the study of abelian group and module theory over the last 15 years. Part of the book is new work which does not appear elsewhere in any form. In addition, a large body of material which has appeared previously (in scattered and sometimes inaccessible journal articles) has been extensively reworked and in many cases given new and improved proofs. The set theory required is carefully developed with algebraists in mind, and the independence results are derived from explicitly stated axioms. The book contains exercises…

  16. Information-Theoretical Complexity Analysis of Selected Elementary Chemical Reactions

    Science.gov (United States)

    Molina-Espíritu, M.; Esquivel, R. O.; Dehesa, J. S.

    We investigate the complexity of selected elementary chemical reactions (namely, the hydrogenic-abstraction reaction and the identity SN2 exchange reaction) by means of the following single and composite information-theoretic measures: disequilibrium (D), exponential entropy (L), Fisher information (I), power entropy (J), the I-D, D-L and I-J planes, and the Fisher-Shannon (FS) and Lopez-Mancini-Calbet (LMC) shape complexities. These quantities, which are functionals of the one-particle density, are computed in both position (r) and momentum (p) spaces. The analysis revealed that the chemically significant regions of these reactions can be identified through most of the single information-theoretic measures and the two-component planes: not only the ones commonly revealed by the energy, such as the reactant/product (R/P) and the transition state (TS), but also those not present in the energy profile, such as the bond cleavage energy region (BCER), the bond breaking/forming regions (B-B/F) and the charge transfer process (CT). The analysis of the complexities shows that the energy profile of the abstraction reaction bears the same information-theoretical features as the LMC and FS measures; the identity SN2 exchange reaction, however, does not show such simple behavior with respect to the LMC and FS measures. Most of the chemical features of interest (BCER, B-B/F and CT) are only revealed when particular information-theoretic aspects of localizability (L or J), uniformity (D) and disorder (I) are considered.

  17. Theoretical-Methodical Fundamentals of Industrial Marketing Research

    OpenAIRE

    N. Butenko

    2009-01-01

    The article proves the necessity of researching the theoretical and methodical fundamentals of industrial marketing, and defines the main key aspects of relationship management with customers on the industrial market.

  18. Research methods in information

    CERN Document Server

    Pickard, Alison Jane

    2013-01-01

    The long-awaited 2nd edition of this best-selling research methods handbook is fully updated and includes brand new coverage of online research methods and techniques, mixed methodology and qualitative analysis. There is an entire chapter contributed by Professor Julie McLeod, Sue Childs and Elizabeth Lomas focusing on research data management, applying evidence from the recent JISC-funded 'DATUM' project. The first to focus entirely on the needs of the information and communications community, it guides the would-be researcher through the variety of possibilities open to them under the heading "research" and provides students with the confidence to embark on their dissertations. The focus here is on the 'doing', and although the philosophy and theory of research is explored to provide context, this is essentially a practical exploration of the whole research process, with each chapter fully supported by examples and exercises tried and tested over a whole teaching career. The book will take readers through each…

  19. Game Theoretic Methods for the Smart Grid

    CERN Document Server

    Saad, Walid; Poor, H Vincent; Başar, Tamer

    2012-01-01

    The future smart grid is envisioned as a large-scale cyber-physical system encompassing advanced power, communications, control, and computing technologies. In order to accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyber-physical systems. In this context, this paper is an overview on the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: micro-grid systems, demand-side management, and communications. In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. Future opportunities for adopting game theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the...

  1. Axiomatic Relation between Thermodynamic and Information-Theoretic Entropies

    Science.gov (United States)

    Weilenmann, Mirjam; Kraemer, Lea; Faist, Philippe; Renner, Renato

    2016-12-01

    Thermodynamic entropy, as defined by Clausius, characterizes macroscopic observations of a system based on phenomenological quantities such as temperature and heat. In contrast, information-theoretic entropy, introduced by Shannon, is a measure of uncertainty. In this Letter, we connect these two notions of entropy, using an axiomatic framework for thermodynamics [E. H. Lieb and J. Yngvason Proc. R. Soc. 469, 20130408 (2013)]. In particular, we obtain a direct relation between the Clausius entropy and the Shannon entropy, or its generalization to quantum systems, the von Neumann entropy. More generally, we find that entropy measures relevant in nonequilibrium thermodynamics correspond to entropies used in one-shot information theory.

  2. Information-theoretic characterization of uncertainty in manual control

    OpenAIRE

    Trendafilov, D.; Murray-Smith, R.

    2013-01-01

    We present a novel approach for quantifying the impact of uncertainty in manual control, based on information and control theories and utilizing the information-theoretic capacity of empowerment, a task-independent universal utility measure. Empowerment measures, for agent-environment systems with stochastic transitions, how much influence, which can be sensed by the agent's sensors, an agent has on its environment. It enables combining different types of disturbances, arising…

  3. Information-Theoretic Bounded Rationality and ε-Optimality

    Directory of Open Access Journals (Sweden)

    Daniel A. Braun

    2014-08-01

    Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested as a model of bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple ε-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an ε-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of ε-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on ε-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.
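    The free-energy trade-off has a closed-form optimum worth sketching (toy utilities and prior; a generic illustration, not code from the article): the optimal bounded-rational policy is a softmax over utilities, and lowering the inverse temperature beta spreads probability over near-optimal, i.e. ε-optimal, actions:

```python
import numpy as np

# Optimal policy of the free-energy trade-off: pi(a) ~ p0(a) * exp(beta * U(a)),
# where beta balances expected utility against information processing cost.
def bounded_rational_policy(utilities, prior, beta):
    logits = np.log(np.asarray(prior, dtype=float)) \
             + beta * np.asarray(utilities, dtype=float)
    logits -= logits.max()           # for numerical stability
    w = np.exp(logits)
    return w / w.sum()

U = [1.0, 0.95, 0.4, 0.1]            # action utilities (action 2 is ε-close)
p0 = [0.25, 0.25, 0.25, 0.25]        # uniform prior policy
for beta in (0.1, 2.0, 50.0):
    print(beta, np.round(bounded_rational_policy(U, p0, beta), 3))
```

    At small beta the policy stays near the prior; at large beta it concentrates on the utility maximizer, with the ε-optimal second action retaining weight longest.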

  4. Information Theoretic Criteria for Observation-to-Observation Association

    Science.gov (United States)

    2014-09-01

    …the observation-to-observation association problem is approached by using an appropriate initial orbit determination (IOD) method as well as criteria from information theory. The two main criteria we use in this paper are mutual information and… approaches, as they more accurately capture the degree of dependence between variables. This paper barely touches the tip of the iceberg, and its most…

  5. How many clusters? An information-theoretic perspective.

    Science.gov (United States)

    Still, Susanne; Bialek, William

    2004-12-01

    Clustering provides a common means of identifying structure in complex data, and there is renewed interest in clustering as a tool for the analysis of large data sets in many fields. A natural question is how many clusters are appropriate for the description of a given system. Traditional approaches to this problem are based on either a framework in which clusters of a particular shape are assumed as a model of the system or on a two-step procedure in which a clustering criterion determines the optimal assignments for a given number of clusters and a separate criterion measures the goodness of the classification to determine the number of clusters. In a statistical mechanics approach, clustering can be seen as a trade-off between energy- and entropy-like terms, with lower temperature driving the proliferation of clusters to provide a more detailed description of the data. For finite data sets, we expect that there is a limit to the meaningful structure that can be resolved and therefore a minimum temperature beyond which we will capture sampling noise. This suggests that correcting the clustering criterion for the bias that arises due to sampling errors will allow us to find a clustering solution at a temperature that is optimal in the sense that we capture maximal meaningful structure--without having to define an external criterion for the goodness or stability of the clustering. We show that in a general information-theoretic framework, the finite size of a data set determines an optimal temperature, and we introduce a method for finding the maximal number of clusters that can be resolved from the data in the hard clustering limit.

  6. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of Kullback-Leibler entropy. In particular, we present the proper way of ranking competing models based on Akaike weights (in Bayesian language, posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with time-varying equation of state, brane-world and generalized Chaplygin gas models, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario…

  7. Quantum dynamic imaging theoretical and numerical methods

    CERN Document Server

    Ivanov, Misha

    2011-01-01

    Studying and using light or "photons" to image and then to control and transmit molecular information is among the most challenging and significant research fields to emerge in recent years. One of the fastest growing areas involves research in the temporal imaging of quantum phenomena, ranging from molecular dynamics in the femto (10⁻¹⁵ s) time regime for atomic motion to the atto (10⁻¹⁸ s) time scale of electron motion. In fact, the attosecond "revolution" is now recognized as one of the most important recent breakthroughs and innovations in the science of the 21st century. A major participant in the development of ultrafast femto- and attosecond temporal imaging of molecular quantum phenomena has been theory and numerical simulation of the nonlinear, non-perturbative response of atoms and molecules to ultrashort laser pulses. Therefore, imaging quantum dynamics is a new frontier of science requiring advanced mathematical approaches for analyzing and solving spatial and temporal multidimensional partial differential…

  8. Theoretical and numerical method in aeroacoustics

    Directory of Open Access Journals (Sweden)

    Nicuşor ALEXANDRESCU

    2010-06-01

    The paper deals with the mathematical and numerical modeling of the aerodynamic noise generated by the fluid flow interaction with the solid structure of a rotor blade. Our analysis uses Lighthill's acoustic analogy. Lighthill's idea was to recast the fundamental equations of motion into a wave equation for acoustic fluctuations with a source term on the right-hand side. The obtained wave equation is solved numerically by spatial discretization. The method is applied in the case of a monopole source placed at different points of the blade surface to study the effect on noise propagation.

  9. Information-theoretic security without an honest majority

    CERN Document Server

    Broadbent, Anne

    2007-01-01

    We present six multiparty protocols with information-theoretic security that tolerate an arbitrary number of corrupt participants. All protocols assume pairwise authentic private channels and a broadcast channel (in a single case, we require a simultaneous broadcast channel). We give protocols for veto, vote, anonymous bit transmission, collision detection, notification and anonymous message transmission. Not assuming an honest majority, in most cases, a single corrupt participant can make the protocol abort. All protocols achieve functionality never obtained before without the use of either computational assumptions or of an honest majority.

  10. A theoretical framework of tracer methods for marine sediment dynamics

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new theoretical framework of tracer methods is proposed in the present contribution, on the basis of mass conservation. This model is applicable to both artificial and natural tracers. It can be used to calculate the spatial distribution patterns of the sediment transport rate, thus providing independent information and verification for results derived from empirical formulae. The calculation proceeds as follows. First, tracer concentration and topographic maps at two times are obtained. Then, the spatial and temporal changes in concentration and seabed elevation are calculated, and the required boundary conditions are determined by field observations (such as flow and bedform-migration measurements). Finally, based upon eqs. (1) and (13), the transport rate is calculated and expressed as a function of position over the study area. Further, appropriate modifications to the model may allow the tracer to have densities and grain size distributions different from those of the bulk sediment.

  11. Applied Mathematical Methods in Theoretical Physics

    Science.gov (United States)

    Masujima, Michio

    2005-04-01

    All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.

  12. Information-Theoretically Secure Voting Without an Honest Majority

    CERN Document Server

    Broadbent, Anne

    2008-01-01

    We present three voting protocols with unconditional privacy and information-theoretic correctness, without assuming any bound on the number of corrupt voters or voting authorities. All protocols have polynomial complexity and require private channels and a simultaneous broadcast channel. Our first protocol is a basic voting scheme which allows voters to interact in order to compute the tally. Privacy of the ballot is unconditional, but any voter can cause the protocol to fail, in which case information about the tally may nevertheless transpire. Our second protocol introduces voting authorities which allow the implementation of the first protocol, while reducing the interaction and limiting it to be only between voters and authorities and among the authorities themselves. The simultaneous broadcast is also limited to the authorities. As long as a single authority is honest, the privacy is unconditional; however, a single corrupt authority or a single corrupt voter can cause the protocol to fail. Our final protocol…

  13. Deep and Structured Robust Information Theoretic Learning for Image Analysis.

    Science.gov (United States)

    Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai

    2016-07-07

    This paper presents a robust information theoretic (RIT) model to reduce the uncertainties, i.e. missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of data and their labels in the latent space. In this general paradigm, we respectively discuss three types of RIT implementations: linear subspace embedding, deep transformation, and structured sparse learning. In practice, the RIT and deep RIT are exploited to solve the image categorization task, with performance verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task, brain MRI segmentation, which allows group-level feature selection on the brain tissues.

  14. Exploring the joint measurability using an information-theoretic approach

    Science.gov (United States)

    Hsu, Li-Yi

    2016-10-01

    We explore the admissible purity parameters for joint measurements. Instead of directly unsharpening the measurements, we perform quantum cloning before the sharp measurements; the necessary fuzziness of the unsharp measurements is equivalently introduced in the imperfect cloning process. Based on information causality and the consequent noisy nonlocal computation, one can derive information-theoretic quadratic inequalities that must be satisfied by any physical theory. On the other hand, to guarantee classicality, the linear Bell-type inequalities deduced from these quadratic ones must be obeyed. As for joint measurability, the purity parameters must be chosen to obey both types of inequalities. Finally, the quadratic inequalities for purity parameters in the joint-measurability region are derived.

  15. Optimal information transfer in enzymatic networks: A field theoretic formulation

    Science.gov (United States)

    Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.

    2017-07-01

    Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and the discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in…

  16. Informing Physics: Jacob Bekenstein and the Informational Turn in Theoretical Physics

    Science.gov (United States)

    Belfer, Israel

    2014-03-01

    In his PhD dissertation in the early 1970s, the Mexican-Israeli theoretical physicist Jacob Bekenstein developed the thermodynamics of black holes using a generalized version of the second law of thermodynamics. This work made it possible for physicists to describe and analyze black holes using information-theoretical concepts. It also helped to transform information theory into a fundamental and foundational concept in theoretical physics. The story of Bekenstein's work—which was initially opposed by many scientists, including Stephen Hawking—highlights the transformation within physics towards an information-oriented scientific mode of theorizing. This "informational turn" amounted to a mild-mannered revolution within physics, revolutionary without being rebellious.

  17. An information-theoretic framework for flow visualization.

    Science.gov (United States)

    Xu, Lijie; Lee, Teng-Yok; Shen, Han-Wei

    2010-01-01

    The process of visualization can be seen as a visual communication channel where the input to the channel is the raw data, and the output is the result of a visualization algorithm. From this point of view, we can evaluate the effectiveness of visualization by measuring how much information in the original data is being communicated through the visual communication channel. In this paper, we present an information-theoretic framework for flow visualization with a special focus on streamline generation. In our framework, a vector field is modeled as a distribution of directions from which Shannon's entropy is used to measure the information content in the field. The effectiveness of the streamlines displayed in visualization can be measured by first constructing a new distribution of vectors derived from the existing streamlines, and then comparing this distribution with that of the original data set using the conditional entropy. The conditional entropy between these two distributions indicates how much information in the original data remains hidden after the selected streamlines are displayed. The quality of the visualization can be improved by progressively introducing new streamlines until the conditional entropy converges to a small value. We describe the key components of our framework with detailed analysis, and show that the framework can effectively visualize 2D and 3D flow data.
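    A minimal sketch of the framework's basic measure (toy vector field and binning, not the authors' implementation): bin the flow directions into a histogram and compute the Shannon entropy of that distribution; a streamline set would then be scored with the conditional entropy against it:

```python
import numpy as np

# Shannon entropy (bits) of the distribution of vector directions in a
# 2D flow field, using a fixed angular binning of atan2(vy, vx).
def direction_entropy(vx, vy, bins=36):
    angles = np.arctan2(vy, vx).ravel()
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

y, x = np.mgrid[-1:1:64j, -1:1:64j]
vx, vy = -y, x                      # circulating (vortex-like) field
print(direction_entropy(vx, vy))    # high entropy: many directions present
print(direction_entropy(np.ones((64, 64)), np.zeros((64, 64))))  # 0.0: uniform flow
```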

  18. Methods for evaluating information sources

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2012-01-01

    Reading a text is often not a simple process. All the methods discussed here are steps on the way to learning how to read, understand, and criticize texts. According to hermeneutics, this involves the subjectivity of the reader, and that subjectivity is influenced, more or less, by different theoretical…

  19. Information-Theoretic Dictionary Learning for Image Classification.

    Science.gov (United States)

    Qiu, Qiang; Patel, Vishal M; Chellappa, Rama

    2014-11-01

    We present a two-stage approach for learning dictionaries for object classification tasks based on the principle of information maximization. The proposed method seeks a dictionary that is compact, discriminative, and generative. In the first stage, dictionary atoms are selected from an initial dictionary by maximizing the mutual information measure on dictionary compactness, discrimination and reconstruction. In the second stage, the selected dictionary atoms are updated for improved reconstructive and discriminative power using a simple gradient ascent algorithm on mutual information. Experiments using real data sets demonstrate the effectiveness of our approach for image classification tasks.

  20. Physics Without Physics. The Power of Information-theoretical Principles

    Science.gov (United States)

    D'Ariano, Giacomo Mauro

    2017-01-01

    David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, has the paradigm concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole of physics over logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from the quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of quantum cellular automata and quantum walks. A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which…

  1. Information-Theoretic Benchmarking of Land Surface Models

    Science.gov (United States)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, which describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed…

  3. Research on the Theoretic Method and Application of the Urban Form Information TUPU

    Institute of Scientific and Technical Information of China (English)

    郭瑛琦; 齐清文; 姜莉莉; 张岸; 任建顺; 王晓山

    2011-01-01

    displayed after the process of imagery thinking and abstract summary, using computer-based multi-dimensional and dynamic visualization technology. It supplies a new method for the urban form research field. After defining the concept of the urban form information TUPU, the normal thinking rule and the technical process are summarized, and the theoretical frame of the urban form information TUPU is built. Then, by integrating spatial data with statistical figures, the urban form classification TUPU of China is built, using the indices of compactness and fractal dimension to identify different categories; cities of China are roughly divided into six categories, i.e., centralized form, ribbon form, radial form, twin-city form, ribbon-group form and centralized-group form. Next, the normal process by which the Geo-Info-TUPU guides city planning is put forward: calculating the quantitative indexes; determining the urban form; analyzing the impact factors; identifying the city's development period; and putting forward planning suggestions. At the end, the advantages and disadvantages of the urban form information TUPU are summarized. The advantages include vividness, accuracy and practicality, whereas the disadvantage is that basic knowledge of the city is required, because the TUPU analysis is based on empirical hypotheses. The Geo-Info-TUPU supplies a new method in the urban form research field, and it can be used to diagnose regular patterns and predict the future.

  4. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    Science.gov (United States)

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…
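
    The record does not spell out the construction, but one plausible reading of "an image represented as a single number" is an entropy signature. A minimal sketch under that assumption (synthetic images; the function names are ours, not the authors'):

        import numpy as np

        def entropy_signature(image, bins=256):
            """Represent an image by one number: the entropy of its gray-level histogram."""
            hist, _ = np.histogram(image, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def retrieve(query, database):
            """Rank database images by closeness of entropy signatures to the query."""
            q = entropy_signature(query)
            return sorted(range(len(database)),
                          key=lambda i: abs(entropy_signature(database[i]) - q))

        rng = np.random.default_rng(1)
        # Synthetic stand-ins for images, with different contrast (hence entropy).
        db = [np.clip(rng.normal(128, 8 * (i + 1), (64, 64)), 0, 255) for i in range(10)]
        print(retrieve(db[3], db))  # index 3 ranks first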

  5. Theoretical perspectives on learning in an informal setting

    Science.gov (United States)

    Anderson, David; Lucas, Keith B.; Ginns, Ian S.

    2003-02-01

    Research into learning in informal settings such as museums has been in a formative state during the past decade, and much of that research has been descriptive and lacking a theory base. In this article, it is proposed that the human constructivist view of learning can guide research and assist the interpretation of research data because it recognizes an individual's prior knowledge and active involvement in knowledge construction during a museum visit. This proposal is supported by reference to the findings of a previously reported interpretive case study, which included concept mapping and semistructured interviews, of the knowledge transformations of three Year 7 students who had participated in a class visit to a science museum and associated postvisit activities. The findings from that study are shown in this report to be consistent with the human constructivist view of learning in that, for all three students, learning was found to be at times incremental and at other times to involve substantial restructuring of knowledge. Thus, we consider that the human constructivist view of learning has much merit and utility for researchers investigating the development of knowledge and understanding emergent from experiences in informal settings. The theoretical and practical implications of these findings for teachers and staff of museums and similar institutions are also discussed.

  6. Towards integrating control and information theories from information-theoretic measures to control performance limitations

    CERN Document Server

    Fang, Song; Ishii, Hideaki

    2017-01-01

    This book investigates the performance limitation issues in networked feedback systems. The fact that networked feedback systems consist of control and communication devices and systems calls for the integration of control theory and information theory. The primary contributions of this book lie in two aspects: the newly-proposed information-theoretic measures and the newly-discovered control performance limitations. We first propose a number of information notions to facilitate the analysis. Using those notions, classes of performance limitations of networked feedback systems, as well as state estimation systems, are then investigated. In general, the book presents a unique, cohesive treatment of performance limitation issues of networked feedback systems via an information-theoretic approach. This book is believed to be the first to treat the aforementioned subjects systematically and in a unified manner, offering a unique perspective differing from existing books.

  7. Information System Quality Assessment Methods

    OpenAIRE

    Korn, Alexandra

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, focusing mainly on process assessment. In this work the term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  9. An information theoretic approach for combining neural network process models.

    Science.gov (United States)

    Sridhar, D V.; Bartlett, E B.; Seagrave, R C.

    1999-07-01

    Typically, neural network modelers in chemical engineering focus on identifying and using a single, hopefully optimal, neural network model. Using a single optimal model implicitly assumes that one neural network model can extract all the information available in a given data set and that the other candidate models are redundant. In general, there is no assurance that any individual model has extracted all relevant information from the data set. Recently, Wolpert (Neural Networks, 5(2), 241 (1992)) proposed the idea of stacked generalization to combine multiple models. Sridhar, Seagrave and Bartlett (AIChE J., 42, 2529 (1996)) implemented stacked generalization for neural network models by integrating multiple neural networks into an architecture known as stacked neural networks (SNNs). SNNs consist of a combination of the candidate neural networks and were shown to provide improved modeling of chemical processes. However, in Sridhar's work SNNs were limited to using a linear combination of artificial neural networks. While a linear combination is simple and easy to use, it can utilize only those model outputs that have a high linear correlation to the output. Models that are useful in a nonlinear sense are wasted if a linear combination is used. In this work we propose an information theoretic stacking (ITS) algorithm for combining neural network models. The ITS algorithm identifies and combines useful models regardless of the nature of their relationship to the actual output. The power of the ITS algorithm is demonstrated through three examples, including application to a dynamic process modeling problem. The results obtained demonstrate that SNNs developed using the ITS algorithm can achieve highly improved performance compared to selecting and using a single, hopefully optimal, network or using SNNs based on a linear combination of neural networks.
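
    The advantage of an information measure over linear correlation can be shown in a few lines. In this hypothetical sketch (a histogram mutual information estimator and synthetic "model outputs" of our own devising, not the ITS algorithm itself), a candidate related to the target only quadratically scores near zero on correlation yet high on mutual information, so an information-based stacking criterion would keep it:

        import numpy as np

        def mutual_info(x, y, bins=16):
            """Plug-in mutual information estimate in bits."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            m = pxy > 0
            return float((pxy[m] * np.log2(pxy[m] / (px @ py)[m])).sum())

        rng = np.random.default_rng(2)
        t = rng.uniform(-2, 2, 4000)                            # actual process output
        candidates = {
            "linear":    t + 0.3 * rng.normal(size=t.size),
            "quadratic": t**2 + 0.3 * rng.normal(size=t.size),  # useless to a linear combiner
            "noise":     rng.normal(size=t.size),
        }
        for name, out in candidates.items():
            print(f"{name:10s} corr={np.corrcoef(out, t)[0, 1]:+.2f}"
                  f"  MI={mutual_info(out, t):.2f} bits")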

  10. From the scientific method to the clinical method: theoretical considerations

    Directory of Open Access Journals (Sweden)

    Roberto Hernández Hernández

    2010-12-01

    Full Text Available The scientific method is a general method composed of several stages necessary for the development of any scientific research. It is the approach used to study reality and natural phenomena, reality itself and thought, in order to find their essence and interrelationships. The clinical method is the particular application of the scientific method, and under the present economic conditions its use is crucial because of the advantages it offers from this point of view, as well as for the wellbeing of the patient.

  11. Information Theoretic Inequalities as Bounds in Superconformal Field Theory

    CERN Document Server

    Zhou, Yang

    2016-01-01

    An information theoretic approach to bounds in superconformal field theories is proposed. It is proved that the supersymmetric Rényi entropy $\bar S_\alpha$ is a monotonically decreasing function of $\alpha$ and $(\alpha-1)\bar S_\alpha$ is a concave function of $\alpha$. Under the assumption that the thermal entropy associated with the "replica trick" time circle is bounded from below by the charge in the supersymmetric system, it is further proved that both ${\alpha-1\over \alpha}\bar S_\alpha$ and $(\alpha-1)\bar S_\alpha$ monotonically increase as functions of $\alpha$. Because $\bar S_\alpha$ enjoys universal relations with the Weyl anomaly coefficients in even-dimensional superconformal field theories, one therefore obtains a set of bounds on these coefficients by imposing the inequalities of $\bar S_\alpha$. Some of the bounds coincide with the Hofman-Maldacena bounds and the others are new. We also check the inequalities for examples in odd dimensions.
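
    For reference, the four statements above can be collected in compact form (this restates the abstract's claims in our notation and adds nothing new):

        \partial_\alpha \bar S_\alpha \le 0, \qquad
        \partial_\alpha^2 \left[ (\alpha-1)\,\bar S_\alpha \right] \le 0,

        \partial_\alpha \left[ \frac{\alpha-1}{\alpha}\,\bar S_\alpha \right] \ge 0, \qquad
        \partial_\alpha \left[ (\alpha-1)\,\bar S_\alpha \right] \ge 0,

    where the second pair holds under the stated lower bound on the thermal entropy by the charge.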

  12. THEORETICAL STUDY OF THREE-DIMENSIONAL NUMERICAL MANIFOLD METHOD

    Institute of Scientific and Technical Information of China (English)

    LUO Shao-ming; ZHANG Xiang-wei; L(U) Wen-ge; JIANG Dong-ru

    2005-01-01

    The three-dimensional numerical manifold method (NMM) is studied on the basis of the two-dimensional numerical manifold method. The three-dimensional cover displacement function is studied. The mechanical analysis and the Hammer integration method of the three-dimensional numerical manifold method are put forward. The stiffness matrix of the three-dimensional manifold element is derived and the dissection rules are given. The theoretical system and the numerical realization of the three-dimensional numerical manifold method are systematically studied. As an example, a cantilever loaded at its end is calculated, and the results show that the precision and efficiency are satisfactory.

  13. Online Information Source & Access Method

    OpenAIRE

    2009-01-01

    Online resources play an important role in the research and development of a country, so LIS professionals are interested in accessing, and guiding users and readers to, all available information sources. This article highlights and describes the availability of online information sources, open access e-journals, and their access methods.

  14. Information theoretic multiscale truncated SVD for multilead electrocardiogram.

    Science.gov (United States)

    Sharma, L N

    2016-06-01

    In this paper an information theory based multiscale singular value decomposition (SVD) is proposed for multilead electrocardiogram (ECG) signal processing. The shrinkage of singular values for the different multivariate multiscale matrices at wavelet scales is based on information content. It aims to capture and preserve the information of clinically important local waves like P-waves, Q-waves, T-waves and QRS-complexes. The information is derived through a clinically relevant multivariate multiscale entropy in the SVD domain, modifying Shannon's entropy. This optimizes the approximate ranks of the matrices so as to capture the clinical components of ECG signals appearing at different scales. A newly introduced multivariate clinical distortion (MCD) metric is computed and compared with existing subjective and objective signal distortion measures. The proposed method is tested with records from the CSE multilead measurement library and the PTB diagnostic ECG database for various pathological cases. It gives average percentage root mean square difference (PRD), average normalized root mean square error (NRMSE) and average wavelet energy based diagnostic distortion (WEDD) values of 5.8879%, 0.0059 and 1.0760%, respectively, for myocarditis pathology; the corresponding MCD value is 1.9429%. The highest average PRD and average WEDD values are 11.4053% and 5.5194% for cardiomyopathy, with a corresponding MCD value of 1.4003%. Based on WEDD values and mean opinion scores (MOS), the quality group of all processed signals falls under the excellent category.
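
    The record's exact entropy criterion is not reproduced here, but the flavor of an information-based rank choice can be sketched with the "effective rank" of the singular-value distribution (Roy and Vetterli's definition); the synthetic 12-lead matrix below is a hypothetical stand-in for an ECG segment:

        import numpy as np

        def effective_rank(s):
            """Exponential of the Shannon entropy of the normalized singular values."""
            p = s / s.sum()
            p = p[p > 0]
            return int(np.ceil(np.exp(-(p * np.log(p)).sum())))

        def truncate(X):
            """Truncated SVD of a leads x samples matrix, rank set by entropy."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            r = effective_rank(s)
            return U[:, :r] * s[:r] @ Vt[:r], r

        rng = np.random.default_rng(3)
        # Rank-3 "signal" across 12 leads plus measurement noise.
        X = rng.normal(size=(12, 3)) @ rng.normal(size=(3, 2000)) \
            + 0.05 * rng.normal(size=(12, 2000))
        Xr, r = truncate(X)
        prd = 100 * np.linalg.norm(X - Xr) / np.linalg.norm(X)  # PRD, as in the abstract
        print(f"rank kept: {r}   PRD: {prd:.2f}%")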

  15. Theoretical foundations of information security investment security companies

    Directory of Open Access Journals (Sweden)

    G.V. Berlyak

    2015-03-01

    Full Text Available The paper addresses methodological problems related to the lack of guidance in the provisions (standards) of accounting on reflecting the research object in accounting and financial reporting. In this connection, it is proposed to amend the provisions (standards) of accounting, which will make it possible to achieve consistency in the accounting treatment of operations with elements of investment activity. Based on an analysis of users' information needs, indicative blocks are suggested (a corporate finance block; a block assessing relationships with financial institutions; a block for the fulfilment of settlement obligations; an investment block; a science and innovation block; and an investment security block), and forms of internal accounting controls and improvements to the existing financial statement forms for the investment activities of the enterprise are developed. Using these enterprise reporting forms provides timely and reliable information on the identity and structure of investment security and enables the company to effectively plan and develop personnel policies for enterprise management.

  16. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    OpenAIRE

    Yun Chen; Hui Yang

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic pe...

  17. Visual words assignment via information-theoretic manifold embedding.

    Science.gov (United States)

    Deng, Yue; Li, Yipeng; Qian, Yanjun; Ji, Xiangyang; Dai, Qionghai

    2014-10-01

    Codebook-based learning provides a flexible way to extract the contents of an image in a data-driven manner for visual recognition. One central task in such frameworks is codeword assignment, which allocates local image descriptors to the most similar codewords in the dictionary to generate a histogram for categorization. Nevertheless, existing assignment approaches, e.g., the nearest neighbors strategy (hard assignment) and Gaussian similarity (soft assignment), suffer from two problems: 1) an overly strong Euclidean assumption and 2) neglect of the label information of the local descriptors. To address these two challenges, we propose a graph assignment method with maximal mutual information (GAMI) regularization. GAMI uses the power of manifold structure to better reveal the relationships among a massive number of local features through a nonlinear graph metric. Meanwhile, the mutual information of descriptor-label pairs is optimized in the embedding space so as to enhance the discriminant property of the selected codewords. According to this objective, two optimization models, inexact-GAMI and exact-GAMI, are respectively proposed in this paper. The inexact model can be efficiently solved with a closed-form solution. The stricter exact-GAMI nonparametrically estimates the entropy of descriptor-label pairs in the embedding space and thus leads to a relatively complicated but still tractable optimization. The effectiveness of the GAMI models is verified on both public datasets and our own datasets.

  18. Theoretical Analysis of Heuristic Search Methods for Online POMDPs.

    Science.gov (United States)

    Ross, Stéphane; Pineau, Joelle; Chaib-Draa, Brahim

    2008-01-01

    Planning in partially observable environments remains a challenging problem, despite significant recent advances in offline approximation techniques. A few online methods have also been proposed recently, and proven to be remarkably scalable, but without the theoretical guarantees of their offline counterparts. Thus it seems natural to try to unify offline and online techniques, preserving the theoretical properties of the former, and exploiting the scalability of the latter. In this paper, we provide theoretical guarantees on an anytime algorithm for POMDPs which aims to reduce the error made by approximate offline value iteration algorithms through the use of an efficient online searching procedure. The algorithm uses search heuristics based on an error analysis of lookahead search, to guide the online search towards reachable beliefs with the most potential to reduce error. We provide a general theorem showing that these search heuristics are admissible, and lead to complete and ε-optimal algorithms. This is, to the best of our knowledge, the strongest theoretical result available for online POMDP solution methods. We also provide empirical evidence showing that our approach is also practical, and can find (provably) near-optimal solutions in reasonable time.

  19. THEORETICAL APPROACHES TO THE DEFINITION OF THE "INFORMATION RESOURCE"

    OpenAIRE

    Netreba, I.

    2014-01-01

    Existing approaches to determining the nature of the category "information resource" are detailed and systematized. The relationships between the categories "information resource", "information technology" and "information management system" are revealed. The importance of information resources for the production process at the enterprise is determined.

  20. Predictive coding and the slowness principle: an information-theoretic approach.

    Science.gov (United States)

    Creutzig, Felix; Sprekeler, Henning

    2008-04-01

    Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
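
    The linear case discussed in the abstract is compact enough to sketch directly. Under the standard formulation of linear SFA (whiten the signal, then take the directions in which the time derivative has the least variance), a slow source mixed with a fast one is recovered as the first output; all data here are synthetic:

        import numpy as np

        def linear_sfa(X):
            """Linear slow feature analysis on X (time x features); outputs ordered slowest first."""
            X = X - X.mean(axis=0)
            d, E = np.linalg.eigh(np.cov(X, rowvar=False))
            Z = X @ (E / np.sqrt(d))                 # whitened signal, identity covariance
            dd, V = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
            return Z @ V                             # eigh sorts ascending = slowest first

        t = np.linspace(0, 2 * np.pi, 2000)
        sources = np.column_stack([np.sin(t), np.sin(20 * t)])  # slow and fast sources
        X = sources @ np.array([[0.6, 0.8], [0.7, -0.4]])       # unknown mixing
        Y = linear_sfa(X)
        print(np.corrcoef(Y[:, 0], np.sin(t))[0, 1])  # close to +/-1: slow source recovered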

  1. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory.

    Science.gov (United States)

    Agres, Kat; Abdallah, Samer; Pearce, Marcus

    2017-01-25

    A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different types of statistical information affect listeners' memory for auditory stimuli. We used a combination of behavioral and computational methods to investigate memory for non-linguistic auditory sequences. Participants repeatedly heard tone sequences varying systematically in their information-theoretic properties. Expectedness ratings of tones were collected during three listening sessions, and a recognition memory test was given after each session. Information-theoretic measures of sequential predictability significantly influenced listeners' expectedness ratings, and variations in these properties had a significant impact on memory performance. Predictable sequences yielded increasingly better memory performance with increasing exposure. Computational simulations using a probabilistic model of auditory expectation suggest that listeners dynamically formed a new, and increasingly accurate, implicit cognitive model of the information-theoretic structure of the sequences throughout the experimental session.

  2. A theoretical basis for the Harmonic Balance Method

    CERN Document Server

    García-Saldaña, Johanna D

    2012-01-01

    The Harmonic Balance method provides a heuristic approach for finding truncated Fourier series as approximations to the periodic solutions of ordinary differential equations. Another natural way of obtaining approximations of this type consists in applying numerical methods. In this paper we recover the pioneering results of Stokes and Urabe, which provide a theoretical basis for proving that near these truncated series, however they have been obtained, there are actual periodic solutions of the equation. We restrict our attention to one-dimensional non-autonomous ordinary differential equations and apply the results obtained to a couple of concrete examples coming from planar autonomous systems.
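
    As standardly presented (our notation; the record itself gives no formulas), the method seeks a truncated Fourier approximation to a T-periodic solution of x' = f(t, x),

        x_N(t) = a_0 + \sum_{k=1}^{N} \left( a_k \cos\frac{2\pi k t}{T} + b_k \sin\frac{2\pi k t}{T} \right),

    with the coefficients fixed by requiring that the projections of the residual x_N'(t) - f(t, x_N(t)) onto the retained harmonics vanish. Stokes- and Urabe-type results then bound the distance from x_N to an actual periodic solution.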

  3. On the Radau pseudospectral method: theoretical and implementation advances

    Science.gov (United States)

    Sagliano, Marco; Theil, Stephan; Bergsma, Michiel; D'Onofrio, Vincenzo; Whittle, Lisa; Viavattene, Giulia

    2017-09-01

    In the last decades the theoretical development of more and more refined direct methods, together with a new generation of CPUs, has led to a significant improvement of numerical approaches for solving optimal-control problems. One of the most promising classes of methods is based on pseudospectral optimal control. These methods not only provide an efficient algorithm to solve optimal-control problems, but also define a theoretical framework for linking the discrete numerical solution to the analytical one by virtue of the covector mapping theorem. However, several aspects of their implementation can be refined. In this framework SPARTAN, the first European tool based on the flipped-Radau pseudospectral method, has been developed. This paper illustrates the aspects implemented for SPARTAN, which can potentially be valid for any other transcription. The novelties of this work consist specifically of a new hybridization of the Jacobian matrix computation made of four distinct parts. These contributions include a new analytical formulation for expressing the Lagrange cost function for open final-time problems, and the use of dual-number theory for ensuring exact differentiation. Moreover, a self-scaling strategy for primal and dual variables, which combines projected-Jacobian row normalization and the covector mapping, is described. Three concrete examples show the validity of the novelties introduced, and the quality of the results obtained with the proposed methods.
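
    The dual-number trick mentioned above is easy to demonstrate. A minimal sketch (our own toy class, not SPARTAN's implementation): carry a derivative component through arithmetic so that evaluating a function at Dual(x, 1) returns both its value and its exact derivative, free of finite-difference error:

        class Dual:
            """Dual number a + b*eps with eps**2 == 0; .du carries the exact derivative."""
            def __init__(self, re, du=0.0):
                self.re, self.du = re, du

            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.re + o.re, self.du + o.du)
            __radd__ = __add__

            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.re * o.re, self.re * o.du + self.du * o.re)
            __rmul__ = __mul__

        def f(x):                 # an example dynamics term: f(x) = x^3 + 2x
            return x * x * x + 2 * x

        y = f(Dual(3.0, 1.0))     # seed dx/dx = 1
        print(y.re, y.du)         # 33.0 and f'(3) = 29.0, exact to machine precision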

  4. Spike train analysis toolkit: enabling wider application of information-theoretic techniques to neurophysiology.

    Science.gov (United States)

    Goldberg, David H; Victor, Jonathan D; Gardner, Esther P; Gardner, Daniel

    2009-09-01

    Conventional methods widely available for the analysis of spike trains and related neural data include various time- and frequency-domain analyses, such as peri-event and interspike interval histograms, spectral measures, and probability distributions. Information theoretic methods are increasingly recognized as significant tools for the analysis of spike train data. However, developing robust implementations of these methods can be time-consuming, and determining applicability to neural recordings can require expertise. In order to facilitate more widespread adoption of these informative methods by the neuroscience community, we have developed the Spike Train Analysis Toolkit. STAToolkit is a software package which implements, documents, and guides application of several information-theoretic spike train analysis techniques, thus minimizing the effort needed to adopt and use them. This implementation behaves like a typical Matlab toolbox, but the underlying computations are coded in C for portability, optimized for efficiency, and interfaced with Matlab via the MEX framework. STAToolkit runs on any of three major platforms: Windows, Mac OS, and Linux. The toolkit reads input from files with an easy-to-generate text-based, platform-independent format. STAToolkit, including full documentation and test cases, is freely available open source via http://neuroanalysis.org , maintained as a resource for the computational neuroscience and neuroinformatics communities. Use cases drawn from somatosensory and gustatory neurophysiology, and community use of STAToolkit, demonstrate its utility and scope.
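
    STAToolkit itself is a Matlab/C package; purely to illustrate the simplest member of this family of techniques, here is a plain-Python plug-in ("direct") estimate of spike-word entropy, with binning choices of our own:

        import numpy as np
        from collections import Counter

        def word_entropy(spike_times, bin_ms=2.0, word_bins=8, duration_ms=1000.0):
            """Entropy (bits/word) of the empirical distribution of binary spike words."""
            edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
            letters = np.histogram(spike_times, bins=edges)[0] > 0
            words = [tuple(letters[i:i + word_bins])
                     for i in range(len(letters) - word_bins + 1)]
            freq = np.array(list(Counter(words).values()), dtype=float)
            p = freq / freq.sum()
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(4)
        poisson_train = np.sort(rng.uniform(0, 1000, 50))  # irregular ~50 Hz train
        regular_train = np.arange(0, 1000, 20.0)           # perfectly regular 50 Hz
        print(word_entropy(poisson_train), word_entropy(regular_train))
        # the regular train carries far less word entropy than the irregular one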

  5. Theoretical development of information science: A brief history

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    This paper presents a brief history of information science (IS) as viewed by the author. The term ‘information science’ goes back to 1955 and evolved in the aftermath of Claude Shannon’s ‘information theory’ (1948), which also inspired research into problems in fields of library science... have remained implicit in the field much of the time.

  6. An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings

    Directory of Open Access Journals (Sweden)

    Siaw-Teng Liaw

    2014-10-01

    Full Text Available Introduction: Increasing investment in eHealth aims to improve the cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and the use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. Methods: We searched PubMed, Medline, Web of Science, ABI Inform (ProQuest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. Findings: There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly ‘big-data’ environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. Conclusions: The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine its applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.

  7. Parametric Sensitivity Analysis for Stochastic Molecular Systems using Information Theoretic Metrics

    CERN Document Server

    Tsourtis, Anastasios; Katsoulakis, Markos A; Harmandaris, Vagelis

    2014-01-01

    In this paper we extend the parametric sensitivity analysis (SA) methodology proposed in Ref. [Y. Pantazis and M. A. Katsoulakis, J. Chem. Phys. 138, 054115 (2013)] to continuous time and continuous space Markov processes represented by stochastic differential equations and, particularly, stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions. A major advantage of the pathwise SA method is that both RER and the pathwise FIM depend only on averages of the force field; therefore they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation, both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method on two different molecular stochastic systems, a standard Lennard-Jones fluid and an al...

  8. Concepts and methods in modern theoretical chemistry statistical mechanics

    CERN Document Server

    Ghosh, Swapan Kumar

    2013-01-01

    Concepts and Methods in Modern Theoretical Chemistry: Statistical Mechanics, the second book in a two-volume set, focuses on the dynamics of systems and phenomena. A new addition to the series Atoms, Molecules, and Clusters, this book offers chapters written by experts in their fields. It enables readers to learn how concepts from ab initio quantum chemistry and density functional theory (DFT) can be used to describe, understand, and predict chemical dynamics. This book covers a wide range of subjects, including discussions on the following topics: Time-dependent DFT Quantum fluid dynamics (QF

  9. Toward a Critical Theoretic Perspective in Information Systems.

    Science.gov (United States)

    Benoit, Gerald

    2002-01-01

    Considers the logico-analytic philosophy of library and information science (LIS); discusses Jurgen Habermas' theory of communicative action; examines how LIS, particularly research into librarian-patron interaction and information system design, favors an empiricist view of language and thus may be limiting its effectiveness; and suggests the…

  10. Information theoretical performance measure for associative memories and its application to neural networks.

    Science.gov (United States)

    Schlüter, M; Kerschhaggl, O; Wagner, F

    1999-08-01

    We present a general performance measure (information loss) for associative memories based on information theoretical concepts. This performance measure can be estimated, provided that mean values of observables have been determined for the associative memory. Then the estimation guarantees a minimal association quality. The formalism allows the application of the performance measure to complex systems where the relation between input and output of the associative memory is not explicitly known. Here we apply our formalism to the Hopfield model and estimate the storage capacity alpha(c) from the numerically determined information loss. In contrast to other numerical methods the whole overlap distribution is taken into account. Our numerical value alpha(c)=0.1379(4) for the storage capacity in the Hopfield model is below numerical values obtained previously. This indicates that the consideration of small remnant overlaps lowers the storage capacity of the Hopfield model.
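
    For orientation, the capacity being pinned down can be observed in a toy simulation (a plain Hebbian Hopfield network with zero-temperature dynamics; this illustrates the phenomenon, not the authors' information-loss estimator):

        import numpy as np

        rng = np.random.default_rng(5)
        N = 400  # number of units

        def retrieval_overlap(alpha, steps=20):
            """Store P = alpha*N random patterns, start on one, report final overlap."""
            P = max(1, int(alpha * N))
            xi = rng.choice([-1, 1], size=(P, N))
            J = (xi.T @ xi) / N                    # Hebbian couplings
            np.fill_diagonal(J, 0.0)
            s = xi[0].copy()
            for _ in range(steps):                 # synchronous updates
                s = np.sign(J @ s + 1e-12)
            return abs(s @ xi[0]) / N

        for alpha in (0.05, 0.10, 0.14, 0.20):
            m = np.mean([retrieval_overlap(alpha) for _ in range(5)])
            print(f"alpha={alpha:.2f}  mean overlap={m:.2f}")
        # overlap stays near 1 below capacity and degrades above alpha_c ~ 0.138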

  11. Group-theoretical method for physical property tensors of quasicrystals

    Institute of Scientific and Technical Information of China (English)

    Gong Ping; Hu Cheng-Zheng; Zhou Xiang; Wang Ai-Jun; Miao Ling

    2006-01-01

    In addition to the phonon variable there is the phason variable in hydrodynamics for quasicrystals. These two kinds of hydrodynamic variables have different transformation properties. The phonon variable transforms under the vector representation, whereas the phason variable transforms under another related representation. Thus, a basis (or a set of basis functions) in the representation space should include such two kinds of variables. This makes it more difficult to determine the physical property tensors of quasicrystals. In this paper the group-theoretical method is given to determine the physical property tensors of quasicrystals. As an illustration of this method we calculate the third-order elasticity tensors of quasicrystals with five-fold symmetry by means of basis functions. It follows that the linear phonon elasticity is isotropic, but the nonlinear phonon elasticity is anisotropic for pentagonal quasicrystals. Meanwhile, the basis functions are constructed for all noncrystallographic point groups of quasicrystals.

  12. Theoretical physics 7 quantum mechanics : methods and applications

    CERN Document Server

    Nolting, Wolfgang

    2017-01-01

    This textbook offers a clear and comprehensive introduction to methods and applications in quantum mechanics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, thus developing the understanding of quantized states further on. The first part of the book introduces the quantum theory of angular momentum and approximation methods. More complex themes are covered in the second part of the book, which describes multiple particle systems and scattering theory. Ideally suited to undergraduate students with some grounding in the basics of quantum mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets.  About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this seri...

  13. Theoretical and experimental physical methods of neutron-capture therapy

    Science.gov (United States)

    Borisov, G. I.

    2011-09-01

    This review is based to a substantial degree on our priority developments and research at the IR-8 reactor of the Russian Research Centre Kurchatov Institute. New theoretical and experimental methods of neutron-capture therapy are developed and applied in practice; these are: a general analytical and semi-empirical theory of neutron-capture therapy (NCT) based on classical neutron physics and its main sections (elementary theories of moderation, diffusion, reflection, and absorption of neutrons) rather than on methods of mathematical simulation. The theory is, first of all, intended for practical application by physicists, engineers, biologists, and physicians. It can be mastered by anyone with a higher education of almost any kind and minimal experience in operating a personal computer.

  14. Integrated information in discrete dynamical systems: motivation and theoretical framework.

    Directory of Open Access Journals (Sweden)

    David Balduzzi

    2008-06-01

    Full Text Available This paper introduces a time- and state-dependent measure of integrated information, phi, which captures the repertoire of causal states available to a system as a whole. Specifically, phi quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) phi varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) phi varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) phi varies as a function of network architecture. High phi values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high phi because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high phi but are inefficient. (iv) In Hopfield networks, phi is low for attractor states and neutral states, but increases if the networks

  15. A field theoretical approach to the quasi-continuum method

    Science.gov (United States)

    Iyer, Mrinal; Gavini, Vikram

    2011-08-01

    The quasi-continuum method has provided many insights into the behavior of lattice defects in the past decade. However, recent numerical analysis suggests that the approximations introduced in various formulations of the quasi-continuum method lead to inconsistencies—namely, appearance of ghost forces or residual forces, non-conservative nature of approximate forces, etc.—which affect the numerical accuracy and stability of the method. In this work, we identify the source of these errors to be the incompatibility of using quadrature rules, which is a local notion, on a non-local representation of energy. We eliminate these errors by first reformulating the extended interatomic interactions into a local variational problem that describes the energy of a system via potential fields. We subsequently introduce the quasi-continuum reduction of these potential fields using an adaptive finite-element discretization of the formulation. We demonstrate that the present formulation resolves the inconsistencies present in previous formulations of the quasi-continuum method, and show using numerical examples the remarkable improvement in the accuracy of solutions. Further, this field theoretic formulation of quasi-continuum method makes mathematical analysis of the method more amenable using functional analysis and homogenization theories.

  16. Information-theoretic analysis of a stimulated-Brillouin-scattering-based slow-light system.

    Science.gov (United States)

    Lee, Myungjun; Zhu, Yunhui; Gauthier, Daniel J; Gehm, Michael E; Neifeld, Mark A

    2011-11-10

    We use an information-theoretic method developed by Neifeld and Lee [J. Opt. Soc. Am. A 25, C31 (2008)] to analyze the performance of a slow-light system. Slow-light is realized in this system via stimulated Brillouin scattering in a 2 km-long, room-temperature, highly nonlinear fiber pumped by a laser whose spectrum is tailored and broadened to 5 GHz. We compute the information throughput (IT), which quantifies the fraction of information transferred from the source to the receiver and the information delay (ID), which quantifies the delay of a data stream at which the information transfer is largest, for a range of experimental parameters. We also measure the eye-opening (EO) and signal-to-noise ratio (SNR) of the transmitted data stream and find that they scale in a similar fashion to the information-theoretic method. Our experimental findings are compared to a model of the slow-light system that accounts for all pertinent noise sources in the system as well as data-pulse distortion due to the filtering effect of the SBS process. The agreement between our observations and the predictions of our model is very good. Furthermore, we compare measurements of the IT for an optimal flattop gain profile and for a Gaussian-shaped gain profile. For a given pump-beam power, we find that the optimal profile gives a 36% larger ID and somewhat higher IT compared to the Gaussian profile. Specifically, the optimal (Gaussian) profile produces a fractional slow-light ID of 0.94 (0.69) and an IT of 0.86 (0.86) at a pump-beam power of 450 mW and a data rate of 2.5 Gbps. Thus, the optimal profile better utilizes the available pump-beam power, which is often a valuable resource in a system design.

  17. Document Clustering using Sequential Information Bottleneck Method

    CERN Document Server

    Gayathri, P J; Punithavalli, M

    2010-01-01

    This paper illustrates the Principal Direction Divisive Partitioning (PDDP) algorithm, describes its drawbacks, and introduces a combinatorial framework for the PDDP algorithm. It then describes a simplified version of the EM algorithm, called the spherical Gaussian EM (sGEM) algorithm, and the Information Bottleneck (IB) method, a technique for trading off accuracy, complexity and time/space requirements. The PDDP algorithm recursively splits the data samples into two sub-clusters using the hyperplane normal to the principal direction derived from the covariance matrix, which is the central logic of the algorithm. However, the PDDP algorithm can yield poor results, especially when clusters are not well separated from one another. To improve the quality of the clustering results, this problem is resolved by reallocating cluster memberships using the IB algorithm with different settings. The IB method gives accuracy, but its time consumption is higher. Furthermore, based on the theoretical backgr...
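
    The PDDP splitting step described above fits in a few lines: center the document-term matrix, project onto its leading principal direction, and split by sign. A minimal sketch on synthetic two-topic data (names and data are ours):

        import numpy as np

        def pddp_split(X):
            """One PDDP step: split rows of X (documents x terms) by the sign
            of their projection onto the leading principal direction."""
            C = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(C, full_matrices=False)
            proj = C @ Vt[0]
            return np.where(proj >= 0)[0], np.where(proj < 0)[0]

        rng = np.random.default_rng(6)
        # Two synthetic "topics": documents concentrated on different term blocks.
        topic_a = rng.poisson(3.0, (20, 10)) * np.r_[np.ones(5), np.zeros(5)]
        topic_b = rng.poisson(3.0, (20, 10)) * np.r_[np.zeros(5), np.ones(5)]
        docs = np.vstack([topic_a, topic_b]).astype(float)
        left, right = pddp_split(docs)
        print(sorted(left), sorted(right))  # the two topic blocks are recovered

    In the paper's pipeline, the memberships produced by such splits are then reallocated with the IB criterion to repair cases where the separating hyperplane cuts through a true cluster.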

  18. Information theoretical quantification of cooperativity in signalling complexes

    DEFF Research Database (Denmark)

    Lenaerts, Tom; Ferkinghoff-Borg, Jesper; Schymkowitz, Joost

    2009-01-01

    Background: Intra-cellular information exchange, propelled by cascades of interacting signalling proteins, is essential for the proper functioning and survival of cells. Now that the interactome of several organisms is being mapped and several structural mechanisms of cooperativity at the molecul...

  19. Towards an Information Theoretic Analysis of Searchable Encryption

    NARCIS (Netherlands)

    Sedghi, S.; Doumen, J.M.; Hartel, P.H.; Jonker, W.

    2008-01-01

    Searchable encryption is a technique that allows a client to store data in encrypted form on a curious server, such that data can be retrieved while leaking a minimal amount of information to the server. Many searchable encryption schemes have been proposed and proved secure in their own computation

  20. Towards an Information Theoretic Analysis of Searchable Encryption (Extended Version)

    NARCIS (Netherlands)

    Sedghi, S.; Doumen, J.M.; Hartel, P.H.; Jonker, W.

    2008-01-01

    Searchable encryption is a technique that allows a client to store data in encrypted form on a curious server, such that data can be retrieved while leaking a minimal amount of information to the server. Many searchable encryption schemes have been proposed and proved secure in their own computation

  3. JIDT: An information-theoretic toolkit for studying the dynamics of complex systems

    CERN Document Server

    Lizier, Joseph T

    2014-01-01

    Complex systems are increasingly being viewed as distributed information processing systems, particularly in the domains of computational neuroscience, bioinformatics and Artificial Life. This trend has resulted in a strong uptake in the use of (Shannon) information-theoretic measures to analyse the dynamics of complex systems in these fields. We introduce the Java Information Dynamics Toolkit (JIDT): a Google code project which provides a standalone, (GNU GPL v3 licensed) open-source code implementation for empirical estimation of information-theoretic measures from time-series data. While the toolkit provides classic information-theoretic measures (e.g. entropy, mutual information, conditional mutual information), it ultimately focusses on implementing higher-level measures for information dynamics. That is, JIDT focusses on quantifying information storage, transfer and modification, and the dynamics of these operations in space and time. For this purpose, it includes implementations of the transfer entropy...
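
    JIDT itself is a Java toolkit, so rather than guess at its API, here is a plain-numpy plug-in estimate of one of the central "information dynamics" quantities it provides, transfer entropy, for binary series with history length 1 (all names are ours):

        import numpy as np

        def transfer_entropy(src, dst):
            """Plug-in TE(src -> dst) in bits: I(dst_{t+1}; src_t | dst_t)."""
            x, y = np.asarray(src), np.asarray(dst)
            triples = np.stack([y[1:], y[:-1], x[:-1]], axis=1)
            te = 0.0
            for t in {tuple(r) for r in triples}:
                yn, yp, xp = t
                p_xyz = np.mean((triples == t).all(axis=1))
                p_yx = np.mean((triples[:, 1:] == (yp, xp)).all(axis=1))
                p_yy = np.mean((triples[:, :2] == (yn, yp)).all(axis=1))
                p_y = np.mean(triples[:, 1] == yp)
                te += p_xyz * np.log2(p_xyz * p_y / (p_yx * p_yy))
            return te

        rng = np.random.default_rng(7)
        x = rng.integers(0, 2, 10000)
        y = np.roll(x, 1)              # y copies x with a one-step delay
        y[0] = 0
        print(transfer_entropy(x, y))  # ~1 bit: x determines y's next value
        print(transfer_entropy(y, x))  # ~0 bits in the reverse direction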

  4. Studies of Chinese speakers with dysarthria: informing theoretical models.

    Science.gov (United States)

    Whitehill, Tara L

    2010-01-01

    Most theoretical models of dysarthria have been developed based on research using individuals speaking English or other Indo-European languages. Studies of individuals with dysarthria speaking other languages can allow investigation into the universality of such models, and the interplay between language-specific and language-universal aspects of dysarthria. In this article, studies of Cantonese- and Mandarin-Chinese speakers with dysarthria are reviewed. The studies focused on 2 groups of speakers: those with cerebral palsy and those with Parkinson's disease. Key findings are compared with similar studies of English speakers. Since Chinese is tonal in nature, the impact of dysarthria on lexical tone has received considerable attention in the literature. The relationship between tone [which involves fundamental frequency (F0) control at the syllable level] and intonation (involving F0 control at the sentential level) has received more recent attention. Many findings for Chinese speakers with dysarthria support earlier findings for English speakers, thus affirming the language-universal aspect of dysarthria. However, certain differences, which can be attributed to the distinct phonologies of Cantonese and Mandarin, highlight the language-specific aspects of the condition.

  5. Proposing a Theoretical Framework for Digital Age Youth Information Behavior Building upon Radical Change Theory

    Science.gov (United States)

    Koh, Kyungwon

    2011-01-01

    Contemporary young people are engaged in a variety of information behaviors, such as information seeking, using, sharing, and creating. The ways youth interact with information have transformed in the shifting digital information environment; however, relatively little empirical research exists and no theoretical framework adequately explains…

  6. Information theoretic derivation of network architecture and learning algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Jones, R.D.; Barnes, C.W.; Lee, Y.C.; Mead, W.C.

    1991-01-01

    Using variational techniques, we derive a feedforward network architecture that minimizes a least squares cost function with the soft constraint that the mutual information between input and output be maximized. This permits optimum generalization for a given accuracy. A set of learning algorithms is also obtained. The network and learning algorithms are tested on a set of test problems which emphasize time series prediction. 6 refs., 1 fig.
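
    In our notation (the record gives no formulas), the stated variational objective has the form

        \min_{W} \; E\left[ \lVert y - y_{\mathrm{target}} \rVert^2 \right] \; - \; \lambda \, I(x; y),

    a least squares cost softly constrained by the input-output mutual information I(x; y), with the multiplier \lambda trading accuracy against generalization.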

  7. Experimental and Theoretical Methods in Algebra, Geometry and Topology

    CERN Document Server

    Veys, Willem; Bridging Algebra, Geometry, and Topology

    2014-01-01

    Algebra, geometry and topology cover a variety of different, but intimately related, research fields in modern mathematics. This book focuses on specific aspects of this interaction. The present volume contains refereed papers which were presented at the International Conference “Experimental and Theoretical Methods in Algebra, Geometry and Topology”, held in Eforie Nord (near Constanta), Romania, during 20-25 June 2013. The conference was devoted to the 60th anniversary of the distinguished Romanian mathematicians Alexandru Dimca and Ştefan Papadima. The selected papers consist of original research work and a survey paper. They are intended for a large audience, including researchers and graduate students interested in algebraic geometry, combinatorics, topology, hyperplane arrangements and commutative algebra. The papers are written by well-known experts from different fields of mathematics, affiliated with universities from all over the world; they cover a broad range of topics and explore the research f...

  8. Information-Theoretic Considerations Concerning the Origin of Life.

    Science.gov (United States)

    Adami, Christoph

    2015-09-01

    Research investigating the origins of life usually either focuses on exploring possible life-bearing chemistries in the pre-biotic Earth, or else on synthetic approaches. Comparatively little work has explored fundamental issues concerning the spontaneous emergence of life using only concepts (such as information and evolution) that are divorced from any particular chemistry. Here, I advocate studying the probability of spontaneous molecular self-replication as a function of the information contained in the replicator, and the environmental conditions that might enable this emergence. I show (under certain simplifying assumptions) that the probability to discover a self-replicator by chance depends exponentially on the relative rate of formation of the monomers. If the rate at which monomers are formed is somewhat similar to the rate at which they would occur in a self-replicating polymer, the likelihood to discover such a replicator by chance is increased by many orders of magnitude. I document such an increase in searches for a self-replicator within the digital life system avida.
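
    The exponential dependence can be made explicit (our notation, consistent with the abstract's claim): if a replicator requires a specific sequence of L monomers and monomer i forms with probability p_i, the chance of assembling it at random is

        P \;=\; \prod_{i=1}^{L} p_i \;=\; \exp\left( \sum_{i=1}^{L} \ln p_i \right),

    so shifting each p_i toward the replicator's composition, i.e. raising the relative rate of formation of its monomers, increases P by a factor exponential in L.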

  9. An Information-Theoretic Measure for Face Recognition: Comparison with Structural Similarity

    Directory of Open Access Journals (Sweden)

    Asmhan Flieh Hassan

    2014-11-01

    Full Text Available Automatic recognition of human faces is a challenging problem that has received significant attention from signal processing researchers in recent years. This is due to its several applications in different fields, including security and forensic analysis. Despite this attention, face recognition is still one of the most challenging problems. Up to this moment, there is no technique that provides a reliable solution in all situations. In this paper a novel technique for face recognition is presented. This technique, called ISSIM, is derived from our recently published information-theoretic similarity measure HSSIM, which was based on the joint histogram. Face recognition with ISSIM is likewise based on the joint histogram of a test image and database images. Performance evaluation was performed in MATLAB using part of the well-known AT&T image database, which consists of 49 face images; seven subjects were chosen, and for each subject seven views (poses) with different facial expressions. The goal of this paper is to present a simplified approach for face recognition that may work in real-time environments. The performance of our information-theoretic face recognition method (ISSIM) has been demonstrated experimentally and is shown to outperform the well-known statistics-based method (SSIM).
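
    A joint-histogram similarity in this spirit (not necessarily the paper's exact ISSIM formula) can be sketched by scoring an image pair with the mutual information of their joint gray-level histogram; the random arrays below are hypothetical stand-ins for face images:

        import numpy as np

        def joint_hist_mi(a, b, bins=32):
            """Mutual information (bits) of two equal-size gray images,
            estimated from their joint gray-level histogram."""
            h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                     range=[[0, 256], [0, 256]])
            pab = h / h.sum()
            pa = pab.sum(axis=1, keepdims=True)
            pb = pab.sum(axis=0, keepdims=True)
            m = pab > 0
            return float((pab[m] * np.log2(pab[m] / (pa @ pb)[m])).sum())

        def recognize(test, gallery):
            """Index of the gallery image most similar to the test image."""
            return int(np.argmax([joint_hist_mi(test, g) for g in gallery]))

        rng = np.random.default_rng(8)
        gallery = [rng.integers(0, 256, (56, 46)).astype(float) for _ in range(7)]
        probe = np.clip(gallery[2] + rng.normal(0, 10, (56, 46)), 0, 255)
        print(recognize(probe, gallery))  # 2: the noisy view matches subject 2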

  10. Information Theoretic Detection of Objects Embedded in Cluttered Aerial Scenes.

    Science.gov (United States)

    1982-07-01

    procedure must only be accomplished once for each template density and most numerical problems are avoided. The Cyclic Coordinate Method (Bazaraa, 1979:271...). [Cited reference: Bazaraa, Mokhtar S. and C. M. Shetty. Nonlinear Programming: Theory and Algorithms. New York: John Wiley and Sons, 1979.]

  11. Theoretical studies of potential energy surfaces and computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, R. [Argonne National Laboratory, IL (United States)

    1993-12-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  12. METHODS OF POLYMODAL INFORMATION TRANSMISSION

    Directory of Open Access Journals (Sweden)

    O. O. Basov

    2015-03-01

    Full Text Available The results of research on applying existing information transmission methods in polymodal infocommunication systems are presented herein. Analysis of the existing switching approaches and multiplexing schemes has revealed that modern telecommunication means are capable of providing polymodal information delivery with the required quality to the correspondent's customer terminal. Under these conditions, networks with static time-division multiplexing consume substantial capacity resources, but it is easier to achieve modality synchronization within that kind of infrastructure. Data networks with statistical time-division multiplexing demand more sophisticated algorithms for supporting guaranteed quality of data block delivery; moreover, owing to the stochastic delays of data blocks, modality synchronization during off-line processing is more difficult to provide. Nowadays there are objective preconditions for realizing a data network that is invariant to the transmission technology applied. This capability is enabled by the wide (person-to-person) application of optical technologies in the transport infrastructure of polymodal infocommunication systems. If the operation of the customer terminal and the network is matched, it becomes possible to organize channels that adaptively select the most effective networking technology according to the current volume allocation and modality types in the messages.

  13. An information theoretic approach to the functional classification of neurons

    CERN Document Server

    Schneidman, Elad; Bialek, William; Berry, Michael J.

    2002-01-01

    A population of neurons typically exhibits a broad diversity of responses to sensory inputs. The intuitive notion of functional classification is that cells can be clustered so that most of the diversity is captured in the identity of the clusters rather than by individuals within clusters. We show how this intuition can be made precise using information theory, without any need to introduce a metric on the space of stimuli or responses. Applied to the retinal ganglion cells of the salamander, this approach recovers classical results, but also provides clear evidence for subclasses beyond those identified previously. Further, we find that each of the ganglion cells is functionally unique, and that even within the same subclass only a few spikes are needed to reliably distinguish between cells.

  14. Information Theoretic Limits on Learning Stochastic Differential Equations

    CERN Document Server

    Bento, José; Montanari, Andrea

    2011-01-01

    Consider the problem of learning the drift coefficient of a stochastic differential equation from a sample path. In this paper, we assume that the drift is parametrized by a high dimensional vector. We address the question of how long the system needs to be observed in order to learn this vector of parameters. We prove a general lower bound on this time complexity by using a characterization of mutual information as time integral of conditional variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to specific classes of linear and non-linear stochastic differential equations. In the linear case, the problem under consideration is the one of learning a matrix of interaction coefficients. We evaluate our lower bound for ensembles of sparse and dense random matrices. The resulting estimates match the qualitative behavior of upper bounds achieved by computationally efficient procedures.
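
    The characterization referred to above can be written compactly. As a hedged reconstruction (a standard form for unit diffusion coefficient; the notation, normalization, and regularity assumptions are ours, not quoted from the paper), the mutual information between the parameter vector and the observed path is a time integral of the conditional variance of the drift:

        % For dX_t = f_theta(X_t) dt + dW_t, theta ~ prior, observed over [0, T]:
        I\big(\theta;\,X_0^T\big) \;=\; \frac{1}{2}\int_0^T
            \mathbb{E}\Big[\big\|\,f_\theta(X_t)-\mathbb{E}\big[f_\theta(X_t)\,\big|\,X_0^t\big]\big\|^2\Big]\,dt .

    Lower bounds on the observation time T then follow by upper-bounding this integral and comparing it with the amount of information needed to identify a high-dimensional parameter.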

  15. Data normalization in biosurveillance: an information-theoretic approach.

    Science.gov (United States)

    Peter, William; Najmi, Amir H; Burkom, Howard

    2007-10-11

    An approach to identifying public health threats by characterizing syndromic surveillance data in terms of its surprisability is discussed. Surprisability in our model is measured by assigning a probability distribution to a time series, and then calculating its entropy, leading to a straightforward designation of an alert. Initial application of our method is to investigate the applicability of using suitably-normalized syndromic counts (i.e., proportions) to improve early event detection.
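
    As a rough illustration of the surprisability idea (not the authors' implementation; the normalization, baseline, threshold, and alert rule below are all hypothetical), one can normalize daily syndromic counts to proportions, assign them an empirical distribution, and flag an alert when the entropy deviates from its historical baseline:

        import numpy as np

        def shannon_entropy(p):
            """Entropy (bits) of a discrete distribution, ignoring empty bins."""
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        def surprisability_alert(history, today, threshold_bits=0.5):
            """Alert when the entropy of today's syndrome proportions deviates
            from the mean historical entropy by more than threshold_bits.

            history : (days, syndromes) array of past daily counts
            today   : (syndromes,) counts for the current day
            """
            props = history / history.sum(axis=1, keepdims=True)
            baseline = np.array([shannon_entropy(p) for p in props]).mean()
            deviation = abs(shannon_entropy(today / today.sum()) - baseline)
            return deviation > threshold_bits, deviation

        # Hypothetical data: 30 days of counts over four syndrome categories.
        rng = np.random.default_rng(0)
        history = rng.poisson(lam=[50, 30, 15, 5], size=(30, 4))
        today = np.array([95, 2, 2, 1])   # a spike concentrated in one syndrome
        alert, dev = surprisability_alert(history, today)
        print(f"alert={alert}, deviation={dev:.3f} bits")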

  16. Multi-way Communications: An Information Theoretic Perspective

    KAUST Repository

    Chaaban, Anas

    2015-09-15

    Multi-way communication is a means to significantly improve the spectral efficiency of wireless networks. For instance, in a bi-directional (or two-way) communication channel, two users can simultaneously use the transmission medium to exchange information, thus achieving up to twice the rate that would be achieved had each user transmitted separately. This monograph, Multi-way Communications, provides an overview of the developments in this research area since its initiation by Shannon. The basic two-way communication channel is considered first, followed by the two-way relay channel obtained by the deployment of an additional cooperative relay node to improve the overall communication performance. This basic setup is then extended to multi-user systems. For all these setups, fundamental limits on the achievable rates are reviewed, making use of a linear high-SNR deterministic channel model to provide valuable insights that are helpful when discussing the coding schemes for Gaussian channel models in detail. Several tools and communication strategies are used in the process, including (but not limited to) computation, signal-space alignment, and nested-lattice codes. Finally, extensions of multi-way communication channels to multiple antenna settings are discussed. © 2015 A. Chaaban and A. Sezgin.

  17. Theoretical and experimental investigation of multispectral photoacoustic osteoporosis detection method

    Science.gov (United States)

    Steinberg, Idan; Hershkovich, Hadas Sara; Gannot, Israel; Eyal, Avishay

    2014-03-01

    Osteoporosis is a widespread disorder, which has a catastrophic impact on patients' lives and overwhelming related healthcare costs. Recently, we proposed a multispectral photoacoustic technique for early detection of osteoporosis. Such a technique has great advantages over purely ultrasonic or optical methods, as it allows the deduction of both bone functionality, from the bone absorption spectrum, and bone resistance to fracture, from the characteristics of the ultrasound propagation. We previously demonstrated the propagation of multiple acoustic modes in animal bones in vitro. To further investigate the effects of multiple-wavelength excitation and of induced osteoporosis on the photoacoustic (PA) signal, a multispectral photoacoustic system is presented. The experimental investigation is based on measuring the interference of multiple acoustic modes. The performance of the system is evaluated, and a simple two-mode theoretical model is fitted to the measured phase signals. The results show that such a PA technique is accurate and repeatable. Multiple-wavelength excitation is then tested. The PA response to different excitation wavelengths reveals that absorption by the different bone constituents has a profound effect on mode generation. The PA response is also measured at a single wavelength before and after induced osteoporosis. Results show that induced osteoporosis alters the measured amplitude and phase in a consistent manner, which allows the detection of the onset of osteoporosis. These results suggest that a complete characterization of the bone over a region of both acoustic and optical frequencies might be used as a powerful tool for in-vivo bone evaluation.

  18. Number theoretic methods in cryptography complexity lower bounds

    CERN Document Server

    Shparlinski, Igor

    1999-01-01

    The book introduces new techniques which imply rigorous lower bounds on the complexity of some number theoretic and cryptographic problems. These methods and techniques are based on bounds of character sums and numbers of solutions of some polynomial equations over finite fields and residue rings. It also contains a number of open problems and proposals for further research. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of:

    • polynomials;
    • algebraic functions;
    • Boolean functions;
    • linear recurring sequences;

    coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^(1/2+ε)). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic residue…

  19. A Research Method to Investigate Information Seeking using the Concept of Information Horizons: An Example from a Study of Lower Socio-economic Students’ Information Seeking Behavior

    OpenAIRE

    Sonnenwald, D. H.; Wildemuth, B. S.; Harmon, G. L.

    2001-01-01

    As research questions and topics in information studies evolve, there is a continual need to seek out innovative research methods to help us investigate and address these questions. This paper presents an emerging research method, the creation and analysis of information horizon maps, and discusses the use of such maps in an ongoing research study. Sonnenwald's (1999) framework for human information behavior provides a theoretical foundation for this method. This theoretical framework suggests…

  20. Economic valuation of informal care: the contingent valuation method applied to informal caregiving.

    Science.gov (United States)

    van den Berg, Bernard; Brouwer, Werner; van Exel, Job; Koopmanschap, Marc

    2005-02-01

    This paper reports the results of the application of the contingent valuation method (CVM) to determine a monetary value of informal care. We discuss the current practice in valuing informal care and a theoretical model of the costs and benefits related to the provision of informal care. In addition, we developed a survey in which informal caregivers' willingness to accept (WTA) to provide an additional hour of informal care was elicited. This method is better able than the normally recommended valuation methods to capture the heterogeneity and dynamics of informal care. Data were obtained from postal surveys. A total of 153 informal caregivers and 149 care recipients with rheumatoid arthritis returned a completed survey. Informal caregivers reported a mean WTA to provide a hypothetical additional hour of informal care of 9.52 Euro (n=124). Many hypotheses derived from the theoretical model and the literature were supported by the data. CVM is a promising alternative to existing methods, such as the opportunity cost method and the proxy good method, for determining a monetary value of informal care that can be incorporated in the numerator of any economic evaluation.

  1. Information technology equipment cooling method

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, Mark D.

    2015-10-20

    According to one embodiment, a system for removing heat from a rack of information technology equipment may include a sidecar indoor air to liquid heat exchanger that cools air utilized by the rack of information technology equipment to cool the rack of information technology equipment. The system may also include a liquid to liquid heat exchanger and an outdoor heat exchanger. The system may further include configurable pathways to connect and control fluid flow through the sidecar heat exchanger, the liquid to liquid heat exchanger, the rack of information technology equipment, and the outdoor heat exchanger based upon ambient temperature and/or ambient humidity to remove heat generated by the rack of information technology equipment.

  2. Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics

    Energy Technology Data Exchange (ETDEWEB)

    Tsourtis, Anastasios, E-mail: tsourtis@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, Crete (Greece); Pantazis, Yannis, E-mail: pantazis@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Harmandaris, Vagelis, E-mail: harman@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, and Institute of Applied and Computational Mathematics (IACM), Foundation for Research and Technology Hellas (FORTH), GR-70013 Heraklion, Crete (Greece)

    2015-07-07

    In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous-time, continuous-space Markov processes represented by stochastic differential equations. In particular, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation, both in equilibrium and in non-equilibrium steady-state regimes. We validate the performance of the extended SA method on two different molecular stochastic systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with the parameter sensitivities of three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.
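
    For reference, the two pathwise quantities named above have standard definitions, reproduced here from general information theory rather than from the paper itself. The relative entropy rate between the path measures P^theta and P^(theta+epsilon) of a stationary dynamics, and the pathwise FIM obtained from its second-order expansion, are:

        \mathcal{H}\big(P^{\theta}\,\|\,P^{\theta+\epsilon}\big)
            \;=\;\lim_{T\to\infty}\frac{1}{T}\,
            \mathbb{E}_{P^{\theta}}\!\left[\log\frac{dP^{\theta}}{dP^{\theta+\epsilon}}\right],
        \qquad
        \mathcal{H}\big(P^{\theta}\,\|\,P^{\theta+\epsilon}\big)
            \;=\;\tfrac{1}{2}\,\epsilon^{\top}F(\theta)\,\epsilon+O\big(|\epsilon|^{3}\big).

    Parameters with large entries in F(theta) are the sensitive ones: small perturbations to them move the path distribution the most.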

  3. Theoretical Modelling Methods for Thermal Management of Batteries

    Directory of Open Access Journals (Sweden)

    Bahman Shabani

    2015-09-01

    Full Text Available The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether back-up is provided by a diesel generator or a connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS) to store the excess renewable energy and then reuse this energy when the renewable source is insufficient to meet the demand. The performance of an ESS, amongst other things, is affected by the design, the materials used, and the operating temperature of the system. The operating temperature is critical, since operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating it at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that would operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing theoretical approaches used to simulate the operating temperatures of ESS and subsequent endeavours to model thermal management systems for them. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS, highlighting the advantages and disadvantages of each approach.

  4. An information-theoretic approach to assess practical identifiability of parametric dynamical systems.

    Science.gov (United States)

    Pant, Sanjay; Lombardi, Damiano

    2015-10-01

    A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement-noise into account and is not restricted to any particular noise-structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
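
    The k-nearest-neighbour entropy estimation mentioned above is commonly done with the Kozachenko-Leonenko estimator; the following is a generic sketch of that estimator (our reconstruction, not the authors' code):

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.special import digamma, gammaln

        def knn_entropy(samples, k=3):
            """Kozachenko-Leonenko k-NN estimate of differential entropy (nats).

            samples : (N, d) array of i.i.d. draws from the unknown density.
            """
            samples = np.asarray(samples, dtype=float)
            n, d = samples.shape
            # Distance to the k-th neighbour (k+1 because the query point is
            # returned as its own nearest neighbour and must be skipped).
            dist, _ = cKDTree(samples).query(samples, k=k + 1)
            eps = dist[:, -1]
            # Log-volume of the d-dimensional unit ball.
            log_c_d = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
            return float(digamma(n) - digamma(k) + log_c_d
                         + d * np.mean(np.log(eps)))

        # Sanity check against the closed-form entropy of a 2-D unit Gaussian.
        rng = np.random.default_rng(1)
        x = rng.normal(size=(5000, 2))
        exact = 0.5 * np.log((2 * np.pi * np.e) ** 2)
        print(f"estimate={knn_entropy(x):.3f} nats, exact={exact:.3f} nats")

    Mutual and conditional mutual information can then be assembled from entropy estimates of joint and marginal samples, which is one common way to evaluate identifiability criteria of the kind described above.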

  5. Bounded rationality, abstraction and hierarchical decision-making: an information-theoretic optimality principle

    Directory of Open Access Journals (Sweden)

    Tim eGenewein

    2015-11-01

    Full Text Available Abstraction and hierarchical information-processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such a flexibility in artificial systems is challenging, even with more and more computational power. Here we investigate the hypothesis that abstraction and hierarchical information-processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.
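
    The framework's basic trade-off (maximize expected utility subject to an information-processing cost) leads to a Blahut-Arimoto-style fixed point; the sketch below iterates those self-consistent equations for a single discrete perception-action node, with all numbers hypothetical:

        import numpy as np

        def bounded_rational_policy(U, p_s, beta, iters=200):
            """Solve max E[U] - (1/beta) I(S;A) by iterating
            p(a|s) ~ p(a) exp(beta U(s,a)),  p(a) = sum_s p(s) p(a|s).

            U : (S, A) utility matrix; p_s : (S,) state distribution;
            beta : inverse temperature (low beta = tight information budget).
            """
            p_a = np.full(U.shape[1], 1.0 / U.shape[1])
            for _ in range(iters):
                p_a_given_s = p_a[None, :] * np.exp(beta * U)
                p_a_given_s /= p_a_given_s.sum(axis=1, keepdims=True)
                p_a = p_s @ p_a_given_s
            return p_a_given_s

        # Hypothetical 3-state, 3-action problem: with a generous information
        # budget the policy discriminates all states; with a tight budget it
        # collapses similar states into one abstract response.
        U = np.array([[1.0, 0.2, 0.0],
                      [0.2, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        p_s = np.ones(3) / 3
        for beta in (0.1, 10.0):
            print(f"beta={beta}:\n{bounded_rational_policy(U, p_s, beta).round(2)}")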

  6. Contrast properties of entropic criteria for blind source separation : a unifying framework based on information-theoretic inequalities/

    OpenAIRE

    Vrins, Frédéric

    2007-01-01

    In recent years, Independent Component Analysis (ICA) has become a fundamental tool in adaptive signal and data processing, especially in the field of Blind Source Separation (BSS). Even though there exist some methods for which an algebraic solution to the ICA problem may be found, other, iterative methods are very popular. Among them is the class of information-theoretic approaches, relying on entropies. The associated objective functions are maximized based on optimization schemes, and o…

  7. Theoretical Chemical Thermometry on Geothermal Waters: Problems and Methods

    Science.gov (United States)

    Pang, Zhong-He; Reed, Mark

    1998-03-01

    Using a synthetic geothermal water, we examine the effect of errors in Al analyses on theoretical chemical geothermometry based on multicomponent chemical equilibrium calculations of mineral equilibria. A new approach named FixAl, which entails the construction of a modified Q/K graph, eliminates problems with water analyses lacking Al or with erroneous analyses of Al. This is made possible by forcing the water to be at equilibrium with a selected Al-bearing mineral, such as microcline. In a FixAl graph, a modified Q/K value is plotted against temperature for Al-bearing minerals. Saturation indices of nonaluminous minerals are plotted in the same way as in an ordinary Q/K graph. In addition to Al concentration errors, degassing of CO2 and dilution of reservoir water interfere with computed equilibrium geothermometry. These effects can be distinguished in a Q/K graph by comparing curves for nonaluminous minerals to those of aluminous minerals, then correcting for CO2 loss and dilution by a trial-and-error method. Example geothermal waters from China, Iceland, and the USA that are used to demonstrate the methods show that errors in Al concentrations are common, and some are severe. The FixAl approach has proved useful for chemical geothermometry for geothermal waters lacking Al analysis and for waters with an incorrect Al analysis. The equilibrium temperatures estimated by the FixAl approach agree well with quartz, chalcedony, and isotopic geothermometers. The best choice of mineral for forced equilibrium depends on pH. For most neutral-pH waters, microcline and albite work well; for more acidic waters, kaolinite or illite are good choices. Measured pH plays a critical role in computed equilibria, and we find that the best pH to use is the one to which the reported carbonate also applies. Commonly this is the laboratory pH instead of the field pH, but the field pH is still necessary to constrain CO2 degassing. Calculations on numerous waters in the 80-180°C reservoir…

  8. JIDT: An information-theoretic toolkit for studying the dynamics of complex systems

    Directory of Open Access Journals (Sweden)

    Joseph Troy Lizier

    2014-12-01

    Full Text Available Complex systems are increasingly being viewed as distributed information processing systems, particularly in the domains of computational neuroscience, bioinformatics and Artificial Life. This trend has resulted in a strong uptake in the use of (Shannon) information-theoretic measures to analyse the dynamics of complex systems in these fields. We introduce the Java Information Dynamics Toolkit (JIDT): a Google Code project which provides a standalone, GNU GPL v3 licensed, open-source code implementation for empirical estimation of information-theoretic measures from time-series data. While the toolkit provides classic information-theoretic measures (e.g. entropy, mutual information, conditional mutual information), it ultimately focusses on implementing higher-level measures for information dynamics. That is, JIDT focusses on quantifying information storage, transfer and modification, and the dynamics of these operations in space and time. For this purpose, it includes implementations of the transfer entropy and active information storage, their multivariate extensions and local or pointwise variants. JIDT provides implementations for both discrete and continuous-valued data for each measure, including various types of estimator for continuous data (e.g. Gaussian, box-kernel and Kraskov-Stoegbauer-Grassberger) which can be swapped at run-time due to Java's object-oriented polymorphism. Furthermore, while written in Java, the toolkit can be used directly in MATLAB, GNU Octave, Python and other environments. We present the principles behind the code design, and provide several examples to guide users.
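
    To make the "information transfer" notion concrete without relying on JIDT's own API (which should be consulted in the project's documentation), here is a minimal plain-Python plug-in estimator of transfer entropy for discrete series with history length 1, the simplest form of one of the measures the toolkit implements:

        import numpy as np
        from collections import Counter

        def transfer_entropy(source, target):
            """Plug-in estimate (bits) of TE(source -> target), history length 1:
            TE = sum p(y1, y0, x0) log2[ p(y1 | y0, x0) / p(y1 | y0) ].
            """
            x0, y0, y1 = source[:-1], target[:-1], target[1:]
            n = len(y1)
            c_xyy = Counter(zip(y1, y0, x0))
            c_yy = Counter(zip(y1, y0))
            c_xy = Counter(zip(y0, x0))
            c_y = Counter(y0)
            te = 0.0
            for (a, b, c), cnt in c_xyy.items():
                p_full = cnt / c_xy[(b, c)]          # p(y1 | y0, x0)
                p_hist = c_yy[(a, b)] / c_y[b]       # p(y1 | y0)
                te += (cnt / n) * np.log2(p_full / p_hist)
            return te

        # Hypothetical pair: y copies x with one step of lag and 10% bit flips,
        # so information flows x -> y but not y -> x.
        rng = np.random.default_rng(2)
        x = rng.integers(0, 2, size=10000)
        y = np.empty_like(x)
        y[0] = 0
        flips = rng.random(x.size - 1) < 0.1
        y[1:] = np.where(flips, 1 - x[:-1], x[:-1])
        print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")  # near 1 - H(0.1)
        print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")  # near zero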

  9. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    Science.gov (United States)

    Chen, Yun; Yang, Hui

    2016-12-01

    In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
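
    The first stage of this pipeline, measuring nonlinear interdependence with mutual information, is easy to illustrate with a plug-in histogram estimator (the Dirichlet process clustering and group elastic-net stages are beyond a short sketch; all data below are synthetic):

        import numpy as np

        def mutual_information(x, y, bins=16):
            """Histogram (plug-in) estimate of MI in nats between 1-D variables."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            p_xy = joint / joint.sum()
            p_x = p_xy.sum(axis=1, keepdims=True)
            p_y = p_xy.sum(axis=0, keepdims=True)
            mask = p_xy > 0
            return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

        # x2 = x1**2 has near-zero Pearson correlation with x1 but strong
        # nonlinear dependence, which the MI measure captures; x3 is unrelated.
        rng = np.random.default_rng(3)
        x1 = rng.normal(size=20000)
        x2 = x1 ** 2 + 0.1 * rng.normal(size=x1.size)
        x3 = rng.normal(size=x1.size)
        data = np.column_stack([x1, x2, x3])
        k = data.shape[1]
        mi = np.zeros((k, k))
        for i in range(k):
            for j in range(i + 1, k):
                mi[i, j] = mi[j, i] = mutual_information(data[:, i], data[:, j])
        print(mi.round(3))

    A similarity structure derived from these MI values is what the clustering stage then operates on.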


  11. Methods for evaluating information sources

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2012-01-01

    The article briefly presents and discusses 12 different approaches to the evaluation of information sources (for example a Wikipedia entry or a journal article): (1) the checklist approach; (2) classical peer review; (3) modified peer review; (4) evaluation based on examining the coverage...

  12. Transformational Teaching: Theoretical Underpinnings, Basic Principles, and Core Methods.

    Science.gov (United States)

    Slavich, George M; Zimbardo, Philip G

    2012-12-01

    Approaches to classroom instruction have evolved considerably over the past 50 years. This progress has been spurred by the development of several learning principles and methods of instruction, including active learning, student-centered learning, collaborative learning, experiential learning, and problem-based learning. In the present paper, we suggest that these seemingly different strategies share important underlying characteristics and can be viewed as complementary components of a broader approach to classroom instruction called transformational teaching. Transformational teaching involves creating dynamic relationships between teachers, students, and a shared body of knowledge to promote student learning and personal growth. From this perspective, instructors are intellectual coaches who create teams of students who collaborate with each other and with their teacher to master bodies of information. Teachers assume the traditional role of facilitating students' acquisition of key course concepts, but do so while enhancing students' personal development and attitudes toward learning. They accomplish these goals by establishing a shared vision for a course, providing modeling and mastery experiences, challenging and encouraging students, personalizing attention and feedback, creating experiential lessons that transcend the boundaries of the classroom, and promoting ample opportunities for preflection and reflection. We propose that these methods are synergistically related and, when used together, maximize students' potential for intellectual and personal growth.

  13. Quantum entanglement of identical particles by standard information-theoretic notions

    Science.gov (United States)

    Lo Franco, Rosario; Compagno, Giuseppe

    2016-02-01

    Quantum entanglement of identical particles is essential in quantum information theory. Yet, its correct determination remains an open issue hindering the general understanding and exploitation of many-particle systems. Operator-based methods have been developed that attempt to overcome the issue. Here we introduce a state-based method which, like second quantization, does not label identical particles and presents conceptual and technical advances compared to the previous ones. It establishes the quantitative role played by arbitrary wave function overlaps, local measurements and particle nature (bosons or fermions) in assessing entanglement by notions commonly used in quantum information theory for distinguishable particles, like partial trace. Our approach furthermore shows that bringing identical particles into the same spatial location functions as an entangling gate, providing fundamental theoretical support to recent experimental observations with ultracold atoms. These results pave the way to setting up and interpreting experiments for utilizing quantum correlations in realistic scenarios where overlap of particles can count, as in Bose-Einstein condensates, quantum dots and biological molecular aggregates.

  14. Adequacy as an integral characteristic of management information quality: theoretical background

    OpenAIRE

    Kalyuzhna, N. G.

    2015-01-01

    The aim of the article is to define the theoretical background of management information quality. The results of the analysis prove that the quality of the information support of management processes is one of the determining factors of enterprise management effectiveness. The results of information processing and use depend on the decision-maker's subjective interpretation, which creates the need to determine measures to reduce management decisi…

  15. THEORETICAL STUDY (AB INITIO AND DFT METHODS) ON ...

    African Journals Online (AJOL)

    Abstract fragment only: a complex of Al(III) with xylenol orange as an ultra-sensitive colored reagent (Division of Computational Physics, Institute for Computational Science, and Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City).


  17. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    Energy Technology Data Exchange (ETDEWEB)

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro [Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Guzmán, Orlando [Departamento de Física, Universidad Autónoma Metropolitana, Iztapalapa, DF 09340, México (Mexico); Hernández-Ortiz, Juan P. [Departamento de Materiales y Minerales, Universidad Nacional de Colombia, Sede Medellín, Medellín (Colombia); Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Pablo, Juan J. de, E-mail: depablo@uchicago.edu [Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  18. An information theoretic approach for non-rigid image registration using voxel class probabilities.

    Science.gov (United States)

    D'Agostino, Emiliano; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul

    2006-06-01

    We propose two information theoretic similarity measures that make it possible to incorporate tissue class information in non-rigid image registration. The first measure assumes that tissue class probabilities have been assigned to each of the images to be registered by prior segmentation of both of them. One image is then non-rigidly deformed to match the other such that the fuzzy overlap of corresponding voxel object labels becomes similar to the ideal case whereby the tissue probability maps of both images are identical. Image similarity is assessed during registration by the divergence between the ideal and actual joint class probability distributions of both images. A second registration measure is proposed that applies in case a segmentation is available for only one of the images, for instance an atlas image that is to be matched to a study image to guide the segmentation thereof. Intensities in one image are matched to the fuzzy class labels in the other image by minimizing the conditional entropy of the intensities in the first image given the class labels in the second image. We derive analytic expressions for the gradient of each measure with respect to individual voxel displacements to derive a force field that drives the registration process, which is regularized by a viscous fluid model. The performance of the class-based measures is evaluated in the context of non-rigid inter-subject registration and atlas-based segmentation of MR brain images and compared with maximization of mutual information using only intensity information. Our results demonstrate that incorporation of class information in the registration measure significantly improves the overlap between corresponding tissue classes after non-rigid matching. The methods proposed here open new perspectives for integrating segmentation and registration in a single process, whereby the output of one is used to guide the other.
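
    The second measure above, the conditional entropy of intensities given fuzzy class labels, can be sketched as follows for binned gray values; this is an illustrative reconstruction, not the authors' registration code (which additionally needs the analytic gradient and the viscous fluid regularizer):

        import numpy as np

        def cond_entropy_intensity_given_class(gray, class_probs, bins=32):
            """H(I | C) in bits for an image with fuzzy tissue-class labels.

            gray        : (N,) intensity per voxel of the study image
            class_probs : (N, K) per-voxel tissue-class probabilities
            """
            edges = np.histogram_bin_edges(gray, bins=bins)
            idx = np.clip(np.digitize(gray, edges[1:-1]), 0, bins - 1)
            joint = np.zeros((bins, class_probs.shape[1]))
            np.add.at(joint, idx, class_probs)   # soft joint histogram p(i, c)
            joint /= joint.sum()
            cond = joint / joint.sum(axis=0)     # p(i | c), column-wise
            mask = joint > 0
            return float(-np.sum(joint[mask] * np.log2(cond[mask])))

        # Hypothetical 1-D "image": two tissue classes with distinct intensity
        # ranges give a low conditional entropy; a registration would deform one
        # image so as to decrease this value further.
        rng = np.random.default_rng(4)
        labels = rng.integers(0, 2, size=50000)
        gray = np.where(labels == 0, rng.normal(80, 5, labels.size),
                                     rng.normal(160, 5, labels.size))
        probs = np.eye(2)[labels]                # crisp labels as a special case
        print(f"H(I|C) = {cond_entropy_intensity_given_class(gray, probs):.3f} bits")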

  19. A quantum-information theoretic analysis of three-flavor neutrino oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Subhashish, E-mail: subhashish@iitj.ac.in; Alok, Ashutosh Kumar, E-mail: akalok@iitj.ac.in [Indian Institute of Technology Jodhpur, 342011, Jodhpur (India); Srikanth, R., E-mail: srik@poornaprajna.org [Poornaprajna Institute of Scientific Research, Sadashivnagar, 560080, Banglore (India); Hiesmayr, Beatrix C., E-mail: Beatrix.Hiesmayr@univie.ac.at [University of Vienna, Boltzmanngasse 5, 1090, Vienna (Austria)

    2015-10-13

    Correlations exhibited by neutrino oscillations are studied via quantum-information theoretic quantities. We show that the strongest type of entanglement, genuine multipartite entanglement, is persistent in the flavor changing states. We prove the existence of Bell-type nonlocal features, in both its absolute and genuine avatars. Finally, we show that a measure of nonclassicality, dissension, which is a generalization of quantum discord to the tripartite case, is nonzero for almost the entire range of time in the evolution of an initial electron-neutrino. Via these quantum-information theoretic quantities, capturing different aspects of quantum correlations, we elucidate the differences between the flavor types, shedding light on the quantum-information theoretic aspects of the weak force.

  20. Spectral entropies as information-theoretic tools for complex network comparison

    CERN Document Server

    De Domenico, Manlio

    2016-01-01

    Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q-entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed using appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for …
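
    To make the spectral entropies concrete, the sketch below computes a von Neumann-style entropy and a Rényi q-entropy from a graph Laplacian rescaled by its trace; note that this trace rescaling is only one convention in this literature, and the paper's framework is more general:

        import numpy as np

        def spectral_entropies(adj, q=2.0):
            """Von Neumann and Renyi q-entropy (nats) of rho = L / tr(L),
            where L = D - A is the combinatorial Laplacian."""
            lap = np.diag(adj.sum(axis=1)) - adj
            lam = np.linalg.eigvalsh(lap)
            rho = lam / lam.sum()                # spectrum of the density matrix
            nz = rho[rho > 1e-12]
            s_vn = -np.sum(nz * np.log(nz))
            s_q = np.log(np.sum(nz ** q)) / (1.0 - q)
            return s_vn, s_q

        # Hypothetical comparison: ring versus star topology on 10 nodes.
        n = 10
        ring = np.zeros((n, n))
        for i in range(n):
            ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
        star = np.zeros((n, n))
        star[0, 1:] = star[1:, 0] = 1.0
        for name, a in (("ring", ring), ("star", star)):
            s_vn, s_q = spectral_entropies(a)
            print(f"{name}: S_vN = {s_vn:.3f}, S_2 = {s_q:.3f}")

    Jensen-Shannon-type divergences between two such spectra then give the network distance used for comparison and model selection.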

  1. Metabolic scaling in animals: methods, empirical results, and theoretical explanations.

    Science.gov (United States)

    White, Craig R; Kearney, Michael R

    2014-01-01

    Life on earth spans a size range of around 21 orders of magnitude across species and can span a range of more than 6 orders of magnitude within species of animal. The effect of size on physiology is, therefore, enormous and is typically expressed by how physiological phenomena scale with mass^b. When b ≠ 1 a trait does not vary in direct proportion to mass and is said to scale allometrically. The study of allometric scaling goes back to at least the time of Galileo Galilei, and published scaling relationships are now available for hundreds of traits. Here, the methods of scaling analysis are reviewed, using examples for a range of traits with an emphasis on those related to metabolism in animals. Where necessary, new relationships have been generated from published data using modern phylogenetically informed techniques. During recent decades one of the most controversial scaling relationships has been that between metabolic rate and body mass and a number of explanations have been proposed for the scaling of this trait. Examples of these mechanistic explanations for metabolic scaling are reviewed, and suggestions made for comparing between them. Finally, the conceptual links between metabolic scaling and ecological patterns are examined, emphasizing the distinction between (1) the hypothesis that size- and temperature-dependent variation among species and individuals in metabolic rate influences ecological processes at levels of organization from individuals to the biosphere and (2) mechanistic explanations for metabolic rate that may explain the size- and temperature-dependence of this trait.
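
    The core quantitative step in scaling analysis, estimating b in trait = a · mass^b, is a straight-line fit on log-log axes; the sketch below uses ordinary least squares on synthetic data (real comparative analyses, as the review stresses, should use phylogenetically informed methods):

        import numpy as np

        def fit_allometric_exponent(mass, trait):
            """OLS estimate of exponent b and prefactor a in trait = a * mass**b."""
            b, log_a = np.polyfit(np.log(mass), np.log(trait), deg=1)
            return b, float(np.exp(log_a))

        # Synthetic data with a known exponent of 0.75 and lognormal noise.
        rng = np.random.default_rng(5)
        mass = np.exp(rng.uniform(np.log(0.01), np.log(1000.0), size=200))
        rate = 3.0 * mass ** 0.75 * np.exp(0.1 * rng.normal(size=mass.size))
        b_hat, a_hat = fit_allometric_exponent(mass, rate)
        print(f"b = {b_hat:.3f} (true 0.75), a = {a_hat:.2f} (true 3.0)")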

  2. GROUPWARE - MODERN INFORMATION MANAGERIAL METHOD

    Directory of Open Access Journals (Sweden)

    Rozalia NISTOR

    2006-01-01

    Full Text Available The notion of groupware covers the information technologies that facilitate teamwork and that are intended for communication, collaboration and coordination within the organization. Having as its base software routines for teamwork, groupware technology has many applications in the management process of the organization. The notion of groupware refers to a special class of web packages connected to a network of personal computers: email, chat, video over IP, newsgroups, etc. Studies in the literature consider groupware a class of software programs that facilitate coordination, communication and cooperation among the members of a group. As the marketing mix is known in marketing as the "4P", the characteristics of groupware are known as the "3C": communication within the group; coordination among the members of the group; collaboration among the members of the group. Among groupware software, the tools with relevance for managerial activity are: electronic mail, Internet meetings, time management, project management, and the management of dissimulated information. Groupware technologies can be divided into many categories based on two elements: time and space. Users of groupware work together either at the same time (real-time groupware) or at different times (offline groupware).

  3. One method of storing information

    CERN Document Server

    Titov, Oleg

    2010-01-01

    We formulate the problem as follows: split a file into n pieces so that it can be restored without any m of the parts (1<=m<=n). Such problems are called secret sharing problems. There exists a set of methods for solving such problems, but applied to the problem posed above they all require a fairly large number of calculations. The proposed method requires no calculations; it requires only the operations of dividing the file into equal (or nearly equal) parts and gluing them together, in a certain order, into one or more files.
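
    The abstract does not spell out the construction, but one natural splitting-and-gluing scheme with the stated property is combinatorial: cut the file into C(n, m+1) near-equal chunks, one per (m+1)-subset of piece indices, and glue each chunk into every piece in its subset; every chunk then survives the loss of any m pieces. The sketch below implements this reading (our interpretation, not necessarily the paper's exact scheme); unlike arithmetic threshold secret sharing, it does not conceal content from smaller coalitions:

        from itertools import combinations

        def split(data: bytes, n: int, m: int):
            """Split data into n pieces so that any n - m of them restore it.

            Each chunk is replicated on m + 1 pieces, so no loss of m pieces
            can remove all copies. Only cutting and concatenation are used.
            """
            subsets = list(combinations(range(n), m + 1))
            k = len(subsets)                         # C(n, m+1) chunks
            step = -(-len(data) // k)                # ceil: near-equal chunks
            chunks = [data[i * step:(i + 1) * step] for i in range(k)]
            pieces = [[] for _ in range(n)]
            for idx, (subset, chunk) in enumerate(zip(subsets, chunks)):
                for holder in subset:                # "glue" in a fixed order
                    pieces[holder].append((idx, chunk))
            return pieces

        def restore(surviving_pieces):
            """Reassemble the file from any sufficient set of pieces."""
            seen = dict(pair for piece in surviving_pieces for pair in piece)
            return b"".join(seen[i] for i in sorted(seen))

        data = b"information-theoretic methods, stored redundantly across pieces"
        pieces = split(data, n=5, m=2)
        assert restore(pieces[:3]) == data           # pieces 3 and 4 lost
        assert restore([pieces[0], pieces[2], pieces[4]]) == data
        print("restored from any 3 of 5 pieces")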

  4. Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jihun Hamm

    2014-01-01

    Full Text Available Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of the ambiguity and distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied for delineating dynamic emotional expressivity in healthy and clinical populations.
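
    One plausible reading of the two measures can be sketched from per-frame probability distributions over expressed emotions; the data layout and numbers below are hypothetical, not the authors' pipeline:

        import numpy as np

        def entropy(p, axis=-1):
            p = np.clip(p, 1e-12, 1.0)
            return -np.sum(p * np.log2(p), axis=axis)

        def ambiguity(frame_probs):
            """Mean per-frame entropy of the expressed-emotion distribution
            while one target emotion is evoked: higher = more ambiguous."""
            return float(entropy(frame_probs).mean())

        def distinctiveness(p_x_given_c, priors=None):
            """Mutual information I(intended; expressed) across evoked
            conditions: higher = more distinct expressions.

            p_x_given_c : (emotions, expressions) mean distribution per condition
            """
            p_x_given_c = np.asarray(p_x_given_c, dtype=float)
            k = p_x_given_c.shape[0]
            p_c = np.full(k, 1.0 / k) if priors is None else priors
            p_x = p_c @ p_x_given_c
            return float(entropy(p_x) - p_c @ entropy(p_x_given_c))

        sharp = 0.8 * np.eye(5) + 0.05               # distinct expressions
        sharp /= sharp.sum(axis=1, keepdims=True)
        flat = np.full((5, 5), 0.2)                  # indistinct expressions
        print(f"distinct: MI = {distinctiveness(sharp):.3f} bits")
        print(f"flat:     MI = {distinctiveness(flat):.3f} bits")
        frames = np.tile([0.9, 0.04, 0.03, 0.02, 0.01], (100, 1))
        print(f"ambiguity = {ambiguity(frames):.3f} bits")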

  5. Active Markov Information-Theoretic Path Planning for Robotic Environmental Sensing

    CERN Document Server

    Low, Kian Hsiang; Khosla, Pradeep

    2011-01-01

    Recent research in multi-robot exploration and mapping has focused on sampling environmental fields, which are typically modeled using the Gaussian process (GP). Existing information-theoretic exploration strategies for learning GP-based environmental field maps adopt the non-Markovian problem structure and consequently scale poorly with the length of history of observations. Hence, it becomes computationally impractical to use these strategies for in situ, real-time active sampling. To ease this computational burden, this paper presents a Markov-based approach to efficient information-theoretic path planning for active sampling of GP-based fields. We analyze the time complexity of solving the Markov-based path planning problem, and demonstrate analytically that it scales better than that of deriving the non-Markovian strategies with increasing length of planning horizon. For a class of exploration tasks called the transect sampling task, we provide theoretical guarantees on the active sampling performance of...

  6. Numerical Methods Application for Reinforced Concrete Elements-Theoretical Approach for Direct Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Sergiu Ciprian Catinas

    2015-07-01

    Full Text Available Recent techniques and methods implemented in the construction market call for a detailed theoretical and practical investigation of reinforced concrete elements. Moreover, the rapid development of computational techniques makes a faster theoretical approach both possible and necessary. This paper presents a study implementing the direct stiffness matrix method in static analysis, so as to address phenomena related to different stages of loading, rapid changes of cross-section area, and changes in physical properties. Such a method is needed because the finite element method (FEM) is currently the only alternative for this kind of analysis, and FEM is considered expensive in terms of time and computational resources. The main goal of the method is to construct the moment-curvature diagram for the cross section being analyzed. The paper presents some of the most important techniques and new ideas for constructing the moment-curvature diagram in the considered cross sections.

  7. An Information-Theoretic Approach to PMU Placement in Electric Power Systems

    CERN Document Server

    Li, Qiao; Weng, Yang; Negi, Rohit; Franchetti, Franz; Ilic, Marija D

    2012-01-01

    This paper presents an information-theoretic approach to the phasor measurement unit (PMU) placement problem in electric power systems. Different from the conventional 'topological observability' based approaches, this paper advocates a much more refined, information-theoretic criterion, namely the mutual information (MI) between the PMU measurements and the power system states. The proposed MI criterion not only includes full system observability as a special case, but also rigorously models the remaining uncertainties in the power system states given the PMU measurements, so as to generate highly informative PMU configurations. Further, the MI criterion can facilitate robust PMU placement by explicitly modeling probabilistic PMU outages. We propose a greedy PMU placement algorithm, and show that it achieves an approximation ratio of (1-1/e) for any PMU placement budget. We further show that the performance is the best that one can achieve in practice, in the sense that it is NP-hard to achieve …
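
    The greedy algorithm with the (1-1/e) guarantee is the standard greedy for monotone submodular maximization. The skeleton below pairs it with a Gaussian mutual-information objective computed from a hypothetical linear measurement model (the paper's power system model is richer; all matrices here are made up):

        import numpy as np

        def gaussian_mi(H, sigma_theta, noise_var):
            """I(theta; y) in nats for y = H @ theta + noise, theta ~ N(0, Sigma):
            0.5 * [log det(H Sigma H^T + v I) - log det(v I)]."""
            cov_y = H @ sigma_theta @ H.T + noise_var * np.eye(H.shape[0])
            _, logdet = np.linalg.slogdet(cov_y)
            return 0.5 * (logdet - H.shape[0] * np.log(noise_var))

        def greedy_placement(H_all, sigma_theta, budget, noise_var=0.01):
            """Greedily add the location that maximizes the MI objective; for
            monotone submodular objectives this achieves (1 - 1/e) of optimum."""
            chosen = []
            for _ in range(budget):
                candidates = [i for i in range(H_all.shape[0]) if i not in chosen]
                gains = [gaussian_mi(H_all[chosen + [i]], sigma_theta, noise_var)
                         for i in candidates]
                chosen.append(candidates[int(np.argmax(gains))])
            return chosen

        # Hypothetical system: 8 candidate locations observing 4 state variables.
        rng = np.random.default_rng(6)
        H_all = rng.normal(size=(8, 4))
        sigma_theta = np.eye(4)
        print("selected:", greedy_placement(H_all, sigma_theta, budget=3))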

  8. Information Theoretic Source Seeking Strategies for Multiagent Plume Tracking in Turbulent Fields

    Directory of Open Access Journals (Sweden)

    Hadi Hajieghrary

    2017-01-01

    Full Text Available We present information theoretic search strategies for single and multi-robot teams to localize the source of biochemical contaminants in turbulent flows. The robots synthesize the information provided by sporadic and intermittent sensor readings to optimize their exploration strategy. By leveraging the spatio-temporal sensing capabilities of a mobile sensing network, our strategies result in control actions that maximize the information gained by the team while minimizing the time spent localizing the biochemical source. By leveraging the team’s ability to obtain simultaneous measurements at different locations, we show how a multi-robot team is able to speed up the search process resulting in a collaborative information theoretic search strategy. We validate our proposed strategies in both simulations and experiments.

  9. Managing Knowledge by the Information System and Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Otilija Sedlak

    2006-12-01

    Full Text Available Knowledge is a source of competitive advantage. In small enterprises there is simultaneous cooperation and competition. Knowledge management research has focused on large firms, yet collaboration among small enterprises, and between small enterprises and large firms, is common. Information systems and information technology play a paramount role in coordinating and controlling joint ventures, and the information system is a key tool in the management of knowledge sharing. This paper offers a game-theoretic approach to answering the questions that arise under cooperation and competition: what to share, with whom, when, and under what conditions, as well as the role of the information system in managing knowledge in small enterprises.

  10. Information Theoretic Exemplification of the Impact of Transmitter-Receiver Cognition on the Channel Capacity

    CERN Document Server

    Anzabi-Nezhad, Nima S; Kakhki, Mohammad Molavi

    2011-01-01

    In this paper, we study, information theoretically, the impact of transmitter and/or receiver cognition on the channel capacity. The cognition can be described by state information, dependent on the channel noise and/or input. Specifically, as a new idea, we consider the receiver cognition as state information dependent on the noise, and we derive a capacity theorem based on the Gaussian version of the Cover-Chiang capacity theorem for the two-sided state information channel. As intuitively expected, the receiver cognition increases the channel capacity, and our theorem shows this increase quantitatively. Our capacity theorem also includes the famous Costa theorem as a special case.

  11. Examining the outsourcing of information systems functions from diverse theoretical perspectives

    OpenAIRE

    Moura, Isabel Cristina A. A.; Grover, Varun

    2001-01-01

    Submitted to "Information & Management" in 2001. In recent years there has been a significant increase in the incidence of information systems (IS) outsourcing. Technological uncertainty, cost reduction, the need to concentration on core business, and the increasing quality and competition among a growing cadre of service providers, (e.g.) are often discussed as key outsourcing motivators. While a number of theoretical frameworks have been used to structure studies on this phenomenon, the ...

  12. On the Role of Information Theoretic Uncertainty Relations in Quantum Theory

    OpenAIRE

    Jizba, Petr; Dunningham, Jacob A.; Joo, Jaewoo

    2014-01-01

    Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson-Schrödinger…

  13. An Evolutionary Game-Theoretic Framework for Cyber-threat Information Sharing

    Science.gov (United States)

    2014-09-09

    Authors: Deepak Tosh, Shamik Sengupta (Dept. of Computer Science and Engineering); Charles Kamhoua (Charles.Kamhoua.1@us.af.mil); Kevin Kwiat (Kevin.Kwiat@us.af.mil); Andrew Martin (Department of Computer Science, University of Oxford, Andrew.Martin@cs.ox.ac.uk). Index terms: Cybersecurity, CYBEX, Evolutionary Game Theory, Incentive Model, Information Sharing.

  14. Basic Therapeutic Communication: Theoretical and Practical Information for Outdoor Adventure Professionals.

    Science.gov (United States)

    West-Smith, Lisa

    1997-01-01

    Uses the scenario of a women's adventure therapy trip to illustrate theoretical and practical information about "basic" therapeutic communication skills with participants in outdoor adventure settings. Discusses gender-based language style (women's "tentative" language), issues of physical and emotional safety, a philosophical rationale for…

  15. Phenomenological description of selected elementary chemical reaction mechanisms: An information-theoretic study

    Energy Technology Data Exchange (ETDEWEB)

    Esquivel, R.O., E-mail: esquivel@xanum.uam.m [Departamento de Quimica, Universidad Autonoma Metropolitana, 09340 Mexico D.F. (Mexico); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, 18071-Granada (Spain); Instituto Carlos I de Fisica Teorica y Computacional, Universidad de Granada, 18071-Granada (Spain); Flores-Gallegos, N.; Iuga, C.; Carrera, E.M. [Departamento de Quimica, Universidad Autonoma Metropolitana, 09340 Mexico D.F. (Mexico); Angulo, J.C. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, 18071-Granada (Spain); Instituto Carlos I de Fisica Teorica y Computacional, Universidad de Granada, 18071-Granada (Spain); Antolin, J. [Departamento de Fisica Aplicada, EUITIZ, Universidad de Zaragoza, 50018-Zaragoza (Spain); Instituto Carlos I de Fisica Teorica y Computacional, Universidad de Granada, 18071-Granada (Spain)

    2010-02-01

    The information-theoretic description of two elementary chemical reactions allows a phenomenological characterization of the chemical course of the hydrogenic abstraction and the S_N2 identity reactions through Shannon entropic measures in position and momentum spaces. The analyses reveal their synchronous/asynchronous mechanistic behavior.

  16. Distinguishing spectral and temporal properties of speech using an information-theoretic approach

    DEFF Research Database (Denmark)

    Christiansen, Thomas Ulrich; Greenberg, Steven

    2007-01-01

    The spectro-temporal coding of Danish consonants was investigated using an information-theoretic analysis. Listeners identified eleven consonants spoken in CV[l] context. In each condition, only a portion of the original spectrum was played. Center frequencies of 750, 1500 and 3000 Hz were presented…

  17. TheoReTS - An information system for theoretical spectra based on variational predictions from molecular potential energy and dipole moment surfaces

    Science.gov (United States)

    Rey, Michaël; Nikitin, Andrei V.; Babikov, Yurii L.; Tyuterev, Vladimir G.

    2016-09-01

    Knowledge of the intensities of rovibrational transitions of various molecules and their isotopic species in wide spectral and temperature ranges is essential for the modeling of optical properties of planetary atmospheres and brown dwarfs, and for other astrophysical applications. TheoReTS ("Theoretical Reims-Tomsk Spectral data") is an Internet-accessible information system devoted to ab initio based, rotationally resolved spectra predictions for some relevant molecular species. All data were generated from potential energy and dipole moment surfaces computed via high-level electronic structure calculations, using variational methods for vibration-rotation energy levels and transitions. When available, empirical corrections to band centers were applied, all line intensities remaining purely ab initio. The current TheoReTS implementation contains information on four-to-six-atomic molecules, including phosphine, methane, ethylene, silane, methyl fluoride, and their isotopic species 13CH4, 12CH3D, 12CH2D2, 12CD4, 13C2H4, etc. Predicted hot methane line lists up to T = 2000 K are included. The information system provides the associated software for spectra simulation, including absorption coefficient, absorption and emission cross-sections, transmittance and radiance. The simulations allow Lorentz, Gauss and Voigt line shapes. Rectangular, triangular, Lorentzian, Gaussian, sinc and sinc-squared apparatus functions can be used, with user-defined specifications for broadening parameters and spectral resolution. All information is organized as a relational database with a user-friendly graphical interface according to Model-View-Controller architectural tools. The full-featured web application is written in PHP using the Yii framework and C++ software modules. In the case of very large high-temperature line lists, data compression is implemented for fast interactive simulation of the quasi-continual absorption due to high line density. Applications for TheoReTS may…

  18. Detecting Network Vulnerabilities Through Graph Theoretical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Cesarz, Patrick; Pomann, Gina-Maria; Torre, Luis de la; Villarosa, Greta; Flournoy, Tamara; Pinar, Ali; Meza, Juan

    2007-09-30

    Identifying vulnerabilities in power networks is an important problem, as even a small number of vulnerable connections can cause billions of dollars in damage to a network. In this paper, we investigate a graph-theoretical formulation for identifying vulnerabilities of a network. We first try to find the most critical components in a network by finding an optimal solution, for each possible cutsize constraint, of the relaxed version of the inhibiting bisection problem, which aims to find loosely coupled subgraphs with significant demand/supply mismatch. Then we investigate finding critical components by computing a flow assignment that minimizes the maximum flow assigned to any edge. We also report experiments on the IEEE 30, IEEE 118, and WSCC 179 benchmark power networks.

  19. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A theoretical critical heat flux (CHF) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles contacting the heated wall prevents bulk liquid from reaching the wall near the CHF condition. Comparisons between the model predictions and experimental data show satisfactory agreement, with less than 9.73% root-mean-square error given an appropriate choice of the critical void fraction in the bubbly layer. The present model shows performance comparable with the CHF look-up table of Groeneveld et al. 28 refs., 11 figs., 1 tab. (Author)

  20. Information-theoretic resolution of perceptual WSS watermarking of non i.i.d. Gaussian signals

    CERN Document Server

    Pateux, Stéphane; Guillemot, Christine

    2008-01-01

    The theoretical foundations of data hiding have been revealed by formulating the problem as message communication over a noisy channel. We revisit the problem in light of a more general characterization of the watermark channel and of weighted distortion measures. Considering spread-spectrum based information hiding, we relax the usual assumption of an i.i.d. cover signal. The game-theoretic resolution of the problem reveals a generalized characterization of optimum attacks. The paper then derives closed-form expressions for the different parameters and exhibits a practical embedding and extraction technique.

  1. A short course in quantum information theory. An approach from theoretical physics

    Energy Technology Data Exchange (ETDEWEB)

    Diosi, L. [KFKI Research Institute for Particle and Nuclear Physics, Budapest (Hungary)

    2007-07-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. (orig.)

  2. Cognition in Orienteering: Theoretical Perspectives and Methods of Study.

    Science.gov (United States)

    Ottosson, Torgny

    1996-01-01

    Almost without exception, published studies on cognition in orienteering have adopted an information processing perspective involving dualism between objective and subjective worlds. An alternative, experiential framework focuses on the orienteer's conception of (or way of experiencing) the task to be accomplished, and on "affordances" (lines of…

  3. Dynamical Systems Method and Applications Theoretical Developments and Numerical Examples

    CERN Document Server

    Ramm, Alexander G

    2012-01-01

    Demonstrates the application of DSM to solve a broad range of operator equations. The dynamical systems method (DSM) is a powerful computational method for solving operator equations. With this book as their guide, readers will master the application of DSM to solve a variety of linear and nonlinear problems as well as ill-posed and well-posed problems. The authors offer a clear, step-by-step, systematic development of DSM that enables readers to grasp the method's underlying logic and its numerous applications. Dynamical Systems Method and Applications begins with a general introduction and

  4. Advances on BYY harmony learning: information theoretic perspective, generalized projection geometry, and independent factor autodetermination.

    Science.gov (United States)

    Xu, Lei

    2004-07-01

    The nature of Bayesian Ying-Yang harmony learning is reexamined from an information-theoretic perspective. Not only is its ability for model selection and regularization explained with new insights, but its relations to and differences from the studies of minimum description length (MDL), the Bayesian approach, bits-back based MDL, the Akaike information criterion (AIC), maximum likelihood, information geometry, Helmholtz machines, and variational approximation are also discussed. Moreover, a generalized projection geometry is introduced for further understanding such a new mechanism. Furthermore, new algorithms are also developed for implementing Gaussian factor analysis (FA) and non-Gaussian factor analysis (NFA) such that appropriate factors are selected automatically during parameter learning.

  5. Open source tools for the information theoretic analysis of neural data.

    Science.gov (United States)

    Ince, Robin A A; Mazzoni, Alberto; Petersen, Rasmus S; Panzeri, Stefano

    2010-01-01

    The recent and rapid development of open source software tools for the analysis of neurophysiological datasets consisting of simultaneous multiple recordings of spikes, field potentials and other neural signals holds the promise for a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data and for the integration of information obtained at different spatial and temporal scales. In this review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons, and field potential fluctuations in the encoding of sensory information. These information toolboxes, available both in MATLAB and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information.
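
    The core computation behind such stimulus-response analyses is an estimate of the mutual information between a discrete stimulus and a binned neural response. A minimal plug-in sketch follows (data and function names are invented; the toolboxes reviewed above additionally implement the bias corrections that naive estimators like this one lack):

    ```python
    # Naive plug-in estimate of I(S;R) between discrete stimuli and binned
    # responses. Illustrative only; real toolboxes add bias correction.
    import numpy as np

    def mutual_information(stimuli, responses):
        """Plug-in mutual information I(S;R) in bits from paired discrete samples."""
        s_vals, s_idx = np.unique(stimuli, return_inverse=True)
        r_vals, r_idx = np.unique(responses, return_inverse=True)
        joint = np.zeros((s_vals.size, r_vals.size))
        np.add.at(joint, (s_idx, r_idx), 1)            # joint count table
        joint /= joint.sum()
        ps = joint.sum(axis=1, keepdims=True)           # marginal P(s)
        pr = joint.sum(axis=0, keepdims=True)           # marginal P(r)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

    rng = np.random.default_rng(0)
    stim = rng.integers(0, 4, size=5000)                # four stimulus classes
    resp = stim + rng.integers(0, 2, size=5000)         # noisy "spike count" code
    print(f"I(S;R) = {mutual_information(stim, resp):.3f} bits")
    ```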

  6. Method Engineering: Engineering of Information Systems Development Methods and Tools

    NARCIS (Netherlands)

    Brinkkemper, J.N.; Brinkkemper, Sjaak

    1996-01-01

    This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e.

  7. Theoretical and methodical support of calculating expenses for quality

    OpenAIRE

    Пархоменко, Валерій Миколайович

    2015-01-01

    Hybrid models for calculating quality costs over the full product life cycle have been developed, combining a classification of expenses by purpose with a goods-costing model that accounts for quality costs via a standard-functional method.

  8. Information-theoretic bound on the energy cost of stochastic simulation

    CERN Document Server

    Wiesner, Karoline; Rieper, Elisabeth; Vedral, Vlatko

    2011-01-01

    Physical systems are often simulated using a stochastic computation where different final states result from identical initial states. Here, we derive the minimum energy cost of simulating a complex data set of a general physical system with a stochastic computation. We show that the cost is proportional to the difference between two information-theoretic measures of complexity of the data - the statistical complexity and the predictive information. We derive the difference as the amount of information erased during the computation. Finally, we illustrate the physics of information by implementing the stochastic computation as a Gedankenexperiment of a Szilard-type engine. The results create a new link between thermodynamics, information theory, and complexity.

  9. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    Science.gov (United States)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving the theoretical computer science problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  10. EEG-fMRI based information theoretic characterization of the human perceptual decision system.

    Directory of Open Access Journals (Sweden)

    Dirk Ostwald

    Full Text Available The modern metaphor of the brain is that of a dynamic information processing device. In the current study we investigate how a core cognitive network of the human brain, the perceptual decision system, can be characterized regarding its spatiotemporal representation of task-relevant information. We capitalize on a recently developed information-theoretic framework for the analysis of simultaneously acquired electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data (Ostwald et al. (2010), NeuroImage 49: 498-516). We show how this framework naturally extends from previous validations in the sensory to the cognitive domain and how it enables the economic description of neural spatiotemporal information encoding. Specifically, based on simultaneous EEG-fMRI data features from n = 13 observers performing a visual perceptual decision task, we demonstrate how the information-theoretic framework is able to reproduce earlier findings on the neurobiological underpinnings of perceptual decisions from the response signal features' marginal distributions. Furthermore, using the joint EEG-fMRI feature distribution, we provide novel evidence for a highly distributed and dynamic encoding of task-relevant information in the human brain.

  11. Integral methods in science and engineering theoretical and practical aspects

    CERN Document Server

    Constanda, C; Rollins, D

    2006-01-01

    Presents a series of analytic and numerical methods of solution constructed for important problems arising in science and engineering, based on the powerful operation of integration. This volume is meant for researchers and practitioners in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students.

  12. Transformational Teaching: Theoretical Underpinnings, Basic Principles, and Core Methods

    Science.gov (United States)

    Slavich, George M.; Zimbardo, Philip G.

    2012-01-01

    Approaches to classroom instruction have evolved considerably over the past 50 years. This progress has been spurred by the development of several learning principles and methods of instruction, including active learning, student-centered learning, collaborative learning, experiential learning, and problem-based learning. In the present paper, we…

  13. Method for gathering and summarizing internet information

    Science.gov (United States)

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  14. Method for gathering and summarizing internet information

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
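
    A hypothetical sketch of the similarity computation such a tree display could rest on, assuming term-frequency vectors and cosine similarity, with each document linked to its most similar predecessor; the patent text does not specify this particular scheme:

    ```python
    # Link each document to its most similar earlier document by cosine
    # similarity over term-frequency vectors (helper names are invented).
    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        common = set(a) & set(b)
        dot = sum(a[t] * b[t] for t in common)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def build_tree(docs: list[str]) -> list[tuple[int, int]]:
        """Return (child, parent) links: each doc attaches to its most similar predecessor."""
        vecs = [Counter(d.lower().split()) for d in docs]
        links = []
        for i in range(1, len(vecs)):
            parent = max(range(i), key=lambda j: cosine(vecs[i], vecs[j]))
            links.append((i, parent))
        return links

    docs = ["reactor safety report", "reactor core safety analysis", "grid network outage"]
    print(build_tree(docs))  # e.g. [(1, 0), (2, 0)]
    ```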

  15. An Information-Theoretic Alternative to the Cronbach's Alpha Coefficient of Item Reliability

    OpenAIRE

    Fokoue, Ernest; Gunduz, Necla

    2015-01-01

    We propose an information-theoretic alternative to the popular Cronbach alpha coefficient of reliability. Particularly suitable for contexts in which instruments are scored on a strictly nonnumeric scale, our proposed index is based on functions of the entropy of the distributions defined on the sample space of responses. Our reliability index tracks the Cronbach alpha coefficient uniformly while offering several other advantages discussed in great detail in this paper.
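
    An illustrative building block of such an index, with invented response data: the Shannon entropy of each item's response distribution over a nonnumeric scale (this is an ingredient only, not the authors' exact reliability index):

    ```python
    # Per-item Shannon entropies for an instrument scored on a nonnumeric scale.
    import numpy as np

    def item_entropy(column):
        """Shannon entropy (bits) of the response distribution of one item."""
        _, counts = np.unique(column, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # responses: rows = examinees, columns = items, scored A/B/C
    responses = np.array([
        ["A", "B", "A"],
        ["A", "B", "B"],
        ["C", "B", "A"],
        ["A", "C", "A"],
    ])
    entropies = [item_entropy(responses[:, j]) for j in range(responses.shape[1])]
    print(np.round(entropies, 3))  # low entropy = highly concentrated responses
    ```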

  16. An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings.

    Science.gov (United States)

    Liaw, Siaw-Teng; Pearce, Christopher; Liyanage, Harshana; Liaw, Gladys S S; de Lusignan, Simon

    2014-01-01

    Increasing investment in eHealth aims to improve cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. We searched PubMed, Medline, Web of Science, ABI Inform (Proquest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly 'big-data' environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine its applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.

  17. Information-theoretic equilibration: the appearance of irreversibility under complex quantum dynamics

    OpenAIRE

    2012-01-01

    The question of how irreversibility can emerge as a generic phenomenon when the underlying mechanical theory is reversible has been a long-standing fundamental problem for both classical and quantum mechanics. We describe a mechanism for the appearance of irreversibility that applies to coherent, isolated systems in a pure quantum state. This equilibration mechanism requires only an assumption of sufficiently complex internal dynamics and natural information-theoretic constraints arising from ...

  18. Information-Theoretic Conditions for Two-Party Secure Function Evaluation

    DEFF Research Database (Denmark)

    Schaffner, Christian; Crépeau, Claude; Savvides, George

    2006-01-01

    The standard security definition of unconditional secure function evaluation, which is based on the ideal/real model paradigm, has the disadvantage of being overly complicated to work with in practice. On the other hand, simpler ad-hoc definitions tailored to special scenarios have often been ... flawed. Motivated by this unsatisfactory situation, we give an information-theoretic security definition of secure function evaluation which is very simple yet provably equivalent to the standard, simulation-based definitions.

  20. Theoretical and applied aerodynamics and related numerical methods

    CERN Document Server

    Chattot, J J

    2015-01-01

    This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: - The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...

  1. Variational Principles and Methods in Theoretical Physics and Chemistry

    Science.gov (United States)

    Nesbet, Robert K.

    2005-07-01

    Preface; Part I. Classical Mathematics and Physics: 1. History of variational theory; 2. Classical mechanics; 3. Applied mathematics; Part II. Bound States in Quantum Mechanics: 4. Time-independent quantum mechanics; 5. Independent-electron models; 6. Time-dependent theory and linear response; Part III. Continuum States and Scattering Theory: 7. Multiple scattering theory for molecules and solids; 8. Variational methods for continuum states; 9. Electron-impact rovibrational excitation of molecules; Part IV. Field Theories: 10. Relativistic Lagrangian theories.

  2. Research on polarization imaging information parsing method

    Science.gov (United States)

    Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong

    2016-11-01

    Polarization information parsing plays an important role in polarization imaging detection, and this paper focuses on methods for it. First, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion and polarization image tracking. Then the research achievements for each stage are presented. For polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. For the calculation of multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with markedly improved inversion precision. For polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters is given using fuzzy integrals and sparse representation, and target detection in complex scenes is completed using a clustering image segmentation algorithm based on fractal characteristics. For polarization image tracking, a fusion tracking algorithm combining average-displacement (mean-shift) polarization image features with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog and latent fingerprints.
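
    As one concrete instance of the parameter-calculation stage, a standard set of polarization parameters can be derived from intensity images taken through polarizers at 0, 45, 90 and 135 degrees. The sketch below is the textbook Stokes-based computation, not necessarily the paper's omnidirectional inversion model (array names are invented):

    ```python
    # Stokes parameters, degree and angle of linear polarization from four
    # polarizer-angle intensity images.
    import numpy as np

    def polarization_parameters(i0, i45, i90, i135):
        s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
        s1 = i0 - i90                                # horizontal vs vertical
        s2 = i45 - i135                              # +45 vs -45 degrees
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
        aop = 0.5 * np.arctan2(s2, s1)               # angle of polarization, radians
        return s0, dolp, aop

    rng = np.random.default_rng(1)
    frames = [rng.random((4, 4)) for _ in range(4)]  # stand-in intensity images
    s0, dolp, aop = polarization_parameters(*frames)
    print(dolp.round(2))
    ```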

  3. Information-theoretic approach to quantum error correction and reversible measurement

    CERN Document Server

    Nielsen, M A; Schumacher, B; Barnum, H N; Caves, Carlton M.; Schumacher, Benjamin; Barnum, Howard

    1997-01-01

    Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of "Maxwell demon," for which there is an entropy cost associated with information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.

  4. Information Theoretic Global Measures of Dirac Equation With Morse and Trigonometric Rosen-Morse Potentials

    Science.gov (United States)

    Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.

    2017-09-01

    In this study, the information-theoretic measures of the (1+1)-dimensional Dirac equation in both position and momentum spaces are investigated for the trigonometric Rosen-Morse and the Morse potentials. The solutions of the corresponding Dirac equation are first obtained in an exact analytical manner. Next, using the Fourier transformation, the position and momentum Shannon information entropies are obtained and some features of the probability densities are analyzed. Consistency with the Bialynicki-Birula-Mycielski inequality and the Heisenberg uncertainty relation is checked.
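
    For reference, the Bialynicki-Birula-Mycielski inequality in one dimension reads S_x + S_p >= 1 + ln(pi). Below is a numerical sanity check for a Gaussian wave packet, which saturates the bound; this is an illustration only, not the paper's Dirac solutions:

    ```python
    # Check S_x + S_p >= 1 + ln(pi) for a 1-D Gaussian ground state via FFT.
    import numpy as np

    n, L = 4096, 80.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.pi**-0.25 * np.exp(-x**2 / 2)                  # ground-state Gaussian

    rho = np.abs(psi)**2                                     # position density
    s_x = -np.sum(rho * np.log(np.maximum(rho, 1e-300)) * dx)

    # momentum-space wave function via FFT, with continuum normalization
    phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
    gamma = np.abs(phi)**2                                   # momentum density
    p = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
    dp = p[1] - p[0]
    s_p = -np.sum(gamma * np.log(np.maximum(gamma, 1e-300)) * dp)

    print(s_x + s_p, ">=", 1 + np.log(np.pi))                # both sides ~ 2.1447
    ```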

  5. An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro

    Directory of Open Access Journals (Sweden)

    Kristian Lindgren

    2015-05-01

    Full Text Available An information-theoretic perspective on coarse-graining is presented. It starts with an information characterization of configurations at the micro-level using a local information quantity that has a spatial average equal to a microscopic entropy. With a reversible micro dynamics, this entropy is conserved. In the micro-macro transition, it is shown how this local information quantity is transformed into a macroscopic entropy, as the local states are aggregated into macroscopic concentration variables. The information loss in this transition is identified, and the connection to the irreversibility of the macro dynamics and the second law of thermodynamics is discussed. This is then connected to a process of further coarse-graining towards higher characteristic length scales in the context of chemical reaction-diffusion dynamics capable of pattern formation. On these higher levels of coarse-graining, information flows across length scales and across space are defined. These flows obey a continuity equation for information, and they are connected to the thermodynamic constraints of the system, via an outflow of information from macroscopic to microscopic levels in the form of entropy production, as well as an inflow of information, from an external free energy source, if a spatial chemical pattern is to be maintained.

  6. Perspectives on Cybersecurity Information Sharing among Multiple Stakeholders Using a Decision-Theoretic Approach.

    Science.gov (United States)

    He, Meilin; Devine, Laura; Zhuang, Jun

    2017-08-11

    The government, private sectors, and other users of the Internet are increasingly faced with the risk of cyber incidents. Damage to computer systems and theft of sensitive data caused by cyber attacks have the potential to result in lasting harm to entities under attack, or to society as a whole. The effects of cyber attacks are not always obvious, and detecting them is not a simple proposition. As the U.S. federal government believes that information sharing on cybersecurity issues among organizations is essential to safety, security, and resilience, the importance of trusted information exchange has been emphasized to support public and private decision making by encouraging the creation of the Information Sharing and Analysis Center (ISAC). Through a decision-theoretic approach, this article provides new perspectives on ISAC, and the advent of the new Information Sharing and Analysis Organizations (ISAOs), which are intended to provide similar benefits to organizations that cannot fit easily into the ISAC structure. To help understand the processes of information sharing against cyber threats, this article illustrates 15 representative information sharing structures between ISAC, government, and other participating entities, and provides a discussion of the strategic interactions between different stakeholders. This article also identifies the costs of information sharing and information security borne by different parties in this public-private partnership both before and after cyber attacks, as well as the two main benefits. This article provides perspectives on the mechanism of information sharing and some detailed cost-benefit analysis. © 2017 Society for Risk Analysis.

  7. How much a galaxy knows about its large-scale environment ? : An information theoretic perspective

    CERN Document Server

    Pandey, Biswajit

    2016-01-01

    The small-scale environment characterized by the local density is known to play a crucial role in deciding galaxy properties, but the role of the large-scale environment in galaxy formation and evolution remains a less clear issue. We propose an information-theoretic framework to investigate the influence of the large-scale environment on galaxy properties and apply it to data from the Galaxy Zoo project, which provides visual morphological classifications of $\sim 1$ million galaxies from the Sloan Digital Sky Survey. We find a non-zero mutual information between morphology and environment which decreases with increasing length scale but persists throughout the entire range of length scales probed. We estimate the conditional mutual information and the interaction information between morphology and environment by conditioning the environment on different length scales and find a synergic interaction between them which operates up to length scales of at least $\sim 30 \, h^{-1}\, {\rm Mpc}$. Our analysis ind...
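
    The synergy reported here is conveniently expressed through the interaction information I(M;E1;E2) = I(M;E1) - I(M;E1|E2), which is negative when two environment variables jointly carry more information about morphology than they do separately. A toy demonstration with invented binary classes (an XOR-type dependence, far simpler than galaxy data):

    ```python
    # Synergy demo: morphology m depends on e1 XOR e2, so neither environment
    # variable alone is informative but the pair is.
    import numpy as np
    from collections import Counter

    def H(*cols):
        counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def mi(x, y):                      # mutual information I(x;y)
        return H(x) + H(y) - H(x, y)

    def cmi(x, y, z):                  # conditional mutual information I(x;y|z)
        return H(x, z) + H(y, z) - H(x, y, z) - H(z)

    rng = np.random.default_rng(2)
    e1 = rng.integers(0, 2, 10000)     # environment class on a small scale
    e2 = rng.integers(0, 2, 10000)     # environment class on a large scale
    m = e1 ^ e2                        # morphology set jointly by both scales
    # interaction information I(m;e1;e2) = I(m;e1) - I(m;e1|e2) < 0 => synergy
    print(mi(m, e1), cmi(m, e1, e2))   # ~0 bits vs ~1 bit
    ```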

  8. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework.

    Science.gov (United States)

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to compare existing usability data to ideal goals or to that for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, but often fail to take into account the inherent uncertainties in these judgments in the evaluation process. This paper presents a universal method of usability evaluation combining the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction of a product. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation model to characterize fuzzy human judgments. Then, with the use of AHP, the weights of usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process.
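
    The mechanics of combining AHP weights with fuzzy membership grades can be illustrated in a few lines; all numbers below are assumed for illustration and are not taken from the paper:

    ```python
    # Fuzzy comprehensive evaluation step: AHP-style component weights combined
    # with expert membership grades over rating levels.
    import numpy as np

    # membership matrix R: rows = usability components, columns = rating levels
    # (poor, fair, good, excellent); entries = fraction of experts giving that grade
    R = np.array([
        [0.1, 0.2, 0.5, 0.2],   # effectiveness
        [0.0, 0.3, 0.4, 0.3],   # efficiency
        [0.2, 0.3, 0.3, 0.2],   # satisfaction
    ])
    w = np.array([0.5, 0.3, 0.2])       # component weights elicited via AHP

    b = w @ R                            # weighted fuzzy composition (M(., +) operator)
    b /= b.sum()                         # normalize to a fuzzy rating vector
    levels = np.array([25, 50, 75, 100]) # assumed numeric anchors per level
    print(b.round(3), "overall score:", float(b @ levels))
    ```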

  9. An algorithmic and information-theoretic approach to multimetric index construction

    Science.gov (United States)

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.

    2013-01-01

    The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically has developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We demonstrate the algorithm with simulated data to illustrate the predictive capacity of the final MMIs and with real data from wetlands from Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified
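
    A schematic version of such an MMI construction is sketched below, with a greedy correlation-against-disturbance criterion standing in for the paper's information-theoretic selection rule (the data and the selection rule are invented for illustration):

    ```python
    # Greedy MMI construction: standardize candidate metrics, then add metrics
    # while the composite's correlation with a disturbance gradient improves.
    import numpy as np

    def build_mmi(metrics, disturbance):
        z = (metrics - metrics.mean(0)) / metrics.std(0)   # standardize metrics
        chosen, best = [], 0.0
        while True:
            gains = []
            for j in range(z.shape[1]):
                if j in chosen:
                    gains.append(-np.inf)
                    continue
                mmi = z[:, chosen + [j]].mean(axis=1)       # candidate composite
                gains.append(abs(np.corrcoef(mmi, disturbance)[0, 1]))
            j = int(np.argmax(gains))
            if gains[j] <= best:
                return chosen, best
            chosen.append(j)
            best = gains[j]

    rng = np.random.default_rng(3)
    disturb = rng.random(200)
    metrics = np.column_stack([
        -disturb + 0.3 * rng.standard_normal(200),    # responsive metric
        disturb**2 + 0.5 * rng.standard_normal(200),  # weakly responsive
        rng.standard_normal(200),                     # pure noise
    ])
    print(build_mmi(metrics, disturb))
    ```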

  10. Density functional reactivity theory study of SN2 reactions from the information-theoretic perspective.

    Science.gov (United States)

    Wu, Zemin; Rong, Chunying; Lu, Tian; Ayers, Paul W; Liu, Shubin

    2015-10-28

    As a continuation of our recent efforts to quantify chemical reactivity with quantities from the information-theoretic approach within the framework of density functional reactivity theory, the effectiveness of applying these quantities to quantify electrophilicity for the bimolecular nucleophilic substitution (SN2) reactions in both gas phase and aqueous solvent is presented in this work. We examined a total of 21 self-exchange SN2 reactions for the compound with the general chemical formula of R1R2R3C-F, where R1, R2, and R3 represent substituting alkyl groups such as -H, -CH3, -C2H5, -C3H7, and -C4H9 in both gas and solvent phases. Our findings confirm that scaling properties for information-theoretic quantities found elsewhere are still valid. It has also been verified that the barrier height has the strongest correlation with the electrostatic interaction, but the contributions from the exchange-correlation and steric effects, though less significant, are indispensable. We additionally unveiled that the barrier height of these SN2 reactions can reliably be predicted not only by the Hirshfeld charge and information gain at the regioselective carbon atom, as previously reported by us for other systems, but also by other information-theoretic descriptors such as Shannon entropy, Fisher information, and Ghosh-Berkowitz-Parr entropy on the same atom. These new findings provide further insights for the better understanding of the factors impacting the chemical reactivity of this vastly important category of chemical transformations.

  11. Information Systems Development as a Research Method

    Directory of Open Access Journals (Sweden)

    Helen Hasan

    2003-11-01

    Full Text Available This paper takes the stance that some cases of information systems development can be considered knowledge creating activities, and, in those cases, information systems development can be a legitimate research method. In these cases not only is knowledge created about the development process itself but also a deeper understanding emerges about the organisational problem that the system is designed to solve. The paper begins with a brief overview of research in the design sciences and a comparison of research methods that are concerned with the design, and use, of information systems. This is followed by an assessment of the way systems development as a research method deals with the scientific research processes of data collection, analysis, synthesis and display. A case study, where the systems development research method was used, is described to illustrate the method and give the reader a better understanding of the approach.

  12. A short course in quantum information theory. An approach from theoretical physics. 2. ed.

    Energy Technology Data Exchange (ETDEWEB)

    Diosi, Lajos [KFKI Research Institute for Particle and Nuclear Physics (RMKI), Budapest (Hungary). MTA Budapest

    2011-07-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the author has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapters to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error correction. From the reviews of the first edition: ''The best things about this book are its brevity and clarity. In around 100 pages it provides a tutorial introduction to quantum information theory, including problems and solutions ... it's worth a look if you want to quickly get up to speed with the language and central concepts of quantum information theory, including the background classical information theory.'' (Craig Savage, Australian Physics, Vol. 44 (2), 2007). (orig.)

  13. Method of and System for Information Retrieval

    DEFF Research Database (Denmark)

    2015-01-01

    This invention relates to a system for and a method (100) of searching a collection of digital information (150) comprising a number of digital documents (110), the method comprising receiving or obtaining (102) a search query, the query comprising a number of search terms, searching (103) an index (300) using the search terms, thereby providing information (301) about which digital documents (110) of the collection of digital information (150) contain a given search term and one or more search-related metrics (302; 303; 304; 305; 306), and ranking (105) at least a part of the search result ... In this way, a method of and a system for information retrieval or searching is readily provided that enhances the searching quality (i.e. the number of relevant documents retrieved and such documents being ranked high) when (also) using queries containing many search terms.

  14. Oil monitoring methods based on information theory

    Institute of Scientific and Technical Information of China (English)

    XIA Yan-chun; HUO Hua

    2009-01-01

    To evaluate the wear condition of machines accurately, oil spectrographic entropy, mutual information and ICA analysis methods based on information theory are presented. A full-scale diagnosis utilizing all channels of spectrographic analysis can be obtained. By measuring complexity and correlation, the characteristics of machine wear condition can be shown clearly and the diagnostic quality is improved. The analysis processes of these monitoring methods are illustrated with examples. The applicability of these methods is validated and further research directions are outlined.

  15. Exploring methods in information literacy research

    CERN Document Server

    Lipu, Suzanne; Lloyd, Annemaree

    2007-01-01

    This book provides an overview of approaches to assist researchers and practitioners to explore ways of undertaking research in the information literacy field. The first chapter provides an introductory overview of research by Dr Kirsty Williamson (author of Research Methods for Students, Academics and Professionals: Information Management and Systems) and this sets the scene for the rest of the chapters where each author explores the key aspects of a specific method and explains how it may be applied in practice. The methods covered include those representing qualitative, quantitative and mix

  16. Diagnostic yield of targeted next generation sequencing in various cancer types: an information-theoretic approach.

    Science.gov (United States)

    Hagemann, Ian S; O'Neill, Patrick K; Erill, Ivan; Pfeifer, John D

    2015-09-01

    The information-theoretic concept of Shannon entropy can be used to quantify the information provided by a diagnostic test. We hypothesized that in tumor types with stereotyped mutational profiles, the results of NGS testing would yield lower average information than in tumors with more diverse mutations. To test this hypothesis, we estimated the entropy of NGS testing in various cancer types, using results obtained from clinical sequencing. A set of 238 tumors was subjected to clinical targeted NGS across all exons of 27 genes. There were 120 actionable variants in 109 cases, occurring in the genes KRAS, EGFR, PTEN, PIK3CA, KIT, BRAF, NRAS, IDH1, and JAK2. Sequencing results for each tumor were modeled as a dichotomized genotype (actionable mutation detected or not detected) for each of the 27 genes. Based upon the entropy of these genotypes, sequencing was most informative for colorectal cancer (3.235 bits of information/case), followed by high grade glioma (2.938 bits), lung cancer (2.197 bits), pancreatic cancer (1.339 bits), and sarcoma/STTs (1.289 bits). In the most informative cancer types, the information content of NGS was similar to surgical pathology examination (modeled at approximately 2-3 bits). Entropy provides a novel measure of utility for laboratory testing in general and for NGS in particular. This metric is, however, purely analytical and does not capture the relative clinical significance of the identified variants, which may also differ across tumor types.
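
    A back-of-the-envelope version of the metric: treat each case's set of actionable variants as a genotype profile and compute the Shannon entropy of the profile distribution within a tumor type. The counts below are invented:

    ```python
    # Shannon entropy (bits per case) of genotype profiles within a tumor type.
    from collections import Counter
    from math import log2

    def genotype_entropy(profiles):
        """profiles: one hashable genotype per case, e.g. frozensets of mutated genes."""
        counts = Counter(profiles)
        n = sum(counts.values())
        return -sum((c / n) * log2(c / n) for c in counts.values())

    colorectal = [frozenset(g) for g in
                  [("KRAS",), ("KRAS", "PIK3CA"), ("BRAF",), (), ("NRAS",),
                   ("KRAS",), ("PIK3CA",), ("BRAF", "PIK3CA")]]
    print(f"{genotype_entropy(colorectal):.3f} bits/case")
    ```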

  17. THEORETICAL AND NUMERICAL COMPARISON ON DOUBLE-PROJECTION METHODS FOR VARIATIONAL INEQUALITIES

    Institute of Scientific and Technical Information of China (English)

    WANG Yiju; SUN Wenyu

    2003-01-01

    Recently, double projection methods for solving variational inequalities have received much attention because they require fewer projections at each iteration. In this paper, we unify these double projection methods within two general frameworks, which contain the existing double projection methods as special cases. On the basis of this unification, a theoretical and numerical comparison between these double projection methods is presented.

  18. E-loyalty towards a cancer information website: applying a theoretical framework.

    Science.gov (United States)

    Crutzen, Rik; Beekers, Nienke; van Eenbergen, Mies; Becker, Monique; Jongen, Lilian; van Osch, Liesbeth

    2014-06-01

    To provide more insight into user perceptions related to e-loyalty towards a cancer information website. This is needed to assure adequate provision of high quality information during the full process of cancer treatment-from diagnosis to after care-and an important first step towards optimizing cancer information websites in order to promote e-loyalty. Participants were cancer patients (n = 63) and informal caregivers (n = 202) who visited a website providing regional information about cancer care for all types of cancer. Subsequently, they filled out a questionnaire assessing e-loyalty towards the website and user perceptions (efficiency, effectiveness, active trust and enjoyment) based on a theoretical framework derived from the field of e-commerce. A structural equation model was constructed to test the relationships between user perceptions and e-loyalty. Participants in general could find the information they were looking for (efficiency), thought it was relevant (effectiveness) and that they could act upon it (active trust) and thought the visit itself was pleasant (enjoyment). Effectiveness and enjoyment were both positively related to e-loyalty, but this was mediated by active trust. Efficiency was positively related to e-loyalty. The explained variance of e-loyalty was high (R^2 = 0.70). This study demonstrates that the importance of user perceptions is not limited to fields such as e-commerce but is also present within the context of cancer information websites. The high information need among participants might explain the positive relationship between efficiency and e-loyalty. Therefore, cancer information websites need to foster easy search and access of information provided. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Phenomenological description of a three-center insertion reaction: an information-theoretic study.

    Science.gov (United States)

    Esquivel, Rodolfo O; Flores-Gallegos, Nelson; Dehesa, Jesús S; Angulo, Juan Carlos; Antolín, Juan; López-Rosa, Sheila; Sen, K D

    2010-02-04

    Information-theoretic measures are employed to describe the course of a three-center chemical reaction in terms of detecting the transition state and the stationary points that unfold the bond-forming and bond-breaking regions, which are not revealed in the energy profile. The information entropy profiles for the selected reactions are generated by following the intrinsic-reaction-coordinate (IRC) path calculated at the MP2 level of theory, from which Shannon entropies in position and momentum spaces at the QCISD(T)/6-311++G(3df,2p) level are determined. Several complementary reactivity descriptors are also determined, such as the dipole moment, the molecular electrostatic potential (MEP) obtained through a multipole expansion (DMA), the atomic charges and electric potentials fitted to the MEP, the hardness and softness DFT descriptors, and several geometrical parameters which support the information-theoretic analysis. New density-based structures related to the bond-forming and bond-breaking regions are proposed. Our results support the concept of a continuum of transients of Zewail and Polanyi for the transition state, rather than a single state, which is also in agreement with reaction-force analyses.

  20. A Theoretical Review on Organizational Information Systems for Analysing Spatial Issues: A Perspective of Modern Business

    Directory of Open Access Journals (Sweden)

    Abdul Manaf Bohari

    2012-01-01

    Full Text Available Organizational Information Systems (OIS) are established to support an organization in managerial and routine work, as well as in the decision-making process. Theoretically, there are six types of OIS, commonly identified as Executive Information System (EIS), Management Information System (MIS), Decision Support System (DSS), Knowledge Work System (KWS), Office Automation System (OAS) and Transaction Processing System (TPS). Some organizations have developed OIS to achieve strategic advantages, with the final aim of sustaining organizational performance. However, questions arise about the use of OIS in analyzing spatial issues, which refer to geographical locations. The objective of this paper is to review the fundamental concepts, capabilities and constraints of each type of OIS with specific reference to spatial issues. Spatial issues are concurrent, real-time issues arising from the external environment that can directly affect organizational performance. This literature review is driven from a business perspective, discussing the capability and ability of OIS to handle spatial issues. In general, this study found that almost all OIS have limited capability to handle and manage spatial issues. Although DSS can support decision making for executing specific decisions related to spatial issues, DSS still lacks the ability to visualize spatial issues geographically and to tie them to geographical locations. This paper suggests developing a Geographical Information System (GIS) as one of the OIS to discover all kinds of spatial issues.

  1. THEORETICAL FRAMEWORK FOR INFORMATION AND EDUCATIONAL COMPLEX DEVELOPMENT OF AN ACADEMIC DISCIPLINE AT A HIGHER INSTITUTION

    Directory of Open Access Journals (Sweden)

    Evgeniia Nikolaevna Kikot

    2015-05-01

    Full Text Available The question of how to organize the contemporary educational process is becoming more important under the conditions of ICT (information and communication technologies) and e-education usage. This defines one of the most important methodological and research directions in the university: the creation of an informational-educational course unit complex as the foundation of the e-University resource. The foundation for creating an informational-educational course unit complex lies in the concepts of openness, accessibility, clarity and personalisation, which allow building the system of requirements for the complex and its substantial content. The main functions of the informational-educational complex are identified: informational, educational, controlling and communicative. It is determined that the development and introduction of new structural elements of the informational-educational course unit complex must include the creation of e-workbooks and e-workshops in order to organize theoretical and practical e-conferences. The development of ICT in education that provides for e-education assumes the establishment of distance learning technologies for educational programme implementation.

  2. Information Theoretic Approaches to Rapid Discovery of Relationships in Large Climate Data Sets

    Science.gov (United States)

    Knuth, Kevin H.; Rossow, William B.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Mutual information, as the asymptotic Bayesian measure of independence, is an excellent starting point for investigating the existence of possible relationships among climate-relevant variables in large data sets. As mutual information is a nonlinear function of its arguments, it is not beholden to the assumption of a linear relationship between the variables in question and can reveal features missed in linear correlation analyses. However, as mutual information is symmetric in its arguments, it only has the ability to reveal the probability that two variables are related; it provides no information as to how they are related. Specifically, causal interactions or a relation based on a common cause cannot be detected. For this reason we also investigate the utility of a related quantity called the transfer entropy. The transfer entropy can be written as a difference between mutual informations and has the capability to reveal whether and how the variables are causally related. The application of these information-theoretic measures is tested on some familiar examples using data from the International Satellite Cloud Climatology Project (ISCCP) to identify relations between global cloud cover and other variables, including equatorial Pacific sea surface temperature (SST), over seasonal and El Nino Southern Oscillation (ENSO) cycles.
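
    Transfer entropy can be estimated directly from such data as the conditional mutual information TE(X->Y) = I(Y_t; X_{t-1} | Y_{t-1}). A minimal plug-in sketch on synthetic binary series (a one-step copy, so the directionality is known in advance; real climate applications require careful binning and bias control):

    ```python
    # Plug-in transfer entropy for two binary time series.
    import numpy as np
    from collections import Counter

    def H(*cols):
        counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def transfer_entropy(x, y):
        """TE(X->Y) = I(Y_t ; X_{t-1} | Y_{t-1}) in bits."""
        yt, yp, xp = y[1:], y[:-1], x[:-1]        # target, its past, source past
        return H(yt, yp) + H(xp, yp) - H(yt, xp, yp) - H(yp)

    rng = np.random.default_rng(4)
    x = rng.integers(0, 2, 20000)
    y = np.roll(x, 1)                              # y copies x with a one-step lag
    y[0] = 0
    print(transfer_entropy(x, y), transfer_entropy(y, x))  # ~1 bit vs ~0 bits
    ```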

  3. A theoretical extraction scheme of transport information based on exclusion models

    Institute of Scientific and Technical Information of China (English)

    Chen Hua; Du Lei; Qu Cheng-Li; Li Wei-Hua; He Liang; Chen Wen-Hao; Sun Peng

    2010-01-01

    In order to explore how to extract more transport information from current fluctuations, a theoretical extraction scheme is presented for a single-barrier structure based on exclusion models, which include the counter-flows model and the tunnel model. The first four cumulants of these two exclusion models are computed in a single-barrier structure, and their characteristics are obtained. A scheme using the first three cumulants is devised to check whether a transport process follows the counter-flows model, the tunnel model or neither of them. Time series generated by Monte Carlo techniques are adopted to validate the extraction procedure, and the result is reasonable.
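
    The first four cumulants used by such a scheme follow from the standard moment relations (kappa_1 = mean, kappa_2 = variance, kappa_3 = third central moment, kappa_4 = m_4 - 3 m_2^2). An illustrative computation on invented binomial counting statistics:

    ```python
    # First four cumulants of transferred-charge counts from sample moments.
    import numpy as np

    def first_four_cumulants(samples):
        m = samples.mean()
        d = samples - m
        m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
        return m, m2, m3, m4 - 3 * m2**2            # kappa_1 .. kappa_4

    rng = np.random.default_rng(5)
    counts = rng.binomial(n=100, p=0.3, size=100000)  # Monte Carlo counting stats
    print(np.round(first_four_cumulants(counts), 3))
    ```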

  4. Analysis of methods. [information systems evolution environment

    Science.gov (United States)

    Mayer, Richard J. (Editor); Ackley, Keith A.; Wells, M. Sue; Mayer, Paula S. D.; Blinn, Thomas M.; Decker, Louis P.; Toland, Joel A.; Crump, J. Wesley; Menzel, Christopher P.; Bodenmiller, Charles A.

    1991-01-01

    Information is one of an organization's most important assets. For this reason the development and maintenance of an integrated information system environment is one of the most important functions within a large organization. The Integrated Information Systems Evolution Environment (IISEE) project has as one of its primary goals a computerized solution to the difficulties involved in the development of integrated information systems. To develop such an environment a thorough understanding of the enterprise's information needs and requirements is of paramount importance. This document is the current release of the research performed by the Integrated Development Support Environment (IDSE) Research Team in support of the IISEE project. Research indicates that an integral part of any information system environment would be multiple modeling methods to support the management of the organization's information. Automated tool support for these methods is necessary to facilitate their use in an integrated environment. An integrated environment makes it necessary to maintain an integrated database which contains the different kinds of models developed under the various methodologies. In addition, to speed the process of development of models, a procedure or technique is needed to allow automatic translation from one methodology's representation to another while maintaining the integrity of both. The purpose for the analysis of the modeling methods included in this document is to examine these methods with the goal being to include them in an integrated development support environment. To accomplish this and to develop a method for allowing intra-methodology and inter-methodology model element reuse, a thorough understanding of multiple modeling methodologies is necessary. Currently the IDSE Research Team is investigating the family of Integrated Computer Aided Manufacturing (ICAM) DEFinition (IDEF) languages IDEF(0), IDEF(1), and IDEF(1x), as well as ENALIM, Entity

  5. Fish stock assessment under data limitations developing a new method based on a size-structured theoretical ecology framework

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros

    -series. The method provides estimates of fishing mortality and the FMSY reference point; it is tested and validated, and is implemented as a software package, making it easy to use by stakeholders at different levels. The basis of the method is a size-based theoretical ecology framework that describes exploited fish ... for fish stocks that are not assessed. The goal of the thesis is to develop a new data-limited stock assessment method that is: rooted in theoretical ecology, requires only information about the size composition of the catch or surveys (i.e. aging is not required), and does not require time ...-likelihood optimisation framework to estimate model parameters. The data-limited method estimates at the same time the fishing mortality rate and the biological reference point FMSY. Minimum data requirements consist of a single size frequency distribution from the commercial catch or a scientific survey. If the total...

  6. Theoretical frameworks informing family-based child and adolescent obesity interventions

    DEFF Research Database (Denmark)

    Alulis, Sarah; Grabowski, Dan

    2017-01-01

    BACKGROUND: Child and adolescent obesity trends are rising throughout the world, revealing treatment difficulties and a lack of consensus about treatment. The family system is broadly viewed as a potential setting for facilitation of behaviour change. Therefore, family-based interventions have come ... into focus. However, the use of theoretical frameworks to strengthen these interventions is rare and very uneven. OBJECTIVE AND METHOD: To conduct a qualitative meta-synthesis of family-based interventions for child and adolescent obesity to identify the theoretical frameworks applied, thus understanding how ... theory is used in practice. A literature review was conducted between January and March 2016. A total of 35 family-based interventions were selected for analysis. RESULTS: Eleven interventions explicitly stated that theory guided the development and were classified as theory-inspired. The social...

  7. Theoretical and Experimental Research of Error of Method of Thermocouple with Controlled Profile of Temperature Field

    Science.gov (United States)

    Jun, Su; Kochan, O.; Chunzhi, Wang; Kochan, R.

    2015-12-01

    The method of studying and experimental research into the error of the method of a thermocouple with a controlled profile of the temperature field along the main thermocouple are considered in this paper. Experimentally determined values of the error of the method are compared with theoretical estimates obtained using Newton's law of cooling. They agree well.

  9. A Practice Theoretical Exploration of Information Sharing and Trust in a Dispersed Community of Design Scholars

    Science.gov (United States)

    Pilerot, Ola

    2013-01-01

    Introduction: This paper presents an exploration of information sharing and trust in a geographically dispersed network of design scholars. Method: The study used a practice theory approach to identify aspects of trust in relation to information sharing. The empirical material consists of 15 in-depth interviews with design scholars from four…

  10. A Theoretical Model of Health Information Technology Usage Behaviour with Implications for Patient Safety

    Science.gov (United States)

    Holden, Richard J.; Karsh, Ben-Tzion

    2009-01-01

    Primary objective: much research and practice related to the design and implementation of information technology in health care has been atheoretical. It is argued that using extant theory to develop testable models of health information technology (HIT) benefits both research and practice. Methods and procedures: several theories of motivation,…

  11. Information propagation and collective consensus in blogosphere: a game-theoretical approach

    CERN Document Server

    Liu, Lianghuan; Fu, Feng; Wang, Long

    2007-01-01

    In this paper, we study information propagation in an empirical blogging network using a game-theoretical approach. The blogging network has the small-world property and is scale-free. Individuals in the blogosphere coordinate their decisions according to their idiosyncratic preferences and the choices of their neighbors. We find that, corresponding to different initial conditions and weights, the equilibrium frequency of discussions has a transition from high to low as a result of the common interest in the topics specified by payoff matrices. Furthermore, under recommendation, namely, when individuals in blogging networks refer to additional bloggers' resources besides their nearest neighbors, preferentially according to the popularity of the blogs, the whole blogging network evolves extremely rapidly into a consensus (absorbing) state. Our results reflect the dynamic pattern of information propagation in blogging networks.

  12. Information-Theoretic Measures Predict the Human Judgment of Rhythm Complexity.

    Science.gov (United States)

    de Fleurian, Remi; Blackwell, Tim; Ben-Tal, Oded; Müllensiefen, Daniel

    2017-04-01

    To formalize the human judgment of rhythm complexity, we used five measures from information theory and algorithmic complexity to measure the complexity of 48 artificially generated rhythmic sequences. We compared these measurements to human prediction accuracy and easiness judgments obtained from a listening experiment, in which 32 participants guessed the last beat of each sequence. We also investigated the modulating effects of musical expertise and general pattern identification ability. Entropy rate and Kolmogorov complexity were correlated with prediction accuracy, and highly correlated with easiness judgments. A logistic regression showed main effects of musical training, entropy rate, and Kolmogorov complexity, and an interaction between musical training and both entropy rate and Kolmogorov complexity. These results indicate that information-theoretic concepts capture some salient features of the human judgment of rhythm complexity, and they confirm the influence of musical expertise on complexity judgments.
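
    As a concrete illustration of two of the measures named above, the sketch below computes a block-entropy estimate of entropy rate and a Lempel-Ziv (LZ76) phrase count, a standard proxy for Kolmogorov complexity, for a toy binary rhythm. The helper names and the example pattern are illustrative assumptions, not the authors' code.

        import math
        from collections import Counter

        def shannon_entropy(symbols):
            # plug-in Shannon entropy (bits) of the empirical symbol distribution
            counts = Counter(symbols)
            n = len(symbols)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        def entropy_rate(seq, k=2):
            # crude block-entropy estimate: H(k-blocks) - H((k-1)-blocks)
            blocks = lambda m: [tuple(seq[i:i + m]) for i in range(len(seq) - m + 1)]
            return shannon_entropy(blocks(k)) - shannon_entropy(blocks(k - 1))

        def lz76_phrases(seq):
            # Lempel-Ziv (1976) phrase count: parse into shortest new phrases
            s, phrases, i = "".join(map(str, seq)), set(), 0
            while i < len(s):
                j = i + 1
                while s[i:j] in phrases and j <= len(s):
                    j += 1
                phrases.add(s[i:j])
                i = j
            return len(phrases)

        rhythm = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0]  # toy onset pattern
        print(entropy_rate(rhythm), lz76_phrases(rhythm))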

  13. Non-Gaussian Information-Theoretical Analytics for Diagnostic and Inference of Hydroclimatic Extremes

    Science.gov (United States)

    Pires, Carlos A. L.; Perdigão, Rui A. P.

    2016-04-01

    Hydroclimatic spatiotemporal distributions exhibit significant non-Gaussianity, with particularly overweighted extremes, rendering their diagnosis and inference suboptimal with traditional statistical techniques. In order to overcome that limitation, we introduce and discuss a set of information-theoretic methodologies for statistical diagnosis and inference from exploratory variables of the general atmospheric and oceanic circulation in the case of non-Gaussian joint probability distributions. Moreover, the nonlinear information among various large-scale ocean-atmospheric processes is explored, bringing added predictability to elusive weather and hydrologic extremes relative to the current state of the art in nonlinear geophysics. The methodologies are illustrated with the analysis and prediction of resonant ocean-atmospheric thermodynamic anomaly spells underneath high-profile floods and droughts.

  14. Information theoretical approach to discovering solar wind drivers of the outer radiation belt

    Science.gov (United States)

    Wing, Simon; Johnson, Jay R.; Camporeale, Enrico; Reeves, Geoffrey D.

    2016-10-01

    The solar wind-magnetosphere system is nonlinear. The solar wind drivers of geosynchronous electrons in the energy range 1.8-3.5 MeV are investigated using mutual information, conditional mutual information (CMI), and transfer entropy (TE). These information-theoretic tools can establish linear and nonlinear relationships as well as information transfer. The information transfer from solar wind velocity (Vsw) to geosynchronous MeV electron flux (Je) peaks with a lag time of 2 days. As previously reported, Je is anticorrelated with solar wind density (nsw) with a lag of 1 day. However, this lag time and anticorrelation can be attributed at least partly to the correlation of Je(t + 2 days) with Vsw(t) and the anticorrelation of nsw(t + 1 day) with Vsw(t). Analyses of solar wind driving of the magnetosphere need to consider the large lag times, up to 3 days, in the (Vsw, nsw) anticorrelation. Using CMI to remove the effects of Vsw, the response of Je to nsw is 30% smaller. Nonstationarity in the system dynamics is investigated using windowed TE. When the data are ordered according to transfer entropy value, it is possible to understand details of the triangle distribution that has been identified between Je(t + 2 days) and Vsw(t).
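
    A minimal sketch of the kind of lagged dependence scan described above, using a simple histogram estimator of mutual information. The array names, bin count, and synthetic data are illustrative assumptions, not the study's pipeline.

        import numpy as np

        def mutual_info(x, y, bins=8):
            # plug-in mutual information (bits) from a 2-D histogram
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

        def lagged_mi(driver, response, max_lag=5):
            # MI(driver(t), response(t + lag)) for lag = 0 .. max_lag
            n = len(driver)
            return [mutual_info(driver[:n - lag], response[lag:])
                    for lag in range(max_lag + 1)]

        rng = np.random.default_rng(1)
        vsw = rng.normal(size=500)                          # stand-in for Vsw(t)
        je = np.roll(vsw, 2) + 0.5 * rng.normal(size=500)   # flux responding at lag 2
        print(np.argmax(lagged_mi(vsw, je)))                # expect a peak near lag 2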

  15. Several foundational and information theoretic implications of Bell’s theorem

    Science.gov (United States)

    Kar, Guruprasad; Banik, Manik

    2016-08-01

    In 1935, Albert Einstein and two colleagues, Boris Podolsky and Nathan Rosen (EPR), developed a thought experiment to demonstrate what they felt was a lack of completeness in quantum mechanics (QM). EPR also postulated the existence of a more fundamental theory in which the physical reality of any system would be completely described by the variables/states of that fundamental theory. This variable is commonly called a hidden variable, and the theory is called hidden variable theory (HVT). In 1964, John Bell proposed an empirically verifiable criterion to test for the existence of these HVTs. He derived an inequality which must be satisfied by any theory that fulfils the conditions of locality and reality, and showed that QM, as it violates this inequality, is incompatible with any local-realistic theory. Later it was shown that Bell's inequality (BI) can be derived from different sets of assumptions, and it also finds applications in useful information-theoretic protocols. In this review, we discuss various foundational as well as information-theoretic implications of BI. We also discuss some restricted features of quantum nonlocality and elaborate on the roles of the uncertainty principle and the complementarity principle in explaining them.
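
    For reference, the most commonly used form of BI is the CHSH inequality: for local-realistic correlations E between measurement settings a, a' on one side and b, b' on the other,

        \left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \le 2,

    whereas quantum mechanics reaches values up to 2\sqrt{2} (the Tsirelson bound).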

  16. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far, but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
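
    The partition-based measure named above, the topological information content, is the Shannon entropy of a vertex partition. The sketch below uses vertex degree as a crude stand-in for the automorphism-orbit partition of the classical definition; the toy graph is an invented example.

        import math
        from collections import Counter

        def partition_entropy(labels):
            # Shannon entropy (bits) of a vertex partition
            counts = Counter(labels)
            n = len(labels)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        # adjacency list of a small toy molecular skeleton (isobutane-like)
        graph = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}

        # Rashevsky-style topological information content uses orbits of the
        # automorphism group; vertex degree is used here as a crude proxy partition.
        degree_classes = [len(graph[v]) for v in graph]
        print(partition_entropy(degree_classes))   # ~0.81 bits for this graph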

  17. Methods to determine stratification efficiency of thermal energy storage processes–Review and theoretical comparison

    DEFF Research Database (Denmark)

    Haller, Michel; Cruickshank, Chynthia; Streicher, Wolfgang;

    2009-01-01

    This paper reviews different methods that have been proposed to characterize thermal stratification in energy storages from a theoretical point of view. Specifically, this paper focuses on the methods that can be used to determine the ability of a storage to promote and maintain stratification...

  18. Information-theoretic indices and an approximate significance test for testing the molecular clock hypothesis with genetic distances.

    Science.gov (United States)

    Xia, Xuhua

    2009-09-01

    Distance-based phylogenetic methods are widely used in biomedical research. However, distance-based dating of speciation events and the test of the molecular clock hypothesis are relatively underdeveloped. Here I develop an approximate test of the molecular clock hypothesis for distance-based trees, as well as information-theoretic indices that have been used frequently in model selection, for use with distance matrices. The results are in good agreement with the conventional sequence-based likelihood ratio test. Among the information-theoretic indices, AICu is the most consistent with the sequence-based likelihood ratio test. The confidence in model selection by the indices can be evaluated by bootstrapping. I illustrate the usage of the indices and the approximate significance test with both empirical and simulated sequences. The tests show that distance matrices from protein gel electrophoresis and from genome rearrangement events do not violate the molecular clock hypothesis, and that the evolution of the third codon position conforms to the molecular clock hypothesis better than the second codon position in vertebrate mitochondrial genes. I outlined evolutionary distances that are appropriate for phylogenetic reconstruction and dating.
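
    For reference, the most familiar of these indices have the forms below, with L the maximized likelihood, k the number of estimated parameters, and n the sample size (AICu is a further small-sample variant along the same lines; its exact form for distance matrices is given in the paper):

        \mathrm{AIC} = -2\ln L + 2k, \qquad \mathrm{AIC_c} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}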

  19. A new method for precursory information extraction: Slope-difference information method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new method for precursory information extraction, the slope-difference information method, is proposed in this paper for daily-mean-value precursory data sequences. Taking Tangshan station as an example, the calculation is carried out for full-time-domain leveling data, and the method is tested and compared with several other methods. The results indicate that the method is very effective for extracting short-term precursory information from daily mean values once optimization is performed. It is therefore valuable for popularization and application.

  20. An Information Theoretic Model of Information Processing in the Drosophila Olfactory System: the Role of Inhibitory Neurons for System Efficiency

    Directory of Open Access Journals (Sweden)

    Faramarz eFaghihi

    2013-12-01

    Full Text Available Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the fly's olfactory system and the environment and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show an expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and the firing threshold of the Kenyon cells required to obtain maximum mutual information for both low and high odor concentrations. However, contradicting everyday experience, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. But when inhibition on the mushroom body is included, mutual information remains at high levels independent of other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which the system's efficiency would be substantially reduced.

  1. An information theoretic approach to select alternate subsets of predictors for data-driven hydrological models

    Science.gov (United States)

    Taormina, R.; Galelli, S.; Karakaya, G.; Ahipasaoglu, S. D.

    2016-11-01

    This work investigates the uncertainty associated with the presence of multiple subsets of predictors yielding data-driven models with the same, or similar, predictive accuracy. To handle this uncertainty effectively, we introduce a novel input variable selection algorithm, called Wrapper for Quasi Equally Informative Subset Selection (W-QEISS), specifically conceived to identify all alternate subsets of predictors in a given dataset. The search process is based on a four-objective optimization problem that minimizes the number of selected predictors, maximizes the predictive accuracy of a data-driven model and optimizes two information-theoretic metrics of relevance and redundancy, which guarantee that the selected subsets are highly informative and have little intra-subset similarity. The algorithm is first tested on two synthetic test problems and then demonstrated on a real-world streamflow prediction problem in the Yampa River catchment (US). Results show that complex hydro-meteorological datasets are characterized by a large number of alternate subsets of predictors, which provides useful insights into the underlying physical processes. Furthermore, the presence of multiple subsets of predictors, and associated models, helps find a better trade-off between different measures of predictive accuracy commonly adopted for hydrological modelling problems.
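
    W-QEISS itself solves the four-objective problem described above; the sketch below conveys only the flavor of the relevance/redundancy trade-off, using a greedy filter in which absolute correlation stands in for the information-theoretic metrics. All names and data are illustrative assumptions.

        import numpy as np

        def greedy_select(X, y, k=3):
            # mRMR-flavoured filter: maximize relevance to y, penalize redundancy
            # with already-selected predictors; |correlation| stands in for the
            # information-theoretic relevance/redundancy metrics of W-QEISS.
            selected = []
            while len(selected) < k:
                best, best_score = None, -np.inf
                for j in range(X.shape[1]):
                    if j in selected:
                        continue
                    rel = abs(np.corrcoef(X[:, j], y)[0, 1])
                    red = max((abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                               for s in selected), default=0.0)
                    if rel - red > best_score:
                        best, best_score = j, rel - red
                selected.append(best)
            return selected

        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 6))
        y = X[:, 0] + 0.8 * X[:, 3] + 0.1 * rng.normal(size=300)
        print(greedy_select(X, y))   # expect columns 0 and 3 among the picks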

  2. Information theoretical approaches to chick-a-dee calls of Carolina chickadees (Poecile carolinensis).

    Science.gov (United States)

    Freeberg, Todd M; Lucas, Jeffrey R

    2012-02-01

    One aim of this study was to apply information theoretical analyses to understanding the structural complexity of chick-a-dee calls of Carolina chickadees, Poecile carolinensis. A second aim of this study was to compare this structural complexity to that of the calls of black-capped chickadees, P. atricapillus, described in an earlier published report (Hailman, Ficken, & Ficken, 1985). Chick-a-dee calls were recorded from Carolina chickadees in a naturalistic observation study in eastern Tennessee. Calls were analyzed using approaches from information theory, including transition probability matrices, Zipf's rules, entropies, and information coding capacities of calls and notes of calls. As described for black-capped chickadees, calls of Carolina chickadees exhibited considerable structural complexity. Most results suggested that the call of Carolina chickadees is more structurally complex than that of black-capped chickadees. These findings add support to the growing literature on the complexity of this call system in Paridae species. Furthermore, these results point to the feasibility of detailed cross-species comparative analyses that may allow strong testing of hypotheses regarding signal evolution.
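
    One of the information-theoretic tools named above, the transition probability matrix with its per-note entropies, can be sketched as follows. The note alphabet and toy call strings are invented for illustration (A, B, C, D standing in for note types), not data from the study.

        import numpy as np

        notes = "ABCD"
        calls = ["AABDD", "ABCDD", "AACD", "ABDDD"]   # toy chick-a-dee calls

        idx = {n: i for i, n in enumerate(notes)}
        T = np.zeros((4, 4))
        for call in calls:
            for a, b in zip(call, call[1:]):          # count note-to-note transitions
                T[idx[a], idx[b]] += 1
        T = T / T.sum(axis=1, keepdims=True)          # row-stochastic matrix

        with np.errstate(divide="ignore", invalid="ignore"):
            H_rows = -np.nansum(np.where(T > 0, T * np.log2(T), 0), axis=1)
        print(T.round(2))
        print(H_rows.round(2))   # per-note uncertainty about the following note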

  3. Scaling properties of information-theoretic quantities in density functional reactivity theory.

    Science.gov (United States)

    Rong, Chunying; Lu, Tian; Ayers, Paul W; Chattaraj, Pratim K; Liu, Shubin

    2015-02-21

    Density functional reactivity theory (DFRT) employs the electron density and its related quantities to describe reactivity properties of a molecular system. Quantities from information theory such as Shannon entropy, Fisher information, and Ghosh-Berkowitz-Parr entropy are natural descriptors within the DFRT framework. They have been previously employed to quantify electrophilicity, nucleophilicity and the steric effect. In this work, we examine their scaling properties with respect to the total number of electrons. To that end, we considered their representations in terms of both the electron density and the shape function for isolated atoms and neutral molecules. We also investigated their atomic behaviors in different molecules with three distinct partitioning schemes: Bader's zero-flux, Becke's fuzzy atom, and Hirshfeld's stockholder partitioning. Strong linear relationships of these quantities as a function of the total electron population are reported for atoms, molecules, and atoms in molecules. These relationships reveal how these information-theoretic quantities depend on the molecular environment and the electron population. These trends also indicate how these quantities can be used to explore chemical reactivity for real chemical processes.
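
    For reference, the three quantities named above are commonly written in terms of the electron density ρ(r) as follows, with t the kinetic energy density, t_TF its Thomas-Fermi counterpart, and c and k constants:

        S_S = -\int \rho(\mathbf{r}) \ln \rho(\mathbf{r})\, d\mathbf{r}, \qquad
        I_F = \int \frac{|\nabla \rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r}, \qquad
        S_{\mathrm{GBP}} = \tfrac{3}{2}\, k \int \rho(\mathbf{r}) \left[ c + \ln \frac{t(\mathbf{r};\rho)}{t_{\mathrm{TF}}(\mathbf{r};\rho)} \right] d\mathbf{r}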

  4. Practical Methods for Information Security Risk Management

    Directory of Open Access Journals (Sweden)

    Cristian AMANCEI

    2011-01-01

    Full Text Available The purpose of this paper is to present some directions for performing risk management for information security. The article covers practical methods: a questionnaire that assesses internal control, and an evaluation based on existing controls as part of vulnerability assessment. The methods presented contain all the key elements that figure in risk management, through the elements proposed for the evaluation questionnaire, the list of threats, resource classification and evaluation, the correlation between risks and controls, and residual risk computation.
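
    A minimal sketch of the residual-risk computation mentioned last; the multiplicative likelihood-impact model and the example figures are common-practice assumptions, not taken from the article.

        # residual risk = inherent risk reduced by control effectiveness (assumed model)
        threats = [("data breach", 4, 5, 0.7), ("insider misuse", 3, 4, 0.5)]
        for name, likelihood, impact, control_effectiveness in threats:
            inherent = likelihood * impact               # 1-5 ordinal scales
            residual = inherent * (1 - control_effectiveness)
            print(f"{name}: inherent={inherent}, residual={residual:.1f}")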

  5. RESEARCH ON THEORETIC EVIDENCE AND REALIZATION OF DIRECTLY-MEAN EMD METHOD

    Institute of Scientific and Technical Information of China (English)

    Zhong Youming; Qin Shuren; Tang Baoping

    2004-01-01

    Empirical mode decomposition (EMD) is the method and principle for decomposing signals in the Hilbert-Huang transform (HHT) in signal analysis, while directly-mean EMD is an improved EMD method presented by N. E. Huang, the inventor of HHT, aimed at solving problems of the EMD principle. Although the directly-mean EMD method is remarkable for its advantages, and N. E. Huang gave a method to realize it, he did not provide the theoretical evidence for the method, so the feasibility of the idea and the correctness of realizing directly-mean EMD remained undetermined. To address this, a deep study of the forming process of complex signals is made and the stationary point principle and asymptotic stationary point principle involved are demonstrated; thus theoretical evidence for, and the correct way of realizing, the directly-mean EMD method are presented for the first time. Simulation examples demonstrating the presented idea are given.

  6. New theoretical methods for studies on electrons and positrons scattering involving multichannel processes

    CERN Document Server

    Lara, O

    1995-01-01

    It is well known that multichannel effects strongly influence low-energy electron scattering by atoms and molecules. Nevertheless, the inclusion of such effects in calculations of scattering cross sections remains a considerable task for researchers in the area due to the complexity of the problem. In the present study we aim to develop new theoretical methods which can be efficiently applied to multichannel scattering studies. Two new theoretical formalisms, namely the Multichannel C-Functional methods, have been proposed. Both methods were developed on the basis of the well-known distorted-wave method combined with the Schwinger variational principle. In addition, an iterative method proposed by Horacek and Sasakawa in 1983, the method of continued fractions, is adapted for the first time to multichannel scattering; applications of the method of continued fractions are now in progress. Numerical tests of these three methods were carried out through applications to solve multichannel scattering problems involving the interaction o...

  7. Information-theoretic equilibration: the appearance of irreversibility under complex quantum dynamics.

    Science.gov (United States)

    Ududec, Cozmin; Wiebe, Nathan; Emerson, Joseph

    2013-08-23

    The question of how irreversibility can emerge as a generic phenomenon when the underlying mechanical theory is reversible has been a long-standing fundamental problem for both classical and quantum mechanics. We describe a mechanism for the appearance of irreversibility that applies to coherent, isolated systems in a pure quantum state. This equilibration mechanism requires only an assumption of sufficiently complex internal dynamics and natural information-theoretic constraints arising from the infeasibility of collecting an astronomical amount of measurement data. Remarkably, we are able to prove that irreversibility can be understood as typical without assuming decoherence or restricting to coarse-grained observables, and hence occurs under distinct conditions and time scales from those implied by the usual decoherence point of view. We illustrate the effect numerically in several model systems and prove that the effect is typical under the standard random-matrix conjecture for complex quantum systems.

  8. A Theoretical Framework for Soft-Information-Based Synchronization in Iterative (Turbo) Receivers

    Directory of Open Access Journals (Sweden)

    Lottici Vincenzo

    2005-01-01

    Full Text Available This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters like the carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that such a "turbo-estimation" approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization that allows one to derive parameter estimation procedures for carrier phase and frequency offset, as well as for timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.
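
    In this framing, the parameter update is the standard EM iteration; writing r for the received signal, b for the transmitted symbols (the latent data supplied as soft decisions), and θ for the synchronization parameters, one step reads:

        \hat{\theta}^{(k+1)} = \arg\max_{\theta}\; \mathbb{E}_{b \mid r,\, \hat{\theta}^{(k)}}\!\left[ \ln p(r, b; \theta) \right]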

  9. Some Observations on the Concepts of Information-Theoretic Entropy and Randomness

    Directory of Open Access Journals (Sweden)

    Jonathan D.H. Smith

    2001-02-01

    Full Text Available Certain aspects of the history, derivation, and physical application of the information-theoretic entropy concept are discussed. Pre-dating Shannon, the concept is traced back to Pauli. A derivation from first principles is given, without use of approximations. The concept depends on the underlying degree of randomness. In physical applications, this translates to dependence on the experimental apparatus available. An example illustrates how this dependence affects Prigogine's proposal for the use of the Second Law of Thermodynamics as a selection principle for the breaking of time symmetry. The dependence also serves to yield a resolution of the so-called "Gibbs Paradox." Extension of the concept from the discrete to the continuous case is discussed. The usual extension is shown to be dimensionally incorrect. Correction introduces a reference density, leading to the concept of Kullback entropy. Practical relativistic considerations suggest a possible proper reference density.
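
    The dimensional point is easy to state: in h = -∫ p(x) ln p(x) dx the density p(x) carries units, so the argument of the logarithm is not dimensionless. Introducing a reference density m(x) yields the corrected, Kullback-type form:

        H[p\,; m] = -\int p(x)\, \ln \frac{p(x)}{m(x)}\, dx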

  10. Understanding confounding effects in linguistic coordination: an information-theoretic approach

    CERN Document Server

    Gao, Shuyang; Galstyan, Aram

    2014-01-01

    We suggest an information-theoretic approach for measuring linguistic style coordination in dialogues. The proposed measure has a simple predictive interpretation and can account for various confounding factors through proper conditioning. We revisit some of the previous studies that reported strong signatures of stylistic accommodation, and find that a significant part of the observed coordination can be attributed to a simple confounding effect, length coordination. Specifically, longer utterances tend to be followed by longer responses, which gives rise to spurious correlations in the other stylistic features. We propose a test to distinguish correlations in length due to contextual factors (topic of conversation, user verbosity, etc.) from turn-by-turn coordination. We also suggest a test to identify whether stylistic coordination persists even after accounting for length coordination and contextual factors.

  11. An Information-Theoretic Link Between Spacetime Symmetries and Quantum Linearity

    CERN Document Server

    Parwani, R R

    2004-01-01

    A nonlinear generalisation of Schrodinger's equation is obtained using information-theoretic arguments. The nonlinearities are controlled by an intrinsic length scale and involve derivatives to all orders, thus making the equation mildly nonlocal. The nonlinear equation is homogeneous, separable, and conserves probability, but is not invariant under spacetime symmetries. Spacetime symmetries are recovered when a dimensionless parameter is tuned to vanish, whereby linearity is simultaneously established and the length scale becomes hidden. It is thus suggested that if, in the search for a more basic foundation for Nature's Laws, an inference principle is given precedence over symmetry requirements, then the symmetries of spacetime and the linearity of quantum theory might both be emergent properties that are intrinsically linked. Supporting arguments are provided for this point of view and some testable phenomenological consequences highlighted. The generalised Klein-Gordon and Dirac equations are also studied, lea...

  12. Thermodynamic analysis of the theoretical energy consumption in the removal of organic contaminants by physical methods

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The essential requirements for evaluating the sustainable development of a system and the thermodynamic framework of the energy conservation mechanism in the waste-removal process are proposed.A thermodynamic method of analysis based on the first and second laws of thermodynamics is suggested as a means to analyze the theoretical energy consumption for the removal of organic contaminants by physical methods.Moreover,the theoretical energy consumption for the removal by physical methods of different kinds of representative organic contaminants with different initial concentrations and amounts is investigated at 298.15 K and 1.01325 × 105 Pa.The results show that the waste treatment process has a high energy consumption and that the theoretical energy consumption for the removal of organic contaminants increases with the decrease of their initial concentrations in aqueous solutions.The theoretical energy consumption for the removal of different organic contaminants varies dramatically.Furthermore,the theoretical energy consumption increases greatly with the increase in the amount to be removed.

  13. Integrated Modeling in Earth and Space Sciences: An Information Theoretic Framework

    Science.gov (United States)

    Sharma, A. S.; Kalnay, E.

    2011-12-01

    Most natural phenomena exhibit multiscale behavior, which is an underlying reason for the challenges in modeling them. The recognition that key problems, such as extreme events, natural hazards and climate change, require multi-disciplinary approaches to develop models that integrate many natural and anthropogenic phenomena demands new approaches to the modeling of such systems. Information theory, which emphasizes the inherent features in observational data independent of modeling assumptions, can be used to develop a framework for multi-disciplinary models by integrating the data of the leading processes in multiple systems. An important measure of the inter-relationship among the different phenomena is the lead time among them. Widely used quantities such as the cross-correlation function represent the linear dependence among the variables and are limited in their ability to describe complex driven systems, which are essentially nonlinear. The mutual information function, which represents the expected average degree of dependence incorporating all orders of nonlinearity, provides the characteristic times inherent in the data and can be used as a first step in the development of integrated models. This function is used in two systems with widely separated time scales. The first case is the solar wind - magnetosphere interaction, where the correlated data yield ~ 5 hr as the inherent time scale for the magnetospheric processes. The second case is a study of the inter-relationship between natural and anthropogenic phenomena, with the mutual information functions computed from data on the global gross product, temperature and population. These functions show a time delay of ~15 yrs between changes in global temperature and population as well as gross product, thus providing a measure of the interdependency among the variables underlying climate change. The results from studies of extreme events and an information theoretic modeling

  14. Statistical Mechanics and Information-Theoretic Perspectives on Complexity in the Earth System

    Directory of Open Access Journals (Sweden)

    Konstantinos Eftaxias

    2013-11-01

    Full Text Available This review provides a summary of methods originated in (non-)equilibrium statistical mechanics and information theory, which have recently found successful applications to quantitatively studying complexity in various components of the complex system Earth. Specifically, we discuss two classes of methods: (i) entropies of different kinds (e.g., on the one hand classical Shannon and Rényi entropies, as well as non-extensive Tsallis entropy based on symbolic dynamics techniques, and, on the other hand, approximate entropy, sample entropy and fuzzy entropy); and (ii) measures of statistical interdependence and causality (e.g., mutual information and generalizations thereof, transfer entropy, momentary information transfer). We review a number of applications and case studies utilizing the above-mentioned methodological approaches for studying contemporary problems in some exemplary fields of the Earth sciences, highlighting the potentials of different techniques.
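
    For reference, the generalized entropies named above reduce to the Shannon entropy H = -Σ p_i ln p_i in the limit q → 1:

        H_q^{\mathrm{R}} = \frac{1}{1-q}\, \ln \sum_i p_i^{\,q}, \qquad S_q^{\mathrm{T}} = \frac{1}{q-1}\left( 1 - \sum_i p_i^{\,q} \right)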

  15. Information theoretic approach using neural network for determining radiometer observations from radar and vice versa

    Science.gov (United States)

    Kannan, Srinivasa Ramanujam; Chandrasekar, V.

    2016-05-01

    Even though the two rain-measuring instruments onboard TRMM, the radar and the radiometer, observe the same rain scenes, they are fundamentally different instruments. The radar is an active instrument and measures the backscatter component from the vertical rain structure, whereas the radiometer is a passive instrument that obtains an integrated observation of the full depth of the cloud and rain structure. Further, their spatial resolutions on the ground are different. Nevertheless, both instruments observe the same rain scene and retrieve three-dimensional rainfall products. Hence it is only natural to ask what type of information about radiometric observations can be directly retrieved from radar observations. While there are several ways to answer this question, an information-theoretic approach using neural networks is described in the present work to determine whether radiometer observations can be predicted from radar observations. A database of TMI brightness temperatures and collocated TRMM vertical attenuation-corrected reflectivity factors from the year 2012 was considered. The entire database was further classified according to surface type. Separate neural networks were trained for land and ocean, and the results are presented.
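
    A hedged sketch of the kind of mapping described above: a small neural network regresses brightness temperatures on a vertical reflectivity profile. The synthetic data only mimic the shapes involved (e.g., 80 range gates of reflectivity mapped to 9 channel temperatures); the network architecture and all figures are assumptions, not the study's configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        Z = rng.uniform(0, 50, size=(2000, 80))            # fake dBZ profiles
        Tb = 280 - 0.02 * Z.sum(axis=1, keepdims=True) \
             + rng.normal(0, 2, (2000, 1))                 # fake physics
        Tb = np.repeat(Tb, 9, axis=1)                      # 9 channels (fake)

        net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                           random_state=0)
        net.fit(Z[:1500], Tb[:1500])
        print("held-out R^2:", net.score(Z[1500:], Tb[1500:]))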

  16. Mobile Sensors in a CDMA System For Environmental Monitoring: An Information-Theoretic Analysis

    CERN Document Server

    Mohammadi, Elaheh

    2011-01-01

    Use of sensors implanted on cell-phones to monitor environmental parameters such as air pollution, temperature, humidity, noise level, etc. has been the subject of several studies. For instance, it has been demonstrated (Steed and Milton) that the use of cheap measurement devices installed on personal cell-phones allows for a fine reconstruction of geographic maps of environmental effects. In some applications one might be interested in certain statistics of the measurements, rather than the whole data. In fact, transmission of the whole data may compromise the privacy of cell-phone users. It is therefore desirable for the cell-phones to transmit just enough information about their measurements so that the desired statistics can be computed. Furthermore, because the cell-phones already have the capability to transmit data to the base stations, it is desirable to use the same communication architecture to transmit the sensed data. To the best of our knowledge, information-theoretic limits of data t...

  17. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)

    2016-11-01

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
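
    A minimal sketch of the sequential design idea described above: the next high-fidelity design condition is the one whose simulated observation most reduces the posterior variance of a low-fidelity parameter. The model, prior, noise level, and all names are illustrative assumptions, not the paper's Hydra-TH setup.

        import numpy as np

        rng = np.random.default_rng(0)
        thetas = np.linspace(0, 2, 201)               # parameter grid
        prior = np.ones_like(thetas) / len(thetas)    # flat prior
        sigma = 0.1                                   # assumed observation noise

        def low_fi(theta, x):                         # hypothetical low-fidelity model
            return theta * np.exp(-x)

        def expected_post_var(x, prior):
            # average posterior variance over data drawn from the prior predictive
            total = 0.0
            for _ in range(200):
                th = rng.choice(thetas, p=prior)
                y = low_fi(th, x) + rng.normal(0, sigma)
                like = np.exp(-0.5 * ((y - low_fi(thetas, x)) / sigma) ** 2)
                post = like * prior
                post /= post.sum()
                mean = (post * thetas).sum()
                total += (post * (thetas - mean) ** 2).sum()
            return total / 200

        candidates = np.linspace(0.1, 3.0, 15)        # candidate design conditions
        best = min(candidates, key=lambda x: expected_post_var(x, prior))
        print("next high-fidelity design condition:", best)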

  18. A theoretical drought classification method for the multivariate drought index based on distribution properties of standardized drought indices

    Science.gov (United States)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Xia, Youlong; Ouyang, Wei; Shen, Xinyi

    2016-06-01

    Drought indices have been commonly used to characterize different properties of drought, and the need to combine multiple drought indices for accurate drought monitoring has been well recognized. Based on linear combinations of multiple drought indices, a variety of multivariate drought indices have recently been developed for comprehensive drought monitoring to integrate drought information from various sources. For operational drought management, it is generally required to determine thresholds of drought severity for drought classification to trigger a mitigation response during a drought event, aiding stakeholders and policy makers in decision making. Though the classification of drought categories based on univariate drought indices has been well studied, drought classification methods for the multivariate drought index have been less explored, mainly due to the lack of information about its distribution properties. In this study, a theoretical drought classification method is proposed for the multivariate drought index based on a linear combination of multiple indices. Based on the distribution property of the standardized drought index, a theoretical distribution of the linear combined index (LDI) is derived, which can be used for classifying drought with the percentile approach. Application of the proposed method for drought classification of an LDI based on the standardized precipitation index (SPI), standardized soil moisture index (SSI), and standardized runoff index (SRI) is illustrated with climate division data from California, United States. Results from comparison with empirical methods show a satisfactory performance of the proposed method for drought classification.
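
    A minimal sketch of this classification logic, under the assumption that each standardized index is approximately N(0,1) so that a linear combination with a known correlation matrix has a derivable normal distribution. The weights, correlations, thresholds, and SciPy usage are illustrative, not the paper's calibration.

        import numpy as np
        from scipy import stats

        w = np.array([1/3, 1/3, 1/3])            # weights for SPI, SSI, SRI
        R = np.array([[1.0, 0.6, 0.5],
                      [0.6, 1.0, 0.7],
                      [0.5, 0.7, 1.0]])          # assumed index correlations

        z = np.array([-1.2, -0.8, -1.5])         # standardized indices, one month
        ldi = w @ z
        ldi_std = ldi / np.sqrt(w @ R @ w)       # LDI rescaled to unit variance
        pct = stats.norm.cdf(ldi_std) * 100      # percentile under theoretical dist.

        # percentile thresholds for drought categories (illustrative)
        for cat, hi in [("D4", 2), ("D3", 5), ("D2", 10), ("D1", 20), ("D0", 30)]:
            if pct <= hi:
                print(cat, round(pct, 1)); break
        else:
            print("No drought", round(pct, 1))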

  19. Protein folding, protein structure and the origin of life: Theoretical methods and solutions of dynamical problems

    Science.gov (United States)

    Weaver, D. L.

    1982-01-01

    Theoretical methods and solutions of the dynamics of protein folding, protein aggregation, protein structure, and the origin of life are discussed. The elements of a dynamic model representing the initial stages of protein folding are presented. The calculation and experimental determination of the model parameters are discussed. The use of computer simulation for modeling protein folding is considered.

  20. Validation of a Theoretical Model of Diagnostic Classroom Assessment: A Mixed Methods Study

    Science.gov (United States)

    Koh, Nancy

    2012-01-01

    The purpose of the study was to validate a theoretical model of diagnostic, formative classroom assessment called, "Proximal Assessment for Learner Diagnosis" (PALD). To achieve its purpose, the study employed a two-stage, mixed-methods design. The study utilized multiple data sources from 11 elementary level mathematics teachers who…

  1. Synergy between experimental and theoretical methods in the exploration of homogeneous transition metal catalysis

    DEFF Research Database (Denmark)

    Lupp, Daniel; Christensen, Niels Johan; Fristrup, Peter

    2014-01-01

    In this Perspective, we will focus on the use of both experimental and theoretical methods in the exploration of reaction mechanisms in homogeneous transition metal catalysis. We briefly introduce the use of Hammett studies and kinetic isotope effects (KIE). Both of these techniques can be complem...
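
    For reference, a Hammett study fits the standard linear free-energy relation, and a kinetic isotope effect is the rate ratio upon isotopic substitution:

        \log_{10}\frac{k}{k_0} = \rho\,\sigma, \qquad \mathrm{KIE} = \frac{k_{\mathrm{H}}}{k_{\mathrm{D}}}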

  2. Using multilevel modeling and mixed methods to make theoretical progress in microfoundations for strategy research

    OpenAIRE

    Aguinis, Herman; Molina-Azorín, José F.

    2015-01-01

    The microfoundations research agenda presents an expanded theoretical perspective because it considers individuals, their characteristics, and their interactions as relevant variables to help us understand firm-level strategic issues. However, microfoundations empirical research faces unique challenges because processes take place at different levels of analysis and these multilevel processes must be considered simultaneously. We describe multilevel modeling and mixed methods as methodologica...

  4. Information Theoretic Measures to Infer Feedback Dynamics in Coupled Logistic Networks

    Directory of Open Access Journals (Sweden)

    Allison Goodwell

    2015-10-01

    Full Text Available A process network is a collection of interacting time series nodes, in which interactions can range from weak dependencies to complete synchronization. Between these extremes, nodes may respond to each other or external forcing at certain time scales and strengths. Identification of such dependencies from time series can reveal the complex behavior of the system as a whole. Since observed time series datasets are often limited in length, robust measures are needed to quantify strengths and time scales of interactions and their unique contributions to the whole system behavior. We generate coupled chaotic logistic networks with a range of connectivity structures, time scales, noise, and forcing mechanisms, and compute variance and lagged mutual information measures to evaluate how detected time dependencies reveal system behavior. When a target node is detected to receive information from multiple sources, we compute conditional mutual information and total shared information between each source node pair to identify unique or redundant sources. While variance measures capture synchronization trends, combinations of information measures provide further distinctions regarding drivers, redundancies, and time dependencies within the network. We find that imposed network connectivity often leads to induced feedback that is identified as redundant links, and cannot be distinguished from imposed causal linkages. We find that random or external driving nodes are more likely to provide unique information than mutually dependent nodes in a highly connected network. In process networks constructed from observed data, the methods presented can be used to infer connectivity, dominant interactions, and systemic behavioral shift.
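
    A minimal sketch of a two-node coupled logistic network of the kind analyzed above, with one directed coupling. The parameter values and the diffusive coupling form are illustrative assumptions, and the trailing lagged correlations are only a crude stand-in for the mutual information and transfer entropy measures discussed.

        import numpy as np

        r, eps, n = 3.9, 0.4, 2000
        x, y = np.empty(n), np.empty(n)
        x[0], y[0] = 0.4, 0.6
        for t in range(n - 1):
            x[t + 1] = r * x[t] * (1 - x[t])          # autonomous driver node
            drive = (1 - eps) * y[t] + eps * x[t]     # y receives input from x
            y[t + 1] = r * drive * (1 - drive)

        # directionality check: y(t+1) should depend on x(t) more than the reverse;
        # an MI or transfer-entropy estimator would replace these raw correlations.
        print(np.corrcoef(x[:-1], y[1:])[0, 1], np.corrcoef(y[:-1], x[1:])[0, 1])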

  5. SOLVING PROBLEMS OF STATISTICS WITH THE METHODS OF INFORMATION THEORY

    Directory of Open Access Journals (Sweden)

    Lutsenko Y. V.

    2015-02-01

    Full Text Available The article presents a theoretical substantiation, methods of numerical calculation, and a software implementation for solving problems of statistics, in particular the study of statistical distributions, with the methods of information theory. On the basis of empirical data, the number of observations used for the analysis of statistical distributions is determined by calculation. The proposed method of calculating the amount of information is not based on assumptions about the independence of observations or about normal distributions, i.e., it is non-parametric; it ensures the correct modeling of nonlinear systems and also allows comparable processing of heterogeneous data (measured in scales of different types) of numeric and non-numeric nature that are measured in different units. Thus, ASC-analysis and the "Eidos" system constitute a modern innovative technology, ready for implementation, for solving problems of statistics with the methods of information theory. This article can be used as a description of laboratory work in disciplines such as: intelligent systems; knowledge engineering and intelligent systems; intelligent technologies and knowledge representation; knowledge representation in intelligent systems; foundations of intelligent systems; introduction to neuromathematics and methods of neural networks; foundations of artificial intelligence; intelligent technologies in science and education; knowledge management; and automated system-cognitive analysis with the "Eidos" intelligent system, which the author is currently developing; but also in other disciplines associated with the transformation of data into information, its transformation into knowledge, and the application of this knowledge to solve problems of identification, forecasting, decision making and research of the modeled subject area (which covers virtually all subjects in all fields of science).

  6. Distributed optical fiber-based theoretical and empirical methods monitoring hydraulic engineering subjected to seepage velocity

    Science.gov (United States)

    Su, Huaizhi; Tian, Shiguang; Cui, Shusheng; Yang, Meng; Wen, Zhiping; Xie, Wei

    2016-09-01

    In order to systematically investigate the general principle and method of monitoring seepage velocity in hydraulic engineering, theoretical analysis and physical experiments were carried out based on distributed fiber-optic temperature sensing (DTS) technology. In the analysis of the coupling between the seepage field and the temperature field in embankment dam or dike engineering, a simplified model was constructed to describe the coupling relationship of the two fields. Different arrangement schemes of optical fiber and measuring approaches of temperature were applied to the model, and the idea of inversion analysis was further used. The theoretical method of monitoring seepage velocity in hydraulic engineering was finally proposed. A new concept, namely the effective thermal conductivity, was proposed, referring to the thermal conductivity coefficient in the transient hot-wire method. The influence of heat conduction and seepage is well reflected by this new concept, which proved to be a promising basis for developing an empirical method of monitoring seepage velocity in hydraulic engineering.

  7. Understanding uncertainty in seagrass injury recovery: an information-theoretic approach.

    Science.gov (United States)

    Uhrin, Amy V; Kenworthy, W Judson; Fonseca, Mark S

    2011-06-01

    Vessel groundings cause severe, persistent gaps in seagrass beds. Varying degrees of natural recovery have been observed for grounding injuries, limiting recovery prediction capabilities, and therefore, management's ability to focus restoration efforts where natural recovery is unlikely. To improve our capacity for predicting seagrass injury recovery, we used an information-theoretic approach to evaluate the relative contribution of specific injury attributes to the natural recovery of 30 seagrass groundings in Florida Keys National Marine Sanctuary, Florida, USA. Injury recovery was defined by three response variables examined independently: (1) initiation of seagrass colonization, (2) areal contraction, and (3) sediment in-filling. We used a global model and all possible subsets for four predictor variables: (1) injury age, (2) original injury volume, (3) original injury perimeter-to-area ratio, and (4) wave energy. Successional processes were underway for many injuries with fast-growing, opportunistic seagrass species contributing most to colonization. The majority of groundings that exhibited natural seagrass colonization also exhibited areal contraction and sediment in-filling. Injuries demonstrating colonization, contraction, and in-filling were on average older and smaller, and they had larger initial perimeter-to-area ratios. Wave energy was highest for colonizing injuries. The information-theoretic approach was unable to select a single "best" model for any response variable. For colonization and contraction, injury age had the highest relative importance as a predictor variable; wave energy appeared to be associated with second-order effects, such as sediment in-filling, which in turn, facilitated seagrass colonization. For sediment in-filling, volume and perimeter-to-area ratio had similar relative importance as predictor variables with age playing a lesser role than seen for colonization and contraction. Our findings confirm that these injuries
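
    A sketch of the multimodel bookkeeping behind such an analysis: AICc scores and Akaike weights for a candidate model set. When no single model dominates, the weights stay spread out, consistent with the result reported above. The log-likelihoods, parameter counts, and model names are invented placeholders.

        import numpy as np

        models = {"age": (-42.1, 2), "age+wave": (-40.3, 3),
                  "global": (-39.0, 5), "volume+PA": (-41.5, 3)}
        n = 30  # number of grounding injuries

        # small-sample AICc for each candidate model
        aicc = {m: -2 * ll + 2 * k + 2 * k * (k + 1) / (n - k - 1)
                for m, (ll, k) in models.items()}
        delta = {m: a - min(aicc.values()) for m, a in aicc.items()}
        rel = {m: np.exp(-0.5 * d) for m, d in delta.items()}
        weights = {m: r / sum(rel.values()) for m, r in rel.items()}
        print({m: round(w, 2) for m, w in weights.items()})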

  8. A Method for Analyzing Volunteered Geographic Information ...

    Science.gov (United States)

    Volunteered geographic information (VGI) can be used to identify public valuation of ecosystem services in a defined geographic area, using photos as a representation of lived experiences. This method can help researchers better survey and report on the values and preferences of stakeholders involved in rehabilitation and revitalization projects. Current research utilizes VGI in the form of geotagged social media photos from three platforms: Flickr, Instagram, and Panoramio. Social media photos have been obtained for the neighborhoods next to the St. Louis River in Duluth, Minnesota, and are being analyzed along several dimensions. These dimensions include the spatial distribution of each platform, the characteristics of the physical environment portrayed in the photos, and finally, the ecosystem service depicted. In this poster, we focus on the photos from the Irving and Fairmount neighborhoods of Duluth, MN to demonstrate the method at the neighborhood scale. This study demonstrates a method for translating the values expressed in social media photos into ecosystem services and spatially-explicit data to be used in multiple settings, including the City of Duluth's Comprehensive Planning and community revitalization efforts, habitat restoration in a Great Lakes Area of Concern, and the USEPA's Office of Research and Development.

  10. Development of Theoretical Methods for Predicting Solvent Effects on Reaction Rates in Supercritical Water Oxidation Processes

    Science.gov (United States)

    2007-11-02

    This report (covering 7/1/99 - 12/31/02) describes the development of computational methods for predicting how reaction rate constants will vary with thermodynamic condition in supercritical water (SCW). Related work includes "Examination of Nonequilibrium Solvent Effects on an SN2 Reaction in Supercritical Water," R. Behera and S. Tucker, manuscript in preparation. Towards this...

  11. Theoretical Re-formulation of the Clinical Method: The Therapeutic Clinical Diagnostic Method

    Directory of Open Access Journals (Sweden)

    Luis Alberto Corona Martínez

    2010-12-01

    Full Text Available This article presents a theoretical model of the solution, or treatment, stage of the medical care process as a complementary phase to the diagnosis stage of that process. The stage is developed according to a conception of medical care as a decision-making process and comprises the following elements: (1) development of solution options; (2) evaluation, selection, and application of options; (3) evaluation of the solutions adopted. Differences between behaviour options and therapeutic management are established, and the importance of using diverse factors to individualize the process is reinforced. Some limitations of the elaborated model are also shown.

  13. Information-Theoretic Limits on Broadband Multi-Antenna Systems in the Presence of Mutual Coupling

    Science.gov (United States)

    Taluja, Pawandeep Singh

    2011-12-01

    Multiple-input, multiple-output (MIMO) systems have received considerable attention over the last decade due to their ability to provide high throughputs and mitigate multipath fading effects. While most of these benefits are obtained for ideal arrays with large separation between the antennas, practical devices are often constrained in physical dimensions. With smaller inter-element spacings, signal correlation and mutual coupling between the antennas start to degrade the system performance, thereby limiting the deployment of a large number of antennas. Various studies have proposed transceiver designs based on optimal matching networks to compensate for this loss. However, such networks are considered impractical due to their multiport structure and sensitivity to the RF bandwidth of the system. In this dissertation, we investigate two aspects of compact transceiver design. First, we consider simpler architectures that exploit coupling between the antennas, and second, we establish information-theoretic limits of broadband communication systems with closely-spaced antennas. We begin with a receiver model of a diversity antenna selection system and propose novel strategies that make use of inactive elements by virtue of mutual coupling. We then examine the limits on the matching efficiency of a single antenna system using broadband matching theory. Next, we present an extension to this theory for coupled MIMO systems to elucidate the impact of coupling on the RF bandwidth of the system, and derive optimal transceiver designs. Lastly, we summarize the main findings of this dissertation and suggest open problems for future work.

  14. Model-free information-theoretic approach to infer leadership in pairs of zebrafish

    Science.gov (United States)

    Butail, Sachit; Mwaffo, Violet; Porfiri, Maurizio

    2016-04-01

    Collective behavior affords several advantages to fish in avoiding predators, foraging, mating, and swimming. Although fish schools have been traditionally considered egalitarian superorganisms, a number of empirical observations suggest the emergence of leadership in gregarious groups. Detecting and classifying leader-follower relationships is central to elucidate the behavioral and physiological causes of leadership and understand its consequences. Here, we demonstrate an information-theoretic approach to infer leadership from positional data of fish swimming. In this framework, we measure social interactions between fish pairs through the mathematical construct of transfer entropy, which quantifies the predictive power of a time series to anticipate another, possibly coupled, time series. We focus on the zebrafish model organism, which is rapidly emerging as a species of choice in preclinical research for its genetic similarity to humans and reduced neurobiological complexity with respect to mammals. To overcome experimental confounds and generate test data sets on which we can thoroughly assess our approach, we adapt and calibrate a data-driven stochastic model of zebrafish motion for the simulation of a coupled dynamical system of zebrafish pairs. In this synthetic data set, the extent and direction of the coupling between the fish are systematically varied across a wide parameter range to demonstrate the accuracy and reliability of transfer entropy in inferring leadership. Our approach is expected to aid in the analysis of collective behavior, providing a data-driven perspective to understand social interactions.
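
    As a rough illustration of the transfer entropy construct at the core of this record, the following sketch computes a plug-in estimate of TE between two binned time series; the function name, binning scheme and toy coupled pair are assumptions for illustration, not the authors' calibrated estimator.

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y, bins=4):
            # Plug-in estimate (bits) of TE from y to x: does y's past
            # improve prediction of x beyond x's own past?
            edges = np.linspace(0, 1, bins + 1)[1:-1]
            xq = np.digitize(x, np.quantile(x, edges))
            yq = np.digitize(y, np.quantile(y, edges))
            triples = Counter(zip(xq[1:], xq[:-1], yq[:-1]))
            pairs_xx = Counter(zip(xq[1:], xq[:-1]))
            pairs_xy = Counter(zip(xq[:-1], yq[:-1]))
            singles = Counter(xq[:-1])
            n = len(xq) - 1
            return sum(c / n * np.log2(c * singles[x0]
                       / (pairs_xx[x1, x0] * pairs_xy[x0, y0]))
                       for (x1, x0, y0), c in triples.items())

        # Toy coupled pair in which y "leads" x by one time step.
        rng = np.random.default_rng(0)
        y = rng.normal(size=5000)
        x = np.empty(5000)
        x[0] = rng.normal()
        x[1:] = 0.8 * y[:-1] + 0.2 * rng.normal(size=4999)
        print(transfer_entropy(x, y), transfer_entropy(y, x))  # first is larger

    On such data the estimate in the leading-to-following direction exceeds the reverse one, and that asymmetry is the signal used to classify leader-follower relationships.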

  15. Underwriting information-theoretic accounts of quantum mechanics with a realist, psi-epistemic model

    Science.gov (United States)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    2016-05-01

    We propose an adynamical interpretation of quantum theory called Relational Blockworld (RBW) where the fundamental ontological element is a 4D graphical amalgam of space, time and sources called a “spacetimesource element.” These are fundamental elements of space, time and sources, not source elements in space and time. The transition amplitude for a spacetimesource element is computed using a path integral with discrete graphical action. The action for a spacetimesource element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint between sources, the spacetime metric and the energy-momentum content of the spacetimesource element, rather than a dynamical law for time-evolved entities. To illustrate this interpretation, we explain the simple EPR-Bell and twin-slit experiments. This interpretation of quantum mechanics constitutes a realist, psi-epistemic model that might underwrite certain information-theoretic accounts of the quantum.

  16. Information-Theoretic Properties of the Half-Line Coulomb Potential

    CERN Document Server

    Omiste, J J; Dehesa, J S

    2009-01-01

    The half-line one-dimensional Coulomb potential is possibly the simplest D-dimensional model with physical solutions; it has proved successful in describing the behaviour of Rydberg atoms in external fields and the dynamics of surface-state electrons in liquid helium, with potential applications in constructing analog quantum computers and other fields. Here, we investigate the spreading and uncertainty-like properties of the ground and excited states of this system by means of the logarithmic measure and the information-theoretic lengths of Renyi, Shannon and Fisher types, thus going far beyond the Heisenberg measure. In particular, the Fisher length (a local measure of internal disorder) is shown to be the proper measure of uncertainty for our system in both position and momentum spaces. Moreover, the position Fisher length of a given physical state turns out not only to be directly proportional to the number of nodes of its associated wavefunction, but also to follow a square-root energy law.
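
    For readers who want to experiment, the following sketch evaluates Shannon-, Renyi- and Fisher-type lengths for a density sampled on a grid; the exponential and inverse-square-root conventions and the hydrogenic ground-state density used below are assumptions chosen for illustration and may differ from the paper's exact definitions.

        import numpy as np

        def info_lengths(rho, dx, alpha=2.0):
            # Shannon length exp(S), Renyi length exp(R_alpha), and Fisher
            # length 1/sqrt(F) of a normalized 1D density on a uniform grid.
            rho = rho / np.trapz(rho, dx=dx)
            S = -np.trapz(rho * np.log(rho + 1e-300), dx=dx)
            R = np.log(np.trapz(rho**alpha, dx=dx)) / (1.0 - alpha)
            F = np.trapz(np.gradient(rho, dx)**2 / (rho + 1e-300), dx=dx)
            return np.exp(S), np.exp(R), 1.0 / np.sqrt(F)

        # Ground-state density on the half line: rho(x) = 4 x^2 e^(-2x).
        x = np.linspace(1e-6, 40.0, 20000)
        rho = 4.0 * x**2 * np.exp(-2.0 * x)
        print(info_lengths(rho, x[1] - x[0]))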

  17. An Information-Theoretic Approach for Energy-Efficient Collaborative Tracking in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Arienzo Loredana

    2010-01-01

    Full Text Available The problem of collaborative tracking of mobile nodes in wireless sensor networks is addressed. Using a novel metric derived from the energy model in LEACH (W.B. Heinzelman, A.P. Chandrakasan and H. Balakrishnan, Energy-Efficient Communication Protocol for Wireless Microsensor Networks, in: Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS '00), 2000) and aiming at an energy-efficient solution, the approach combines target tracking with node selection procedures in order to select informative sensors and thereby minimize the energy consumption of the tracking task. We lay out a cluster-based architecture to address the limitations in computational power, battery capacity and communication capabilities of the sensor devices. The computation of the posterior Cramer-Rao bound (PCRB) based on received signal strength measurements is considered. To track mobile nodes, two particle filters are used, the bootstrap particle filter and the unscented particle filter, both in the centralized and in the distributed manner; their performances are compared with the theoretical lower bound, the PCRB. To save energy, a node selection procedure based on greedy algorithms is proposed: the node selection problem is formulated as a cross-layer optimization problem and solved using greedy algorithms.
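
    The bootstrap particle filter named in this record reduces to a few lines; the sketch below tracks a drifting 1D target from noisy position readings, with the motion and measurement models being hypothetical stand-ins for the paper's RSS-based models.

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_pf(z, n=500, q=0.1, r=1.0):
            # Propagate, weight by the measurement likelihood, estimate, resample.
            pts = rng.normal(0.0, 1.0, n)                # initial particle cloud
            est = []
            for zk in z:
                pts = pts + rng.normal(0.0, q, n)        # random-walk motion model
                w = np.exp(-0.5 * (zk - pts)**2 / r**2)  # Gaussian likelihood
                w /= w.sum()
                est.append(np.sum(w * pts))              # MMSE state estimate
                pts = pts[rng.choice(n, n, p=w)]         # multinomial resampling
            return np.array(est)

        truth = np.cumsum(rng.normal(0.0, 0.1, 100))     # drifting target
        estimates = bootstrap_pf(truth + rng.normal(0.0, 1.0, 100))
        print(np.abs(estimates - truth).mean())          # mean tracking error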

  18. Understanding Neural Population Coding: Information Theoretic Insights from the Auditory System

    Directory of Open Access Journals (Sweden)

    Arno Onken

    2014-01-01

    Full Text Available In recent years, our research in computational neuroscience has focused on understanding how populations of neurons encode naturalistic stimuli. In particular, we focused on how populations of neurons use the time domain to encode sensory information. In this focused review, we summarize this recent work from our laboratory. We focus in particular on the mathematical methods that we developed for the quantification of how information is encoded by populations of neurons and on how we used these methods to investigate the encoding of complex naturalistic sounds in auditory cortex. We review how these methods revealed a complementary role of low frequency oscillations and millisecond precise spike patterns in encoding complex sounds and in making these representations robust to imprecise knowledge about the timing of the external stimulus. Further, we discuss challenges in extending this work to understand how large populations of neurons encode sensory information. Overall, this previous work provides analytical tools and conceptual understanding necessary to study the principles of how neural populations reflect sensory inputs and achieve a stable representation despite many uncertainties in the environment.

  19. A quantum-information theoretic analysis of three-flavor neutrino oscillations. Quantum entanglement, nonlocal and nonclassical features of neutrinos

    Science.gov (United States)

    Banerjee, Subhashish; Alok, Ashutosh Kumar; Srikanth, R.; Hiesmayr, Beatrix C.

    2015-10-01

    Correlations exhibited by neutrino oscillations are studied via quantum-information theoretic quantities. We show that the strongest type of entanglement, genuine multipartite entanglement, is persistent in the flavor changing states. We prove the existence of Bell-type nonlocal features, in both its absolute and genuine avatars. Finally, we show that a measure of nonclassicality, dissension, which is a generalization of quantum discord to the tripartite case, is nonzero for almost the entire range of time in the evolution of an initial electron-neutrino. Via these quantum-information theoretic quantities, capturing different aspects of quantum correlations, we elucidate the differences between the flavor types, shedding light on the quantum-information theoretic aspects of the weak force.

  20. A quantum-information theoretic analysis of three-flavor neutrino oscillations. Quantum entanglement, nonlocal and nonclassical features of neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Subhashish; Alok, Ashutosh Kumar [Indian Institute of Technology Jodhpur, Jodhpur (India); Srikanth, R. [Poornaprajna Institute of Scientific Research, Banglore (India); Hiesmayr, Beatrix C. [University of Vienna, Vienna (Austria)

    2015-10-15

    Correlations exhibited by neutrino oscillations are studied via quantum-information theoretic quantities. We show that the strongest type of entanglement, genuine multipartite entanglement, is persistent in the flavor changing states. We prove the existence of Bell-type nonlocal features, in both its absolute and genuine avatars. Finally, we show that a measure of nonclassicality, dissension, which is a generalization of quantum discord to the tripartite case, is nonzero for almost the entire range of time in the evolution of an initial electron-neutrino. Via these quantum-information theoretic quantities, capturing different aspects of quantum correlations, we elucidate the differences between the flavor types, shedding light on the quantum-information theoretic aspects of the weak force. (orig.)

  1. An information-theoretic machine learning approach to expression QTL analysis.

    Directory of Open Access Journals (Sweden)

    Tao Huang

    Full Text Available Expression Quantitative Trait Locus (eQTL) analysis is a powerful tool to study the biological mechanisms linking genotype with gene expression. Such analyses can identify genomic locations where genotypic variants influence the expression of genes, both in close proximity to the variant (cis-eQTL) and on other chromosomes (trans-eQTL). Many traditional eQTL methods are based on a linear regression model. In this study, we propose a novel method to identify eQTL associations with information theory and machine learning approaches. Mutual Information (MI) is used to describe the association between a genetic marker and gene expression. MI can detect both linear and non-linear associations; moreover, it can capture the heterogeneity of the population. Advanced feature selection methods, Maximum Relevance Minimum Redundancy (mRMR) and Incremental Feature Selection (IFS), were applied to optimize the selection of the genes affected by the genetic marker. When we applied our method to a study of apoE-deficient mice, we found that cis-acting eQTLs are stronger than trans-acting eQTLs, but that trans-acting eQTLs outnumber cis-acting eQTLs. We compared our results (mRMR.eQTL) with R/qtl and MatrixEQTL (modelLINEAR and modelANOVA). In female mice, 67.9% of the mRMR.eQTL results can be confirmed by at least two other methods, while only 14.4% of the R/qtl results can be confirmed by at least two other methods. In male mice, 74.1% of the mRMR.eQTL results can be confirmed by at least two other methods, while only 18.2% of the R/qtl results can be confirmed by at least two other methods. Our method provides a new way to identify associations between genetic markers and gene expression. Our software is available from supporting information.
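
    The association score at the heart of this approach is ordinary mutual information between a discrete marker and binned expression; the sketch below is a minimal plug-in version with synthetic data, and the mRMR/IFS selection stages are not reproduced.

        import numpy as np
        from collections import Counter

        def mutual_information(marker, expression, bins=3):
            # Plug-in MI (bits) between a discrete genotype and binned expression.
            edges = np.quantile(expression, np.linspace(0, 1, bins + 1)[1:-1])
            e = np.digitize(expression, edges)
            n = len(marker)
            pxy = Counter(zip(marker, e))
            px, py = Counter(marker), Counter(e)
            return sum(c / n * np.log2(c * n / (px[g] * py[v]))
                       for (g, v), c in pxy.items())

        rng = np.random.default_rng(1)
        geno = rng.integers(0, 3, 500)                 # 0/1/2 allele counts
        expr = 0.8 * geno + rng.normal(0.0, 1.0, 500)  # expression with signal
        null = rng.normal(0.0, 1.0, 500)               # unlinked expression
        print(mutual_information(geno, expr), mutual_information(geno, null))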

  2. Mathematical Evidence-theoretic Framework for Information Fusion of Disaster Scene Big Data

    Science.gov (United States)

    Chen, Z.

    2014-12-01

    The remote sensing community and geospatial industries are embracing the paradigm of 'big data'. This trend is due, on the one hand, to the fact that heterogeneous remote sensors produce tremendous amounts of earth observation (EO) data every day; on the other hand, it is inspired by the promise that big data computing may be the fourth paradigm of scientific discovery. Many traditional techniques have been developed and will continue to be useful for dealing with earth-observation big data, for example pansharpening, fusion of data with different electromagnetic nature (e.g. color images and SAR images), and use of multi-sensor data for improved land-cover classification. However, these techniques have two recognized limitations: (1) they are tightly dependent on a two-dimensional grid scale; and (2) temporal, spatial and causal relations are not intelligently treated. These limitations render them insufficient when used to attack the emerging Disaster Scene Big Data (DSBD). DSBD emerges as a geological or climatic hazard unfolds into a disaster. In the example of an earthquake, disaster data start accruing as the ground shaking is being monitored. Along the time scale, heterogeneous multi-sensor data arise: EO data of various electromagnetic nature, oblique images, airborne/terrestrial active (Lidar) data, and the recently emerged crowdsourcing data. Neither theoretical models nor effective methods exist to date that can sufficiently fuse these data towards revealing the 'ground truth' of the disaster effects, for example damage to built objects. This presentation will present an augmented evidence-theoretic framework based on the classical Dempster-Shafer theory. With a focus on reasoning about the ground truth of built-object damage, causal, correlational and relational evidences will be defined considering their temporal and spatial scales. The newly developed graph-based learning approach will be explored for estimating
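
    The classical core that the proposed framework augments is Dempster's rule of combination; a minimal sketch follows, with the two sensor mass functions and the damage frame being hypothetical examples.

        from itertools import product

        def dempster_combine(m1, m2):
            # Combine two mass functions over frozenset focal elements,
            # renormalizing away the conflicting mass.
            fused, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            return {k: v / (1.0 - conflict) for k, v in fused.items()}

        D, I = frozenset({"damaged"}), frozenset({"intact"})
        m_optical = {D: 0.6, D | I: 0.4}          # optical image evidence
        m_lidar = {D: 0.7, I: 0.2, D | I: 0.1}    # lidar evidence
        print(dempster_combine(m_optical, m_lidar))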

  3. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging.

    Science.gov (United States)

    Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J

    2006-09-01

    A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods, (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts), are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
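
    The linear least squares member of the family discussed here admits a compact closed form; the sketch below fits the six unique tensor elements from log-signals, as a baseline from which the WLLS/NLS/CNLS refinements of the paper would start (the gradient table and values are illustrative).

        import numpy as np

        def lls_tensor_fit(signals, s0, bvals, bvecs):
            # Solve log(S/S0) = -b g^T D g for [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
            g = bvecs / np.linalg.norm(bvecs, axis=1, keepdims=True)
            B = bvals[:, None] * np.column_stack([
                g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
            d, *_ = np.linalg.lstsq(B, -np.log(signals / s0), rcond=None)
            return np.array([[d[0], d[3], d[4]],
                             [d[3], d[1], d[5]],
                             [d[4], d[5], d[2]]])

        # Toy check with an isotropic tensor D = 1e-3 * I (units mm^2/s).
        bvals = np.full(6, 1000.0)
        bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                          [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
        signals = np.exp(-bvals * 1e-3)
        D = lls_tensor_fit(signals, 1.0, bvals, bvecs)
        print(np.round(D, 6), "trace:", np.trace(D))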

  4. A graph-theoretic method to quantify the airline route authority

    Science.gov (United States)

    Chan, Y.

    1979-01-01

    The paper introduces a graph-theoretic method to quantify the legal statements in a route certificate, which specifies an airline's routing restrictions. All the authorized nonstop and multistop routes, including the shortest-time routes, can be obtained, and the method suggests profitable route structure alternatives to airline analysts. This method of quantifying the C.A.B. route authority was programmed in a software package, Route Improvement Synthesis and Evaluation, and demonstrated in a case study with a commercial airline. The study showed the utility of this technique in suggesting route alternatives and the possibility of improvements in the U.S. route system.

  5. 3D nonrigid medical image registration using a new information theoretic measure

    Science.gov (United States)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is thus to minimize an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; minimization with the limited-memory BFGS method yields the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed with the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance, each 4D CT data set comprising ten 3D CT images covering an entire respiration cycle. The results were compared with the normalized cross-correlation and mutual information methods and show a slight but real improvement in registration accuracy.
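
    To make the similarity metric concrete, the sketch below evaluates one common convention for the Arimoto entropy and the derived Jensen-Arimoto divergence on toy histograms; the paper's exact normalization, joint-histogram construction and Parzen windowing are not reproduced.

        import numpy as np

        def arimoto_entropy(p, alpha=2.0):
            # One common convention: H_a(P) = a/(1-a) * ((sum p^a)^(1/a) - 1).
            p = p / p.sum()
            return alpha / (1.0 - alpha) * (np.sum(p**alpha)**(1.0 / alpha) - 1.0)

        def jensen_arimoto(p, q, alpha=2.0):
            # Entropy of the equal mixture minus the mean of the entropies;
            # zero when the two normalized histograms coincide.
            p, q = p / p.sum(), q / q.sum()
            return arimoto_entropy(0.5 * (p + q), alpha) \
                - 0.5 * (arimoto_entropy(p, alpha) + arimoto_entropy(q, alpha))

        hp = np.array([10, 40, 30, 20], float)   # toy intensity histogram
        hq = np.array([12, 38, 28, 22], float)   # slightly shifted counterpart
        print(jensen_arimoto(hp, hq), jensen_arimoto(hp, hp))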

  6. THEORETICAL METHOD FOR PREDICTION OF THE CUTTING EDGE RECESSION DURING MILLING WOOD AND SECONDARY WOOD PRODUCTS

    Directory of Open Access Journals (Sweden)

    Bolesław Porankiewicz

    2008-11-01

    Full Text Available A theoretical method for predicting the cutting edge recession during milling of wood and wood-based products, caused by the presence of hard mineral contamination, High Temperature Tribochemical Reactions (HTTR), and frictional wear, is presented. The method, based on a 3D random distribution of contaminant particles, is positively verified against three experiments from the literature, showing good correlation between the predicted and observed cutting edge recession.

  7. A multi-sequential number-theoretic optimization algorithm using clustering methods

    Institute of Scientific and Technical Information of China (English)

    XU Qing-song; LIANG Yi-zeng; HOU Zhen-ting

    2005-01-01

    A multi-sequential number-theoretic optimization method based on clustering was developed and applied to the optimization of functions with many local extrema. Details of the procedure to generate the clusters and the sequential schedules are given. The algorithm was assessed by comparing its performance with a generalized simulated annealing algorithm on a difficult instructive example and on a D-optimum experimental design problem. Based on these two examples, the presented algorithm is shown to be more effective and reliable.

  8. Eclecticism as the foundation of meta-theoretical, mixed methods and interdisciplinary research in social sciences.

    Science.gov (United States)

    Kroos, Karmo

    2012-03-01

    This article examines the value of "eclecticism" as the foundation of meta-theoretical, mixed methods and interdisciplinary research in social sciences. On the basis of the analysis of the historical background of the concept, it is first suggested that eclecticism-based theoretical scholarship in social sciences could benefit from the more systematic research method that has been developed for synthesizing theoretical works under the name metatheorizing. Second, it is suggested that the mixed methods community could base its research approach on philosophical eclecticism instead of pragmatism because the basic idea of eclecticism is much more in sync with the nature of the combined research tradition. Finally, the Kuhnian frame is used to support the argument for interdisciplinary research and, hence, eclecticism in social sciences (rather than making an argument against multiple paradigms). More particularly, it is suggested that integrating the different (inter)disciplinary traditions and schools into one is not necessarily desirable at all in social sciences because of the complexity and openness of the research field. If it is nevertheless attempted, experience in economics suggests that paradigmatic unification comes at a high price.

  9. A Theoretical Method for Characterizing Nonlinear Effects in Paul Traps with Added Octopole Field.

    Science.gov (United States)

    Xiong, Caiqiao; Zhou, Xiaoyu; Zhang, Ning; Zhan, Lingpeng; Chen, Yongtai; Chen, Suming; Nie, Zongxiu

    2015-08-01

    In comparison with numerical methods, theoretical characterizations of ion motion in nonlinear Paul traps have always suffered from low accuracy and limited applicability. To overcome these difficulties, the theoretical harmonic balance (HB) method was developed and validated against the numerical fourth-order Runge-Kutta (4th RK) method. Using the HB method, the analytical ion trajectory and ion motion frequency in a superimposed octopole field, ε, were obtained by solving the nonlinear Mathieu equation (NME). The accuracy of the HB method was comparable with that of the 4th RK method at the Mathieu parameter q = 0.6, and the applicable q values could be extended to the entire first stability region with satisfactory accuracy. Two sorts of nonlinear effects of ion motion were studied: the ion frequency shift, Δβ, and the ion amplitude variation, Δ(C2n/C0) (n ≠ 0). New phenomena regarding Δβ were observed, although extensive studies have been performed based on the pseudo-potential well (PW) model. For instance, |Δβ| at ε = 0.1 and ε = -0.1 were found to be different, whereas they are the same in the PW model. This is the first time the nonlinear effects regarding Δ(C2n/C0) (n ≠ 0) have been studied, and the associated study has been a challenge for both theoretical and numerical methods. The nonlinear effects on Δ(C2n/C0) (n ≠ 0) and Δβ were found to share some similarities at q < 0.6: both are proportional to ε and to the square of the initial ion displacement, z0^2.
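
    Readers can reproduce the flavor of the comparison with a direct 4th-order Runge-Kutta integration; the equation form below, z'' + (a - 2q cos 2t)(z + ε z³) = 0, is one common way to superimpose a weak octopole term and may differ from the paper's exact NME.

        import numpy as np

        def nonlinear_mathieu(a, q, eps, z_init=0.1, steps=65536, dt=0.01):
            # Classic RK4 for z'' + (a - 2 q cos 2t)(z + eps z^3) = 0.
            def accel(t, z):
                return -(a - 2.0 * q * np.cos(2.0 * t)) * (z + eps * z**3)
            z, v, t = z_init, 0.0, 0.0
            traj = np.empty(steps)
            for i in range(steps):
                k1z, k1v = v, accel(t, z)
                k2z, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, z + 0.5*dt*k1z)
                k3z, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, z + 0.5*dt*k2z)
                k4z, k4v = v + dt*k3v, accel(t + dt, z + dt*k3z)
                z += dt * (k1z + 2*k2z + 2*k3z + k4z) / 6.0
                v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
                t += dt
                traj[i] = z
            return traj

        # The dominant spectral peak is the secular motion; comparing runs at
        # eps = +0.1 and eps = -0.1 probes the asymmetric frequency shift.
        traj = nonlinear_mathieu(a=0.0, q=0.6, eps=0.1)
        freqs = np.fft.rfftfreq(traj.size, d=0.01)
        peak = freqs[np.abs(np.fft.rfft(traj)).argmax()]
        print("beta estimate:", 2.0 * np.pi * peak)  # drive is at angular freq. 2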

  10. THEORETICAL ASPECTS AND METHODS OF PARAMETERS IDENTIFICATION OF ELECTRIC TRACTION SYSTEM DEVICES. METHOD OF WEIGHT FUNCTION

    Directory of Open Access Journals (Sweden)

    T. N. Mishchenko

    2014-10-01

    Full Text Available Purpose. Development and substantiation of a new method of structural identification of the electrical devices of electric traction systems for both DC and AC current. Methodology. To solve this problem the following methods are used: the methods and techniques of linear electrical engineering, in particular the Laplace operator method; a numerical method for solving the integral equation, based on the representation of the Wiener-Hopf system of linear equations (this allows the solutions of the problem to be formed mathematically in terms of correlation and weight functions); and the factorization method, which provides a certain partition of the correlation functions of the stochastic processes. Findings. The method of weight-function identification of electrical devices was developed, and it can be fully used in electric traction systems. As an example of the use of the developed method, a feeder section of DC electric traction with a single power supply was considered. Two electric locomotives of type DE 1 operate on this section; they have been identified by their weighting functions. The required currents and voltages of the electric locomotives in the electric traction network are also formulated in probabilistic and statistical form, that is, the mathematical expectation functions and the correlation functions are determined. Here it is taken into account that the correlation function of a sum of random functions equals the sum of the correlation functions of the summands, and that the correlation function of the integral of a random function is defined as the double integral of the correlation function of the underlying random function. Originality. First, the originality consists in the adaptation of the developed method of structural identification to the devices of the electric traction system. Second, it lies in the development of the new weight-function method itself. And finally, it lies in the solution of the Wiener

  11. An Information-Theoretical View of Network-Aware Malware Attacks

    CERN Document Server

    Chen, Zesheng

    2008-01-01

    This work investigates three aspects: (a) a network vulnerability, namely the non-uniform vulnerable-host distribution; (b) threats, i.e., intelligent malware that exploits such a vulnerability; and (c) defense, i.e., challenges for fighting the threats. We first study five large data sets and observe consistently clustered vulnerable-host distributions. We then present a new metric, referred to as the non-uniformity factor, which quantifies the unevenness of a vulnerable-host distribution. This metric is essentially the Renyi information entropy and characterizes the non-uniformity of a distribution better than the Shannon entropy. Next, we analyze the propagation speed of network-aware malware in view of information theory. In particular, we draw a relationship between Renyi entropies and randomized epidemic malware-scanning algorithms. We find that the infection rates of malware-scanning methods are characterized by the Renyi entropies that relate to the information bits in a non-uniform vulnerable-host distribut...
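
    A minimal reading of the non-uniformity factor is sketched below as the gap between the maximum entropy and the order-α Renyi entropy of the host distribution; the exact normalization used in the paper may differ.

        import numpy as np

        def renyi_entropy(p, alpha):
            # Renyi entropy of order alpha, in bits (alpha != 1).
            p = p[p > 0] / p.sum()
            return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

        def non_uniformity_factor(p, alpha=2):
            # 2^(log2 m - H_alpha) over m address groups: 1 for a uniform
            # spread of hosts, growing as the distribution clusters.
            return 2.0 ** (np.log2(p.size) - renyi_entropy(p, alpha))

        uniform = np.full(256, 1 / 256)     # hosts spread over all /8 groups
        clustered = np.zeros(256)
        clustered[:8] = 1 / 8               # hosts packed into 8 groups
        print(non_uniformity_factor(uniform), non_uniformity_factor(clustered))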

  12. Approximating model probabilities in Bayesian information criterion and decision-theoretic approaches to model selection in phylogenetics.

    Science.gov (United States)

    Evans, Jason; Sullivan, Jack

    2011-01-01

    A priori selection of models for use in phylogeny estimation from molecular sequence data is increasingly important as the number and complexity of available models increases. The Bayesian information criterion (BIC) and the derivative decision-theoretic (DT) approaches rely on a conservative approximation to estimate the posterior probability of a given model. Here, we extended the DT method by using reversible jump Markov chain Monte Carlo approaches to directly estimate model probabilities for an extended candidate pool of all 406 special cases of the general time reversible + Γ family. We analyzed 250 diverse data sets in order to evaluate the effectiveness of the BIC approximation for model selection under the BIC and DT approaches. Model choice under DT differed between the BIC approximation and direct estimation methods for 45% of the data sets (113/250), and differing model choice resulted in significantly different sets of trees in the posterior distributions for 26% of the data sets (64/250). The model with the lowest BIC score differed from the model with the highest posterior probability in 30% of the data sets (76/250). When the data indicate a clear model preference, the BIC approximation works well enough to result in the same model selection as with directly estimated model probabilities, but a substantial proportion of biological data sets lack this characteristic, which leads to selection of underparametrized models.
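
    The conservative approximation being evaluated here turns BIC scores into model weights via exp(-BIC/2); the sketch below shows that computation on hypothetical log-likelihoods for three candidate models.

        import numpy as np

        def bic(loglik, k, n):
            # Bayesian information criterion: -2 ln L + k ln n.
            return -2.0 * loglik + k * np.log(n)

        def bic_model_weights(logliks, ks, n):
            # Approximate posterior model probabilities from BIC scores.
            scores = np.array([bic(l, k, n) for l, k in zip(logliks, ks)])
            w = np.exp(-0.5 * (scores - scores.min()))
            return w / w.sum()

        # Hypothetical fits of three substitution models to 1200 sites.
        print(bic_model_weights([-5230.4, -5218.9, -5217.2], [1, 5, 9], 1200))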

  13. Experimental method for and theoretical research on defect tolerance of fixed plate based on damage mechanics

    Institute of Scientific and Technical Information of China (English)

    Zhan Zhixin; Hu Weiping; Zhang Miao; Zhu Yuefa; Meng Qingchun

    2013-01-01

    An experimental method and a theoretical analysis based on continuum damage mechanics are applied to the defect tolerance of a fixed plate. The defect type studied in this article is the scratch, which is considered a typical defect on fixed plates according to engineering practice. The general approach to the defect tolerance analysis of a scratched fixed plate is presented. A method of fatigue life prediction for standard notched specimens has been established on the basis of continuum damage mechanics. To obtain the influence of scratches on fatigue life, fatigue experiments on standard notched specimens and scratched specimens have been performed. The fatigue life of the scratched fixed plate has been evaluated, and the scratch size permissible under the safe service life condition has been worked out. According to the results of the theoretical calculations, a fatigue experiment on the scratched fixed plate has been performed. The outcome shows that the theoretical prediction tallies with the experimental results.

  14. Theoretical Methods of Domain Structures in Ultrathin Ferroelectric Films: A Review

    Directory of Open Access Journals (Sweden)

    Jianyi Liu

    2014-09-01

    Full Text Available This review covers methods and recent developments of the theoretical study of domain structures in ultrathin ferroelectric films. The review begins with an introduction to some basic concepts and theories (e.g., polarization and its modern theory, ferroelectric phase transition, domain formation, and finite size effects) that are relevant to the study of domain structures in ultrathin ferroelectric films. Basic techniques and recent progress of a variety of important approaches for domain structure simulation, including first-principles calculation, molecular dynamics, Monte Carlo simulation, the effective Hamiltonian approach and phase field modeling, as well as multiscale simulation, are then elaborated. For each approach, its important features and relative merits over other approaches for modeling domain structures in ultrathin ferroelectric films are discussed. Finally, we review recent theoretical studies on some important issues of domain structures in ultrathin ferroelectric films, with an emphasis on the effects of interfacial electrostatics, boundary conditions and external loads.

  15. Synergy between experimental and theoretical methods in the exploration of homogeneous transition metal catalysis.

    Science.gov (United States)

    Lupp, D; Christensen, N J; Fristrup, P

    2014-08-01

    In this Perspective, we will focus on the use of both experimental and theoretical methods in the exploration of reaction mechanisms in homogeneous transition metal catalysis. We briefly introduce the use of Hammett studies and kinetic isotope effects (KIE). Both of these techniques can be complemented by computational chemistry - in particular in cases where interpretation of the experimental results is not straightforward. The good correspondence between experiment and theory is only possible due to recent advances within the applied theoretical framework. We therefore also highlight the innovations made in the last decades with emphasis on dispersion-corrected DFT and solvation models. The current state-of-the-art is highlighted using examples from the literature with particular focus on the synergy between experiment and theory.

  16. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    Science.gov (United States)

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  17. Image subband coding using an information-theoretic subband splitting criterion

    Science.gov (United States)

    Bayazit, Ulug; Pearlman, William A.

    1995-03-01

    It has been proved recently that for Gaussian sources with memory an ideal subband split will produce a coding gain for scalar or vector quantization of the subbands. Following the methodology of the proofs, we outline a method for successively splitting the subbands of a source, one at a time, to obtain the largest coding gain. The subband with the largest theoretical rate reduction (TRR) is determined and split at each step of the decomposition process. The TRR is the difference between the rate in optimal encoding of N-tuples from a Gaussian source (or subband) and the rate for the same encoding of its subband decomposition. The TRR is a monotone increasing function of a so-called spectral flatness ratio, which involves the products of the eigenvalues of the source (subband) and subband decomposition covariance matrices of order N. These eigenvalues are estimated by the variances of the Discrete Cosine Transform, which approximates those of the optimal Karhunen-Loeve Transform. After the subband decomposition hierarchy or tree is determined through the criterion of maximal TRR, each subband is encoded with a variable-rate entropy-constrained vector quantizer. Optimal rate allocation to subbands is done with the BFOS algorithm, which does not require any source modelling. We demonstrate the benefit of using the criterion by comparing coding results on a two-level low-pass pyramidal decomposition with coding results on a two-level decomposition obtained using the criterion. For 60 MCFD (Motion Compensated Frame Difference) frames of the Salesman sequence an average rate-distortion advantage of 0.73 dB and 0.02 bpp, and for 30 FD (Frame Difference) frames of the Caltrain image sequence an average rate-distortion advantage of 0.41 dB and 0.013 bpp, are obtained with the optimal decomposition over the low-pass pyramidal decomposition.
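
    The DCT-based surrogate for the spectral flatness ratio is easy to estimate; in the sketch below the theoretical rate reduction is approximated as half the log of the inverse flatness of a subband's DCT coefficient variances, a textbook stand-in for the eigenvalue-based quantity of the paper (SciPy's DCT is assumed).

        import numpy as np
        from scipy.fft import dct

        def spectral_flatness(blocks):
            # Geometric over arithmetic mean of DCT coefficient variances:
            # 1 for a white source, smaller for strongly correlated sources.
            var = dct(blocks, axis=1, norm='ortho').var(axis=0) + 1e-12
            return np.exp(np.mean(np.log(var))) / var.mean()

        def theoretical_rate_reduction(blocks):
            # Approximate TRR in bits/sample for ideal coding of the subband.
            return -0.5 * np.log2(spectral_flatness(blocks))

        rng = np.random.default_rng(1)
        white = rng.normal(size=(512, 64))
        smooth = np.cumsum(white, axis=1) * 0.2   # AR(1)-like, peaky spectrum
        print(theoretical_rate_reduction(white), theoretical_rate_reduction(smooth))

    In a splitting hierarchy of the kind described above, the subband scoring the largest such value would be the one split next.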

  18. A parallel method for enumerating amino acid compositions and masses of all theoretical peptides

    Directory of Open Access Journals (Sweden)

    Nefedov Alexey V

    2011-11-01

    Full Text Available Abstract. Background: Enumeration of all theoretically possible amino acid compositions is an important problem in several proteomics workflows, including peptide mass fingerprinting, mass defect labeling, mass defect filtering, and de novo peptide sequencing. Because of the high computational complexity of this task, reported methods for peptide enumeration were restricted to cover limited mass ranges (below 2 kDa). In addition, implementation details of these methods as well as their computational performance have not been provided. The increasing availability of parallel (multi-core) computers in all fields of research makes the development of parallel methods for peptide enumeration a timely topic. Results: We describe a parallel method for enumerating all amino acid compositions up to a given length. We present recursive procedures which are at the core of the method, and show that a single task of enumeration of all peptide compositions can be divided into smaller subtasks that can be executed in parallel. The computational complexity of the subtasks is compared with the computational complexity of the whole task. Pseudocodes of processes (a master and workers) that are used to execute the enumerating procedure in parallel are given. We present computational times for our method executed on a computer cluster with 12 Intel Xeon X5650 CPUs (72 cores) running Windows HPC Server. Our method has been implemented as a 32- and 64-bit Windows application using Microsoft Visual C++ and the Message Passing Interface. It is available for download at https://ispace.utmb.edu/users/rgsadygo/Proteomics/ParallelMethod. Conclusion: We describe implementation of a parallel method for generating mass distributions of all theoretically possible amino acid compositions.

  19. A parallel method for enumerating amino acid compositions and masses of all theoretical peptides.

    Science.gov (United States)

    Nefedov, Alexey V; Sadygov, Rovshan G

    2011-11-07

    Enumeration of all theoretically possible amino acid compositions is an important problem in several proteomics workflows, including peptide mass fingerprinting, mass defect labeling, mass defect filtering, and de novo peptide sequencing. Because of the high computational complexity of this task, reported methods for peptide enumeration were restricted to cover limited mass ranges (below 2 kDa). In addition, implementation details of these methods as well as their computational performance have not been provided. The increasing availability of parallel (multi-core) computers in all fields of research makes the development of parallel methods for peptide enumeration a timely topic. We describe a parallel method for enumerating all amino acid compositions up to a given length. We present recursive procedures which are at the core of the method, and show that a single task of enumeration of all peptide compositions can be divided into smaller subtasks that can be executed in parallel. The computational complexity of the subtasks is compared with the computational complexity of the whole task. Pseudocodes of processes (a master and workers) that are used to execute the enumerating procedure in parallel are given. We present computational times for our method executed on a computer cluster with 12 Intel Xeon X5650 CPUs (72 cores) running Windows HPC Server. Our method has been implemented as a 32- and 64-bit Windows application using Microsoft Visual C++ and the Message Passing Interface. It is available for download at https://ispace.utmb.edu/users/rgsadygo/Proteomics/ParallelMethod. We describe implementation of a parallel method for generating mass distributions of all theoretically possible amino acid compositions.
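
    The recursive core of such an enumeration fits in a dozen lines; the sketch below uses a reduced five-residue alphabet to stay fast, and the parallel version would simply hand each first-residue branch to a separate worker, as the record describes.

        # Monoisotopic residue masses (Da) for a reduced five-letter alphabet;
        # enumerating over all 20 residues only changes these constants.
        MASSES = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203,
                  'P': 97.05276, 'V': 99.06841}
        LETTERS = sorted(MASSES)

        def compositions(max_len):
            # Yield (composition, mass) for every amino acid composition of
            # length 1..max_len, in non-decreasing residue order so that each
            # multiset is produced exactly once.
            def rec(start, length, comp, mass):
                if length > 0:
                    yield tuple(comp), round(mass, 5)
                if length == max_len:
                    return
                for i in range(start, len(LETTERS)):
                    comp.append(LETTERS[i])
                    yield from rec(i, length + 1, comp, mass + MASSES[LETTERS[i]])
                    comp.pop()
            yield from rec(0, 0, [], 0.0)

        all_comps = list(compositions(4))
        print(len(all_comps), all_comps[:2])   # 125 compositions for 5 letters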

  20. Data, Information, Knowledge, Wisdom (DIKW): A Semiotic Theoretical and Empirical Exploration of the Hierarchy and its Quality Dimension

    Directory of Open Access Journals (Sweden)

    Sasa Baskarada

    2013-03-01

    Full Text Available What exactly is the difference between data and information? What is the difference between data quality and information quality; is there any difference between the two? And what are knowledge and wisdom? Are there such things as knowledge quality and wisdom quality? As these primitives are the most basic axioms of information systems research, it is somewhat surprising that consensus on exact definitions seems to be lacking. This paper presents a theoretical and empirical exploration of the sometimes directly quoted, and often implied, Data, Information, Knowledge, Wisdom (DIKW) hierarchy and its quality dimension. We first review relevant literature from a range of perspectives and develop and contextualise a theoretical DIKW framework through semiotics. The literature review identifies definitional commonalities and divergences from a scholarly perspective; the theoretical discussion contextualises the terms and their relationships within a semiotic framework and proposes relevant definitions grounded in that framework. Next, rooted in Wittgenstein’s ordinary language philosophy, we analyse 20 online news articles for their uses of the terms and present the results of an online focus group discussion comprising 16 information systems experts. The empirical exploration identifies a range of definitional ambiguities from a practical perspective.

  1. Theoretical evaluation of a method for locating gaseous emission hot spots.

    Science.gov (United States)

    Hashmonay, Ram A

    2008-08-01

    This paper describes and theoretically evaluates a recently developed method that provides a unique methodology for mapping gaseous emissions from non-point pollutant sources. The horizontal radial plume mapping (HRPM) methodology uses an open-path, path-integrated optical remote sensing (PI-ORS) system in a horizontal plane to directly identify emission hot spots. The radial plume mapping methodology has been well developed, evaluated, and demonstrated. In this paper, the theoretical basis of the HRPM method is explained in the context of the method's reliability and robustness to reconstruct spatially resolved plume maps. Calculation of the condition number of the inversion's kernel matrix showed that this method has minimal error magnification (EM) when the beam geometry is optimized. Minimizing the condition number provides a tool for such optimization of the beam geometry because it indicates minimized EM. Using methane concentration data collected from a landfill with a tunable diode laser absorption spectroscopy (TDLAS) system, it is demonstrated that EM is minimal because the averaged plume map of many reconstructed plume maps is very similar to a plume map generated by the averaged concentration data. It is also shown in the analysis of this dataset that the reconstructions of plume maps are unique for the optimized HRPM beam geometry and independent of the actual algorithm applied.
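
    The error-magnification argument can be checked directly with a toy kernel matrix; the grid geometry and path lengths below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)
        # Toy kernel for a 3x3-cell grid scanned by nine beams: entry (i, j)
        # is the path length (m) of beam i inside cell j.
        K = np.eye(9) * 50.0 + rng.uniform(0.0, 5.0, (9, 9))

        c_true = rng.uniform(0.0, 2.0, 9)       # cell concentrations
        b = K @ c_true                          # path-integrated measurements
        c_est = np.linalg.solve(K, b)           # reconstructed plume map

        # cond(K) bounds how much measurement error is amplified in c_est;
        # optimizing the beam geometry amounts to keeping it small.
        print(np.linalg.cond(K), np.abs(c_est - c_true).max())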

  2. The UBI-QEP method. A practical theoretical approach to understanding chemistry on transition metal surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Shustorovich, Evgeny [American Scientific Materials Technologies, Inc., New York, NY (United States); Sellers, Harrell [Department of Chemistry and Biochemistry, South Dakota State University Brooking, SD (United States)

    1998-04-01

    In this review we examine the presently available theoretical techniques for determining metal surface reaction energetics. The unity bond index-quadratic exponential potential (UBI-QEP) method, which provides heats of adsorption and reaction activation barriers with a typical accuracy of 1-3 kcal/mol, emerges as the method with the widest applicability for complex and practically important reaction systems. We discuss in detail the theoretical foundations of the analytic UBI-QEP method, which employs the most general two-body interaction potentials. The potential variable, named a bond index, is a general exponential function of the two-center bond distance. The bond indices of interacting bonds are assumed to be conserved at unity (up to the dissociation point), and we cite state-of-the-art ab initio calculations to support this assumption. The UBI-QEP method allows one to calculate the reaction energetics in a straightforward variational way. We summarize the analytic formulas for adsorbate binding energies in various coordination modes and for intrinsic and diffusion activation barriers. We also describe a computer program which makes UBI-QEP calculations fully automated. The normalized bond index-molecular dynamics (NBI-MD) simulation technique, which is an adaptation of the UBI-QEP reactive potential functions to molecular dynamics, is described. Detailed summaries of applications are given, which include the Fischer-Tropsch synthesis, oxygen-assisted X-H bond cleavage, hydrogen peroxide, methanol and ammonia syntheses, decomposition and reduction of NO, and SOx chemistry.

  3. The UBI-QEP method: A practical theoretical approach to understanding chemistry on transition metal surfaces

    Science.gov (United States)

    Shustorovich, Evgeny; Sellers, Harrell

    In this review we examine the presently available theoretical techniques for determining metal surface reaction energetics. The unity bond index-quadratic exponential potential (UBI-QEP) method, which provides heats of adsorption and reaction activation barriers with a typical accuracy of 1-3 kcal/mol, emerges as the method with the widest applicability for complex and practically important reaction systems. We discuss in detail the theoretical foundations of the analytic UBI-QEP method, which employs the most general two-body interaction potentials. The potential variable, named a bond index, is a general exponential function of the two-center bond distance. The bond indices of interacting bonds are assumed to be conserved at unity (up to the dissociation point), and we cite state-of-the-art ab initio calculations to support this assumption. The UBI-QEP method allows one to calculate the reaction energetics in a straightforward variational way. We summarize the analytic formulas for adsorbate binding energies in various coordination modes and for intrinsic and diffusion activation barriers. We also describe a computer program which makes UBI-QEP calculations fully automated. The normalized bond index-molecular dynamics (NBI-MD) simulation technique, which is an adaptation of the UBI-QEP reactive potential functions to molecular dynamics, is described. Detailed summaries of applications are given, which include the Fischer-Tropsch synthesis, oxygen-assisted X-H bond cleavage, hydrogen peroxide, methanol and ammonia syntheses, decomposition and reduction of NO, and SOx chemistry.

  4. Keywords and Co-Occurrence Patterns in the Voynich Manuscript: An Information-Theoretic Analysis.

    Science.gov (United States)

    Montemurro, Marcelo A; Zanette, Damián H

    2013-01-01

    The Voynich manuscript has so far remained a mystery for linguists and cryptologists. While the text, written on medieval parchment using an unknown script system, shows basic statistical patterns that bear resemblance to those from real languages, some features have suggested to some researchers that the manuscript was a forgery intended as a hoax. Here we analyse the long-range structure of the manuscript using methods from information theory. We show that the Voynich manuscript presents a complex organization in the distribution of words that is compatible with that found in real language sequences. We are also able to extract some of the most significant semantic word networks in the text. These results, together with some previously known statistical features of the Voynich manuscript, give support to the presence of a genuine message inside the book.

  5. Parenting Practices of Anxious and Nonanxious Mothers: A Multi-Method, Multi-Informant Approach

    Science.gov (United States)

    Drake, Kelly L.; Ginsburg, Golda S.

    2011-01-01

    Anxious and nonanxious mothers were compared on theoretically derived parenting and family environment variables (i.e., overcontrol, warmth, criticism, anxious modeling) using multiple informants and methods. Mother-child dyads completed questionnaires about parenting and were observed during an interactional task. Findings reveal that, after…

  6. The More You Know, the More You Can Grow: An Information Theoretic Approach to Growth in the Information Age

    Directory of Open Access Journals (Sweden)

    Martin Hilbert

    2017-02-01

    Full Text Available In our information age, information alone has become a driver of social growth. Information is the fuel of “big data” companies and the decision-making compass of policy makers. Can we quantify how much information leads to how much social growth potential? Information theory is used to show that information (in bits) is effectively a quantifiable ingredient of growth. The article presents a single equation that allows one both to describe hands-off natural selection of evolving populations and to optimize population fitness in uncertain environments through intervention. The setup analyzes the communication channel between the growing population and its uncertain environment. The role of information in population growth can be thought of as the optimization of information flow over this (more or less noisy) channel. Optimized growth implies that the population absorbs all communicated environmental structure during evolutionary updating (measured by their mutual information). This is achieved by endogenously adjusting the population structure to the exogenous environmental pattern (through bet-hedging/portfolio management). The setup can be applied to decompose the growth of any discrete population in stationary, stochastic environments (economic, cultural, or biological). Two empirical examples from the information economy reveal inherent trade-offs among the involved information quantities during growth optimization.
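
    The information-growth link invoked here is Kelly's: with fair odds, the log-growth gained from side information equals the mutual information between cue and environment. A two-state toy version follows, with assumed frequencies and payoffs.

        import numpy as np

        env = np.array([0.7, 0.3])      # two environment states and frequencies
        odds = np.array([2.0, 2.0])     # fair payoff for matching each state

        def log_growth(strategy):
            # Expected log2 growth when the fraction strategy[s] of the
            # population is matched to state s and is multiplied by odds[s]
            # whenever s occurs.
            return np.sum(env * np.log2(strategy * odds))

        hedged = log_growth(env)                 # optimal bet-hedging, no cue
        informed = np.sum(env * np.log2(odds))   # perfect cue: all-in each step
        H = -np.sum(env * np.log2(env))          # env entropy = I(cue; env) here
        print(informed - hedged, H)              # the two values coincide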

  7. Improving Decision Making with Information Systems Technology – A theoretical approach

    OpenAIRE

    Dr.Sc. Mihane Berisha- Namani; Mr.Sc. Albana Qehaja

    2013-01-01

    Traditionally, information systems were used to support operational functions and to reduce costs by automating many of business operations. As business has become more aware of the importance of information systems, the role of information systems has changed. From its conventional function of supporting business operations, today information systems are used to reduce business risks and to ensure that correct information is made available, so managers can make better decisions. The purpose ...

  8. A Survey of Game Theoretic Approaches to Modelling Decision-Making in Information Warfare Scenarios

    OpenAIRE

    Kathryn Merrick; Medria Hardhienata; Kamran Shafi; Jiankun Hu

    2016-01-01

    Our increasing dependence on information technologies and autonomous systems has escalated international concern for information- and cyber-security in the face of politically, socially and religiously motivated cyber-attacks. Information warfare tactics that interfere with the flow of information can challenge the survival of individuals and groups. It is increasingly important that both humans and machines can make decisions that ensure the trustworthiness of information, communication and ...

  9. Concepts and methods in modern theoretical chemistry electronic structure and reactivity

    CERN Document Server

    Ghosh, Swapan Kumar

    2013-01-01

    Concepts and Methods in Modern Theoretical Chemistry: Electronic Structure and Reactivity, the first book in a two-volume set, focuses on the structure and reactivity of systems and phenomena. A new addition to the series Atoms, Molecules, and Clusters, this book offers chapters written by experts in their fields. It enables readers to learn how concepts from ab initio quantum chemistry and density functional theory (DFT) can be used to describe, understand, and predict electronic structure and chemical reactivity. This book covers a wide range of subjects, including discussions on the followi

  10. Theoretical investigation on a general class of 2D quasicrystals with the rectangular projection method

    Science.gov (United States)

    Yue, Yang-Yang; Lu, Rong-er; Yang, Bo; Huang, Huang; Hong, Xu-Hao; Zhang, Chao; Qin, Yi-Qiang; Zhu, Yong-Yuan

    2016-10-01

    We theoretically investigate the reciprocal properties of a class of 2D nonlinear photonic quasicrystals proposed by Lifshitz et al. in PRL 95, 133901 (2005). Using the rectangular projection method, the analytical expression for the Fourier spectrum of the quasicrystal structure is obtained explicitly. Interestingly, the result has a form similar to the corresponding expression for the well-known 1D Fibonacci lattice. In addition, we predict a further extension of the result to higher dimensions. This work is of practical importance for photonic device design in nonlinear optical conversion processes.

  11. The theoretical preconditions for problem situation realization while studying information technology at school

    Directory of Open Access Journals (Sweden)

    Ольга Александровна Прусакова

    2012-03-01

    Full Text Available Within the framework of modern pedagogy and educational practice, various theoretical conceptions, theories and educational approaches have been worked out and put into practice, including the humanistic, personality-oriented, activity-oriented and competence-oriented approaches. One such approach to education and personality development is the problem-solving approach.

  12. Automated Segmentation of MS Lesions in MR Images Based on an Information Theoretic Clustering and Contrast Transformations

    Directory of Open Access Journals (Sweden)

    Jason Hill

    2015-06-01

    Full Text Available Magnetic Resonance Imaging (MRI) plays a significant role in the current characterization and diagnosis of multiple sclerosis (MS) in radiological imaging. However, early detection of MS lesions from MRI still remains a challenging problem. In the present work, an information-theoretic approach to clustering the voxels in MS lesions is applied for automatic segmentation of lesions of various sizes in multi-contrast (T1-, T2-, PD-weighted) MR images. For accurate detection of MS lesions of various sizes, the skull-stripped brain data are rescaled and histogram-manipulated prior to mapping the multi-contrast data to pseudo-color images. For automated segmentation of MS lesions in multi-contrast MRI, the improved jump method (IJM) clustering has been enhanced via edge suppression for improved segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF) and MS lesions, if present. From this preliminary clustering, a pseudo-color-to-grayscale conversion is designed to equalize the intensities of the normal brain tissues, leaving the MS lesions as outliers. Binary discrete and 8-bit fuzzy labels are then assigned to segment the MS lesions throughout the full brain. For validation of the proposed method, three brains with mild, moderate and severe hyperintense MS lesions labeled as ground truth were selected. The MS lesions of the mild, moderate and severe categories were detected with sensitivities of 80%, 96%, and 94%, and with corresponding Dice similarity coefficients (DSC) of 0.5175, 0.8739, and 0.8266, respectively. The MS lesions can also be clearly visualized in a transparent pseudo-color computer-rendered 3D brain.

  13. Main ideas to consider in the elaboration of a new theoretical model of the clinical method.

    Directory of Open Access Journals (Sweden)

    Luis Alberto Corona Martínez

    2006-12-01

    Full Text Available This article presents the main ideas that have been used in the elaboration of a new theoretical model of the clinical process, with the aim of serving as teaching content in the medical career. These ideas are: the necessity of a full correspondence between the method of the process and its objective; the conception of the medical attention process as a decision-making process; the necessity of highlighting the human dimension of medical practice; and the convenience of having a synthetic representation of the clinical method that facilitates its learning and comprehension. These elements will make it possible for the clinical method model to facilitate the learning and subsequent performance of a more effective and efficient medical attention process, with the consequent improvement of the quality of medical services and of patient satisfaction.

  14. Improving Decision Making with Information Systems Technology – A theoretical approach

    Directory of Open Access Journals (Sweden)

    Dr.Sc. Mihane Berisha- Namani

    2013-06-01

    Full Text Available Traditionally, information systems were used to support operational functions and to reduce costs by automating many business operations. As business has become more aware of the importance of information systems, the role of information systems has changed. Beyond its conventional function of supporting business operations, today information systems are used to reduce business risks and to ensure that correct information is made available, so that managers can make better decisions. The purpose of this paper is to give an understanding of how businesses are using information systems to achieve their goals. It specifically addresses the impact that information systems have on improving decision making. Although limited, this paper sets out to explore the importance of information systems in decision making and concludes that more attention should be paid to the use of information systems for decision-making purposes. Finally, suggestions for further research are made.

  15. Method for measuring the alignment between information technology strategic planning and actions of information technology governance

    National Research Council Canada - National Science Library

    Silva, Lúcio Melre da; Souza Neto, João

    2014-01-01

    The purpose of this research is to present a method for measuring the degree of alignment between Strategic Planning and Information Technology Management practices and Information Technology Governance...

  16. Method for Measuring the Alignment Between Information Technology Strategic Planning and Actions of Information Technology Governance

    National Research Council Canada - National Science Library

    Lúcio Melre da Silva; João Souza Neto

    2014-01-01

    The purpose of this research is to present a method for measuring the degree of alignment between Strategic Planning and Information Technology Management practices and Information Technology Governance...

  17. The successful merger of theoretical thermochemistry with fragment-based methods in quantum chemistry.

    Science.gov (United States)

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-12-16

    CONSPECTUS: Quantum chemistry and electronic structure theory have proven to be essential tools to the experimental chemist, in terms of both a priori predictions that pave the way for designing new experiments and rationalizing experimental observations a posteriori. Translating the well-established success of electronic structure theory in obtaining the structures and energies of small chemical systems to increasingly larger molecules is an exciting and ongoing central theme of research in quantum chemistry. However, the prohibitive computational scaling of highly accurate ab initio electronic structure methods poses a fundamental challenge to this research endeavor. This scenario necessitates an indirect fragment-based approach wherein a large molecule is divided into small fragments and is subsequently reassembled to compute its energy accurately. In our quest to further reduce the computational expense associated with the fragment-based methods and overall enhance the applicability of electronic structure methods to large molecules, we realized that the broad ideas involved in a different area, theoretical thermochemistry, are transferable to the area of fragment-based methods. This Account focuses on the effective merger of these two disparate frontiers in quantum chemistry and how new concepts inspired by theoretical thermochemistry significantly reduce the total number of electronic structure calculations needed to be performed as part of a fragment-based method without any appreciable loss of accuracy. Throughout, the generalized connectivity based hierarchy (CBH), which we developed to solve a long-standing problem in theoretical thermochemistry, serves as the linchpin in this merger. The accuracy of our method is based on two strong foundations: (a) the apt utilization of systematic and sophisticated error-canceling schemes via CBH that result in an optimal cutting scheme at any given level of fragmentation and (b) the use of a less expensive second

  18. The role of information systems in management decision making - a theoretical approach

    Directory of Open Access Journals (Sweden)

    PhD. Associate Professor Department of Management & Informatics Mihane Berisha-Namani

    2010-12-01

    Full Text Available In modern conditions of globalisation and the development of information technology, information processing activities have come to be seen as essential to the success of businesses and organisations. Information has become essential for making decisions and a crucial asset in organisations, whereas information systems are the technology required for information processing. The application of information systems technology in businesses and organisations has opened up new possibilities for running and managing organisations, and has improved management decision making. The purpose of this paper is to give an understanding of the role that information systems have in management decision making and to discuss how managers of organisations can make the best use of information systems. The paper starts by identifying the functions of management and managerial roles and continues with information systems usage at three levels of decision making. It specifically addresses how information systems can help managers reduce uncertainty in decision making and includes some important implications of information systems usage for managers. This study thus provides a framework for the effective use of information systems in general and offers an alternative approach to investigating the impact that information systems technology has on management decision making in particular.

  19. A Comprehensive Theoretical Framework for Personal Information-Related Behaviors on the Internet

    NARCIS (Netherlands)

    Beldad, Ardion; Jong, de Menno; Steehouder, Michaël

    2011-01-01

    Although there is near consensus on the need for privacy, the reality is that people's attitude toward their personal information privacy is complex. For instance, even when people claim that they value their information privacy, they often trade their personal information for tangible or intangible benefits.

  20. When the Mannequin Dies, Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

    Science.gov (United States)

    Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W

    2016-06-01

    Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners, as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and the degree of presession anxiety, and the results were used to refine the theoretical framework. Using this approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.

  1. A novel game theoretic approach for modeling competitive information diffusion in social networks with heterogeneous nodes

    Science.gov (United States)

    Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz

    2017-01-01

    Influence maximization deals with the identification of the most influential nodes in a social network given an influence model. In this paper, a game theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved using Nash equilibrium on a real-world dataset. It is shown that none of the well-known strategies is stable and that at least one player has an incentive to deviate from the proposed strategy. Moreover, unilateral deviation from the Nash equilibrium strategy reduces the deviating player's payoff. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures, has an effect on the determination of the influential nodes in the network.
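
    The deviation argument used above (a profile is a Nash equilibrium only if no player gains by unilaterally changing strategy) can be checked mechanically on a strategic-form game. A minimal sketch follows; the payoff matrices are invented stand-ins for the competing seeding strategies, not the paper's model.

```python
import numpy as np

payoff_a = np.array([[3, 1],     # rows: player A's strategies
                     [4, 2]])
payoff_b = np.array([[3, 4],     # columns: player B's strategies
                     [1, 2]])

def is_nash(row: int, col: int) -> bool:
    """A profile is Nash iff each player's choice is a best response
    to the other player's fixed choice."""
    best_row = payoff_a[:, col].max()
    best_col = payoff_b[row, :].max()
    return payoff_a[row, col] == best_row and payoff_b[row, col] == best_col

for r in range(2):
    for c in range(2):
        print(f"profile ({r}, {c}) Nash: {is_nash(r, c)}")
```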

  2. A theoretical perspective to inform assessment and treatment strategies for animal hoarders.

    Science.gov (United States)

    Patronek, Gary J; Nathanson, Jane N

    2009-04-01

    Animal hoarding is a poorly understood, maladaptive, destructive behavior whose etiology and pathology are only beginning to emerge. We compare and contrast animal hoarding to the compulsive hoarding of objects and proceed to draw upon attachment theory, the literature of personality disorder and trauma, and our own clinical experience to propose a developmental trajectory. Throughout life, there is a persistent struggle to form a functional attachment style and achieve positive social integration. For some people, particularly those affected by a dysfunctional primary attachment experience in childhood, a protective, comforting relationship with animals may form an indelible imprint. In adulthood, when human attachment has been chronically problematic, compulsive caregiving of animals can become the primary means of maintaining or building a sense of self. Improving assessment and treatment of animal hoarders requires attention to contributing psychosocial conditions, while taking into account the centrality of the animals to the hoarder's identity, self-esteem and sense of control. It is our hope that the information presented will provide a basis upon which clinicians can focus their own counseling style, assessment, and methods of treatment.

  3. Reference group theory with implications for information studies: a theoretical essay

    Directory of Open Access Journals (Sweden)

    E. Murell Dawson

    2001-01-01

    Full Text Available This article explores the role and implications of reference group theory in relation to the field of library and information science. Reference group theory is based upon the principle that people take the standards of significant others as a basis for making self-appraisals, comparisons, and choices regarding the need for and use of information. Research that applies concepts of reference group theory to various sectors of library and information studies can provide data useful in enhancing areas such as information-seeking research, special populations, and uses of information. The implications are promising: knowledge gained from such research can help information professionals better understand the role that theory plays in examining the ways in which people manage their information and social worlds.

  4. A criterion based on an information theoretic measure for goodness of fit between classifier and data base

    Science.gov (United States)

    Eigen, D. J.; Davida, G. I.; Northouse, R. A.

    1973-01-01

    A criterion for characterizing an iteratively trained classifier is presented. The criterion is based on an information theoretic measure that is developed from modeling classifier training iterations as a set of cascaded channels. The criterion is formulated as a figure of merit and as a performance index, to check the appropriateness of applying the characterized classifier to an unknown data base and to implement classifier updates and data selection, respectively.

  5. A criterion based on an information theoretic measure for goodness of fit between classifier and data base. [in pattern recognition

    Science.gov (United States)

    Eigen, D. J.; Davida, G. I.; Northouse, R. A.

    1974-01-01

    A criterion for characterizing an iteratively trained classifier is presented. The criterion is based on an information theoretic measure that is developed from modeling classifier training iterations as a set of cascaded channels. The criterion is formulated as a figure of merit and as a performance index to check the appropriateness of application of the characterized classifier to an unknown data base and for implementing classifier updates and data selection, respectively.

  6. Surviving Grounded Theory Research Method in an Academic World: Proposal Writing and Theoretical Frameworks

    Directory of Open Access Journals (Sweden)

    Naomi Elliott

    2012-12-01

    Full Text Available Grounded theory research students are frequently faced with the challenge of writing a research proposal and using a theoretical framework as part of the academic requirements for a degree programme. Drawing from the personal experiences of two PhD graduates who used classic grounded theory in two different universities, this paper highlights key lessons learnt that may help future students setting out to use the grounded theory method. It identifies key discussion points that students may find useful when engaging with critical audiences and defending their grounded theory thesis at final examination. Key discussion points include: the difference between inductive and deductive inquiry; how the grounded theory method of data gathering and analysis provides researchers with a viable means of generating new theory; the primacy of the questions used in data gathering and data analysis; and the research-theory link as opposed to the theory-research link.

  7. EQUIVALENT EXCITATION METHOD FOR VIBRATION ISOLATION DESIGN: THEORETICAL ANALYSIS AND EXPERIMENTAL RESULTS

    Institute of Scientific and Technical Information of China (English)

    Huo Rui; Shi Yin

    2005-01-01

    In view of the difficulties of directly measuring the excitations inside source equipment, and of their significant influence on vibration isolation effectiveness, a dynamical model for the vibration isolation of a rigid machine with six degrees of freedom, mounted on a flexible foundation through multiple mounts, is analyzed, in which the complicated and multiple disturbances inside the machine are described as an equivalent excitation spectrum. A method for estimating the equivalent excitation spectrum from the system's dynamic responses is discussed for the quantitative prediction of isolation effectiveness. Both theoretical analysis and experimental results are presented. Further work shows the quantitative prediction of transmitted power flow in a flexible vibration isolation experimental system using the proposed equivalent excitation spectrum method, by comparison with its test results.

  8. Automated segmentation of MS lesions in FLAIR, DIR and T2-w MR images via an information theoretic approach

    Science.gov (United States)

    Hill, Jason E.; Matlock, Kevin; Pal, Ranadip; Nutter, Brian; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) is a vital tool in the diagnosis and characterization of multiple sclerosis (MS). MS lesions can be imaged with relatively high contrast using either Fluid Attenuated Inversion Recovery (FLAIR) or Double Inversion Recovery (DIR). Automated segmentation and accurate tracking of MS lesions from MRI remain challenging problems. Here, an information theoretic approach to clustering the voxels in pseudo-colorized multispectral MR data (FLAIR, DIR, T2-weighted) is utilized to automatically segment MS lesions of various sizes and noise levels. The Improved Jump Method (IJM) clustering, assisted by edge suppression, is applied to the segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and MS lesions, if present, within a subset of slices determined to be the best MS lesion candidates via Otsu's method. From this preliminary clustering, the modal data values for the tissues can be determined. A Euclidean distance is then used to estimate the fuzzy memberships of each brain voxel for all tissue types and their 50/50 partial volumes. From these estimates, binary discrete and fuzzy MS lesion masks are constructed. Validation is provided by using three synthetic brains with MS lesions (mild, moderate and severe) with labeled ground truths. The MS lesions of mild, moderate, and severe designations were detected with sensitivities of 83.2%, 88.5%, and 94.5%, and with corresponding Dice similarity coefficients (DSC) of 0.7098, 0.8739, and 0.8266, respectively. The effect of MRI noise is also examined via simulated noise and the application of a bilateral filter in preprocessing.
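
    The distance-based membership step described above can be sketched as follows; the modal pseudo-color values and the inverse-distance membership function here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical modal pseudo-color (RGB) values for each tissue class, as
# would be obtained from the preliminary IJM clustering.
modes = {
    "WM":     np.array([200.0, 180.0, 170.0]),
    "GM":     np.array([140.0, 120.0, 130.0]),
    "CSF":    np.array([ 40.0,  60.0,  90.0]),
    "lesion": np.array([230.0, 230.0, 120.0]),
}

def fuzzy_memberships(voxel: np.ndarray, modes: dict, eps: float = 1e-9) -> dict:
    """Membership of one voxel in each tissue class from the inverse
    Euclidean distance to the class modal value (memberships sum to 1)."""
    inv = {k: 1.0 / (np.linalg.norm(voxel - m) + eps) for k, m in modes.items()}
    total = sum(inv.values())
    return {k: v / total for k, v in inv.items()}

print(fuzzy_memberships(np.array([220.0, 225.0, 115.0]), modes))
```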

  9. Principles and methods of quantum information technologies

    CERN Document Server

    Semba, Kouichi

    2016-01-01

    This book presents the research and development-related results of the “FIRST” Quantum Information Processing Project, which was conducted from 2010 to 2014 with the support of the Council for Science, Technology and Innovation of the Cabinet Office of the Government of Japan. The project supported 33 research groups and explored five areas: quantum communication, quantum metrology and sensing, coherent computing, quantum simulation, and quantum computing. The book is divided into seven main sections. Parts I through V, which consist of twenty chapters, focus on the system and architectural aspects of quantum information technologies, while Parts VI and VII, which consist of eight chapters, discuss the superconducting quantum circuit, semiconductor spin and molecular spin technologies.   Readers will be introduced to new quantum computing schemes such as quantum annealing machines and coherent Ising machines, which have now arisen as alternatives to standard quantum computers and are designed to successf...

  10. Theoretical Study of Similar Experimental Method for Durability of Concrete under Artificial Climate Environment

    Institute of Scientific and Technical Information of China (English)

    GENG Ou; FENG Tai; LI Debao; LI Qingtao

    2016-01-01

    Full Text Available Based on the similarity theory, a new experimental method named the Similar Experimental Method for Durability of Concrete (SEMDC) was established. The existing experimental methods for the durability of concrete were summarized, and their merits and demerits were analyzed. Major factors affecting the durability of concrete were identified through a literature review. These factors were analyzed, similarity criteria were established according to the similarity theory, and the SEMDC was then established according to the rules of these criteria. The various influential factors of the experimental method were analyzed, and the merits and demerits of this new experimental method were discussed. According to SEMDC, changing the geometric shrinkage ratio is the only way to accelerate the test while keeping the experiment similar to reality. Few other parameters need to be changed in SEMDC, making the test easy to carry out. According to SEMDC, the time shrinkage ratio is the square of the geometric shrinkage ratio, so an appropriate increase of the geometric shrinkage ratio can accelerate the test. Finally, an example of an experimental design for the durability of concrete was devised theoretically based on the SEMDC theory.

  11. Uncontrolled inexact information within bundle methods

    OpenAIRE

    Malick, Jérôme; Welington De Oliveira, ·; Zaourar-Michel, Sofia

    2016-01-01

    We consider convex nonsmooth optimization problems where additional information with uncontrolled accuracy is readily available. It is often the case when the objective function is itself the output of an optimization solver, as for large-scale energy optimization problems tackled by decomposition. In this paper, we study how to incorporate the uncontrolled linearizations into (proximal and level) bundle algorithms in view of generating better iterates and possibly acc...

  12. A Survey of Game Theoretic Approaches to Modelling Decision-Making in Information Warfare Scenarios

    Directory of Open Access Journals (Sweden)

    Kathryn Merrick

    2016-07-01

    Full Text Available Our increasing dependence on information technologies and autonomous systems has escalated international concern for information- and cyber-security in the face of politically, socially and religiously motivated cyber-attacks. Information warfare tactics that interfere with the flow of information can challenge the survival of individuals and groups. It is increasingly important that both humans and machines can make decisions that ensure the trustworthiness of information, communication and autonomous systems. Subsequently, an important research direction is concerned with modelling decision-making processes. One approach to this involves modelling decision-making scenarios as games using game theory. This paper presents a survey of information warfare literature, with the purpose of identifying games that model different types of information warfare operations. Our contribution is a systematic identification and classification of information warfare games, as a basis for modelling decision-making by humans and machines in such scenarios. We also present a taxonomy of games that map to information warfare and cyber crime problems as a precursor to future research on decision-making in such scenarios. We identify and discuss open research questions including the role of behavioural game theory in modelling human decision making and the role of machine decision-making in information warfare scenarios.

  13. Theoretical analysis of three methods for calculating thermal insulation of clothing from thermal manikin.

    Science.gov (United States)

    Huang, Jianhua

    2012-07-01

    There are three methods for calculating thermal insulation of clothing measured with a thermal manikin, i.e. the global method, the serial method, and the parallel method. Under the condition of homogeneous clothing insulation, these three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method. There is a possibility that the insulation value from the serial method is lower than the value from the global method. The serial method always gives higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under the circumstance of uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing. If the constant heat flux mode is used in the manikin test, the serial method can be used to calculate the thermal insulation of clothing. The global method should be used for calculating thermal insulation of clothing for all manikin control modes, especially for thermal comfort regulation mode. The global method should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide more information with respect to the different parts of clothing.
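
    A sketch of the three calculations as they are commonly defined in the manikin literature, using hypothetical per-segment data (segment areas, local surface temperatures, and local heat fluxes); the paper's exact notation may differ.

```python
import numpy as np

# Hypothetical per-segment manikin data: area (m^2), surface temperature (C),
# heat flux density (W/m^2); ambient temperature (C).
area = np.array([0.15, 0.20, 0.30, 0.25])
t_skin = np.array([34.0, 33.5, 34.2, 33.8])
q = np.array([60.0, 75.0, 55.0, 70.0])
t_amb = 20.0

f = area / area.sum()                      # area fractions

# Global: area-weighted mean temperature over area-weighted mean flux.
i_global = (np.sum(f * t_skin) - t_amb) / np.sum(f * q)

# Serial: area-weighted mean of the local insulations.
i_serial = np.sum(f * (t_skin - t_amb) / q)

# Parallel: reciprocal of the area-weighted mean local conductance.
i_parallel = 1.0 / np.sum(f * q / (t_skin - t_amb))

print(f"global={i_global:.4f}  serial={i_serial:.4f}  "
      f"parallel={i_parallel:.4f}  (m^2*C/W)")
```

    With uniform surface temperature the parallel and global values coincide, and with uniform heat flux the serial and global values coincide, consistent with the statements above.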

  14. Theoretical reflections on the paradigmatic construction of Information Science: considerations about the cognitive and social paradigm(s)

    Directory of Open Access Journals (Sweden)

    Jonathas Luiz Carvalho Silva

    2013-07-01

    Full Text Available This article presents research on the theoretical and epistemological processes that influence the formation of the cognitive paradigm of Information Science (IS), noting the emergence of a social paradigm within domain analysis and the hermeneutics of information. To this end, we adopted the reflections of classical and contemporary authors such as Thomas Kuhn, Boaventura Santos, Capurro, Hjørland and Albrechtsen. We conclude that the cognitive paradigm in IS is a consolidated matter, whereas the social paradigm is still under construction, which will allow the creation of perceptions, interpretations and contributions that fill gaps left by other paradigms.

  15. Aromaticity and antiaromaticity of substituted fulvene derivatives: perspectives from the information-theoretic approach in density functional reactivity theory.

    Science.gov (United States)

    Yu, Donghai; Rong, Chunying; Lu, Tian; Chattaraj, Pratim K; De Proft, Frank; Liu, Shubin

    2017-07-19

    Even though the concepts of aromaticity and antiaromaticity are extremely important and widely used, many controversies still exist in the literature, which are believed to originate from the fact that many types of aromaticity have been discovered and, at the same time, many aromaticity indexes have been proposed. In this work, using seven series of substituted fulvene derivatives as an example and with the information-theoretic approach in density functional reactivity theory, we examine these concepts from a different perspective. We investigate the changing patterns of Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Renyi entropy on the ring carbon atoms of these systems. Meanwhile, we also consider the variation trends of four representative aromaticity indexes: FLU, HOMA, ASE and NICS. Statistical analyses among these quantities show that, for the same ring structure of the derivatives, both information-theoretic quantities and aromaticity indexes obey the same changing pattern, which is valid across all seven systems studied. However, cross correlations between these two sets of quantities yield two completely opposite patterns. These ring-structure dependent correlations are in good agreement with Hückel's 4n + 2 rule of aromaticity and 4n rule of antiaromaticity. Our results should provide a novel and complementary viewpoint on how aromaticity and antiaromaticity should be appreciated and categorized. More studies are in progress to further our understanding of the matter.
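
    Two of the information-theoretic quantities named above have simple numerical definitions: the Shannon entropy S = -∫ rho ln rho dr and the information gain (Kullback-Leibler divergence) against a reference density. A one-dimensional toy sketch, with normalized Gaussians standing in for the molecular electron density, is given below; a real DFRT calculation integrates the actual density on a 3D grid.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gaussian(x: np.ndarray, sigma: float) -> np.ndarray:
    """Model 'density' normalized to unit charge on the grid."""
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / np.trapz(g, dx=dx)

rho  = gaussian(x, 1.0)    # density of the system
rho0 = gaussian(x, 1.5)    # reference (e.g., promolecule) density

# S = -∫ rho ln rho ;  I_G = ∫ rho ln(rho / rho0)
shannon = -np.trapz(rho * np.log(rho + 1e-300), dx=dx)
info_gain = np.trapz(rho * np.log((rho + 1e-300) / (rho0 + 1e-300)), dx=dx)

print(f"Shannon entropy: {shannon:.4f}   information gain: {info_gain:.4f}")
```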

  16. Theoretical and practical implications from the use of structuration theory in public sector information systems research

    NARCIS (Netherlands)

    Veenstra, A.F.E. van; Melin, U.; Axelson, K.

    2014-01-01

    To gain a better understanding of the development, implementation and use of information technology (IT), many scholars in the field of information systems (IS) use structuration theory (ST). However, ST has so far been more seldom applied to, and reflected upon, in studies of public sector IS to account

  17. Method for Extracting Product Information from TV Commercial

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2011-09-01

    Full Text Available Television (TV) commercial programs contain important product information that is displayed for only seconds. People who need that information have insufficient time to note it, or even to read it. This research focuses on automatically detecting text and extracting important information from TV commercials, to provide information in real time and for video indexing. We propose a method for product information extraction from TV commercials using a knowledge-based system with a pattern-matching, rule-based method. Implementation and experiments on 50 commercial screenshot images achieved highly accurate results for text extraction and information recognition.
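
    A toy illustration of rule-based extraction from OCR'd commercial text follows; the fields and regular-expression rules are invented for illustration and are not the paper's actual rule base.

```python
import re

# Each rule maps a product-information field to a pattern (all hypothetical).
rules = {
    "price": re.compile(r"(?:\$|USD\s?)\d+(?:\.\d{2})?"),
    "url":   re.compile(r"www\.[\w.-]+\.\w{2,}"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def extract(ocr_text: str) -> dict:
    """Apply each pattern rule and keep the first match per field."""
    out = {}
    for field, pattern in rules.items():
        m = pattern.search(ocr_text)
        if m:
            out[field] = m.group(0)
    return out

print(extract("Call 800-555-0199 now! Only $19.99 - visit www.example.com"))
```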

  18. A general information theoretical proof for the second law of thermodynamics

    Institute of Scientific and Technical Information of China (English)

    ZHANG QiRen

    2008-01-01

    It is shown that the conservation and the non-additivity of the information, together with the additivity of the entropy, make the entropy increase in an isolated system. The collapse of the entangled quantum state offers an example of the information non-additivity. Nevertheless, the non-additivity of information is also true in other fields in which the interaction information is important. Examples are classical statistical mechanics, social statistics and financial processes. The second law of thermodynamics is thus proven in its most general form. It is exactly true not only in quantum and classical physics but also in other processes in which the information is conservative and non-additive.

  19. A general information theoretical proof for the second law of thermodynamics

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    It is shown that the conservation and the non-additivity of the information, together with the additivity of the entropy, make the entropy increase in an isolated system. The collapse of the entangled quantum state offers an example of the information non-additivity. Nevertheless, the non-additivity of information is also true in other fields in which the interaction information is important. Examples are classical statistical mechanics, social statistics and financial processes. The second law of thermodynamics is thus proven in its most general form. It is exactly true not only in quantum and classical physics but also in other processes in which the information is conservative and non-additive.

  20. Using Qualitative Methods to Inform Scale Development

    Science.gov (United States)

    Rowan, Noell; Wulff, Dan

    2007-01-01

    This article describes the process by which one study utilized qualitative methods to create items for a multidimensional scale to measure twelve-step program affiliation. The process included interviewing fourteen addicted persons while in twelve-step focused treatment about specific pros (things they like or would miss out on by not being…

  1. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  2. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Aiming at concrete tasks of information fusion in computer pan-vision (CPV) systems, information fusion methods are studied thoroughly, and some research progress is presented. The recognition of vision-tested objects is realized by fusing visual information with non-visual auxiliary information; applications include the recognition of material defects, an intelligent robot's autonomous recognition of parts, and the automatic understanding and recognition of defect images.

  3. Bootstrap Methods for the Empirical Study of Decision-Making and Information Flows in Social Systems

    CERN Document Server

    DeDeo, Simon; Klingenstein, Sara; Hitchcock, Tim

    2013-01-01

    We characterize the statistical bootstrap for the estimation of information-theoretic quantities from data, with particular reference to its use in the study of large-scale social phenomena. Our methods allow one to preserve, approximately, the underlying axiomatic relationships of information theory (in particular, consistency under arbitrary coarse-graining) that motivate use of these quantities in the first place, while providing reliability comparable to the state of the art for Bayesian estimators. We show how information-theoretic quantities allow for rigorous empirical study of the decision-making capacities of rational agents, and the time-asymmetric flows of information in distributed systems. We provide illustrative examples by reference to ongoing collaborative work on the semantic structure of the British Criminal Court system and the conflict dynamics of the contemporary Afghanistan insurgency.
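
    A minimal sketch of the basic idea (the paper's estimators are more sophisticated): bootstrap resampling of a discrete sample to attach an uncertainty interval to a plug-in entropy estimate. The categorical data here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_entropy(samples: np.ndarray) -> float:
    """Naive (plug-in) Shannon entropy, in bits, of a discrete sample."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Hypothetical categorical data, e.g., coded outcomes of court trials.
data = rng.choice(4, size=500, p=[0.5, 0.25, 0.15, 0.1])

# Nonparametric bootstrap: resample with replacement, re-estimate entropy.
boot = np.array([plugin_entropy(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])

est = plugin_entropy(data)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"H = {est:.3f} bits, 95% bootstrap interval [{lo:.3f}, {hi:.3f}]")
```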

  4. Information in Our World: Conceptions of Information and Problems of Method in Information Science

    Science.gov (United States)

    Ma, Lai

    2012-01-01

    Many concepts of information have been proposed and discussed in library and information science. These concepts of information can be broadly categorized as empirical and situational information. Unlike nomenclatures in many sciences, however, the concept of information in library and information science does not bear a generally accepted…

  5. Compensating for electrode polarization in dielectric spectroscopy studies of colloidal suspensions: theoretical assessment of existing methods

    Directory of Open Access Journals (Sweden)

    Claire Chassagne

    2016-07-01

    Full Text Available Dielectric spectroscopy can be used to determine the dipole moment of colloidal particles, from which important interfacial electrokinetic properties, for instance their zeta potential, can be deduced. Unfortunately, dielectric spectroscopy measurements are hampered by electrode polarization (EP). In this article, we review several procedures to compensate for this effect. First, EP in electrolyte solutions is described: the complex conductivity is derived as a function of frequency for two cell geometries (planar and cylindrical) with blocking electrodes. The corresponding equivalent circuit for the electrolyte solution is given for each geometry. This equivalent circuit model is extended to suspensions. The complex conductivity of a suspension, in the presence of EP, is then calculated from the measured impedance. Different methods for compensating for EP are critically assessed with the help of the theoretical findings. Their limits of validity are given in terms of characteristic frequencies. One of these frequencies identifies the frequency range within which data uncorrected for EP may be used to assess the dipole moment of colloidal particles. In order to extract this dipole moment from the measured data, two methods are reviewed: one is based on the use of existing models for the complex conductivity of suspensions, the other is the logarithmic derivative method. An extension of the logarithmic derivative method to multiple relaxations is proposed.
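
    The logarithmic derivative method mentioned above computes a conduction-free loss approximation from the real permittivity, eps''_der = -(pi/2) d eps'/d ln(omega), which is one route to suppressing the electrode-polarization contribution. A sketch on synthetic single-Debye data (all parameter values hypothetical):

```python
import numpy as np

omega = np.logspace(0, 7, 300)              # angular frequency (rad/s)
d_eps, tau = 20.0, 1e-4                     # Debye strength and relaxation time
eps_real = 3.0 + d_eps / (1.0 + (omega * tau) ** 2)   # synthetic eps'(omega)

# Logarithmic derivative loss: -(pi/2) * d eps' / d ln(omega).
eps_der = -(np.pi / 2.0) * np.gradient(eps_real, np.log(omega))

# The peak of the derivative spectrum marks the relaxation (omega ~ 1/tau).
print(f"peak at omega = {omega[np.argmax(eps_der)]:.3g} rad/s "
      f"(expected ~ {1.0 / tau:.3g})")
```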

  6. Theoretical study of tautomers and photoisomers of avobenzone by DFT methods.

    Science.gov (United States)

    Trossini, Gustavo H G; Maltarollo, Vinicius G; Garcia, Ricardo D'A; Pinto, Claudinéia A S O; Velasco, Maria V R; Honorio, Kathia M; Baby, André R

    2015-12-01

    Organic ultraviolet (UV) filters such as cinnamates, benzophenones, p-aminobenzoic derivatives, and avobenzone (which have well-established and recognized UV-filtering efficacies) are employed in cosmetic/pharmaceutical products to minimize the harm caused by exposure of the skin to sunlight. In this study, a detailed investigation of the photostability and tautomerism mechanisms of avobenzone was performed utilizing DFT methods. The UV spectral profile of avobenzone was also simulated, and the results showed good agreement with experimental data. Furthermore, the calculations were able to distinguish tautomers and photoisomers of the studied organic filter based on their properties, thus showing the potential to develop new organic UV filters. Graphical Abstract Theoretical studies of avobenzone and its tautomers by TD-DFT.

  7. Valence and lowest Rydberg electronic states of phenol investigated by synchrotron radiation and theoretical methods

    Science.gov (United States)

    Limão-Vieira, P.; Duflot, D.; Ferreira da Silva, F.; Lange, E.; Jones, N. C.; Hoffmann, S. V.; Śmiałek, M. A.; Jones, D. B.; Brunger, M. J.

    2016-07-01

    We present the experimental high-resolution vacuum ultraviolet (VUV) photoabsorption spectra of phenol covering for the first time the full 4.3-10.8 eV energy-range, with absolute cross sections determined. Theoretical calculations on the vertical excitation energies and oscillator strengths were performed using time-dependent density functional theory and the equation-of-motion coupled cluster method restricted to single and double excitations level. These have been used in the assignment of valence and Rydberg transitions of the phenol molecule. The VUV spectrum reveals several new features not previously reported in the literature, with particular reference to the 6.401 eV transition, which is here assigned to the 3sσ/σ∗(OH)←3π(3a″) transition. The measured absolute photoabsorption cross sections have been used to calculate the photolysis lifetime of phenol in the earth's atmosphere (0-50 km).
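
    The photolysis lifetime follows from the measured cross sections via the standard rate integral J = ∫ sigma(lambda) F(lambda) phi(lambda) dlambda, with lifetime 1/J. A sketch with placeholder cross-section and actinic-flux values (not the paper's data):

```python
import numpy as np

wl = np.linspace(290e-9, 400e-9, 111)                  # wavelength grid (m)
sigma = 1e-22 * np.exp(-((wl - 310e-9) / 30e-9) ** 2)  # cross section (m^2), hypothetical
flux = 1e25 * np.ones_like(wl)                         # actinic flux (photons m^-2 s^-1 m^-1), placeholder
phi = 1.0                                              # quantum yield (assumed unity)

J = np.trapz(sigma * flux * phi, wl)                   # photolysis rate (s^-1)
print(f"J = {J:.3e} s^-1, lifetime = {1.0 / J / 3600.0:.2f} h")
```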

  8. Theoretical study on real tooth surface of novel toroidal worm by the forming method

    Directory of Open Access Journals (Sweden)

    Zhang Xiaoping

    2012-08-01

    Full Text Available The novel toroidal worm-gearing is a kind of worm transmission with spherical meshing elements, which is made up of worm, steel balls and worm gear, and its loading capacity and adaptability to errors can be improved by the mismatched technology applied to the meshing pair. Based on the directrix of worm surface, the mathematic model of worm surface is established, and the directrix-based forming method for machining worm surface is proposed. Further, the principle error in the machining process is analyzed, and the theoretic and real tooth surfaces of worm are fitted and compared on OpenGL platform. The results show that the tooth profile error can be controlled at the range of 0~1×10-5mm, and it is always 0 at the pressure angle.

  9. Quantitative assessment of drivers of recent climate variability: An information theoretic approach

    CERN Document Server

    Bhaskar, Ankush; Vichare, Geeta; Koganti, Triven; Gurubaran, S

    2016-01-01

    Identification and quantification of possible drivers of recent climate variability remain a challenging task. This important issue is addressed by adopting a non-parametric information theory technique, the transfer entropy and its normalized variant. It distinctly quantifies the actual information exchanged, along with the directional flow of information, between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information, etc. Measurements of greenhouse gases (CO2, CH4, and N2O); volcanic aerosols; solar activity (UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR)); the El Nino Southern Oscillation (ENSO); and the Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding climate signals. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA, while UV (~9%) and ENSO (~12%) act as secondary drivers...
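
    A minimal plug-in estimate of transfer entropy with history length 1 can be sketched as below (the paper uses more careful estimation and a normalized variant); the binary test series are synthetic.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x: np.ndarray, y: np.ndarray) -> float:
    """Plug-in transfer entropy TE(X -> Y), in bits, history length 1:
    TE = sum p(y_{t+1}, y_t, x_t) log2[ p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1].tolist())
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)              # y copies x with a one-step lag
print(transfer_entropy(x, y))  # ~1 bit: x drives y
print(transfer_entropy(y, x))  # ~0 bits: no information flow back
```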

  10. Stripping syntax from complexity: An information-theoretical perspective on complex systems

    CERN Document Server

    Quax, Rick; Thurner, Stefan; Sloot, Peter M A

    2016-01-01

    Claude Shannon's information theory (1949) has had a revolutionary impact on communication science. A crucial property of his framework is that it decouples the meaning of a message from the mechanistic details of the actual communication process itself, which opened the way to solving long-standing communication problems. Here we argue that a similar impact could be expected from applying information theory in the context of complexity science to answer long-standing, cross-domain questions about the nature of complex systems. This happens by decoupling the domain-specific model details (e.g., neuronal networks, ecosystems, flocks of birds) from the cross-domain phenomena that characterize complex systems (e.g., criticality, robustness, tipping points). This goes beyond using information theory as a non-linear correlation measure; namely, it allows describing a complex system entirely in terms of the storage, transfer, and modification of informational bits. After all, a phenomenon that does not depend on model ...

  11. Formulating research methods for information systems v.1

    CERN Document Server

    Willcocks, Leslie P; Lacity, Mary C

    2015-01-01

    This edited two-volume collection presents the most interesting and compelling articles pertaining to the formulation of research methods used to study information systems from the 30-year publication history of the Journal of Information Technology.

  12. Information theoretic analysis of proprioceptive encoding during finger flexion in the monkey sensorimotor system.

    Science.gov (United States)

    Witham, Claire L; Baker, Stuart N

    2015-01-01

    There is considerable debate over whether the brain codes information using neural firing rate or the fine-grained structure of spike timing. We investigated this issue in spike discharge recorded from single units in the sensorimotor cortex, deep cerebellar nuclei, and dorsal root ganglia in macaque monkeys trained to perform a finger flexion task. The task required flexion to four different displacements against two opposing torques; the eight possible conditions were randomly interleaved. We used information theory to assess coding of task condition in spike rate, discharge irregularity, and spectral power in the 15- to 25-Hz band during the period of steady holding. All three measures coded task information in all areas tested. Information coding was most often independent between irregularity and 15-25 Hz power (60% of units), moderately redundant between spike rate and irregularity (56% of units redundant), and highly redundant between spike rate and power (93%). Most simultaneously recorded unit pairs coded using the same measure independently (86%). Knowledge of two measures often provided extra information about task, compared with knowledge of only one alone. We conclude that sensorimotor systems use both rate and temporal codes to represent information about a finger movement task. As well as offering insights into neural coding, this work suggests that incorporating spike irregularity into algorithms used for brain-machine interfaces could improve decoding accuracy. Copyright © 2015 the American Physiological Society.

  13. Versatile Formal Methods Applied to Quantum Information.

    Energy Technology Data Exchange (ETDEWEB)

    Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach, which allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.

  14. Some theoretical aspects of elastic wave modeling with a recently developed spectral element method

    Institute of Scientific and Technical Information of China (English)

    WANG XiuMing; SERIANI Geza; LIN WeiJun

    2007-01-01

    A spectral element method has been recently developed for solving elastodynamic problems. The numerical solutions are obtained by using the weak formulation of the elastodynamic equation for heterogeneous media, based on the Galerkin approach applied to a partition, in small subdomains, of the original physical domain. In this work, some mathematical aspects of the method and the associated algorithm implementation are systematically investigated. Two kinds of orthogonal basis functions, constructed with Legendre and Chebyshev polynomials, and their related Gauss-Lobatto collocation points are introduced. The related integration formulas are obtained. The standard error estimations and expansion convergence are discussed. An element-by-element pre-conditioned conjugate gradient linear solver in the space domain and a staggered predictor/multi-corrector algorithm in the time integration are used for strong heterogeneous elastic media. As a consequence, neither the global matrices nor the effective force vector is assembled. When analytical formulas are used for the element quadrature, there is even no need for forming element matrix in order to further save memory without losing much in computational efficiency. The element-by-element algorithm uses an optimal tensor product scheme which makes this method much more efficient than finite-element methods from the point of view of both memory storage and computational time requirements. This work is divided into two parts. The first part mainly focuses on theoretical studies with a simple numerical result for the Chebyshev spectral element, and the second part, mainly with the Legendre spectral element, will give the algorithm implementation, numerical accuracy and efficiency analyses, and then the detailed modeling example comparisons of the proposed spectral element method with a pseudo-spectral method, which will be seen in another work by Lin, Wang and Zhang.
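
    The Gauss-Lobatto collocation points and weights discussed above can be computed directly for the Legendre case: the interior nodes are the roots of P_n'(x) and the weights are w_i = 2 / (n(n+1) P_n(x_i)^2). A sketch using numpy's Legendre utilities:

```python
import numpy as np
from numpy.polynomial import legendre as L

def gauss_lobatto_legendre(n: int):
    """Nodes and weights of the (n+1)-point Gauss-Lobatto-Legendre rule."""
    c = np.zeros(n + 1)
    c[n] = 1.0                     # coefficients of P_n
    dc = L.legder(c)               # coefficients of P_n'
    nodes = np.concatenate(([-1.0], np.sort(L.legroots(dc)), [1.0]))
    weights = 2.0 / (n * (n + 1) * L.legval(nodes, c) ** 2)
    return nodes, weights

x, w = gauss_lobatto_legendre(4)
print(x)               # collocation points in [-1, 1]
print(w, w.sum())      # weights sum to 2, the length of the interval
```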

  15. Some theoretical aspects of elastic wave modeling with a recently developed spectral element method

    Institute of Scientific and Technical Information of China (English)

    SERIANI; Geza

    2007-01-01

    A spectral element method has been recently developed for solving elastodynamic problems. The numerical solutions are obtained by using the weak formulation of the elastodynamic equation for heterogeneous media, based on the Galerkin approach applied to a partition, in small subdomains, of the original physical domain. In this work, some mathematical aspects of the method and the associated algorithm implementation are systematically investigated. Two kinds of orthogonal basis functions, constructed with Legendre and Chebyshev polynomials, and their related Gauss-Lobatto collocation points are introduced. The related integration formulas are obtained. The standard error estimations and expansion convergence are discussed. An element-by-element pre-conditioned conjugate gradient linear solver in the space domain and a staggered predictor/multi-corrector algorithm in the time integration are used for strong heterogeneous elastic media. As a consequence, neither the global matrices nor the effective force vector is assembled. When analytical formulas are used for the element quadrature, there is even no need for forming element matrix in order to further save memory without losing much in computational efficiency. The element-by-element algorithm uses an optimal tensor product scheme which makes this method much more efficient than finite-element methods from the point of view of both memory storage and computational time requirements. This work is divided into two parts. The first part mainly focuses on theoretical studies with a simple numerical result for the Chebyshev spectral element, and the second part, mainly with the Legendre spectral element, will give the algorithm implementation, numerical accuracy and efficiency analyses, and then the detailed modeling example comparisons of the proposed spectral element method with a pseudo-spectral method, which will be seen in another work by Lin, Wang and Zhang.

  16. Computational Study of Chemical Reactivity Using Information-Theoretic Quantities from Density Functional Reactivity Theory for Electrophilic Aromatic Substitution Reactions.

    Science.gov (United States)

    Wu, Wenjie; Wu, Zemin; Rong, Chunying; Lu, Tian; Huang, Ying; Liu, Shubin

    2015-07-23

    The electrophilic aromatic substitution for nitration, halogenation, sulfonation, and acylation is a vastly important category of chemical transformation. Its reactivity and regioselectivity are predominantly determined by the nucleophilicity of the carbon atoms on the aromatic ring, which in turn is immensely influenced by the group attached to the aromatic ring a priori. In this work, taking advantage of recent developments in quantifying nucleophilicity (electrophilicity) with descriptors from the information-theoretic approach in density functional reactivity theory, we examine the reactivity properties of this reaction system from three perspectives. These include the scaling patterns of information-theoretic quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy and information gain at both the molecular and atomic levels; quantitative predictions of the barrier height with both the Hirshfeld charge and information gain; and energetic decomposition analyses of the barrier height for the reactions. To that end, we focused in this work on the identity reaction of the monosubstituted-benzene molecule reacting with hydrogen fluoride using boron trifluoride as the catalyst in the gas phase. We also considered 19 substituent groups, 9 of which are ortho/para directing and the other 9 meta directing, besides the case of R = -H. Scaling patterns for these information-theoretic quantities similar to those found for stable species elsewhere were disclosed for these reaction systems. We also unveiled novel scaling patterns for information gain at the atomic level. The barrier height of the reactions can be reliably predicted using both the Hirshfeld charge and the information gain at the regioselective carbon atom. The ensuing energy decomposition analysis yields an unambiguous picture of the origin of the barrier height, where we show that it is the electrostatic interaction that plays the dominant role, while the roles played by exchange-correlation and

  17. Information-theoretic indices usage for the prediction and calculation of octanol-water partition coefficient.

    Science.gov (United States)

    Persona, Marek; Kutarov, Vladimir V; Kats, Boris M; Persona, Andrzej; Marczewska, Barbara

    2007-01-01

    The paper describes a new method for predicting the octanol-water partition coefficient, based on molecular graph theory. The results obtained using the new method correlate well with experimental values. These results were compared with those obtained by ten other structure-correlation methods. The comparison shows that graph theory can be very useful in structure-correlation research.
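
    As an illustration of the kind of graph-based, information-theoretic descriptor involved (not the paper's specific indices), the Shannon information content of the vertex-degree partition of a hydrogen-suppressed molecular graph can be computed as follows.

```python
import numpy as np
from collections import Counter

def degree_information_content(adj: np.ndarray) -> float:
    """Shannon information content (bits) of the vertex-degree partition:
    I = -sum (n_i/n) log2(n_i/n) over degree classes of the molecular graph."""
    degrees = adj.sum(axis=1)
    n = len(degrees)
    counts = Counter(degrees.tolist())
    p = np.array([c / n for c in counts.values()])
    return float(-np.sum(p * np.log2(p)))

# Hydrogen-suppressed graph of isobutane: a central carbon bonded to three methyls.
isobutane = np.array([[0, 1, 1, 1],
                      [1, 0, 0, 0],
                      [1, 0, 0, 0],
                      [1, 0, 0, 0]])
print(degree_information_content(isobutane))   # two degree classes, 1:3 split
```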

  18. Decomposition of overlapping protein complexes: A graph theoretical method for analyzing static and dynamic protein associations

    Directory of Open Access Journals (Sweden)

    Guimarães Katia S

    2006-04-01

    Full Text Available Abstract Background Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups within a protein interaction network. We develop a graph-theoretical framework that enables automatic construction of such representation. We illustrate the effectiveness of our method by applying it to TNFα/NF-κB and pheromone signaling pathways. Conclusion The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.

  19. Theoretical and experimental study of a new method for prediction of profile drag of airfoil sections

    Science.gov (United States)

    Goradia, S. H.; Lilley, D. E.

    1975-01-01

    Theoretical and experimental studies are described which were conducted for the purpose of developing a new generalized method for the prediction of the profile drag of single-component airfoil sections with sharp trailing edges. This method aims at solving for the flow in the wake from the airfoil trailing edge to a large distance in the downstream direction; the profile drag of the given airfoil section can then easily be obtained from the momentum balance once the shape of the velocity profile at a large distance from the airfoil trailing edge has been computed. Computer program subroutines have been developed for the computation of the profile drag and the flow in the airfoil wake on a CDC 6600 computer. The required inputs to the computer program consist of the free stream conditions and the characteristics of the boundary layers at the airfoil trailing edge, or at the point of incipient separation in the neighborhood of the airfoil trailing edge. The method described is quite general and hence can be extended to the solution of the profile drag for multi-component airfoil sections.
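
    The momentum balance referred to above gives the profile drag per unit span as D' = rho ∫ u (U_inf - u) dy across the far wake. A sketch on a hypothetical Gaussian wake-deficit profile (all numbers illustrative):

```python
import numpy as np

rho, u_inf, chord = 1.225, 50.0, 1.0          # SI units; chord for the coefficient
y = np.linspace(-0.5, 0.5, 1001)              # survey line across the wake (m)
u = u_inf * (1.0 - 0.08 * np.exp(-(y / 0.05) ** 2))  # wake velocity profile (m/s)

d_prime = rho * np.trapz(u * (u_inf - u), y)         # drag per unit span (N/m)
c_d = d_prime / (0.5 * rho * u_inf**2 * chord)       # section drag coefficient
print(f"D' = {d_prime:.3f} N/m, c_d = {c_d:.5f}")
```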

  20. Theoretical determination of chemical rate constants using novel time-dependent methods

    Science.gov (United States)

    Dateo, Christopher E.

    1994-01-01

    The work completed within the grant period 10/1/91 through 12/31/93 falls primarily in the area of reaction dynamics, using both quantum and classical mechanical methodologies. Essentially four projects have been completed and have been published or are being prepared for publication. The majority of the time was spent on the determination of reaction rate coefficients for hydrocarbon fuel combustion reactions relevant to NASA's High Speed Research Program (HSRP). These rate coefficients are important in the design of novel jet engines with low NOx emissions, which through a series of catalytic reactions contribute to the deterioration of the earth's ozone layer. A second area of research concerned the control of chemical reactivity using ultrashort (femtosecond) laser pulses. Recent advances in pulsed-laser technologies have opened up a vast new field to be investigated both experimentally and theoretically. The photodissociation of molecules adsorbed on surfaces, using novel time-independent quantum mechanical methods, was a third project. Finally, using state-of-the-art, high-level ab initio electronic structure methods in conjunction with accurate quantum dynamical methods, the rovibrational energy levels of a triatomic molecule with two nonhydrogen atoms (HCN) were calculated to unprecedented levels of agreement between theory and experiment.

  1. An information-theoretic characterization of the optimal gradient sensing response of cells.

    Directory of Open Access Journals (Sweden)

    Burton W Andrews

    2007-08-01

    Full Text Available Many cellular systems rely on the ability to interpret spatial heterogeneities in chemoattractant concentration to direct cell migration. The accuracy of this process is limited by stochastic fluctuations in the concentration of the external signal and in the internal signaling components. Here we use information theory to determine the optimal scheme to detect the location of an external chemoattractant source in the presence of noise. We compute the minimum amount of mutual information needed between the chemoattractant gradient and the internal signal to achieve a prespecified chemotactic accuracy. We show that more accurate chemotaxis requires greater mutual information. We also demonstrate that a priori information can improve chemotaxis efficiency. We compare the optimal signaling schemes with existing experimental measurements and models of eukaryotic gradient sensing. Remarkably, there is good quantitative agreement between the optimal response when no a priori assumption is made about the location of the existing source, and the observed experimental response of unpolarized Dictyostelium discoideum cells. In contrast, the measured response of polarized D. discoideum cells matches closely the optimal scheme, assuming prior knowledge of the external gradient-for example, through prolonged chemotaxis in a given direction. Our results demonstrate that different observed classes of responses in cells (polarized and unpolarized are optimal under varying information assumptions.

  2. Locating Sensors for Detecting Source-to-Target Patterns of Special Nuclear Material Smuggling: A Spatial Information Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Xuesong Zhou

    2010-08-01

    Full Text Available In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.

  3. Locating sensors for detecting source-to-target patterns of special nuclear material smuggling: a spatial information theoretic approach.

    Science.gov (United States)

    Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong

    2010-01-01

    In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
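
    Both records above quantify sensor value through a Kalman filtering-based information gain. A common way to express such a gain is the log-ratio of prior to posterior covariance determinants; the sketch below is a generic single-update illustration under that assumption, with an invented two-link flow state, measurement matrix H, and noise covariance R, not the authors' network model.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update; returns the posterior state, covariance,
    and the information gain 0.5 * log(det(P_prior) / det(P_post)) in nats."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_post = x + K @ (z - H @ x)
    P_post = (np.eye(len(x)) - K @ H) @ P
    gain = 0.5 * np.log(np.linalg.det(P) / np.linalg.det(P_post))
    return x_post, P_post, gain

# Hypothetical 2-link flow state observed by one sensor on link 0.
x = np.zeros(2); P = np.eye(2) * 4.0
H = np.array([[1.0, 0.0]]); R = np.array([[0.5]])
_, _, g = kalman_update(x, P, np.array([1.2]), H, R)
print(g)  # larger gain -> more informative sensor placement
```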

  4. Understanding intention to use electronic information resources: A theoretical extension of the technology acceptance model (TAM).

    Science.gov (United States)

    Tao, Donghua

    2008-11-06

    This study extended the Technology Acceptance Model (TAM) by examining the roles of two aspects of e-resource characteristics, namely, information quality and system quality, in predicting public health students' intention to use e-resources for completing research paper assignments. Both focus groups and a questionnaire were used to collect data. Descriptive analysis, data screening, and Structural Equation Modeling (SEM) techniques were used for data analysis. The study found that perceived usefulness played a major role in determining students' intention to use e-resources. Perceived usefulness and perceived ease of use fully mediated the impact that information quality and system quality had on behavior intention. The research model enriches the existing technology acceptance literature by extending TAM. Representing two aspects of e-resource characteristics provides greater explanatory information for diagnosing problems of system design, development, and implementation.

  5. Role-based Integration Method of Enterprise Information System

    Institute of Scientific and Technical Information of China (English)

    YU Ming-hui; FEI Qi; CHEN Xue-guang

    2002-01-01

    This paper first analyzes the current situation of enterprise information systems and methods of system integration. A role-based analyzing method is then proposed, which can help confirm the keystone of the construction of an information system and the direction of system integration. Finally, a case study on the integration of a material dispatching information system in a large-scale project is presented briefly. It shows that this new method is more effective than the others.

  6. Theoretical Mathematics

    Science.gov (United States)

    Stöltzner, Michael

    Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.

  7. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    Science.gov (United States)

    Edi Nugroho Soebandrija, Khristian; Pratama, Yogi

    2014-03-01

    This paper has the objective of providing innovation in information technology in both a theoretical and an empirical study. Precisely, both aspects relate to Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. This paper discusses the major aspects of innovation, information technology, performance and competitive advantage. The empirical study of PT. Astra Honda Motor (AHM) refers to SMQR claims, communication systems, and systems analysis and design. Both the major aspects and the empirical study are discussed briefly in the Introduction, and in more detail in the other sections of this paper, in particular in the Literature Review in terms of classical and updated references of current research. The increase in SMQR claims and the communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still relies on email, lengthen the claim settlement time and ultimately cause SMQR claims to be rejected by the supplier. Given this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The analyzed and designed system is expected to facilitate the claim communication process so that it runs in accordance with procedure, fulfills the target claim settlement time, and eliminates the difficulties and problems of the previous manual email-based communication. The system was designed using the system development life cycle method of Kendall & Kendall (2006), covering the SMQR problem communication process, the judgment process by the supplier, the claim process, the claim payment process and the claim monitoring process. After obtaining appropriate system designs for managing SMQR claims, the system was implemented and the improvement in claim communication...

  8. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    Directory of Open Access Journals (Sweden)

    Soebandrija Khristian Edi Nugroho

    2014-03-01

    Full Text Available This paper has the objective of providing innovation in information technology in both a theoretical and an empirical study. Precisely, both aspects relate to Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. This paper discusses the major aspects of innovation, information technology, performance and competitive advantage. The empirical study of PT. Astra Honda Motor (AHM) refers to SMQR claims, communication systems, and systems analysis and design. Both the major aspects and the empirical study are discussed briefly in the Introduction, and in more detail in the other sections of this paper, in particular in the Literature Review in terms of classical and updated references of current research. The increase in SMQR claims and the communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still relies on email, lengthen the claim settlement time and ultimately cause SMQR claims to be rejected by the supplier. Given this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The analyzed and designed system is expected to facilitate the claim communication process so that it runs in accordance with procedure, fulfills the target claim settlement time, and eliminates the difficulties and problems of the previous manual email-based communication. The system was designed using the system development life cycle method of Kendall & Kendall (2006), covering the SMQR problem communication process, the judgment process by the supplier, the claim process, the claim payment process and the claim monitoring process. After obtaining appropriate system designs for managing SMQR claims, the system was implemented and the improvement in

  9. Theoretical framework for government information service delivery to deep rural communities in South Africa

    CSIR Research Space (South Africa)

    Mvelase, PS

    2009-10-01

    Full Text Available the information from both the community and SMMEs on their needs. The questionnaire was administered to three communities in rural KwaNongoma: KwaKhangela, KwaMeme and KwaSomkhele. Respondents were asked to give their opinions on how far their needs...

  10. Theoretical and methodological significance of Information and Communication Technology in educational practice.

    NARCIS (Netherlands)

    Mooij, Ton

    2016-01-01

    In September 1998 the Research Network ‘ICT in Education and Training’ was initiated at the conference of the European Educational Research Association (EERA). The new network reflected the recognition and growing importance of information and communication technology (ICT) with respect to education

  11. Population dynamics of mottled sculpin (PISCES) in a variable environment: information theoretic approaches

    Science.gov (United States)

    Gary D. Grossman; Robert E Ratajczak; J. Todd Petty; Mark D. Hunter; James T. Peterson; Gael Grenouillet

    2006-01-01

    We used strong inference with Akaike's Information Criterion (AIC) to assess the processes capable of explaining long-term (1984-1995) variation in the per capita rate of change of mottled sculpin (Cottus bairdi) populations in the Coweeta Creek drainage (USA). We sampled two fourth- and one fifth-order sites (BCA [uppermost], BCB, and CC [lowermost])...
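
    The model-selection machinery named here, Akaike's Information Criterion, is standard and easy to make concrete. The sketch below computes AIC (with the small-sample AICc correction) and Akaike weights for hypothetical candidate models; the log-likelihoods, parameter counts, and sample size are invented.

```python
import numpy as np

def aic(log_likelihood, k, n=None):
    """Akaike's Information Criterion; applies the AICc small-sample
    correction when the sample size n is given."""
    a = 2 * k - 2 * log_likelihood
    if n is not None:
        a += 2 * k * (k + 1) / (n - k - 1)
    return a

def akaike_weights(aics):
    """Relative support for each candidate model from its AIC score."""
    d = np.asarray(aics) - np.min(aics)   # differences from the best model
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical scores for three candidate population models.
scores = [aic(ll, k, n=12) for ll, k in [(-40.1, 2), (-38.9, 3), (-38.7, 5)]]
print(akaike_weights(scores))
```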

  12. A geo-information theoretical approach to inductive erosion modelling based on terrain mapping units.

    NARCIS (Netherlands)

    Suryana, N.

    1997-01-01

    Three main aspects of the research, namely the concept of object orientation, the development of an Inductive Erosion Model (IEM) and the development of a framework for handling uncertainty in the data or information resulting from a GIS are interwoven in this thesis. The first and the second aspect

  13. Information Theoretic Characterizations of Coded Imaging-based Space Object Identification

    Science.gov (United States)

    2010-09-01

    measurements. An important parameter that will concern us is the minimum number of spectral measurements required to unambiguously identify each material in the scene, given some level of noise in the data. Due to our information-based approach, we have

  14. An Informational-Theoretical Formulation of the Second Law of Thermodynamics

    Science.gov (United States)

    Ben-Naim, Arieh

    2009-01-01

    This paper presents a formulation of the second law of thermodynamics couched in terms of Shannon's measure of information. This formulation has an advantage over other formulations of the second law. First, it shows explicitly what is the thing that changes in a spontaneous process in an isolated system, which is traditionally referred to as the…
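
    A toy numerical illustration of the idea (not taken from the paper): Shannon's measure of information over a probability distribution cannot decrease under a doubly stochastic mixing step, which is the information-theoretic skeleton of the second law for an isolated system. The matrix and initial distribution below are invented.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon's measure of information H(p) = -sum p log2 p, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A doubly stochastic matrix (rows and columns sum to 1) models a
# probability-conserving mixing step; it can only raise H or leave it fixed.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
p = np.array([0.9, 0.05, 0.05])
for _ in range(5):
    print(round(shannon_entropy(p), 4))   # monotonically increasing
    p = T @ p                             # one mixing step
```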

  15. The theoretical foundations of value-informed pricing in the service-dominant logic of marketing

    NARCIS (Netherlands)

    Ingenbleek, P.T.M.

    2014-01-01

    Purpose – In the mainstream normative pricing literature, value assessment is virtually non-existent. Although the resource-based literature recognizes that pricing is a competence, value-informed pricing practices are still weakly grounded in theory. The purpose of this paper is to strengthen the t

  16. Developing and theoretically justifying innovative organizational practices in health information assurance

    Science.gov (United States)

    Collmann, Jeff R.

    2003-05-01

    This paper justifies and explains current efforts in the Military Health System (MHS) to enhance information assurance in light of the sociological debate between "Normal Accident" (NAT) and "High Reliability" (HRT) theorists. NAT argues that complex systems such as enterprise health information systems display multiple, interdependent interactions among diverse parts that potentially manifest unfamiliar, unplanned, or unexpected sequences that operators may not perceive or immediately understand, especially during emergencies. If the system functions rapidly with few breaks in time, space or process development, the effects of single failures ramify before operators understand or gain control of the incident thus producing catastrophic accidents. HRT counters that organizations with strong leadership support, continuous training, redundant safety features and "cultures of high reliability" contain the effects of component failures even in complex, tightly coupled systems. Building highly integrated, enterprise-wide computerized health information management systems risks creating the conditions for catastrophic breaches of data security as argued by NAT. The data security regulations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) implicitly depend on the premises of High Reliability Theorists. Limitations in HRT thus have implications for both safe program design and compliance efforts. MHS and other health care organizations should consider both NAT and HRT when designing and deploying enterprise-wide computerized health information systems.

  18. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    Science.gov (United States)

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.

    2017-02-01

    Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed adopting a non-parametric information theory technique, the Transfer Entropy and its normalized variant. It distinctly quantifies actual information exchanged along with the directional flow of information between any two variables with no bearing on their common history or inputs, unlike correlation, mutual information etc. Measurements of greenhouse gases: CO2, CH4 and N2O; volcanic aerosols; solar activity: UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR); El Niño Southern Oscillation (ENSO) and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. While UV (~9%) and ENSO (~12%) act as secondary drivers of variations in the GMTA, the remaining play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
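
    Transfer entropy, the paper's central tool, has a simple plug-in estimator for discretized series: TE(Y→X) = Σ p(x_{t+1}, x_t, y_t) log [ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]. The sketch below implements this with first-order histories and equal-width binning; the coupled test series are synthetic, and real analyses like the one above also require careful binning choices and significance testing.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate of TE(Y -> X) in bits for two 1-D series,
    using first-order histories and equal-width binning."""
    x = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    n = sum(triples.values())
    p3 = {k: v / n for k, v in triples.items()}
    # marginals needed for the conditional probabilities
    p_xy, p_xx, p_x = Counter(), Counter(), Counter()
    for (x1, x0, y0), p in p3.items():
        p_xy[(x0, y0)] += p; p_xx[(x1, x0)] += p; p_x[x0] += p
    te = 0.0
    for (x1, x0, y0), p in p3.items():
        te += p * np.log2(p * p_x[x0] / (p_xy[(x0, y0)] * p_xx[(x1, x0)]))
    return te

rng = np.random.default_rng(0)
y = rng.normal(size=2000)
x = np.roll(y, 1) + 0.5 * rng.normal(size=2000)   # X driven by lagged Y
print(transfer_entropy(x, y), transfer_entropy(y, x))  # first should dominate
```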

  19. The Approach Towards Equilibrium in a Reversible Ising Dynamics Model: An Information-Theoretic Analysis Based on an Exact Solution

    Science.gov (United States)

    Lindgren, Kristian; Olbrich, Eckehard

    2017-08-01

    We study the approach towards equilibrium in a dynamic Ising model, the Q2R cellular automaton, with microscopic reversibility and conserved energy for an infinite one-dimensional system. Starting from a low-entropy state with positive magnetisation, we investigate how the system approaches equilibrium characteristics given by statistical mechanics. We show that the magnetisation converges to zero exponentially. The reversibility of the dynamics implies that the entropy density of the microstates is conserved in the time evolution. Still, it appears as if equilibrium, with a higher entropy density, is approached. In order to understand this process, we solve the dynamics by formally proving how the information-theoretic characteristics of the microstates develop over time. With this approach we can show that an estimate of the entropy density based on finite length statistics within microstates converges to the equilibrium entropy density. The process behind this apparent entropy increase is a dissipation of correlation information over increasing distances. It is shown that the average information-theoretic correlation length increases linearly in time, being equivalent to a corresponding increase in excess entropy.
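
    The entropy-density estimate from finite-length statistics described above can be sketched generically: with block entropies H(k) over length-k words, the finite-k entropy rate is h_k = H(k) - H(k-1). The code below applies this to a synthetic binary spin sequence; it illustrates the estimator only, not the Q2R dynamics themselves.

```python
import numpy as np
from collections import Counter

def block_entropy(s, k):
    """Shannon entropy (bits) of length-k blocks of the sequence s."""
    blocks = Counter(tuple(s[i:i + k]) for i in range(len(s) - k + 1))
    n = sum(blocks.values())
    p = np.array([v / n for v in blocks.values()])
    return float(-(p * np.log2(p)).sum())

def entropy_density(s, k):
    """Finite-k estimate of the entropy rate, h_k = H(k) - H(k-1)."""
    return block_entropy(s, k) - block_entropy(s, k - 1)

# A fair-coin spin chain should give h ~ 1 bit per site.
rng = np.random.default_rng(1)
spins = rng.integers(0, 2, size=20000)
print(entropy_density(spins, k=5))
```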

  20. Theoretical Study of Palladium Membrane Reactor Performance During Propane Dehydrogenation Using CFD Method

    Directory of Open Access Journals (Sweden)

    Kamran Ghasemzadeh

    2017-04-01

    Full Text Available This study presents a 2D-axisymmetric computational fluid dynamics (CFD) model to investigate the performance of a Pd membrane reactor (MR) during the propane dehydrogenation process for hydrogen production. The proposed CFD model provides local information on temperature and component concentration for the driving-force analysis. After investigating the mesh independency of the CFD model, the CFD model results were validated against other modeling data, and good agreement between the CFD results and the theoretical data was achieved. In the present model, a tubular reactor with a length of 150 mm was considered, in which Pt-Sn-K/Al2O3 catalyst filled the reaction zone. The effects of the important operating parameter (reaction temperature) on the performance of the membrane reactor were studied in terms of propane conversion and hydrogen yield. The CFD results showed that the suggested MR system presents higher performance during the propane dehydrogenation reaction than that obtained in a conventional reactor (CR). In particular, by applying the Pd membrane, it was found that propane conversion can be increased from 41% to 49%. Moreover, the highest propane conversion (X = 91%) was reached in the case of a Pd-Ag MR. It was also established that the feed flow rate of the MR is one of the most important factors defining the efficiency of the propane dehydrogenation process.

  1. Trends in the theoretical and research methodological approaches applied in doctoral studies in information and knowledge management: an exploration of ten years of research in South Africa

    Directory of Open Access Journals (Sweden)

    M. A. Mearns

    2008-01-01

    Full Text Available The past ten years have seen the field of information and knowledge management develop and implement new and improved technologies. Because of the ease with which information is exchanged, the contribution to information overload has increased exponentially, and the need for information and knowledge management is more real than ever before. Research itself is a science of knowledge creation that continuously evolves in line with newly developed theories and research methodologies. An investigation was conducted of the theories and research methodologies that doctoral theses completed in South Africa over the past ten years ascribed to. Search strings containing 'information management', 'knowledge management' and 'information and knowledge management' were searched within citation, abstract and subject fields. A sample of 30 theses was identified from a possible 47 in the relevant population. Qualitative and mixed-methods research designs were favoured, making use of case studies and surveys, but paying little attention to theoretical approaches or paradigms. The boundaries between disciplines are continuously re-defined, new disciplines evolve and traditional disciplines suffer under the pressures of the changing problems of the world. The importance of research in the field of information and knowledge management being grounded in the most recent scientific thought is emphasized.

  3. BASIC THEORY AND METHOD OF WELDING ARC SPECTRAL INFORMATION

    Institute of Scientific and Technical Information of China (English)

    Li Junyue; Li Zhiyong; Li Huan; Xue Haitao

    2004-01-01

    Arc spectral information is an emerging information source that can solve many problems that cannot be addressed with arc electrical information and other arc information. It is of great significance for developing automatic control techniques for the welding process, and the basic theory and methods behind it play an important role in expounding and applying arc spectral information. Using the relevant equations of plasma physics and spectrum theory, a system of 12 equations serving as the basic theory of arc spectral information is set up. Analysis of these 12 equations leads to the basic view that arc spectral information reflects the arc state and its variation, and is the most abundant information resource describing the welding arc process. Furthermore, based on this theory, the basic methods for measuring and controlling arc spectral information are discussed and some of its applications are pointed out.

  4. Elucidating photoinduced structural changes in phytochromes by the combined application of resonance Raman spectroscopy and theoretical methods

    Science.gov (United States)

    Mroginski, M. A.; von Stetten, D.; Kaminski, S.; Escobar, F. Velazquez; Michael, N.; Daminelli-Widany, G.; Hildebrandt, P.

    2011-05-01

    Phytochromes constitute a family of red-light-sensing photoreceptors in plants and microorganisms. The photoactive cofactor is an open-chain methine-bridged tetrapyrrole that, upon light absorption, undergoes a double-bond isomerisation followed by a series of thermal relaxation processes which eventually lead to the functional structural change of the protein. Resonance Raman spectroscopy has contributed significantly to the understanding of the molecular functioning of these proteins, although both the experiments and the interpretation of the spectra represent a considerable challenge. This account is dedicated to describing the achievements, potential and limitations of combined resonance Raman spectroscopic and theoretical approaches for elucidating cofactor structures in phytochromes. Experimental approaches are discussed, paying specific attention to strategies to overcome unwanted photochemical and photophysical processes when probing the various states of the photoinduced reaction cycle of phytochromes. The most comprehensive set of experimental data on phytochromes, including engineered protein variants and adducts formed with isotopically labelled tetrapyrroles, has been obtained by resonance Raman spectroscopy with near-infrared excitation, which also allows probing phytochrome crystals without photo-induced destruction. Quantum mechanical calculations of Raman spectra of model compounds represent a first approximation for determining the methine bridge geometry of the protein-bound tetrapyrroles and constitute the basis for the identification of marker bands for specific structural properties such as the protonation state of the cofactor. Drawbacks of this theoretical method, which inevitably neglects the protein environment, have become evident with the first determinations of three-dimensional structures of phytochromes. These structural models can now be used for employing hybrid methods that combine quantum mechanical and molecular mechanics calculations of the

  5. Application of geo-information science methods in ecotourism exploitation

    Science.gov (United States)

    Dong, Suocheng; Hou, Xiaoli

    2004-11-01

    The application of geo-information science methods in ecotourism development is discussed in this article. Since the 1990s, geo-information science methods, which take the 3S technologies (Geographic Information System, Global Positioning System, and Remote Sensing) as core techniques, have played an important role in resources reconnaissance, data management, environment monitoring, and regional planning. Geo-information science methods can easily analyze and convert geographic spatial data, and their application is helpful to sustainable development in tourism. Various assignments are involved in the development of ecotourism, such as reconnaissance of ecotourism resources, drawing of tourism maps, handling of mass data, tourism information inquiry, employee management, and quality management of products. The utilization of geo-information methods in ecotourism can make development more efficient by promoting the sustainable development of tourism and the protection of the eco-environment.

  6. Transceiver Design with Low-Precision Analog-to-Digital Conversion : An Information-Theoretic Perspective

    CERN Document Server

    Singh, Jaspreet; Madhow, Upamanyu

    2008-01-01

    Modern communication receiver architectures center around digital signal processing (DSP), with the bulk of the receiver processing being performed on digital signals obtained after analog-to-digital conversion (ADC). In this paper, we explore Shannon-theoretic performance limits when ADC precision is drastically reduced, from typical values of 8-12 bits used in current communication transceivers, to 1-3 bits. The goal is to obtain insight on whether DSP-centric transceiver architectures are feasible as communication bandwidths scale up, recognizing that high-precision ADC at high sampling rates is either unavailable, or too costly or power-hungry. Specifically, we evaluate the communication limits imposed by low-precision ADC for the ideal real discrete-time Additive White Gaussian Noise (AWGN) channel, under an average power constraint on the input. For an ADC with K quantization bins (i.e., a precision of log2 K bits), we show that the Shannon capacity is achievable by a discrete input distribution with at...
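
    A textbook special case makes the paper's setting concrete: with a 1-bit (sign) quantizer and equiprobable BPSK input, the real AWGN channel collapses to a binary symmetric channel, so the achievable rate is 1 - H2(Q(sqrt(SNR))). The sketch below evaluates this; it is an illustration of the K = 2 case only, not the paper's general capacity result.

```python
import math

def one_bit_awgn_rate(snr_db):
    """Mutual information (bits/symbol) of a real AWGN channel with
    equiprobable BPSK input and a 1-bit (sign) quantizer at the receiver;
    the cascade is a binary symmetric channel with crossover Q(sqrt(SNR))."""
    snr = 10 ** (snr_db / 10)
    p = 0.5 * math.erfc(math.sqrt(snr / 2))   # Q(x) = erfc(x / sqrt(2)) / 2
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h2

for s in (0, 5, 10):
    print(f"{s} dB -> {one_bit_awgn_rate(s):.3f} bits/symbol")
```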

  7. Goal setting and action planning in the rehabilitation setting: development of a theoretically informed practice framework.

    Science.gov (United States)

    Scobbie, Lesley; Dixon, Diane; Wyke, Sally

    2011-05-01

    Setting and achieving goals is fundamental to rehabilitation practice but has been criticized for being a-theoretical and the key components of replicable goal-setting interventions are not well established. To describe the development of a theory-based goal setting practice framework for use in rehabilitation settings and to detail its component parts. Causal modelling was used to map theories of behaviour change onto the process of setting and achieving rehabilitation goals, and to suggest the mechanisms through which patient outcomes are likely to be affected. A multidisciplinary task group developed the causal model into a practice framework for use in rehabilitation settings through iterative discussion and implementation with six patients. Four components of a goal-setting and action-planning practice framework were identified: (i) goal negotiation, (ii) goal identification, (iii) planning, and (iv) appraisal and feedback. The variables hypothesized to effect change in patient outcomes were self-efficacy and action plan attainment. A theory-based goal setting practice framework for use in rehabilitation settings is described. The framework requires further development and systematic evaluation in a range of rehabilitation settings.

  8. Theoretical Treatment of Spin-Forbidden and Electronically Nonadiabatic Processes. Methods and Applications

    Science.gov (United States)

    1993-01-10

    presented at: Proceedings of the High Energy Density Matter (HEDM) Conference, Albuquerque Marriott Hotel, Albuquerque, NM, February 24-27, 1991; 4. Theoretical Approaches to Energy Transfer and Photochemical Processes, Queen Kapiolani Hotel, Honolulu, Hawaii, December 27-30, 1990; 3. Theoretical Studies of ... Surfaces, Sheraton Hotel, San Francisco, CA, April 5-10, 1992; 5. Electronic Structure and Dynamical Aspects of Non-Born-Oppenheimer Processes in the

  9. Theoretical base of the approach to the representation of aggregate information on the cross sections of the scattering processes

    Directory of Open Access Journals (Sweden)

    Alla A. Mityureva

    2015-12-01

    Full Text Available In the present paper, an approach to the representation of aggregate information on the cross sections of elementary processes is described and its justification within mathematical statistics is given. The approach is motivated by the need for an integrated account of the results obtained by different works at different times, in different groups, based on experimental and theoretical studies in various energy ranges. The main attention is paid to the process of electron-atom scattering. As an example of the application of the proposed approach, the aggregate result for the integral cross sections of electron-impact excitation of transitions in the hydrogen atom obtained in this way is presented.

  10. BRIEF INTRODUCTION TO THEORETICAL INTENTION OF "NEEDLING METHOD FOR TRANQUILLIZATION AND CALMING THE MIND" FOR TREATMENT OF INSOMNIA

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A set of scientific theories and an effective acupuncture therapy for insomnia, "the needling method for tranquillization and calming the mind", have gradually been formed through many years of theoretical and clinical studies. In this paper, the theoretical intention of "the needling method for tranquillization and calming the mind" for the treatment of insomnia is briefly introduced, mainly covering the cause of disease, pathogenesis, therapeutic method and the characteristics of the composition of a prescription, in order to provide a new train of thought and a new method for working out scientific and standard prescriptions in the treatment of insomnia.

  11. Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response

    Science.gov (United States)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-06-01

    Since its introduction in 1954, the Soil Conservation Service curve number (SCS-CN) method has become the standard tool, in practice, for estimating an event-based rainfall-runoff response. However, because of its empirical origins, the SCS-CN method is restricted to certain geographic regions and land use types. Moreover, it does not describe the spatial variability of runoff. To move beyond these limitations, we present a new theoretical framework for spatially lumped, event-based rainfall-runoff modeling. In this framework, we describe the spatially lumped runoff model as a point description of runoff that is upscaled to a watershed area based on probability distributions that are representative of watershed heterogeneities. The framework accommodates different runoff concepts and distributions of heterogeneities, and in doing so, it provides an implicit spatial description of runoff variability. Heterogeneity in storage capacity and soil moisture are the basis for upscaling a point runoff response and linking ecohydrological processes to runoff modeling. For the framework, we consider two different runoff responses for fractions of the watershed area: "prethreshold" and "threshold-excess" runoff. These occur before and after infiltration exceeds a storage capacity threshold. Our application of the framework results in a new model (called SCS-CNx) that extends the SCS-CN method with the prethreshold and threshold-excess runoff mechanisms and an implicit spatial description of runoff. We show proof of concept in four forested watersheds and further that the resulting model may better represent geographic regions and site types that previously have been beyond the scope of the traditional SCS-CN method.
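
    For readers unfamiliar with the baseline being extended, the classical SCS-CN event runoff formula is Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, with retention S = 1000/CN - 10 (inches) and initial abstraction Ia = 0.2 S. The sketch below implements this standard method, not the SCS-CNx extension proposed in the record; the rainfall depth and curve number are invented.

```python
def scs_cn_runoff(p_in, cn, ia_ratio=0.2):
    """Event runoff depth (inches) from the classical SCS-CN method.
    p_in: event rainfall depth (inches); cn: curve number in (0, 100]."""
    s = 1000.0 / cn - 10.0          # potential maximum retention
    ia = ia_ratio * s               # initial abstraction before runoff starts
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

# 3 inches of rain on a watershed with CN = 75
print(scs_cn_runoff(3.0, 75))   # ~0.96 in
```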

  12. Information-Theoretic Viewpoints on Optimal Causal Coding-Decoding Problems

    CERN Document Server

    Gorantla, Siva

    2011-01-01

    In this paper we consider an interacting two-agent sequential decision-making problem consisting of a Markov source process, a causal encoder with feedback, and a causal decoder. Motivated by a desire to foster links between control and information theory, we augment the standard formulation by considering general alphabets and a cost function operating on current and previous symbols. Using dynamic programming, we provide a structural result whereby an optimal scheme exists that operates on appropriate sufficient statistics. We emphasize an example where the decoder alphabet lies in a space of beliefs on the source alphabet, and the additive cost function is a log likelihood ratio pertaining to sequential information gain. We also consider the inverse optimal control problem, where a fixed encoder/decoder pair satisfying statistical conditions is shown to be optimal for some cost function, using probabilistic matching. We provide examples of the applicability of this framework to communication with feedback,...

  13. INFORMATION AND COMMUNICATION TECHNOLOGIES (ICT) IN PHYSICAL EDUCATION. A THEORETICAL REVIEW

    OpenAIRE

    Mateo Rodríguez Quijada

    2015-01-01

    In this review we survey the treatment of Information and Communication Technologies (ICT) in the field of physical education by teachers and students. For this purpose we review the existing lines of research on the topic as well as the most remarkable works of different authors, with special attention to the situation in the autonomous community of Galicia. Finally, the main problems related to the use of these technologies in classrooms are analyzed. All this in order to shed light...

  14. OVERVIEW OF INFORMATION TECHNOLOGY AND A THEORETICAL MODEL IN SUPPLY CHAIN MANAGEMENT FOR LOCAL SMES

    OpenAIRE

    Kherbach OUALID; Marian Liviu MOCAN; Dumitrache, Cristian; Ghoumrassi AMINE

    2016-01-01

    Most literature on supply chain management (SCM) focuses on large organizations with global operations employing high-level information technology. This creates a gap in the knowledge of how SMEs use and practice SCM. Moreover, SCM is an area of increasing importance among enterprises and of growing academic interest. It is based on the concept of firms as part of multiple organizations oriented to the provision of goods and services for the final customer. The survival of Small to Medium ...

  16. Information-theoretic analysis of the dynamics of an executable biological model.

    Directory of Open Access Journals (Sweden)

    Avital Sadot

    Full Text Available To facilitate analysis and understanding of biological systems, large-scale data are often integrated into models using a variety of mathematical and computational approaches. Such models describe the dynamics of the biological system and can be used to study the changes in the state of the system over time. For many model classes, such as discrete or continuous dynamical systems, there exist appropriate frameworks and tools for analyzing system dynamics. However, the heterogeneous information that encodes and bridges molecular and cellular dynamics, inherent to fine-grained molecular simulation models, presents significant challenges to the study of system dynamics. In this paper, we present an algorithmic-information-theory-based approach for the analysis and interpretation of the dynamics of such executable models of biological systems. We apply a normalized compression distance (NCD) analysis to the state representations of a model that simulates immune decision making and immune cell behavior. We show that this analysis successfully captures the essential information in the dynamics of the system, which results from a variety of events including proliferation, differentiation, or perturbations such as gene knock-outs. We demonstrate that this approach can be used for the analysis of executable models, regardless of the modeling framework, and for making experimentally quantifiable predictions.
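
    The normalized compression distance used in this record has a compact definition: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(s) is the compressed length of s. The sketch below uses zlib as the compressor and invented serialized model states; the paper's actual state representations and compressor choice may differ.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical serialized model states: similar states compress well together,
# so their NCD is smaller than that of dissimilar states.
state_t0 = b"Tcell:naive;APC:idle;" * 50
state_t1 = b"Tcell:naive;APC:primed;" * 50
state_t2 = b"Bcell:plasma;APC:idle;" * 50
print(ncd(state_t0, state_t1), ncd(state_t0, state_t2))
```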

  17. Discovery and information-theoretic characterization of transcription factor binding sites that act cooperatively.

    Science.gov (United States)

    Clifford, Jacob; Adami, Christoph

    2015-09-02

    Transcription factor binding to the surface of DNA regulatory regions is one of the primary mechanisms that regulate gene expression levels. A probabilistic approach to modeling protein-DNA interactions at the sequence level is through position weight matrices (PWMs) that estimate the joint probability of a DNA binding site sequence by assuming positional independence within the DNA sequence. Here we construct conditional PWMs that depend on the motif signatures in the flanking DNA sequence, by conditioning known binding site loci on the presence or absence of additional binding sites in the flanking sequence of each site's locus. Pooling known sites with similar flanking sequence patterns allows for the estimation of the conditional distribution function over the binding site sequences. We apply our model to the Dorsal transcription factor binding sites active in patterning the Dorsal-Ventral axis of Drosophila development. We find that those binding sites that cooperate with nearby Twist sites on average contain about 0.5 bits of information about the presence of Twist transcription factor binding sites in the flanking sequence. We also find that Dorsal binding site detectors conditioned on flanking sequence information make better predictions about what is a Dorsal site relative to background DNA than detection without information about flanking sequence features.
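
    As background for the record above, a plain (unconditional) PWM is estimated from aligned sites and scores a candidate sequence by additive log-odds against a background; the conditional PWMs proposed in the paper amount to estimating one such matrix per pool of sites with similar flanking patterns. The toy sites below are invented, not Dorsal data.

```python
import numpy as np

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=1.0):
    """Position weight matrix of log2 odds vs. a uniform background,
    estimated from aligned binding-site sequences of equal length."""
    counts = np.full((len(sites[0]), 4), pseudocount)
    for site in sites:
        for i, b in enumerate(site):
            counts[i, BASES.index(b)] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(probs / 0.25)

def score(pwm, seq):
    """Additive log-odds score; positional independence is assumed."""
    return sum(pwm[i, BASES.index(b)] for i, b in enumerate(seq))

sites = ["GGGAAAACCC", "GGGTTTTCCC", "GGGAATTCCC"]   # toy motif instances
pwm = pwm_from_sites(sites)
print(score(pwm, "GGGAAATCCC"), score(pwm, "ACGTACGTAC"))  # site vs. background
```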

  18. Information-theoretic approach to lead-lag effect on financial markets

    Science.gov (United States)

    Fiedor, Paweł

    2014-08-01

    Recently the interest of researchers has shifted from the analysis of synchronous relationships of financial instruments to the analysis of more meaningful asynchronous relationships. Both types of analysis are concentrated mostly on Pearson's correlation coefficient and consequently intraday lead-lag relationships (where one of the variables in a pair is time-lagged) are also associated with them. Under the Efficient-Market Hypothesis such relationships are not possible as all information is embedded in the prices, but in real markets we find such dependencies. In this paper we analyse lead-lag relationships of financial instruments and extend known methodology by using mutual information instead of Pearson's correlation coefficient. Mutual information is not only a more general measure, sensitive to non-linear dependencies, but also can lead to a simpler procedure of statistical validation of links between financial instruments. We analyse lagged relationships using New York Stock Exchange 100 data not only on an intraday level, but also for daily stock returns, which have usually been ignored.
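
    The paper's substitution of mutual information for Pearson correlation in lead-lag analysis can be sketched directly: estimate I(x_t; y_{t+lag}) from a 2-D histogram of the lagged pair. The code below does this for synthetic returns where one series follows the other by one step; note that plug-in histogram estimates are upward-biased, so real analyses (as in the record) require statistical validation.

```python
import numpy as np

def lagged_mutual_information(x, y, lag, bins=8):
    """Plug-in estimate (bits) of I(x_t ; y_{t+lag}) via a 2-D histogram."""
    x, y = np.asarray(x), np.asarray(y)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (p_x * p_y)[mask])).sum())

# Hypothetical returns where asset B follows asset A by one step.
rng = np.random.default_rng(2)
a = rng.normal(size=5000)
b = 0.6 * np.concatenate(([0.0], a[:-1])) + rng.normal(size=5000)
print(lagged_mutual_information(a, b, lag=1),   # lead-lag link
      lagged_mutual_information(a, b, lag=0))   # weaker synchronous link
```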

  19. OVERVIEW OF INFORMATION TECHNOLOGY AND A THEORETICAL MODEL IN SUPPLY CHAIN MANAGEMENT FOR LOCAL SMES

    Directory of Open Access Journals (Sweden)

    Kherbach OUALID

    2016-12-01

    Full Text Available Most literature on supply chain management (SCM) focuses on large organizations with global operations employing high-level information technology. This creates a gap in the knowledge of how SMEs use and practice SCM. Moreover, SCM is an area of increasing importance among enterprises and of growing academic interest. It is based on the concept of firms as part of multiple organizations oriented to the provision of goods and services for the final customer. The survival of Small to Medium Size Enterprises (SMEs) will be determined by their ability to produce more, at a lower cost, in less time, and with few defects. The use of information technology (IT) is considered a prerequisite for the effective control of today’s complex supply chains. In our research report we first provide a broad overview of SCM in general for SMEs. We further discuss the evolution of information technology (IT) in SCM and the performance parameters of the supply chain processes. In this research, the aim is to introduce a special SCM theoretical model, within general SCM models, which is appropriate for SMEs’ structure due to their operating capacity, numerical condition and other features. In accordance with this aim, and taking the conditions of this sector into consideration, a two-stage model appropriate for the structure of SMEs in Romania is proposed. The first stage of the model consists of “supply and production centers” and the second stage consists of a “product and customer center”.

  1. Lateral information processing by spiking neurons: a theoretical model of the neural correlate of consciousness.

    Science.gov (United States)

    Ebner, Marc; Hameroff, Stuart

    2011-01-01

    Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on "autopilot"). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the "conscious pilot") suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone moves through the brain as an executive agent, converting nonconscious "auto-pilot" cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways "gap junctions" in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which the contents of

  2. A Game-Theoretic Approach for Opportunistic Spectrum Sharing in Cognitive Radio Networks with Incomplete Information

    Science.gov (United States)

    Tan, Xuesong Jonathan; Li, Liang; Guo, Wei

    One important issue in cognitive transmission is for multiple secondary users to dynamically acquire spare spectrum from the single primary user. The existing spectrum sharing scheme adopts a deterministic Cournot game to formulate this problem, of which the solution is the Nash equilibrium. This formulation is based on two implicit assumptions. First, each secondary user is willing to fully exchange transmission parameters with all others and hence knows their complete information. Second, the unused spectrum of the primary user for spectrum sharing is always larger than the total frequency demand of all secondary users at the Nash equilibrium. However, both assumptions may not be true in general. To remedy this, the present paper considers a more realistic assumption of incomplete information, i.e., each secondary user may choose to conceal their private information for achieving higher transmission benefit. Following this assumption and given that the unused bandwidth of the primary user is large enough, we adopt a probabilistic Cournot game to formulate an opportunistic spectrum sharing scheme for maximizing the total benefit of all secondary users. Bayesian equilibrium is considered as the solution of this game. Moreover, we prove that a secondary user can improve their expected benefit by actively hiding its transmission parameters and increasing their variance. On the other hand, when the unused spectrum of the primary user is smaller than the maximal total frequency demand of all secondary users at the Bayesian equilibrium, we formulate a constrained optimization problem for the primary user to maximize its profit in spectrum sharing and revise the proposed spectrum sharing scheme to solve this problem heuristically. This provides a unified approach to overcome the aforementioned two limitations of the existing spectrum sharing scheme.
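
    The deterministic complete-information benchmark mentioned above is a standard Cournot game, whose Nash equilibrium can be found by best-response iteration. The sketch below uses a linear "price" and two invented cost parameters, with quantities read as spectrum demands; it illustrates the baseline game only, not the paper's Bayesian-equilibrium scheme under incomplete information.

```python
def cournot_nash(a=10.0, costs=(2.0, 3.0), iters=100):
    """Best-response iteration for a 2-player Cournot game with linear
    inverse demand p = a - (q1 + q2); converges to the Nash equilibrium."""
    q = [0.0, 0.0]
    for _ in range(iters):
        q[0] = max(0.0, (a - costs[0] - q[1]) / 2)   # best response of player 1
        q[1] = max(0.0, (a - costs[1] - q[0]) / 2)   # best response of player 2
    return q

print(cournot_nash())  # closed form: qi = (a - 2*ci + cj) / 3  ->  [3.0, 2.0]
```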

  3. Information-theoretic approach to the gravitational-wave burst detection problem

    Science.gov (United States)

    Lynch, Ryan; Vitale, Salvatore; Essick, Reed; Katsavounidis, Erik; Robinet, Florent

    2017-05-01

    The observational era of gravitational-wave astronomy began in the fall of 2015 with the detection of GW150914. One potential type of detectable gravitational wave is short-duration gravitational-wave bursts, whose waveforms can be difficult to predict. We present the framework for a detection algorithm for such burst events, oLIB, that can be used in low latency to identify gravitational-wave transients. This algorithm consists of (1) an excess-power event generator based on the Q transform (Omicron), (2) coincidence of these events across a detector network, and (3) an analysis of the coincident events using a Markov chain Monte Carlo Bayesian evidence calculator (LALInferenceBurst). These steps compress the full data streams into a set of Bayes factors for each event. Through this process, we use elements from information theory to minimize the amount of information regarding the signal-versus-noise hypothesis that is lost. We optimally extract this information using a likelihood-ratio test to estimate a detection significance for each event. Using representative archival LIGO data across different burst waveform morphologies, we show that the algorithm can detect gravitational-wave burst events of astrophysical strength in realistic instrumental noise. We also demonstrate that the combination of Bayes factors by means of a likelihood-ratio test can improve the detection efficiency of a gravitational-wave burst search. Finally, we show that oLIB's performance is robust against the choice of gravitational-wave populations used to model the likelihood-ratio test likelihoods.

  4. A theoretical study of the interference from chlorine in the oxidative coulometric method for trace determination of sulphur in hydrocarbons.

    Science.gov (United States)

    Cedergren, A

    1975-12-01

    A theoretical investigation has been made of the interference from chlorine in the oxidative coulometric method for trace sulphur determinations. A computer program (SOLGAS), based on the free-energy minimization principle, has been used to predict equilibrium compositions of the products resulting from combustion of a hydrocarbon sample containing sulphur and chlorine. The theoretical possibilities of overcoming the interference from chlorine and maintaining a high recovery of sulphur are described.

  5. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    Directory of Open Access Journals (Sweden)

    Marc Ebner

    2011-01-01

    Full Text Available Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of

  6. Theoretical Analysis of a Self-Replicator With Reduced Template Inhibition Based on an Informational Leaving Group.

    Science.gov (United States)

    Bigan, Erwan; Mattelaer, Henri-Philippe; Herdewijn, Piet

    2016-03-01

    The first non-enzymatic self-replicating systems, as proposed by von Kiedrowski (Angew Chem Int Ed Engl 25(10):932-935, 1986) and Orgel (Nature 327(6120):346-347, 1987), gave rise to the analytical background still used today to describe artificial replicators. What separates a self-replicating system from an autocatalytic one is the ability to pass on structural information (Orgel, Nature 358(6383):203-209, 1992). Utilising molecular information, nucleic acids were the first choice as prototypical examples. But early self-replicators showed parabolic rather than exponential growth due to the strongly bound template duplex formed after template-directed ligation of substrates. We propose a self-replicating scheme with a weakly bound template duplex, using an informational leaving group. Such a scheme is inspired by the role of tRNA as leaving group and information carrier during protein synthesis, and is based on our previous experience with nucleotide chemistry. We analyse this scheme theoretically and compare it to the classical minimal replicator model. We show that for an example hexanucleotide template, mirroring that used by von Kiedrowski (Bioorganic Chemistry Frontiers, 1993) for the analysis of the classical minimal replicator, the proposed scheme is expected to result in a higher template self-replication rate. The proposed self-replicating scheme based on an informational leaving group is expected to outperform the classical minimal replicator because of weaker template duplex bonding, resulting in reduced template inhibition.

  7. Some Fuzzy Logic Based Methods to Deal with Sensorial Information

    Institute of Scientific and Technical Information of China (English)

    Bernadette Bouchon-Meunier

    2004-01-01

    Sensorial information is very difficult to elicit, to represent and to manage because of its complexity. Fuzzy logic provides an interesting means to deal with such information, since it allows us to represent imprecise, vague or incomplete descriptions, which are very common in the management of subjective information. Aggregation methods from fuzzy logic are also useful for combining the characteristics of the various components of sensorial information.
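
    As one illustration of the aggregation operators mentioned here, a small sketch combining fuzzy membership degrees of a sample in three sensory descriptors with a t-norm (min), a t-conorm (max), and an ordered weighted averaging (OWA) operator; the descriptors and weights are our own examples:

        import numpy as np

        # Membership degrees of one sample in three sensory descriptors,
        # e.g. "sweet", "smooth", "fragrant" (values are illustrative).
        mu = np.array([0.7, 0.4, 0.9])

        t_norm = mu.min()                  # conjunctive: all descriptors must hold
        t_conorm = mu.max()                # disjunctive: any descriptor may hold
        w = np.array([0.5, 0.3, 0.2])      # OWA weights, applied to sorted degrees
        owa = np.sort(mu)[::-1] @ w        # a compromise between min and max

        print(t_norm, t_conorm, round(owa, 3))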

  8. Exploring super-gaussianity towards robust information-theoretical time delay estimation

    DEFF Research Database (Denmark)

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos;

    2013-01-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far...
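
    The generalized cross-correlation baseline referred to above is easy to sketch. Below is a GCC-PHAT time-delay estimate between two noisy copies of a signal (a standard textbook formulation, not the paper's information-theoretic estimator):

        import numpy as np

        rng = np.random.default_rng(1)
        true_delay = 23                                   # samples
        s = rng.standard_normal(4096)
        x1 = s + 0.1 * rng.standard_normal(s.size)
        x2 = np.roll(s, true_delay) + 0.1 * rng.standard_normal(s.size)

        n = 2 * s.size
        X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
        cross = X2 * np.conj(X1)
        cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)   # PHAT weighting
        lags = np.concatenate((np.arange(n // 2), np.arange(-n // 2, 0)))
        print("estimated delay:", lags[np.argmax(cc)])          # -> 23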

  9. Theoretical prediction of hysteretic rubber friction in ball on plate configuration by finite element method

    Directory of Open Access Journals (Sweden)

    2009-11-01

    Full Text Available This paper theoretically investigates, by the finite element method (FEM), the influence of sliding speed and temperature on the hysteretic friction in the case of a smooth, reciprocating steel ball sliding on a smooth rubber plate. Generalized Maxwell-models combined with a Mooney-Rivlin model have been used to describe the material behaviour of the ethylene-propylene-diene-monomer (EPDM) rubber studied. Additionally, the effect on the coefficient of friction (COF) of the technique applied in the parameter identification of the material model, and of the number of Maxwell elements, was also investigated. Finally, the open parameter of the Greenwood-Tabor analytical model has been determined from a fit to the FE results. By fitting the Maxwell-model, as usual, to the storage modulus master curve, the predicted COF will be underestimated in a broad frequency range, even in the case of a 40-term Maxwell-model. To obtain a more accurate numerical prediction, or to provide an upper limit for the hysteretic friction in the frequency range of interest, the Maxwell parameters should be determined, as proposed, from a fit to the measured loss factor master curve. This conclusion can be generalized to all FE simulations where hysteresis plays an important role.

  10. Information-Theoretic Analysis of Underwater Acoustic OFDM Systems in Highly Dispersive Channels

    Directory of Open Access Journals (Sweden)

    Francois-Xavier Socheleau

    2012-01-01

    established by the ISI/ICI and are based on lower bounds on mutual information that assume independent and identically distributed input data symbols. In agreement with recent statistical analyses of experimental shallow-water data, the channel is modeled as a multivariate Rician fading process with a slowly time-varying mean and with potentially correlated scatterers, which is more general than the common wide-sense stationary uncorrelated scattering model. Numerical assessments on real UA channels with spread factors around 10−1 show that reliable OFDM transmissions at 2 to 4 bits/sec/Hz are achievable provided an average signal-to-noise ratio of 15 to 20 dB.
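
    A rough, idealized sanity check of the quoted rate figures, assuming a flat Rician fading coefficient with unit mean power and Monte Carlo averaging of log2(1 + SNR*|h|^2); the K-factor is illustrative, and a real dispersive UA OFDM link with ISI/ICI and pilot/guard overhead sits below this ergodic-capacity number, consistent with the reported 2 to 4 bits/sec/Hz:

        import numpy as np

        rng = np.random.default_rng(2)
        K = 3.0                                    # Rician K-factor (illustrative)
        mean = np.sqrt(K / (K + 1))
        sigma = np.sqrt(1 / (2 * (K + 1)))
        h = (mean + sigma * rng.standard_normal(100000)) \
            + 1j * sigma * rng.standard_normal(100000)    # E[|h|^2] = 1

        for snr_db in (15, 20):
            snr = 10 ** (snr_db / 10)
            c = np.mean(np.log2(1 + snr * np.abs(h) ** 2))
            print(f"SNR {snr_db} dB: ergodic capacity ~ {c:.2f} bits/s/Hz")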

  11. INFORMATION AND COMMUNICATION TECHNOLOGIES (ICT) IN PHYSICAL EDUCATION. A THEORETICAL REVIEW

    Directory of Open Access Journals (Sweden)

    Mateo Rodríguez Quijada

    2015-01-01

    Full Text Available In this review we survey the treatment of Information and Communication Technologies (ICT) in the field of physical education by teachers and students. For this purpose we review the existing lines of research on the topic, as well as the most remarkable works of different authors, with special attention to the situation in the autonomous community of Galicia. Finally, the main problems related to the use of these technologies in classrooms are analyzed, all in order to shed light on a very topical issue regarding the education of our youth. Studies show that ICTs are increasingly present in the field of physical education, but much remains to be done to make effective use of them in education.

  12. An information-theoretic analysis of return maximization in reinforcement learning.

    Science.gov (United States)

    Iwata, Kazunori

    2011-12-01

    We present a general analysis of return maximization in reinforcement learning. This analysis does not require assumptions of Markovianity, stationarity, and ergodicity for the stochastic sequential decision processes of reinforcement learning. Instead, our analysis assumes the asymptotic equipartition property fundamental to information theory, providing a substantially different view from that in the literature. As our main results, we show that return maximization is achieved by the overlap of typical and best sequence sets, and we present a class of stochastic sequential decision processes with the necessary condition for return maximization. We also describe several examples of best sequences in terms of return maximization in the class of stochastic sequential decision processes, which satisfy the necessary condition.
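
    The asymptotic equipartition property that this analysis assumes can be illustrated numerically: for an i.i.d. source, -(1/n)*log p(X_1...X_n) concentrates around the entropy H as n grows. A minimal sketch:

        import numpy as np

        rng = np.random.default_rng(3)
        p = np.array([0.5, 0.3, 0.2])
        H = -(p * np.log2(p)).sum()                    # source entropy, ~1.485 bits

        for n in (10, 100, 10000):
            x = rng.choice(3, size=(2000, n), p=p)     # 2000 sequences of length n
            nll = -np.log2(p[x]).sum(axis=1) / n       # -(1/n) log p(sequence)
            print(f"n={n:>5}: mean={nll.mean():.3f}, std={nll.std():.3f}  (H={H:.3f})")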

  13. What is Fair Pay for Executives? An Information Theoretic Analysis of Wage Distributions

    Directory of Open Access Journals (Sweden)

    Venkat Venkatasubramanian

    2009-11-01

    Full Text Available The high pay packages of U.S. CEOs have raised serious concerns about what constitutes fair pay. Since present economic models do not adequately address this fundamental question, we propose a new theory based on statistical mechanics and information theory. We use the principle of maximum entropy to show that the maximally fair pay distribution is lognormal under ideal conditions. This prediction is in agreement with observed data for the bottom 90%–95% of the working population. The theory estimates that the top 35 U.S. CEOs were overpaid by about 129 times their ideal salaries in 2008. We also provide insight into entropy as a measure of fairness in an economic system, which is maximized at equilibrium.
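
    The lognormal result follows from maximum entropy with the mean and variance of log income fixed: writing X = e^Y gives h(X) = h(Y) + E[Y], so the normal Y, and hence the lognormal X, is entropy-maximal under those constraints. A quick closed-form check against two alternatives with the same log-moments (our illustration, not the paper's code):

        import numpy as np

        mu, var = 10.0, 0.25      # mean and variance of log-income (illustrative)

        # Differential entropies h(Y) in nats for Y with the given mean/variance:
        h_normal = 0.5 * np.log(2 * np.pi * np.e * var)       # -> lognormal X
        h_uniform = np.log(np.sqrt(12 * var))                 # uniform Y
        h_laplace = 1 + np.log(2 * np.sqrt(var / 2))          # Laplace Y

        for name, hy in [("normal", h_normal), ("uniform", h_uniform), ("laplace", h_laplace)]:
            print(f"{name:8s} h(X) = h(Y) + E[Y] = {hy + mu:.4f} nats")   # normal wins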

  14. Breast mass detection in tomosynthesis projection images using information-theoretic similarity measures

    Science.gov (United States)

    Singh, Swatee; Tourassi, Georgia D.; Lo, Joseph Y.

    2007-03-01

    The purpose of this project is to study Computer Aided Detection (CADe) of breast masses for digital tomosynthesis. It is believed that tomosynthesis will show improvement over conventional mammography in detection and characterization of breast masses by removing overlapping dense fibroglandular tissue. This study used the 60 human subject cases collected as part of on-going clinical trials at Duke University. Raw projection images were used to identify suspicious regions in the algorithm's high-sensitivity, low-specificity stage using a Difference of Gaussian (DoG) filter. The filtered images were thresholded to yield initial CADe hits that were then shifted and added to yield a 3D distribution of suspicious regions. These were further summed in the depth direction to yield a flattened probability map of suspicious hits for ease of scoring. To reduce false positives, we developed an algorithm based on information theory where similarity metrics were calculated using knowledge databases consisting of tomosynthesis regions of interest (ROIs) obtained from projection images. We evaluated 5 similarity metrics to test the false positive reduction performance of our algorithm, specifically joint entropy, mutual information, Jensen difference divergence, symmetric Kullback-Leibler divergence, and conditional entropy. The best performance was achieved using the joint entropy similarity metric, resulting in an ROC A_z of 0.87 +/- 0.01. As a whole, the CADe system can detect breast masses in this data set with 79% sensitivity and 6.8 false positives per scan. In comparison, the original radiologists performed with only 65% sensitivity when using mammography alone, and 91% sensitivity when using tomosynthesis alone.
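
    The high-sensitivity first stage is a standard difference-of-Gaussians (DoG) blob detector; a minimal sketch of DoG filtering and thresholding on a synthetic image (the sigmas and threshold are illustrative, not the study's tuned values):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(4)
        img = rng.normal(0.5, 0.05, (256, 256))        # background "tissue"
        yy, xx = np.mgrid[:256, :256]
        img += 0.3 * np.exp(-((yy - 128) ** 2 + (xx - 90) ** 2) / (2 * 8 ** 2))  # a "mass"

        dog = gaussian_filter(img, 4) - gaussian_filter(img, 8)   # band-pass for blobs
        hits = dog > dog.mean() + 3 * dog.std()                   # initial CADe hits
        print("suspicious pixels:", int(hits.sum()))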

  15. An Information-Theoretic Multiscale Framework With Applications to Polycrystalline Materials

    Science.gov (United States)

    2010-02-20

    In this work, we used a maximum entropy (MaxEnt) principle to seek a joint probability distribution of the random texture. Let Y = {Y_1, ..., Y_N} be the set of random variables. The MaxEnt problem can be posed as an unconstrained optimization problem using Lagrange multipliers. In this method, the constraints are
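
    A sketch of the unconstrained (dual) formulation mentioned in this excerpt, on a discrete toy variable: the MaxEnt distribution under a moment constraint is found by minimizing the log-partition function over the Lagrange multiplier (the die-with-fixed-mean constraint is our illustration, not the report's texture model):

        import numpy as np
        from scipy.optimize import minimize

        x = np.arange(1, 7)                 # support of a die (illustrative)
        target_mean = 4.5                   # moment constraint E[X] = 4.5

        def dual(lam):
            # log Z(lam) - lam * target: minimizing this solves the MaxEnt problem
            return np.log(np.exp(lam * x).sum()) - lam * target_mean

        lam = minimize(dual, x0=[0.0]).x[0]
        p = np.exp(lam * x); p /= p.sum()   # MaxEnt distribution p_i ~ exp(lam * x_i)
        print(np.round(p, 4), "mean =", round((p * x).sum(), 3))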

  16. Optimal drug cocktail design: methods for targeting molecular ensembles and insights from theoretical model systems.

    Science.gov (United States)

    Radhakrishnan, Mala L; Tidor, Bruce

    2008-05-01

    required. We also treated cases in which a subset of target variants was to be avoided, modeling the common challenge of closely related host molecules that may be implicated in drug toxicity. Such decoys generally increased the size of the required cocktail and more often resulted in infeasible optimizations. Taken together, this work provides practical optimization methods for the design of drug cocktails and a theoretical, physics-based framework through which useful insights can be achieved.

  17. A Modified Genetic Algorithm for Product Family Optimization with Platform Specified by Information Theoretical Approach

    Institute of Scientific and Technical Information of China (English)

    CHEN Chun-bao; WANG Li-ya

    2008-01-01

    Many existing product family design methods assume a given platform. However, it is not an intuitive task to select the platform and unique variables within a product family. Meanwhile, most approaches are single-platform methods, in which design variables are either shared across all product variants or not at all. In multiple-platform design, platform variables can take specific values for subsets of product variants within the product family, offering opportunities for a superior overall design. An information theoretical approach incorporating fuzzy clustering and Shannon's entropy was proposed for platform variable selection in multiple-platform product families. A 2-level chromosome genetic algorithm (2LCGA) was proposed and developed for optimizing the corresponding product family in a single stage, simultaneously determining the optimal settings for the platform and unique variables. The single-stage approach can yield improvements in the overall performance of the product family compared with two-stage approaches, in which the first stage involves determining the best settings for the platform and values of unique variables are found for each product in the second stage. An example of the design of a family of universal motors was used to verify the proposed method.
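
    One way to read the entropy-based platform selection described above: design variables whose values vary little across the family (low Shannon entropy) are platform candidates, while high-entropy variables stay unique. A minimal sketch under that reading; the data and the 1-bit threshold are ours, and the paper additionally uses fuzzy clustering:

        import numpy as np

        def shannon_entropy(values):
            _, counts = np.unique(values, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        # Rows = product variants of a motor family, columns = design variables.
        family = np.array([[10, 1.2, 400], [10, 1.5, 480], [10, 1.2, 520], [10, 1.8, 600]])
        names = ["stack_length", "wire_gauge", "turns"]

        for j, name in enumerate(names):
            h = shannon_entropy(family[:, j])
            kind = "platform" if h < 1.0 else "unique"
            print(f"{name:12s} H = {h:.2f} bits -> {kind}")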

  18. Analysis of the Experimental and CFD-Based Theoretical Methods for Studying Rotordynamic Characteristics of Labyrinth Gas Seals

    OpenAIRE

    Pugachev, A. O.; Deckner, M.; Kwanka, K.; Helm, P.; Schettel, J.

    2017-01-01

    This paper presents an analysis of the experimental and theoretical methods used to study the rotordynamic characteristics of a short staggered labyrinth gas seal. Two experimental identification procedures, referred to as the static and dynamic methods, are presented. The static method allows determining the direct and cross-coupled stiffness coefficients of the seal by integrating the measured circumferential pressure distribution in cavities at various shaft eccentric positions. In the dynamic method, identif...

  19. An information-theoretic approach to the gravitational-wave burst detection problem

    CERN Document Server

    Lynch, Ryan; Essick, Reed; Katsavounidis, Erik; Robinet, Florent

    2015-01-01

    The advanced era of gravitational-wave astronomy, with data collected in part by the LIGO gravitational-wave interferometers, has begun as of fall 2015. One potential type of detectable gravitational waves is short-duration gravitational-wave bursts, whose waveforms can be difficult to predict. We present the framework for a new detection algorithm -- called oLIB -- that can be used in relatively low-latency to turn calibrated strain data into a detection significance statement. This pipeline consists of 1) a sine-Gaussian matched-filter trigger generator based on the Q-transform -- known as Omicron --, 2) incoherent down-selection of these triggers to the most signal-like set, and 3) a fully coherent analysis of this signal-like set using the Markov chain Monte Carlo (MCMC) Bayesian evidence calculator LALInferenceBurst (LIB). These steps effectively compress the full data stream into a set of search statistics for the most signal-like events, and we use elements from information t...

  20. Anatomy of a Spin: The Information-Theoretic Structure of Classical Spin Systems

    Directory of Open Access Journals (Sweden)

    Vikram S. Vijayaraghavan

    2017-05-01

    Full Text Available Collective organization in matter plays a significant role in its expressed physical properties. Typically, it is detected via an order parameter, appropriately defined for each given system’s observed emergent patterns. Recent developments in information theory, however, suggest quantifying collective organization in a system- and phenomenon-agnostic way: decomposing the system’s thermodynamic entropy density into a localized entropy that is solely contained in the dynamics at a single location, and a bound entropy that is stored in space as domains, clusters, excitations, or other emergent structures. As a concrete demonstration, we compute this decomposition and related quantities explicitly for the nearest-neighbor Ising model on the 1D chain, on the Bethe lattice with coordination number k = 3, and on the 2D square lattice, illustrating its generality and the functional insights it gives near and away from phase transitions. In particular, we consider the roles that different spin motifs play (in cluster bulk, cluster edges, and the like) and how these affect the dependencies between spins.
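
    For the 1D chain, a simplified reading of such a decomposition is compact: treating the nearest-neighbour Ising chain as a two-state Markov field, the single-spin entropy H(S0) splits into the conditional entropy h = H(S0 | S-1) and a bound part b = H(S0) - h stored in neighbour correlations. A sketch of that split (our simplification, not the paper's full decomposition):

        import numpy as np

        def h2(p):
            """Binary entropy in bits."""
            q = 1 - p
            return float(-(p * np.log2(p) + q * np.log2(q)))

        for bJ in (0.2, 0.5, 1.0, 2.0):                        # beta * J (coupling strength)
            p_flip = np.exp(-bJ) / (np.exp(bJ) + np.exp(-bJ))  # P(next spin differs)
            h = h2(p_flip)                                     # conditional entropy H(S0 | S-1)
            H1 = 1.0                                           # single-spin entropy (symmetric)
            print(f"betaJ={bJ}: conditional h = {h:.3f} bits, bound b = {H1 - h:.3f} bits")

    As the coupling grows, more of the per-spin entropy is bound up in spatial domains, matching the intuition of the abstract.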

  1. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Full Text Available Building Information Modeling (BIM) has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex, and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects are still scattered and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. The solutions proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help in implementation are identified and can be undertaken according to a prior maturity evaluation of the organization. The result of this research consists of a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  2. An information-theoretic approach to the gravitational-wave burst detection problem

    Science.gov (United States)

    Katsavounidis, E.; Lynch, R.; Vitale, S.; Essick, R.; Robinet, F.

    2016-03-01

    The advanced era of gravitational-wave astronomy, with data collected in part by the LIGO gravitational-wave interferometers, has begun as of fall 2015. One potential type of detectable gravitational waves is short-duration gravitational-wave bursts, whose waveforms can be difficult to predict. We present the framework for a new detection algorithm, called oLIB, that can be used in relatively low latency to turn calibrated strain data into a detection significance statement. This pipeline consists of 1) a sine-Gaussian matched-filter trigger generator based on the Q-transform, known as Omicron; 2) incoherent down-selection of these triggers to the most signal-like set; and 3) a fully coherent analysis of this signal-like set using the Markov chain Monte Carlo (MCMC) Bayesian evidence calculator LALInferenceBurst (LIB). We optimally extract this information by using a likelihood-ratio test (LRT) to map these search statistics into a significance statement. Using representative archival LIGO data, we show that the algorithm can detect gravitational-wave burst events of realistic strength in realistic instrumental noise with good detection efficiencies across different burst waveform morphologies. With support from the National Science Foundation under Grant PHY-0757058.

  3. Maximum likelihood method and Fisher's information in physics and econophysics

    CERN Document Server

    Syska, Jacek

    2012-01-01

    Three steps in the development of the maximum likelihood (ML) method are presented. First, the application of the ML method and the notion of Fisher information in model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, also illustrated by examples of the estimation of exponential models. Second, the notions of relative entropy and information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP, based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system, is given. Third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...
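
    A concrete instance of the Fisher-information machinery discussed here: for n i.i.d. draws from an exponential model with rate lambda, I(lambda) = n / lambda^2, and the variance of the ML estimator approaches the Cramer-Rao bound 1/I(lambda). A quick Monte Carlo check (our example, not from the thesis):

        import numpy as np

        rng = np.random.default_rng(5)
        lam, n, trials = 2.0, 500, 5000

        samples = rng.exponential(1 / lam, size=(trials, n))
        lam_hat = 1 / samples.mean(axis=1)        # ML estimator of the rate

        crlb = lam ** 2 / n                       # 1 / I(lam), with I(lam) = n / lam^2
        print(f"empirical var = {lam_hat.var():.3e}, CRLB = {crlb:.3e}")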

  4. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework

    Directory of Open Access Journals (Sweden)

    French Simon D

    2012-04-01

    Full Text Available Background: There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. Methods: The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? Results: A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. Conclusions: We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be

  5. PREFACE: XXXth International Colloquium on Group Theoretical Methods in Physics (ICGTMP) (Group30)

    Science.gov (United States)

    Brackx, Fred; De Schepper, Hennie; Van der Jeugt, Joris

    2015-04-01

    The XXXth International Colloquium on Group Theoretical Methods in Physics (ICGTMP), also known as the Group30 conference, took place in Ghent (Belgium) from Monday 14 to Friday 18 July 2014. The conference was organised by Ghent University (Department of Applied Mathematics, Computer Science and Statistics, and Department of Mathematical Analysis). The website http://www.group30.ugent.be is still available. The ICGTMP is one of the traditional conference series covering the most important topics of symmetry which are relevant to the interplay of present-day mathematics and physics. More than 40 years ago a group of enthusiasts, headed by H. Bacry of Marseille and A. Janner of Nijmegen, initiated a series of annual meetings with the aim of providing a common forum for scientists interested in group theoretical methods. At that time most of the participants belonged to two important communities: on the one hand solid state specialists, elementary particle theorists and phenomenologists, and on the other mathematicians eager to apply newly-discovered group and algebraic structures. The conference series has become a meeting point for scientists working on modelling physical phenomena through mathematical and numerical methods based on geometry and symmetry. It is considered the oldest of the conference series devoted to geometry and physics. It has been further broadened and diversified due to the successful applications of geometric and algebraic methods in life sciences and other areas. The first four meetings took place alternately in Marseille and Nijmegen. Soon after, the conference acquired an international standing, especially following the 1975 colloquium in Nijmegen and the 1976 colloquium in Montreal. Since then it has been organized in many places around the world. It has been a biennial colloquium since 1990, the year it was organized in Moscow. This was the first time the colloquium took place in Belgium. There were 246 registered

  6. Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.

    Science.gov (United States)

    Waheed, Khuram; Salem, Fathi M

    2005-07-01

    Code division multiple access (CDMA) is based on spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require a great deal of a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA), that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding that of linear minimum mean squared error (LMMSE) detectors for both Direct Sequence CDMA (DS-CDMA) and wide-band CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation and unmodeled multiple access interference (MAI).
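
    The Riemannian contra-variant (natural) gradient update underlying these detectors is, in its simplest blind-source-separation form, the Amari rule dW = mu * (I - f(y) y^T) W. A toy sketch on an instantaneous 2x2 mixture of super-Gaussian sources (not the paper's RAKE-integrated receivers):

        import numpy as np

        rng = np.random.default_rng(6)
        T = 20000
        S = rng.laplace(size=(2, T))                   # super-Gaussian source streams
        A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing "channel"
        X = A @ S

        W = np.eye(2)
        mu = 0.001
        for t in range(T):
            y = W @ X[:, t:t + 1]
            # Natural-gradient update: W += mu * (I - f(y) y^T) W, with f = tanh
            W += mu * (np.eye(2) - np.tanh(y) @ y.T) @ W

        corr = np.corrcoef(W @ X, S)[:2, 2:]           # recovered vs true sources
        print(np.round(np.abs(corr), 2))               # ~ a permutation of identity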

  7. Theoretical and computational methods for the noninvasive detection of gastric electrical source coupling.

    Science.gov (United States)

    Irimia, Andrei; Bradshaw, L Alan

    2004-05-01

    The ability to study the pathology of the stomach noninvasively from magnetic field measurements is important due to the significant practical advantages offered by noninvasive methods over other techniques of investigation. The inverse biomagnetic problem can play a central role in this process due to the information that inverse solutions can yield concerning the characteristics of the gastric electrical activity (GEA). To analyze gastrointestinal (GI) magnetic fields noninvasively, we have developed a computer implementation of a least-squares minimization algorithm that obtains numerical solutions to the biomagnetic inverse problem for the stomach. In this paper, we show how electric current propagation and the mechanical coupling of gastric smooth muscle cells during electrical control activity can be studied using such solutions. To validate our model, two types of numerical simulations of the GEA were developed and successfully used to demonstrate the ability of our computer algorithm to detect and accurately analyze these two phenomena. We also describe our analysis of experimental, noninvasively acquired gastric biomagnetic data as well as the information of interest that our numerical method can yield in clinical studies. Most importantly, we present experimental evidence that the coupling of gastric electrical sources can be observed using noninvasive techniques of measurement, in our case with the use of a superconducting quantum interference device magnetometer. We discuss the relevance and implications of our achievement to the future of GI research.

  8. Design of a thermally controlled sequence of triazolinedione-based click and transclick reactions

    Science.gov (United States)

    Houck, Hannes A.; De Bruycker, Kevin; Billiet, Stijn; Dhanis, Bastiaan; Goossens, Hannelore; Catak, Saron; Van Speybroeck, Veronique

    2017-01-01

    The reaction of triazolinediones (TADs) and indoles is of particular interest for polymer chemistry applications, as it is a very fast and irreversible additive-free process at room temperature, but can be turned into a dynamic covalent bond forming process at elevated temperatures, giving a reliable bond exchange or ‘transclick’ reaction. In this paper, we report an in-depth study aimed at controlling the TAD–indole reversible click reactions through rational design of modified indole reaction partners. This has resulted in the identification of a novel class of easily accessible indole derivatives that give dynamic TAD-adduct formation at significantly lower temperatures. We further demonstrate that these new substrates can be used to design a directed cascade of click reactions of a functionalized TAD moiety from an initial indole reaction partner to a second indole, and finally to an irreversible reaction partner. This controlled sequence of click and transclick reactions of a single TAD reagent between three different substrates has been demonstrated at both the small-molecule and macromolecular levels, and the factors that control the reversibility profiles have been rationalized and guided by mechanistic considerations supported by theoretical calculations. PMID:28507685

  9. Correlation Method for Public Security Information in Big Data Environment

    Directory of Open Access Journals (Sweden)

    Gang Zeng

    2015-04-01

    Full Text Available With the gradual improvement of informationization in the public security area, the concept of "information-led policing" has formed; many information systems have been built and vast amounts of business data have accumulated. However, these systems and data are isolated, becoming information islands. This paper proposes an architecture for an information analysis system on a big data platform, then discusses the question of data integration, and finally proposes correlation methods for public security information: direct association and indirect association.

  10. Method of Improving Personal Name Search in Academic Information Service

    Directory of Open Access Journals (Sweden)

    Heejun Han

    2012-12-01

    Full Text Available All academic information on the web or elsewhere has its creator, that is, a subject who has created the information. The subject can be an individual, a group, or an institution, and can even be a nation, depending on the nature of the relevant information. Most information is composed of a title, an author, and contents. An essay in the academic information category has metadata including a title, an author, keywords, an abstract, publication data, place of publication, ISSN, and the like. A patent has metadata including the title, an applicant, an inventor, an attorney, IPC, application number, and the claims of the invention. Most web-based academic information services enable users to search the information by processing this meta-information. An important element is searching by the author field, which corresponds to a personal name. This study suggests a method of efficient indexing together with an adjacent-operation result ranking algorithm to which phrase-search-based boosting elements are applied, thus improving the accuracy of personal name search results. It also describes a method for providing the results of searching co-authors and related researchers when searching personal names. This method can be effectively applied to provide accurate and additional search results in academic information services.

  11. Predicting the potential distribution of invasive exotic species using GIS and information-theoretic approaches: A case of ragweed (Ambrosia artemisiifolia L.) distribution in China

    Institute of Scientific and Technical Information of China (English)

    CHEN Hao; CHEN LiJun; Thomas P. ALBRIGHT

    2007-01-01

    Invasive exotic species pose a growing threat to the economy, public health, and ecological integrity of nations worldwide. Explaining and predicting the spatial distribution of invasive exotic species is of great importance to prevention and early warning efforts. We are investigating the potential distribution of invasive exotic species, the environmental factors that influence these distributions, and the ability to predict them using statistical and information-theoretic approaches. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, for most species, absence data are not available. Presented with the challenge of developing a model based on presence-only information, we developed an improved logistic regression approach using Information Theory and Frequency Statistics to produce a relative suitability map. This paper generated a variety of distributions of ragweed (Ambrosia artemisiifolia L.) from logistic regression models applied to herbarium specimen location data and a suite of GIS layers including climatic, topographic, and land cover information. Our logistic regression model was selected using Akaike's Information Criterion (AIC) from a suite of ecologically reasonable predictor variables. Based on the results, we provided a new Frequency Statistics method to compartmentalize habitat suitability in the native range. Finally, we used the model and the compartmentalized criterion developed in native ranges to "project" a potential distribution onto the exotic ranges to build habitat-suitability maps.
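
    A minimal sketch of the AIC-driven selection step, assuming synthetic presence data and two candidate predictors; the variables, sample, and the statsmodels dependency are our illustration, not the study's actual GIS layers:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 500
        temp = rng.normal(15, 5, n)           # synthetic climate predictor
        elev = rng.normal(800, 300, n)        # synthetic topographic predictor
        logit = 0.4 * (temp - 15)             # presence driven by temperature only
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

        candidates = {
            "temp":        sm.add_constant(np.column_stack([temp])),
            "temp + elev": sm.add_constant(np.column_stack([temp, elev])),
            "elev":        sm.add_constant(np.column_stack([elev])),
        }
        for name, X in candidates.items():
            fit = sm.Logit(y, X).fit(disp=0)
            print(f"{name:12s} AIC = {fit.aic:.1f}")     # lowest AIC wins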

  12. A method of building information extraction based on mathematical morphology and multiscale

    Science.gov (United States)

    Li, Jing-wen; Wang, Ke; Zhang, Zi-ping; Xue, Long-li; Yin, Shou-qiang; Zhou, Song

    2015-12-01

    To monitor changes in buildings on the Earth's surface, this paper analyzes the distribution characteristics of buildings in remote sensing images and proposes a high-resolution remote sensing image segmentation method that combines multi-scale segmentation with mathematical morphology. It then uses multiple fuzzy classification and a shadow-based auxiliary method to extract building information. Compared with k-means classification and the traditional maximum likelihood classification method, the experiments show that the proposed multi-scale, morphology-based segmentation and extraction method can accurately extract building structure information and yield clearer classification data, providing a basis and theoretical support for intelligent Earth-observation monitoring.
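
    As an illustration of the morphological step, a sketch that opens a thresholded image with a small structuring element to suppress thin non-building clutter before labelling candidate footprints (the synthetic scene and parameters are ours):

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(8)
        img = rng.random((128, 128)) * 0.3
        img[20:50, 30:80] = 0.9               # a bright rectangular "building"
        img[70:72, 10:120] = 0.9              # a thin "road" to be removed

        binary = img > 0.6
        opened = ndimage.binary_opening(binary, structure=np.ones((5, 5)))
        labels, n = ndimage.label(opened)
        sizes = ndimage.sum(opened, labels, range(1, n + 1))
        print("candidate buildings:", n, "sizes:", sizes.astype(int))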

  13. How do small groups make decisions? : A theoretical framework to inform the implementation and study of clinical competency committees.

    Science.gov (United States)

    Chahine, Saad; Cristancho, Sayra; Padgett, Jessica; Lingard, Lorelei

    2017-06-01

    In the competency-based medical education (CBME) approach, clinical competency committees are responsible for making decisions about trainees' competence. However, we currently lack a theoretical model for group decision-making to inform this emerging assessment phenomenon. This paper proposes an organizing framework to study and guide the decision-making processes of clinical competency committees. This is an explanatory, non-exhaustive review, tailored to identify relevant theoretical and evidence-based papers related to small group decision-making. The search was conducted using Google Scholar, Web of Science, MEDLINE, ERIC, and PsycINFO for relevant literature. Using a thematic analysis, two researchers (SC & JP) met four times between April and June 2016 to consolidate the literature included in this review. Three theoretical orientations towards group decision-making emerged from the review: schema, constructivist, and social influence. Schema orientations focus on how groups use algorithms for decision-making. Constructivist orientations focus on how groups construct their shared understanding. Social influence orientations focus on how individual members influence the group's perspective on a decision. Moderators of decision-making relevant to all orientations include: guidelines, stressors, authority, and leadership. Clinical competency committees are the mechanisms by which groups of clinicians will be in charge of interpreting multiple assessment data points and coming to a shared decision about trainee competence. The way in which these committees make decisions can have huge implications for trainee progression and, ultimately, patient care. Therefore, there is a pressing need to build the science of how such group decision-making works in practice. This synthesis suggests a preliminary organizing framework that can be used in the implementation and study of clinical competency committees.

  14. Collecting Information for Rating Global Assessment of Functioning (GAF): Sources of Information and Methods for Information Collection.

    Science.gov (United States)

    Monrad Aas, I H

    2014-11-01

    Global Assessment of Functioning (GAF) is an assessment instrument that is known worldwide. It is widely used for rating the severity of illness. Results from evaluations in psychiatry should characterize the patients. Rating of GAF is based on collected information. The aim of the study is to identify the factors involved in collecting information that is relevant for rating GAF, and gaps in knowledge where it is likely that further development would play a role for improved scoring. A literature search was conducted with a combination of thorough hand search and search in the bibliographic databases PubMed, PsycINFO, Google Scholar, and Campbell Collaboration Library of Systematic Reviews. Collection of information for rating GAF depends on two fundamental factors: the sources of information and the methods for information collection. Sources of information are patients, informants, health personnel, medical records, letters of referral and police records about violence and substance abuse. Methods for information collection include the many different types of interview - unstructured, semi-structured, structured, interviews for Axis I and II disorders, semistructured interviews for rating GAF, and interviews of informants - as well as instruments for rating symptoms and functioning, and observation. The different sources of information, and methods for collection, frequently result in inconsistencies in the information collected. The variation in collected information, and lack of a generally accepted algorithm for combining collected information, is likely to be important for rated GAF values, but there is a fundamental lack of knowledge about the degree of importance. Research to improve GAF has not reached a high level. Rated GAF values are likely to be influenced by both the sources of information used and the methods employed for information collection, but the lack of research-based information about these influences is fundamental. Further development of

  15. XML-based product information processing method for product design

    Science.gov (United States)

    Zhang, Zhen Yu

    2012-01-01

    Design knowledge of modern mechatronics products centres on information processing as knowledge-intensive engineering; thus product design innovation is essentially innovation in knowledge and information processing. Based on an analysis of the role of mechatronics product design knowledge and of information management features, a unified XML-based model of product information processing is proposed. The information processing model of product design includes functional knowledge, structural knowledge and their relationships. XML-based representations are proposed for product function elements, product structure elements, and the mapping relationships between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is obviously helpful for knowledge-based design systems and product innovation.
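
    A sketch of what such a unified XML model might look like, with function elements, structure elements, and a function-to-structure mapping for the friction-roller example; the schema below is our illustration, not the paper's actual format:

        import xml.etree.ElementTree as ET

        doc = """
        <product name="friction_roller">
          <functions>
            <function id="F1" name="transmit_torque"/>
            <function id="F2" name="maintain_contact_pressure"/>
          </functions>
          <structures>
            <structure id="S1" name="roller_pair"/>
            <structure id="S2" name="preload_spring"/>
          </structures>
          <mappings>
            <map function="F1" structure="S1"/>
            <map function="F2" structure="S2"/>
          </mappings>
        </product>
        """

        root = ET.fromstring(doc)
        for m in root.iter("map"):                 # traverse function->structure links
            print(m.get("function"), "->", m.get("structure"))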

  16. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework.

    Science.gov (United States)

    French, Simon D; Green, Sally E; O'Connor, Denise A; McKenzie, Joanne E; Francis, Jill J; Michie, Susan; Buchbinder, Rachelle; Schattner, Peter; Spike, Neil; Grimshaw, Jeremy M

    2012-04-24

    There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a

  17. A Method of Image Symmetry Detection Based on Phase Information

    Institute of Scientific and Technical Information of China (English)

    WU Jun; YANG Zhaoxuan; FENG Dengchao

    2005-01-01

    Traditional methods for detecting symmetry in images suffer greatly from image contrast and noise, and they all require some preprocessing. This paper presents a new method of image symmetry detection. The method detects symmetry from phase information using log-Gabor wavelets, because phase information is stable and significant, and symmetric points produce local-phase patterns that are easy to recognise and confirm. The phase method does not require any preprocessing, and its result is accurate and invariant to contrast, rotation and illumination conditions. The method can detect mirror symmetry, rotational symmetry and curve symmetry at the same time. Experimental results show that, compared with the pivotal element algorithm based on intensity information, the phase method is more accurate and robust.
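
    The log-Gabor filters used here have a simple frequency-domain definition, G(f) = exp(-(ln(f/f0))^2 / (2 (ln sigma)^2)), with no DC response. A minimal 1D construction (parameters illustrative; the full quadrature pair needed for local phase would add the odd-symmetric counterpart):

        import numpy as np

        n, f0, sigma = 256, 0.1, 0.55           # samples, centre frequency, bandwidth ratio
        f = np.fft.rfftfreq(n)                  # normalized frequencies 0..0.5
        G = np.zeros_like(f)
        nz = f > 0                              # log-Gabor has no DC component
        G[nz] = np.exp(-np.log(f[nz] / f0) ** 2 / (2 * np.log(sigma) ** 2))

        x = np.random.default_rng(12).standard_normal(n)
        resp = np.fft.irfft(np.fft.rfft(x) * G, n)    # even-symmetric filter response
        print("filter peak at f ~", f[np.argmax(G)])  # ~ f0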

  18. Methods for Measuring Productivity in Libraries and Information Centres

    OpenAIRE

    Mohammad Alaaei

    2009-01-01

      Within Information centers, productivity is the result of optimal and effective use of information resources, service quality improvement, increased user satisfaction, pleasantness of working environment, increased motivation and enthusiasm of staff to work better. All contribute to the growth and development of information centers. Thus these centers would need to be familiar with methods employed in productivity measurement. Productivity is one of the criteria for evaluating system perfor...

  19. Classifying and Designing the Educational Methods with Information Communications Technoligies

    Directory of Open Access Journals (Sweden)

    I. N. Semenova

    2013-01-01

    Full Text Available The article describes the conceptual apparatus for implementing Information and Communication Technologies (ICT) in education. The authors suggest classification variants of the related teaching methods according to the following component combinations: types of student work with information, goals of ICT incorporation into the training process, individualization degrees, contingent involvement, activity levels and pedagogical field targets, ideology of informational didactics, etc. Each classification can solve educational tasks in the context of a partial paradigm of modern didactics; each kind of method implies a particular combination of activities in the educational environment. The whole spectrum of classifications provides the informational and functional basis for the adequate selection of teaching methods in accordance with the specified goals and planned results. Potential variants of ICT implementation methods are given for different teaching models.

  20. Consent, Informal Organization and Job Rewards: A Mixed Methods Analysis

    Science.gov (United States)

    Laubach, Marty

    2005-01-01

    This study uses a mixed methods approach to workplace dynamics. Ethnographic observations show that the consent deal underlies an informal stratification that divides the workplace into an "informal periphery," a "conventional core" and an "administrative clan." The "consent deal" is defined as an exchange of autonomy, voice and schedule…

  1. Deriving harmonised forest information in Europe using remote sensing methods

    DEFF Research Database (Denmark)

    Seebach, Lucia Maria

    the need for harmonised forest information can be satisfied using remote sensing methods. In conclusion, the study showed that it is possible to derive harmonised forest information of high spatial detail in Europe with remote sensing. The study also highlighted the imperative provision of accuracy...

  2. Maintenance and methods of forming theoretical knowledge and methodical and practical abilities in area of physical culture for students, future specialists on social work

    Directory of Open Access Journals (Sweden)

    Leyfa A.V.

    2009-12-01

    Full Text Available The value of theoretical knowledge and of methodical and practical studies and skills in forming students' physical activity is shown. The level of mastery of the components of physical activity is closely associated with the basic blocks of students' professional preparation and their future professional activity. Theoretical knowledge in the discipline of Physical Culture has a definite effect on the depth and breadth of mastery of professional preparation knowledge.

  3. Methods of information theory and algorithmic complexity for network biology.

    Science.gov (United States)

    Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper

    2016-03-01

    We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdös-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.
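
    A small sketch in the spirit of this survey: Shannon entropy of a graph's degree distribution, plus a compression ratio of the adjacency matrix as a crude stand-in for algorithmic complexity (a common proxy, with the usual caveats):

        import numpy as np
        import zlib

        rng = np.random.default_rng(9)
        n, p = 64, 0.1
        A = rng.random((n, n)) < p
        A = np.triu(A, 1); A = A | A.T                  # Erdos-Renyi G(n, p), undirected

        deg = A.sum(axis=1)
        _, counts = np.unique(deg, return_counts=True)
        q = counts / counts.sum()
        H_deg = -(q * np.log2(q)).sum()                 # degree-distribution entropy

        bits = np.packbits(A.astype(np.uint8)).tobytes()
        ratio = len(zlib.compress(bits, 9)) / len(bits) # lower = more compressible
        print(f"degree entropy = {H_deg:.2f} bits, compression ratio = {ratio:.2f}")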

  4. Color Restoration Method Based on Spectral Information Using Normalized Cut

    Institute of Scientific and Technical Information of China (English)

    Tetsuro Morimoto; Tohru Mihashi; Katsushi Ikeuchi

    2008-01-01

    This paper proposes a novel method for color restoration that can effectively apply accurate color, based on spectral information, to an image segmented using the normalized cut technique. Using the proposed method, we can obtain a digital still camera image and spectral information in different environments. Also, unlike other methods, it is not necessary to estimate reflectance spectra using a spectral database. The synthesized images are accurate and high resolution. The proposed method works effectively for making digital archive content. Some experimental results are demonstrated in this paper.

  5. Molecular Adsorption Bond Lengths at Metal Oxide Surfaces: Failure of Current Theoretical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hoeft, J.-T.; Kittel, M.; Polcik, M.; Bao, S.; Toomes, R. L.; Kang, J.-H.; Woodruff, D. P.; Pascal, M.; Lamont, C. L. A.

    2001-08-20

    New experimental structure determinations for molecular adsorbates on NiO(100) reveal much shorter Ni-C and Ni-N bond lengths for adsorbed CO and NH₃ as well as NO (2.07, 1.88, 2.07 Å) than previously computed theoretical values, with discrepancies up to 0.79 Å, highlighting a major weakness of current theoretical descriptions of oxide-molecule bonding. Comparisons with experimentally determined bond lengths of the same species adsorbed atop Ni on metallic Ni(111) show values on the oxide surface that are consistently larger (0.1-0.3 Å) than on the metal, indicating somewhat weaker bonding.

  6. Some theoretical and practical aspects in the separation of humic substances by combined liquid chromatography methods.

    Science.gov (United States)

    Hutta, Milan; Góra, Róbert; Halko, Radoslav; Chalányová, Mária

    2011-12-01

    The permanent need to understand the nature, structure and properties of humic substances (HS) also influences the separation methods that are widely used for the fractionation, characterization and analysis of HS. At first glance, techniques based on size-exclusion phenomena are the most useful and most utilized for relating elution data to the molecular mass distribution of HS, though with some limitations and exceptions in the structural investigation of HS. The second most abundant separation mechanism is reversed phase, based on weak hydrophobic interactions and beneficially combined with step gradients that induce distinct features in the otherwise rather featureless analytical signal of HS. Relatively great effort is invested in the development of immobilized-metal affinity chromatography mimicking the chelate-forming properties of HS as ligands in the environment. Surprisingly, relatively little attention is given to ion-exchange chromatography of HS based on ion-ion interactions. Chromatographic separation methods also play an important role in the examination of interactions of HS with pesticides. They allow us to determine binding constants and the other data necessary to predict the mobility of chemical pollutants in the environment. HS frequently acts adversely in analytical procedures as an interfering substance, so more detailed information is desired on how its numerous properties manifest in analytical procedures. This review covers the topic, emphasizing advances in the field over the last 10 years, from 2000 to 2010.

  7. A Method to Separate Stochastic and Deterministic Information from Electrocardiograms

    CERN Document Server

    Gutiérrez, R M

    2004-01-01

    In this work we present a new idea for developing a method to separate the stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG contains information corresponding to many different processes related to cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the aforementioned stochastic and deterministic information. From the stochastic point of view we analyze Rényi entropies, and with respect to the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information may be stochastic and/or deterministic, and can be identified by different measures and located in different parts of the ECG.
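
    A sketch of the two measures named above on a synthetic beat-plus-noise signal: a Rényi entropy of the amplitude histogram on the stochastic side, and the autocorrelation time on the deterministic side (the archetypal-analysis separation step itself is not reproduced):

        import numpy as np

        rng = np.random.default_rng(10)
        fs = 500.0
        t = np.arange(5000) / fs
        x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)  # beat + noise

        # Renyi entropy (order 2) of the amplitude distribution
        p, _ = np.histogram(x, bins=64)
        p = p[p > 0] / p.sum()
        renyi2 = -np.log2((p ** 2).sum())

        # Autocorrelation function and correlation time (first 1/e crossing)
        xc = x - x.mean()
        acf = np.correlate(xc, xc, mode="full")[xc.size - 1:]
        acf /= acf[0]
        tau = np.argmax(acf < 1 / np.e) / fs             # seconds
        print(f"Renyi-2 entropy = {renyi2:.2f} bits, correlation time = {tau:.3f} s")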

  8. Networked Guidance and Control for Mobile Multi-Agent Systems: A Multi-Terminal (Network) Information Theoretic Approach

    Science.gov (United States)

    2014-11-04

    Abstract: This paper focuses on the design of time-homogeneous fully observed Markov decision processes (MDPs), with finite state and action... characterization of information structures in team decision problems and their impact on the tractability of team optimization. Solution methods for team... decision problems are presented in various settings where the discussion is structured in two foci: The first is centered on solution methods for

  9. A nonparametric statistical method for image segmentation using information theory and curve evolution.

    Science.gov (United States)

    Kim, Junmo; Fisher, John W; Yezzi, Anthony; Cetin, Müjdat; Willsky, Alan S

    2005-10-01

    In this paper, we present a new information-theoretic approach to image segmentation. We cast the segmentation problem as the maximization of the mutual information between the region labels and the image pixel intensities, subject to a constraint on the total length of the region boundaries. We assume that the probability densities associated with the image pixel intensities within each region are completely unknown a priori, and we formulate the problem based on nonparametric density estimates. Due to the nonparametric structure, our method does not require the image regions to have a particular type of probability distribution and does not require the extraction and use of a particular statistic. We solve the information-theoretic optimization problem by deriving the associated gradient flows and applying curve evolution techniques. We use level-set methods to implement the resulting evolution. The experimental results based on both synthetic and real images demonstrate that the proposed technique can solve a variety of challenging image segmentation problems. Furthermore, our method, which does not require any training, performs as well as methods based on training.
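
    The objective being maximized can be sketched directly: the mutual information between a binary region labelling and the pixel intensities, estimated nonparametrically from a joint histogram (a histogram stand-in for the paper's kernel density estimates):

        import numpy as np

        def mutual_information(labels, intensities, bins=32):
            """I(L; X) from a joint histogram of region labels and pixel values."""
            joint, _, _ = np.histogram2d(labels.ravel(), intensities.ravel(),
                                         bins=(2, bins))
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

        rng = np.random.default_rng(11)
        img = rng.normal(0.3, 0.1, (64, 64))
        img[16:48, 16:48] = rng.normal(0.7, 0.1, (32, 32))       # a brighter region
        good = np.zeros((64, 64)); good[16:48, 16:48] = 1        # aligned labelling
        bad = np.zeros((64, 64)); bad[:, :32] = 1                # misaligned labelling
        print(mutual_information(good, img), ">", mutual_information(bad, img))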

  10. On the Adaptation of an Agile Information Systems Development Method

    NARCIS (Netherlands)

    Aydin, M.N.; Harmsen, F.; van Slooten, C.; Stegwee, R.A.

    2005-01-01

    Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This article presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. Two forms

  11. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  12. Quantitative structure-activity relationship modeling of polycyclic aromatic hydrocarbon mutagenicity by classification methods based on holistic theoretical molecular descriptors.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Marrocchi, Assunta; Minuti, Lucio; Taticchi, Aldo

    2007-03-01

    Various polycyclic aromatic hydrocarbons (PAHs), ubiquitous environmental pollutants, are recognized mutagens and carcinogens. A homogeneous set of mutagenicity data (TA98 and TA100, +S9) for 32 benzocyclopentaphenanthrenes/chrysenes was modeled by the quantitative structure-activity relationship classification methods k-nearest neighbor and classification and regression tree, using theoretical holistic molecular descriptors. A genetic algorithm provided the selection of the best subset of variables for modeling mutagenicity. The models were validated by leave-one-out and leave-50%-out approaches and have good performance, with sensitivity and specificity ranges of 90-100%. Mutagenicity assessment for these PAHs requires only a few theoretical descriptors of their molecular structure.
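
    The validation scheme is easy to reproduce; the sketch below runs leave-one-out cross-validation of a k-nearest-neighbor classifier with scikit-learn, using placeholder arrays where the paper's molecular descriptors and mutagenicity labels would go.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(0)
      X = rng.normal(size=(32, 5))          # placeholder for real descriptors
      y = rng.integers(0, 2, size=32)       # placeholder: 1 = mutagenic

      y_pred = cross_val_predict(KNeighborsClassifier(n_neighbors=3), X, y,
                                 cv=LeaveOneOut())
      tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
      sensitivity = tp / (tp + fn)          # fraction of mutagens recovered
      specificity = tn / (tn + fp)
      print(f"LOO sensitivity={sensitivity:.2f} specificity={specificity:.2f}")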

  13. ISO standardization of a theoretical activity evaluation method for low- and intermediate-level activated waste generated at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Makoto Kashiwagi [JGC, Yokohama, 220-6001 (Japan); Garamszeghy, Mike [NWMO, Toronto, Ontario, M4T 2S3 (Canada); Lantes, Bertrand; Bonne, Sebastien [EDF UTO, 93192 Noisy le Grand (France); Pillette-Cousin, Lucien [AREVA TA, 91192 Gif-sur-Yvette (France); Leganes, Jose Luis [ENRESA, 28043 Madrid (Spain); Volmert, Ben [NAGRA, CH-5430 Wettingen (Switzerland); James, David W. [DW James Consulting, North Oaks, MN 55127 (United States)

    2013-07-01

    Disposal of low- and intermediate-level activated waste generated at nuclear power plants is being planned or carried out in many countries. The radioactivity concentrations and/or total quantities of long-lived, difficult-to-measure nuclides (DTM nuclides), such as C-14, Ni-63, Nb-94, α-emitting nuclides etc., are often restricted by the safety case for a final repository as determined by each country's safety regulations, and these concentrations or amounts are required to be known and declared. With respect to waste contaminated by contact with process water, the Scaling Factor method (SF method), which is empirically based on sampling and analysis data, has been applied as an important method for determining concentrations of DTM nuclides. This method was standardized by the International Organization for Standardization (ISO) and published in 2007 as ISO21238 'Scaling factor method to determine the radioactivity of low and intermediate-level radioactive waste packages generated at nuclear power plants' [1]. However, for activated metal waste with comparatively high concentrations of radioactivity, such as may be found in reactor control rods and internal structures, direct sampling and radiochemical analysis methods to evaluate the DTM nuclides are limited by access to the material and potentially high personnel radiation exposure. In this case, theoretical calculation methods in combination with empirical methods based on remote radiation surveys need to be used to best advantage for determining the disposal inventory of DTM nuclides while minimizing exposure to radiation workers. Pursuant to this objective, a standard for the theoretical evaluation of the radioactivity concentration of DTM nuclides in activated waste is in progress through ISO TC85/SC5 (ISO Technical Committee 85: Nuclear energy, nuclear technologies, and radiological protection; Subcommittee 5: Nuclear fuel cycle). The project team for this ISO standard was formed in 2011 and
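
    For contrast with the theoretical route, the empirical SF method pairs a difficult-to-measure nuclide with an easy-to-measure key nuclide; the toy sketch below uses a geometric-mean scaling factor and entirely hypothetical activities.

      import numpy as np

      def scaling_factor(dtm_activities, key_activities):
          """Geometric-mean scaling factor SF = exp(mean(ln(DTM/key))),
          a common choice with ISO 21238-style paired data (sketch)."""
          ratios = np.asarray(dtm_activities) / np.asarray(key_activities)
          return float(np.exp(np.log(ratios).mean()))

      # paired radiochemical analyses (hypothetical numbers, Bq/g)
      ni63 = [1.2e3, 3.4e3, 8.0e2]
      co60 = [2.0e4, 6.1e4, 1.5e4]
      sf = scaling_factor(ni63, co60)
      # declare Ni-63 in a new package from its gamma-measured Co-60 content
      print(sf * 4.2e4)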

  14. Studying collaborative information seeking: Experiences with three methods

    DEFF Research Database (Denmark)

    Hyldegård, Jette Seiden; Hertzum, Morten; Hansen, Preben

    2015-01-01

    Collaborative information seeking (CIS) has lately produced interesting empirical studies, describing CIS in real-life settings. While these studies explore how and why CIS manifests itself in different domains, discussions about how to study CIS have been scarce. The research area of CIS may, however, benefit from a discussion of methodological issues. This chapter describes the application of three methods for collecting and analyzing data in three CIS studies. The three methods are Multidimensional Exploration, used in a CIS study of students' information behavior during a group assignment; Task-structured Observation, used in a CIS study of patent engineers; and Condensed Observation, used in a CIS study of information-systems development. The three methods are presented in the context of the studies for which they were devised, and the experiences gained using the methods are discussed.

  16. Justification of computational methods to ensure information management systems

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. Due to the diversity and complexity of the organizational management tasks of a large enterprise, the construction of an information management system requires the establishment of interconnected complexes of means that implement, in the most efficient way, the collection, transfer, accumulation and processing of the information needed by decision makers of different ranks in the governance process. The main trends in the construction of integrated logistics management information systems are: the creation of integrated data processing systems by centralizing the storage and processing of data arrays; the organization of computer systems that realize time-sharing; the aggregate-block principle of integrated logistics; and the use of a wide range of peripheral devices with unified information and hardware interfaces. The main attention is paid to applying systems research to the complex of technical support, in particular to defining quality criteria for the operation of the technical complex, developing methods for analyzing the information base of management information systems, defining the requirements for technical means, and devising methods for the structural synthesis of the major subsystems of integrated logistics. Thus, the aim is to study, on the basis of a systematic approach, the integrated logistics management information system and to develop a number of methods for the analysis and synthesis of complex logistics that are suitable for use in the practice of engineering systems design. The objective function of complex logistics management information systems is to gather, transmit and process specified amounts of information in the regulated time intervals with the required degree of accuracy while minimizing the reduced costs of establishing and operating the technical complex. Achieving the objective function of the complex logistics requires a certain organization of the interaction of information

  17. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and to verify their applicability to early design stages. Several methods were evaluated on their support to synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected for their insight into design risks and widespread application. A pilot case study has been performed with a washing machine, using these methods to assess design risks, following a reverse engineering approach. The study has shown the methods can be initiated at early design stages, but cannot be concluded...

  18. Bayesian Decision-theoretic Methods for Parameter Ensembles with Application to Epidemiology

    CERN Document Server

    Ginestet, Cedric E

    2011-01-01

    Parameter ensembles or sets of random effects constitute one of the cornerstones of modern statistical practice. This is especially the case in Bayesian hierarchical models, where several decision theoretic frameworks can be deployed. The estimation of these parameter ensembles may substantially vary depending on which inferential goals are prioritised by the modeller. Since one may wish to satisfy a range of desiderata, it is therefore of interest to investigate whether some sets of point estimates can simultaneously meet several inferential objectives. In this thesis, we will be especially concerned with identifying ensembles of point estimates that produce good approximations of (i) the true empirical quantiles and empirical quartile ratio (QR) and (ii) provide an accurate classification of the ensemble's elements above and below a given threshold. For this purpose, we review various decision-theoretic frameworks, which have been proposed in the literature in relation to the optimisation of different aspec...

  19. Usability Evaluation Methods for Special Interest Internet Information Services

    Directory of Open Access Journals (Sweden)

    Eva-Maria Schön

    2014-06-01

    Full Text Available The internet provides a wide range of scientific information for different areas of research, used by the related scientific communities. Often the design or architecture of these web pages does not correspond to the mental model of their users. As a result the wanted information is difficult to find. Methods established by Usability Engineering and User Experience can help to increase the appeal of scientific internet information services by analyzing the users’ requirements. This paper describes a procedure to analyze and optimize scientific internet information services that can be accomplished with relatively low effort. It consists of a combination of methods that already have been successfully applied to practice: Personas, usability inspections, Online Questionnaire, Kano model and Web Analytics.

  20. Current and future prospects for the application of systematic theoretical methods to the study of problems in physical oceanography

    Science.gov (United States)

    Constantin, A.; Johnson, R. S.

    2016-09-01

    This essay is a commentary on the pivotal role of systematic theoretical methods in physical oceanography. At some level, there will always be a conflict between theory and experiment/data collection: Which is pre-eminent? Which should come first? This issue appears to be particularly marked in physical oceanography, to the extreme detriment of the development of the subject. It is our contention that the classical theory of fluids, coupled with methods from the theory of differential equations, can play a significant role in carrying the subject, and our understanding, forward. We outline the philosophy behind a systematic theoretical approach, highlighting some aspects of equatorial ocean dynamics where these methods have already been successful, paving the way for much more in the future and leading, we expect, to the better understanding of this and many other types of ocean flow. We believe that the ideas described here promise to reveal a rich and beautiful dynamical structure.

  1. Implementation of 2D Discrete Wavelet Transform by Number Theoretic Transform and 2D Overlap-Save Method

    Directory of Open Access Journals (Sweden)

    Lina Yang

    2014-01-01

    Full Text Available To reduce the computation complexity of the wavelet transform, this paper presents a novel approach to its implementation. It consists of two key techniques: (1) fast number theoretic transform (FNTT). In the FNTT, linear convolution is replaced by the circular one, which speeds up the computation of the 2D discrete wavelet transform. (2) 2D overlap-save method. Directly applying the FNTT to the whole input sequence meets two difficulties: a big modulus obstructs the effective implementation of the FNTT, and a long input sequence slows the computation of the FNTT down. To overcome these deficiencies, a new technique referred to as the 2D overlap-save method is developed. Experiments have been conducted. The fast number theoretic transform and the 2D overlap-save method have been used to implement the dyadic wavelet transform and applied to contour extraction in pattern recognition.
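
    The overlap-save idea is independent of the particular transform; the sketch below implements the 1D case with NumPy's FFT standing in for the number theoretic transform (the 2D and FNTT details of the paper are omitted).

      import numpy as np

      def overlap_save(x, h, block=64):
          """1D overlap-save: linear convolution assembled from block-wise
          circular convolutions (np.fft stands in for the NTT here)."""
          m = len(h)
          step = block - (m - 1)                    # valid outputs per block
          n_out = len(x) + m - 1
          x_pad = np.concatenate([np.zeros(m - 1), x, np.zeros(block)])
          H = np.fft.fft(h, block)
          blocks = []
          for start in range(0, n_out, step):
              seg = x_pad[start:start + block]
              circ = np.fft.ifft(np.fft.fft(seg) * H).real
              blocks.append(circ[m - 1:])           # discard aliased samples
          return np.concatenate(blocks)[:n_out]

      x, h = np.random.rand(1000), np.ones(8) / 8.0
      assert np.allclose(overlap_save(x, h), np.convolve(x, h))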

  2. System and method for acquisition management of subject position information

    Energy Technology Data Exchange (ETDEWEB)

    Carrender, Curt (Morgan Hill, CA)

    2007-01-23

    A system and method for acquisition management of subject position information that utilizes radio frequency identification (RF ID) to store position information in position tags. Tag programmers receive position information from external positioning systems, such as the Global Positioning System (GPS), from manual inputs, such as keypads, or other tag programmers. The tag programmers program each position tag with the received position information. Both the tag programmers and the position tags can be portable or fixed. Implementations include portable tag programmers and fixed position tags for subject position guidance, and portable tag programmers for collection sample labeling. Other implementations include fixed tag programmers and portable position tags for subject route recordation. Position tags can contain other associated information such as destination address of an affixed subject for subject routing.

  3. System and method for acquisition management of subject position information

    Energy Technology Data Exchange (ETDEWEB)

    Carrender, Curt

    2005-12-13

    A system and method for acquisition management of subject position information that utilizes radio frequency identification (RF ID) to store position information in position tags. Tag programmers receive position information from external positioning systems, such as the Global Positioning System (GPS), from manual inputs, such as keypads, or other tag programmers. The tag programmers program each position tag with the received position information. Both the tag programmers and the position tags can be portable or fixed. Implementations include portable tag programmers and fixed position tags for subject position guidance, and portable tag programmers for collection sample labeling. Other implementations include fixed tag programmers and portable position tags for subject route recordation. Position tags can contain other associated information such as destination address of an affixed subject for subject routing.

  4. Evaluation of Database Modeling Methods for Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Thanasis Hadzilacos

    1998-11-01

    Full Text Available We present a systematic evaluation of different modeling techniques for the design of Geographic Information Systems as we experienced them through theoretical research and real-world applications. A set of exemplary problems for spatial systems on which the suitability of models can be tested is discussed. We analyse the use of a specific database design methodology including the phases of conceptual, logical and physical modeling. By employing, at each phase, representative models of classical and object-oriented approaches we assess their efficiency in spatial data handling. At the conceptual phase, we show how the Entity-Relationship, EFO and OMT models deal with the geographic needs; at the logical phase we argue why the relational model is good to serve as a basis to accommodate these requirements, but not good enough as a stand-alone solution.

  5. A secure communication method for a high-power information signal based on chaotic masking

    Institute of Scientific and Technical Information of China (English)

    李建芬; 李农

    2002-01-01

    In this paper, we present a secure communication method for a high-power information signal based on chaotic masking. In the transmitter, an adaptive controller is adopted to pick up the change of the information signal, and to inject the controller's error into the transmitting system. At the same time, the information is directly added to the chaotic signal in transmission to drive the receiving system. In the receiver, another adaptive controller is used to maintain chaotic synchronization of the transmitting and receiving systems and to recover the information signal. Since the synchronization error is independent of the information signal, the power of the information signal can be equivalent to that of the chaotic signal, and the frequency of the information signal can be set within the range of the principal frequencies of the chaotic signal. The results of theoretical analysis and numerical simulation show that the presented method not only enhances the degree of security of low-dimensional chaotic systems but also significantly improves the signal-to-noise ratio at the receiving end.
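
    For orientation only, the sketch below shows classic low-power Lorenz chaotic masking (the textbook drive-response scheme) rather than the adaptive-controller method of the paper, whose point is precisely to lift the low-power restriction; all parameters are the standard ones.

      import numpy as np

      sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 1e-3
      n = 100_000
      t = np.arange(n) * dt
      msg = 0.05 * np.sin(2 * np.pi * 2.0 * t)      # small next to the chaos

      xd = np.array([1.0, 1.0, 1.0])                # transmitter (drive) state
      xr = np.array([5.0, -5.0, 20.0])              # receiver (response) state
      recovered = np.empty(n)
      for i in range(n):                            # simple Euler integration
          tx = xd[0] + msg[i]                       # mask the message with chaos
          xd += dt * np.array([sigma * (xd[1] - xd[0]),
                               xd[0] * (rho - xd[2]) - xd[1],
                               xd[0] * xd[1] - beta * xd[2]])
          # receiver replica driven by the received signal tx
          xr += dt * np.array([sigma * (xr[1] - xr[0]),
                               tx * (rho - xr[2]) - xr[1],
                               tx * xr[1] - beta * xr[2]])
          recovered[i] = tx - xr[0]                 # approximates msg once synchronised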

  6. Current and future prospects for the application of systematic theoretical methods to the study of problems in physical oceanography

    Energy Technology Data Exchange (ETDEWEB)

    Constantin, A., E-mail: adrian.constantin@kcl.ac.uk [Department of Mathematics, King's College London, Strand, London WC2R 2LS (United Kingdom); Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna (Austria); Johnson, R.S., E-mail: r.s.johnson@ncl.ac.uk [School of Mathematics & Statistics, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)

    2016-09-07

    Highlights: • Systematic theoretical methods in studies of equatorial ocean dynamics. • Linear wave-current interactions in stratified flows. • Exact solutions – Kelvin waves, azimuthal non-uniform currents. • Three-dimensional nonlinear currents. • Hamiltonian formulation for the governing equations and for structure-preserving/enhancing approximations. - Abstract: This essay is a commentary on the pivotal role of systematic theoretical methods in physical oceanography. At some level, there will always be a conflict between theory and experiment/data collection: Which is pre-eminent? Which should come first? This issue appears to be particularly marked in physical oceanography, to the extreme detriment of the development of the subject. It is our contention that the classical theory of fluids, coupled with methods from the theory of differential equations, can play a significant role in carrying the subject, and our understanding, forward. We outline the philosophy behind a systematic theoretical approach, highlighting some aspects of equatorial ocean dynamics where these methods have already been successful, paving the way for much more in the future and leading, we expect, to the better understanding of this and many other types of ocean flow. We believe that the ideas described here promise to reveal a rich and beautiful dynamical structure.

  7. The value of value of information: best informing research design and prioritization using current methods.

    Science.gov (United States)

    Eckermann, Simon; Karnon, Jon; Willan, Andrew R

    2010-01-01

    Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of research design, which can vary from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large and further research not worthwhile or small and further research worthwhile. In contrast, each of questions 1-4 are shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize expected value of sample information (EVSI) minus expected costs across designs. In comparing complexity in use of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods such as bootstrapping of the expected value of
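
    The cheapest of these quantities to compute is EVPI with current information, which the paper argues is by itself neither necessary nor sufficient; a minimal Monte Carlo sketch with hypothetical net-benefit distributions for two strategies:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      # simulated net-benefit draws for two strategies (hypothetical)
      nb = np.column_stack([
          rng.normal(10_000, 3_000, n),     # strategy A
          rng.normal(10_500, 4_000, n),     # strategy B
      ])
      ev_current = nb.mean(axis=0).max()    # choose best on expectations
      ev_perfect = nb.max(axis=1).mean()    # choose best per realisation
      evpi = ev_perfect - ev_current
      print(f"EVPI per decision = {evpi:.0f}")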

  8. GOSim – an R-package for computation of information theoretic GO similarities between terms and gene products

    Directory of Open Access Journals (Sweden)

    Poustka Annemarie

    2007-05-01

    Full Text Available Abstract Background With the increased availability of high-throughput data, such as DNA microarray data, researchers are capable of producing large amounts of biological data. During the analysis of such data there is often a need to further explore the similarity of genes not only with respect to their expression, but also with respect to their functional annotation, which can be obtained from Gene Ontology (GO). Results We present the freely available software package GOSim, which allows one to calculate the functional similarity of genes based on various information-theoretic similarity concepts for GO terms. GOSim extends existing tools by providing additional, recently developed functional similarity measures for genes. These can e.g. be used to cluster genes according to their biological function. Vice versa, they can also be used to evaluate the homogeneity of a given grouping of genes with respect to their GO annotation. GOSim hence provides the researcher with a flexible and powerful tool to combine knowledge stored in GO with experimental data. It can be seen as complementary to other tools that, for instance, search for significantly overrepresented GO terms within a given group of genes. Conclusion GOSim is implemented as a package for the statistical computing environment R and is distributed under GPL within the CRAN project.
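
    GOSim itself is an R package; purely to illustrate the information-theoretic similarity concept it builds on, here is a toy Resnik-style computation in Python over a hypothetical four-term ontology fragment (IC(t) = -log p(t); similarity is the IC of the most informative common ancestor).

      import math

      # toy GO fragment: term -> set of parents (hypothetical)
      parents = {"t4": {"t2", "t3"}, "t3": {"t1"}, "t2": {"t1"}, "t1": set()}
      # p(t): share of gene products annotated at or below term t
      p = {"t1": 1.0, "t2": 0.5, "t3": 0.4, "t4": 0.1}

      def ancestors(t):
          out = {t}
          for q in parents[t]:
              out |= ancestors(q)
          return out

      def resnik(t1, t2):
          """IC of the most informative common ancestor."""
          common = ancestors(t1) & ancestors(t2)
          return max(-math.log(p[t]) for t in common)

      print(resnik("t4", "t3"))   # t3 is itself a shared ancestor -> -log 0.4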

  9. A quantum-information-theoretic complement to a general-relativistic implementation of a beyond-Turing computer

    CERN Document Server

    Wuthrich, Christian

    2014-01-01

    There exists a growing literature on the so-called physical Church-Turing thesis in a relativistic spacetime setting. The physical Church-Turing thesis is the conjecture that no computing device that is physically realizable (even in principle) can exceed the computational barriers of a Turing machine. By suggesting a concrete implementation of a beyond-Turing computer in a spacetime setting, István Németi and Gyula Dávid (2006) have shown how an appreciation of the physical Church-Turing thesis necessitates the confluence of mathematical, computational, physical, and indeed cosmological ideas. In this essay, I will honour István's seventieth birthday, as well as his longstanding interest in, and his seminal contributions to, this field going back to as early as 1987 by modestly proposing how the concrete implementation in Németi and Dávid (2006) might be complemented by a quantum-information-theoretic communication protocol between the computing device and the logician who sets the beyond-Turing ...

  10. Study on Method of Comprehensive Evaluating for Information System Projects

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In this paper, a new method of evaluation for information system projects is proposed on the basis of the meta-synthesis methodology from qualitative analysis to quantitative analysis. DHGF is an integrated method of improved Delphi, analytic hierarchy process, grey interconnect degree and fuzzy comprehensive evaluating. It gives full play to their advantages and controls their disadvantages. The feasibility and effectiveness of DHGF are shown in a practical example.
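
    As a sketch of just the final fuzzy comprehensive evaluation step of a DHGF-style scheme (in practice the weights would come from the Delphi and AHP stages; all numbers here are hypothetical):

      import numpy as np

      # B = W o R: criterion weights W composed with membership matrix R
      W = np.array([0.4, 0.35, 0.25])            # criterion weights (hypothetical)
      R = np.array([[0.6, 0.3, 0.1],             # memberships of each criterion
                    [0.2, 0.5, 0.3],             # in grades {good, fair, poor}
                    [0.1, 0.4, 0.5]])
      B = W @ R                                  # weighted-average operator
      print(B, B.argmax())                       # grade with highest membership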

  11. State-of-the-Art: Research Theoretical Framework of Information Systems Implementation Research in the Health Sector in Sub-Saharan Africa

    DEFF Research Database (Denmark)

    Tetteh, Godwin Kofi

    2014-01-01

    This study is about the state-of-the-art of reference theories and theoretical frameworks of information systems implementation research in the health industry in the Sub-Saharan countries from a process perspective. A process-variance framework, Poole et al. (2000), Markus & Robey (1988......, CINAHL, Science Direct and Emerald, we identified 41 published research articles that met our inclusion criteria. The articles were mapped onto the process-variance framework. A significant finding in this critical review is that the state-of-the-art of a large proportion of the studies was underpinned...... the process theoretical framework to enhance our insight into successful information systems implementation in the region. It is our optimism that the process-based theoretical framework will be useful for information system practitioners, organisational managers and researchers in the health sector...

  12. THEORETICAL MODEL AND NUMERICAL METHOD ON ONLINE IDENTIFICATION OF DYNAMICAL CHARACTERISTICS OF STRUCTURAL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    姚志远; 汪凤泉

    2004-01-01

    An online method for identifying the dynamic characteristics of a structural system using only its measured ambient response has attracted wide attention. The Ibrahim and ARMA (AutoRegressive Moving Average) methods are basic identification methods. A model of a dynamic system subjected to random ambient excitation was investigated, and a subspace decomposition method different from the traditional harmonic retrieval method was introduced. The robustness and effectiveness of this approach for identifying vibration characteristics are demonstrated in a numerical experiment.

  13. AN INFORMATION FUSION METHOD FOR SENSOR DATA RECTIFICATION

    Institute of Scientific and Technical Information of China (English)

    Zhang Zhen; Xu Lizhong; Harry Hua Li; Shi Aiye; Han Hua; Wang Huibin

    2012-01-01

    In the applications of water regime monitoring, incompleteness and inaccuracy of sensor data may directly affect the reliability of acquired monitoring information. Based on the spatial and temporal correlation of water regime monitoring information, this paper addresses this issue and proposes an information fusion method to implement data rectification. An improved Back Propagation (BP) neural network is used to perform data fusion on the hardware platform of a station unit, which takes a Field-Programmable Gate Array (FPGA) as the core component. In order to verify the effectiveness, five measurements including water level, discharge and velocity are selected from three different points in a water regime monitoring station. The simulation results show that this method can rectify random errors as well as gross errors significantly.

  14. A new learning method using prior information of neural networks

    Institute of Scientific and Technical Information of China (English)

    Lü Baiquan; Junichi Murata; Kotaro Hirasawa

    2004-01-01

    In this paper, we present a new learning method using prior information for three-layered neural networks. Usually, when neural networks are used for identification of systems, all of their weights are trained independently, without considering the inter-relation of weight values, and thus the training results are not usually good. The reason for this is that each parameter has its influence on the others during the learning. To overcome this problem, first, we give an exact mathematical equation that describes the relation between weight values given by a set of data conveying prior information. Then we present a new learning method that trains a part of the weights and calculates the others by using these exact mathematical equations. In almost all cases, this method keeps the prior information given by a mathematical structure exactly during the learning. In addition, a learning method using prior information expressed by inequality is also presented. In any case, the degree of freedom of the networks (the number of adjustable weights) is appropriately limited in order to speed up the learning and ensure small errors. Numerical computer simulation results are provided to support the present approaches.
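
    The idea of training some weights and deriving the rest from an exact prior equation can be shown on a linear toy problem (a deliberate simplification of the paper's three-layer networks): with the prior w1 + w2 = 1, only w1 is trained and w2 is computed from it, so the prior holds exactly throughout the learning.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.normal(size=(200, 2))
      y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * rng.normal(size=200)

      w1, lr = 0.0, 0.1
      for _ in range(500):
          w2 = 1.0 - w1                      # derived weight: prior kept exactly
          err = X @ np.array([w1, w2]) - y
          grad = err @ (X[:, 0] - X[:, 1]) / len(y)   # d loss/d w1 with w2 = 1 - w1
          w1 -= lr * grad
      print(w1, 1.0 - w1)                    # recovers ~0.7 and ~0.3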

  15. Transfer mutual information: A new method for measuring information transfer to the interactions of time series

    Science.gov (United States)

    Zhao, Xiaojun; Shang, Pengjian; Lin, Aijing

    2017-02-01

    In this paper, we propose a new method to measure the influence of a third variable on the interactions of two variables. The method, called transfer mutual information (TMI), is defined by the difference between the mutual information and the partial mutual information. It is established on the assumption that if the presence or absence of one variable does make a change to the interactions of another two variables, then quantifying this change is supposed to be the influence of this variable on those two variables. Moreover, a normalized TMI and other derivatives of the TMI are introduced as well. An empirical analysis, including simulations as well as real-world applications, is carried out to examine this measure and to reveal more information among variables.
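
    From the definitions, TMI(Z on X,Y) = I(X;Y) - I(X;Y|Z); a plug-in estimator over discretised variables can be written directly from joint entropies (a sketch, without the bias corrections a careful estimate would need):

      import numpy as np

      def entropy(*vars_, bins=8):
          """Joint Shannon entropy of discretised variables (plug-in)."""
          h, _ = np.histogramdd(np.column_stack(vars_), bins=bins)
          pr = h.ravel() / h.sum()
          pr = pr[pr > 0]
          return -np.sum(pr * np.log(pr))

      def mutual_info(x, y, bins=8):
          return entropy(x, bins=bins) + entropy(y, bins=bins) - entropy(x, y, bins=bins)

      def partial_mi(x, y, z, bins=8):
          # I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)
          return (entropy(x, z, bins=bins) + entropy(y, z, bins=bins)
                  - entropy(z, bins=bins) - entropy(x, y, z, bins=bins))

      def transfer_mutual_info(x, y, z, bins=8):
          """TMI of z on the (x, y) interaction: MI minus partial MI."""
          return mutual_info(x, y, bins) - partial_mi(x, y, z, bins)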

  16. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks.

    Science.gov (United States)

    Lee, JongHyup; Pak, Dohyun

    2016-08-29

    For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost differs between cellular systems, we devise two game models for TDMA/FDMA and CDMA systems employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined on the assumptions of the ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections.

  17. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    JongHyup Lee

    2016-08-01

    Full Text Available For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost differs between cellular systems, we devise two game models for TDMA/FDMA and CDMA systems employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined on the assumptions of the ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections.
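
    A minimal sketch of such an assignment game, with hypothetical energy costs and a linear load-dependent power price; best-response dynamics converge here because the game has congestion-game structure (this is an illustration, not the paper's TDMA/FDMA or CDMA model):

      import numpy as np

      rng = np.random.default_rng(2)
      n_heads, n_bs = 12, 3
      energy = rng.uniform(1.0, 5.0, size=(n_heads, n_bs))  # tx cost, hypothetical
      price = 0.5                                           # per-unit-load power price
      assign = rng.integers(0, n_bs, size=n_heads)

      def cost(h, b):
          others = np.sum(assign == b) - (assign[h] == b)   # load on b excluding h
          return energy[h, b] + price * (others + 1)        # price after h joins b

      changed = True
      while changed:                                        # best-response dynamics
          changed = False
          for h in range(n_heads):
              best = min(range(n_bs), key=lambda b: cost(h, b))
              if cost(h, best) + 1e-12 < cost(h, assign[h]):
                  assign[h] = best                          # strictly improving move
                  changed = True
      print(assign, [int(np.sum(assign == b)) for b in range(n_bs)])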

  18. An information-theoretic classification of amino acids for the assessment of interfaces in protein-protein docking.

    Science.gov (United States)

    Jardin, Christophe; Stefani, Arno G; Eberhardt, Martin; Huber, Johannes B; Sticht, Heinrich

    2013-09-01

    Docking represents a versatile and powerful method to predict the geometry of protein-protein complexes. However, despite significant methodical advances, the identification of good docking solutions among a large number of false solutions still remains a difficult task. We have previously demonstrated that the formalism of mutual information (MI) from information theory can be adapted to protein docking, and we have now extended this approach to enhance its robustness and applicability. A large dataset consisting of 22,934 docking decoys derived from 203 different protein-protein complexes was used for an MI-based optimization of reduced amino acid alphabets representing the protein-protein interfaces. This optimization relied on a clustering analysis that allows one to estimate the mutual information of whole amino acid alphabets by considering all structural features simultaneously, rather than by treating them individually. This clustering approach is fast and can be applied in a similar fashion to the generation of reduced alphabets for other biological problems like fold recognition, sequence data mining, or secondary structure prediction. The reduced alphabets derived from the present work were converted into a scoring function for the evaluation of docking solutions, which is available for public use via the web service score-MI: http://score-MI.biochem.uni-erlangen.de.

  19. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral responding characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another algorithm of decision-tree classification based on both spectral characteristics and auxiliary information of DEM and slope (DTDS) was also designed for water body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy was variable with respect to the various techniques applied. It was low using ISODATA, very high using the DT algorithm and much higher using both DTDS and MLC.
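
    A decision-tree classifier of this kind reduces to a cascade of band ratios and thresholds; the sketch below follows the spirit of the DT/DTDS schemes with NumPy band arrays, a water-index rule and a DEM-derived slope rule (band choice and all thresholds invented for illustration, not the paper's rules).

      import numpy as np

      def classify_water(green, nir, swir, slope):
          """Toy cascade of spectral rules plus an auxiliary slope rule."""
          ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
          water = ndwi > 0.2                  # water reflects little NIR
          water &= swir < 0.1                 # suppress wet soil and shadow
          water &= slope < 5.0                # water surfaces are flat (DTDS idea)
          return water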

  20. Fast Registration Method for Point Clouds Using the Image Information

    Directory of Open Access Journals (Sweden)

    WANG Ruiyan

    2016-01-01

    Full Text Available Existing laser scanners usually carry a coaxial camera that can capture images of the scanning site. For laser scanners with a coaxial camera, we propose a fast registration method using the image information. Unlike traditional registration methods that compute the rotation and translation simultaneously, our method calculates them individually. The rotation transformation between the point clouds is obtained from knowledge of vision geometry and the image information, while their translation is acquired by our improved ICP algorithm. In the improved ICP algorithm, only the translation vector is updated iteratively; its input is the point clouds with the rotation transformation removed. Experimental results show that the rotation matrix obtained from the images has a high accuracy. In addition, compared with the traditional ICP algorithm, our algorithm converges faster and more reliably reaches the global optimum.
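
    The key step is that, with the rotation already fixed by the image geometry, each ICP iteration has a closed-form translation update; a minimal sketch with SciPy's k-d tree (the image-based estimation of R is assumed to be done elsewhere):

      import numpy as np
      from scipy.spatial import cKDTree

      def translation_icp(src, dst, R, iters=50, tol=1e-8):
          """Translation-only ICP: R comes from the image geometry, so each
          iteration only re-estimates the translation t (sketch)."""
          src_r = src @ R.T                     # remove the rotation first
          t = np.zeros(3)
          tree = cKDTree(dst)
          for _ in range(iters):
              _, idx = tree.query(src_r + t)    # nearest-neighbour matches
              t_new = (dst[idx] - src_r).mean(axis=0)   # closed-form update
              if np.linalg.norm(t_new - t) < tol:
                  break
              t = t_new
          return t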