WorldWideScience

Sample records for information theoretical methods

  1. Information-theoretic methods for estimating complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing up various disciplines frequently produces something profound and far-reaching. Cybernetics is one often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur…

  2. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluating an algorithm's adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it measures physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used to assess the whole imaging system (sensor plus post-processing).
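    The record does not spell out the metric itself, but the idea of scoring a filter by how much scene information it restores can be sketched with a histogram estimate of mutual information. Everything below (the `denoise` box filter, its `strength` parameter, the synthetic scene) is a hypothetical illustration, not the authors' pipeline:

```python
# A minimal sketch: tune a one-parameter denoising filter by maximizing a
# histogram estimate of the mutual information between a clean reference
# image and the filtered image ("information restoration" as a cost function).
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based MI estimate (in bits) between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

def denoise(noisy, strength):
    """Hypothetical one-parameter filter: a simple box blur of width 2*strength+1."""
    k = 2 * int(strength) + 1
    pad = np.pad(noisy, k // 2, mode="edge")
    out = np.zeros_like(noisy)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + noisy.shape[0], dx:dx + noisy.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
clean = np.kron(rng.random((16, 16)), np.ones((8, 8)))   # piecewise-flat scene
noisy = clean + rng.normal(0, 0.2, clean.shape)
# Pick the filter strength that restores the most information about the scene.
best = max(range(1, 6), key=lambda s: mutual_information(clean, denoise(noisy, s)))
print("best strength:", best)
```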

  3. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID, uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
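    As a rough illustration of the MID objective described above, the following sketch estimates the empirical single-spike information of a candidate filter from histograms of the raw and spike-triggered stimulus projections, on synthetic LNP data. The binning and the exponential nonlinearity are assumptions of this example, not details taken from the paper:

```python
# A minimal sketch of the MID objective: the empirical "single-spike
# information" of a candidate filter w, estimated by comparing the
# spike-triggered and raw distributions of the projected stimulus.
# Per the record, maximizing this over w is equivalent to maximum-likelihood
# fitting of a linear-nonlinear-Poisson (LNP) model.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20000, 10))              # stimuli (time bins x dimensions)
w_true = rng.normal(size=10)
rate = np.exp(X @ w_true - 2.0)               # LNP with exponential nonlinearity
spikes = rng.poisson(rate)                    # spike counts per bin

def single_spike_info(w, X, spikes, bins=25):
    z = X @ w / np.linalg.norm(w)             # 1-D projection of each stimulus
    edges = np.histogram_bin_edges(z, bins=bins)
    p_raw, _ = np.histogram(z, bins=edges)
    p_spk, _ = np.histogram(np.repeat(z, spikes), bins=edges)
    p_raw = p_raw / p_raw.sum()
    p_spk = p_spk / p_spk.sum()
    nz = (p_spk > 0) & (p_raw > 0)
    return float(np.sum(p_spk[nz] * np.log2(p_spk[nz] / p_raw[nz])))  # bits/spike

print("info, true filter:  ", single_spike_info(w_true, X, spikes))
print("info, random filter:", single_spike_info(rng.normal(size=10), X, spikes))
```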

  4. Information theoretic preattentive saliency

    DEFF Research Database (Denmark)

    Loog, Marco

    2011-01-01

    Employing an information theoretic operational definition of bottom-up attention from the field of computational visual perception, a very general expression for saliency is provided. As opposed to many of the current approaches to determining a saliency map, there is no need for an explicit data … of which features, image information is described. We illustrate our result by determining a few specific saliency maps based on particular choices of features. One of them makes the link with the mapping underlying the well-known Harris interest points, which is a result recently obtained in isolation…

  5. STRUCTURAL AND METHODICAL MODEL OF INCREASING THE LEVEL OF THEORETICAL TRAINING OF CADETS USING INFORMATION AND COMMUNICATION TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Vladislav V. Bulgakov

    2018-03-01

    Full Text Available Training in the higher educational institutions of the EMERCOM of Russia system requires new educational techniques and technical means aimed at intensifying the educational process, enabling cadets to prepare independently at any time and improving the quality of their theoretical knowledge. The authors have developed a structural and methodical model for increasing the level of theoretical training of cadets using information and communication technologies. The proposed model, which includes elements that stimulate and enhance cognitive activity, makes it possible to shape the trajectory of the cadets' theoretical training over the entire period of study at the university and to organize systematic independent work as well as objective current and final control of theoretical knowledge. The model consists of three main elements: a base of theoretical questions and the functional modules "teacher" and "cadet". The base of theoretical questions, developed for all disciplines of specialty 20.05.01 (fire safety), is the foundation of the model. The "teacher" module allows instructors to create theoretical questions of various kinds, to edit them and delete them from the database if necessary, and to create tests and monitor their completion. The "cadet" module provides ample opportunities for theoretical training through independent work, testing for current and final control, a game-based duel mode, and the presentation of the cadets' results as statistics and rankings. The model has been implemented in practice as a multi-level automated system.

  6. Comparison of information-theoretic to statistical methods for gene-gene interactions in the presence of genetic heterogeneity

    Directory of Open Access Journals (Sweden)

    Sucheston Lara

    2010-09-01

    Full Text Available Abstract Background Multifactorial diseases such as cancer and cardiovascular diseases are caused by the complex interplay between genes and environment. The detection of these interactions remains challenging due to computational limitations. Information theoretic approaches use computationally efficient directed search strategies and thus provide a feasible solution to this problem. However, the power of information theoretic methods for interaction analysis has not been systematically evaluated. In this work, we compare power and Type I error of an information-theoretic approach to existing interaction analysis methods. Methods The k-way interaction information (KWII) metric for identifying variable combinations involved in gene-gene interactions (GGI) was assessed using several simulated data sets under models of genetic heterogeneity driven by susceptibility-increasing loci with varying allele frequency, penetrance values and heritability. The power and proportion of false positives of the KWII were compared to multifactor dimensionality reduction (MDR), the restricted partitioning method (RPM) and logistic regression. Results The power of the KWII was considerably greater than that of MDR on all six simulation models examined. For a given disease prevalence at high values of heritability, the power of both RPM and KWII was greater than 95%. For models with low heritability and/or genetic heterogeneity, the power of the KWII was consistently greater than that of RPM; the improvements in power for the KWII over RPM ranged from 4.7% to 14.2% for α = 0.001 in the three models at the lowest heritability values examined. KWII performed similarly to logistic regression. Conclusions Information theoretic models are flexible and have excellent power to detect GGI under a variety of conditions that characterize complex diseases.
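    For readers unfamiliar with the KWII metric, a minimal sketch follows. It uses one common subset-entropy sign convention for interaction information, which may differ from the paper's exact definition, and the two-SNP epistasis model is purely synthetic:

```python
# A minimal sketch of the k-way interaction information (KWII) for discrete
# variables, computed from joint entropies over all non-empty subsets.
import numpy as np
from itertools import combinations

def entropy(*cols):
    """Joint Shannon entropy (bits) of one or more discrete columns."""
    counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)[1]
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def kwii(variables):
    """KWII(X1,...,Xk) = -sum over non-empty subsets T of (-1)^(k-|T|) H(T)."""
    k = len(variables)
    total = 0.0
    for r in range(1, k + 1):
        for subset in combinations(variables, r):
            total -= (-1) ** (k - r) * entropy(*subset)
    return total

rng = np.random.default_rng(2)
g1 = rng.integers(0, 3, 5000)                 # SNP 1 (genotypes 0/1/2)
g2 = rng.integers(0, 3, 5000)                 # SNP 2
noise = (rng.random(5000) < 0.05).astype(int)
pheno = ((g1 + g2) % 2) ^ noise               # largely epistatic phenotype
print("KWII(SNP1, SNP2, phenotype):", round(kwii([g1, g2, pheno]), 4))
```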

  7. Robust recognition via information theoretic learning

    CERN Document Server

    He, Ran; Yuan, Xiaotong; Wang, Liang

    2014-01-01

    This Springer Brief presents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip…

  8. Information theoretical methods as discerning quantifiers of the equations of state of neutron stars

    Energy Technology Data Exchange (ETDEWEB)

    Avellar, M.G.B. de, E-mail: mgb.avellar@iag.usp.br [Instituto de Astronomia, Geofísica e Ciências Atmosféricas – Universidade de São Paulo, Rua do Matão 1226, Cidade Universitária, 05508-090, São Paulo, SP (Brazil); Souza, R.A. de, E-mail: rodrigo.souza@usp.br [Instituto de Astronomia, Geofísica e Ciências Atmosféricas – Universidade de São Paulo, Rua do Matão 1226, Cidade Universitária, 05508-090, São Paulo, SP (Brazil); Horvath, J.E., E-mail: foton@iag.usp.br [Instituto de Astronomia, Geofísica e Ciências Atmosféricas – Universidade de São Paulo, Rua do Matão 1226, Cidade Universitária, 05508-090, São Paulo, SP (Brazil); Paret, D.M., E-mail: dmanreza@fisica.uh.cu [Facultad de Física, Universidad de la Habana, San Lázaro y L, Vedado La Habana, 10400 (Cuba)

    2014-11-07

    In this work we use the statistical measures of information entropy, disequilibrium and complexity to discriminate between different approaches and parametrizations for different equations of state for quark stars. We confirm the usefulness of such quantities to quantify the role of interactions in such stars. We find that within this approach, a quark matter equation of state such as SU(2) NJL with vectorial coupling and phase transition is slightly favoured and deserves deeper studies.
    Highlights:
    • We used information theory tools to discern different compositions for compact stars.
    • Hadronic and quark stars analogues behave differently when analyzed with these tools.
    • The effects of different equations of state are singled out in this work.
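    A minimal sketch of these three quantifiers, in the spirit of the López-Ruiz-Mancini-Calbet (LMC) construction, follows; the record does not specify its exact normalization, and the mapping onto equations of state is not reproduced here:

```python
# Information entropy H, disequilibrium D (distance from the uniform
# distribution) and statistical complexity C = H * D for a discretized density.
import numpy as np

def lmc_measures(p):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    n = p.size
    nz = p > 0
    H = -np.sum(p[nz] * np.log(p[nz])) / np.log(n)   # normalized entropy in [0, 1]
    D = np.sum((p - 1.0 / n) ** 2)                   # disequilibrium
    return H, D, H * D                               # complexity C = H * D

for name, p in [("uniform", np.ones(64)),            # H max, D = 0 -> C = 0
                ("peaked", np.eye(1, 64, 5).ravel()),# H ~ 0 -> C ~ 0
                ("intermediate", np.exp(-0.1 * np.arange(64)))]:
    H, D, C = lmc_measures(p)
    print(f"{name:12s} H={H:.3f} D={D:.3f} C={C:.4f}")
```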

  9. Information theoretic description of networks

    Science.gov (United States)

    Wilhelm, Thomas; Hollunder, Jens

    2007-11-01

    We present a new information theoretic approach for network characterizations. It is developed to describe the general type of networks with n nodes and L directed and weighted links, i.e., it also works for the simpler undirected and unweighted networks. The new information theoretic measures for network characterizations are based on a transmitter-receiver analogy of effluxes and influxes. Based on these measures, we classify networks as either complex or non-complex and as either democracy or dictatorship networks. Directed networks, in particular, are furthermore classified as either information spreading or information collecting networks. The complexity classification is based on the information theoretic network complexity measure medium articulation (MA). It is proven that special networks with a medium number of links (L ∼ n^1.5) show the theoretical maximum complexity MA = (log n)^2/2. A network is complex if its MA is larger than the average MA of appropriately randomized networks: MA > MA_r. A network is of the democracy type if its redundancy R is larger than that of appropriately randomized networks; otherwise it is a dictatorship network. In democracy networks all nodes are, on average, of similar importance, whereas in dictatorship networks some nodes play distinguished roles in network functioning. In other words, democracy networks are characterized by cycling of information (or mass, or energy), while in dictatorship networks there is a straight through-flow from sources to sinks. The classification of directed networks into information spreading and information collecting networks is based on the conditional entropies of the considered networks (H(A|B) = uncertainty of the sender node if the receiver node is known; H(B|A) = uncertainty of the receiver node if the sender node is known): if H(A|B) > H(B|A), it is an information collecting network, otherwise an information spreading network. Finally, different real networks (directed and undirected, weighted and unweighted) are classified according to our general scheme.
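    The spreading/collecting classification quoted above is simple enough to sketch directly. The code below assumes link weights can be read as a joint sender-receiver distribution, as in the transmitter-receiver analogy, and omits the medium-articulation complexity measure:

```python
# Classify a directed weighted network as information collecting
# (H(A|B) > H(B|A)) or information spreading (otherwise).
import numpy as np

def classify(W):
    p = W / W.sum()                      # joint distribution p(sender, receiver)
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    H_joint = -np.sum(p[nz] * np.log2(p[nz]))
    H_A = -np.sum(pa[pa > 0] * np.log2(pa[pa > 0]))
    H_B = -np.sum(pb[pb > 0] * np.log2(pb[pb > 0]))
    H_A_given_B = H_joint - H_B          # uncertainty of sender given receiver
    H_B_given_A = H_joint - H_A          # uncertainty of receiver given sender
    return "collecting" if H_A_given_B > H_B_given_A else "spreading"

hub_out = np.zeros((4, 4))
hub_out[0, 1:] = 1.0                     # one source feeding three sinks
print(classify(hub_out), classify(hub_out.T))   # spreading, collecting
```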

  10. Unorthodox theoretical methods

    Energy Technology Data Exchange (ETDEWEB)

    Nedd, Sean [Iowa State Univ., Ames, IA (United States)

    2012-01-01

    The use of the ReaxFF force field to correlate with NMR mobilities of amine catalytic substituents on a mesoporous silica nanosphere surface is considered. The interfacing of the ReaxFF force field within the Surface Integrated Molecular Orbital/Molecular Mechanics (SIMOMM) method, in order to replicate earlier published SIMOMM data and to compare with the ReaxFF data, is discussed. The development of a new correlation consistent Composite Approach (ccCA) is presented, which incorporates the completely renormalized coupled cluster method with singles, doubles and non-iterative triples corrections towards the determination of heats of formation and reaction pathways which contain biradical species.

  11. Inform: Efficient Information-Theoretic Analysis of Collective Behaviors

    Directory of Open Access Journals (Sweden)

    Douglas G. Moore

    2018-06-01

    Full Text Available The study of collective behavior has traditionally relied on a variety of different methodological tools ranging from more theoretical methods such as population or game-theoretic models to empirical ones like Monte Carlo or multi-agent simulations. An approach that is increasingly being explored is the use of information theory as a methodological framework to study the flow of information and the statistical properties of collectives of interacting agents. While a few general purpose toolkits exist, most of the existing software for information theoretic analysis of collective systems is limited in scope. We introduce Inform, an open-source framework for efficient information theoretic analysis that exploits the computational power of a C library while simplifying its use through a variety of wrappers for common higher-level scripting languages. We focus on two such wrappers here: PyInform (Python) and rinform (R). Inform and its wrappers are cross-platform and general-purpose. They include classical information-theoretic measures, measures of information dynamics and information-based methods to study the statistical behavior of collective systems, and expose a lower-level API that allows users to construct measures of their own. We describe the architecture of the Inform framework, study its computational efficiency and use it to analyze three different case studies of collective behavior: biochemical information storage in regenerating planaria, nest-site selection in the ant Temnothorax rugatulus, and collective decision making in multi-agent simulations.
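    To illustrate the kind of measure Inform exposes, here is a self-contained transfer-entropy estimate with history length k = 1, computed by plain counting. It deliberately avoids reproducing Inform's or PyInform's actual API:

```python
# Transfer entropy T(source -> target) with history k = 1, i.e.
# I(tgt_{t+1}; src_t | tgt_t), estimated by counting over discrete series.
import numpy as np
from collections import Counter

def transfer_entropy(src, tgt):
    triples = Counter(zip(tgt[1:], tgt[:-1], src[:-1]))
    pairs_ts = Counter(zip(tgt[:-1], src[:-1]))
    pairs_tt = Counter(zip(tgt[1:], tgt[:-1]))
    singles = Counter(tgt[:-1])
    n = len(tgt) - 1
    te = 0.0
    for (t1, t0, s0), c in triples.items():
        # p(t1|t0,s0) / p(t1|t0) expressed with raw counts
        te += (c / n) * np.log2((c * singles[t0]) /
                                (pairs_tt[(t1, t0)] * pairs_ts[(t0, s0)]))
    return te

rng = np.random.default_rng(3)
src = rng.integers(0, 2, 10000)
noise = (rng.random(9999) < 0.1).astype(int)
tgt = np.empty_like(src)
tgt[0] = 0
tgt[1:] = src[:-1] ^ noise               # target copies source with 10% flips
print("T(src->tgt) =", transfer_entropy(src.tolist(), tgt.tolist()))  # ~0.53 bits
print("T(tgt->src) =", transfer_entropy(tgt.tolist(), src.tolist()))  # ~0 bits
```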

  12. Theoretical information reuse and integration

    CERN Document Server

    Rubin, Stuart

    2016-01-01

    Information Reuse and Integration addresses the efficient extension and creation of knowledge through the exploitation of Kolmogorov complexity in the extraction and application of domain symmetry. Knowledge that seems to be novel can more often than not be recast as the image of a sequence of transformations that yield symmetric knowledge. When the size of those transformations and/or the length of that sequence of transforms exceeds the size of the image, that image is said to be novel or random. It may also be that the new knowledge is random in that no sequence of transforms producing it exists, or at least none is known. The nine chapters comprising this volume incorporate symmetry, reuse, and integration as overt operational procedures or as operations built into the formal representations of data and operators employed. Either way, the aforementioned theoretical underpinnings of information reuse and integration are supported.

  13. Hash functions and information theoretic security

    DEFF Research Database (Denmark)

    Bagheri, Nasoor; Knudsen, Lars Ramkilde; Naderi, Majid

    2009-01-01

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic…

  14. Toward a Theoretical Framework for Information Science

    Directory of Open Access Journals (Sweden)

    Amanda Spink

    2000-01-01

    Full Text Available Information Science is beginning to develop a theoretical framework for the modeling of users' interactions with information retrieval (IR) technologies within the more holistic context of human information behavior (Spink, 1998b). This paper addresses the following questions: (1) What is the nature of Information Science? and (2) What theoretical framework and model is most appropriate for Information Science? This paper proposes a theoretical framework for Information Science based on an explication of the processes of human information coordinating behavior and information feedback that facilitate the relationship between human information behavior and human interaction with information retrieval (IR) technologies (Web, digital libraries, etc.).

  15. Qualitative methods in theoretical physics

    CERN Document Server

    Maslov, Dmitrii

    2018-01-01

    This book comprises a set of tools which allow researchers and students to arrive at a qualitatively correct answer without undertaking lengthy calculations. In general, Qualitative Methods in Theoretical Physics is about combining approximate mathematical methods with fundamental principles of physics: conservation laws and symmetries. Readers will learn how to simplify problems, how to estimate results, and how to apply symmetry arguments and conduct dimensional analysis. A comprehensive problem set is included. The book will appeal to a wide range of students and researchers.

  16. Information-theoretic metamodel of organizational evolution

    Science.gov (United States)

    Sepulveda, Alfredo

    2011-12-01

    Social organizations are abstractly modeled by holarchies---self-similar connected networks---and intelligent complex adaptive multiagent systems---large networks of autonomous reasoning agents interacting via scaled processes. However, little is known of how information shapes evolution in such organizations, a gap that can lead to misleading analytics. The research problem addressed in this study was the ineffective manner in which classical model-predict-control methods used in business analytics attempt to define organization evolution. The purpose of the study was to construct an effective metamodel for organization evolution based on a proposed complex adaptive structure---the info-holarchy. Theoretical foundations of this study were holarchies, complex adaptive systems, evolutionary theory, and quantum mechanics, among other recently developed physical and information theories. Research questions addressed how information evolution patterns gleaned from the study's inductive metamodel more aptly explained volatility in organizations. In this study, a hybrid grounded theory based on abstract inductive extensions of information theories was utilized as the research methodology. An overarching heuristic metamodel was framed from the theoretical analysis of the properties of these extension theories and applied to business, neural, and computational entities. This metamodel resulted in the synthesis of a metaphor for, and generalization of, organization evolution, serving as the recommended and appropriate analytical tool to view business dynamics for future applications. This study may manifest positive social change through a fundamental understanding of complexity in business from general information theories, resulting in more effective management.

  17. Strongly Correlated Systems Theoretical Methods

    CERN Document Server

    Avella, Adolfo

    2012-01-01

    The volume presents, for the very first time, an exhaustive collection of modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its inventor or by one of its world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique proved to be very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, comprehensive source or wish to get acquainted, in as painless as po…

  18. System identification with information theoretic criteria

    NARCIS (Netherlands)

    A.A. Stoorvogel; J.H. van Schuppen (Jan)

    1995-01-01

    Attention is focused in this paper on the approximation problem of system identification with information theoretic criteria. For a class of problems it is shown that the criterion of mutual information rate is identical to the criterion of exponential-of-quadratic cost and to…

  19. Data, Methods, and Theoretical Implications

    Science.gov (United States)

    Hannagan, Rebecca J.; Schneider, Monica C.; Greenlee, Jill S.

    2012-01-01

    Within the subfields of political psychology and the study of gender, the introduction of new data collection efforts, methodologies, and theoretical approaches are transforming our understandings of these two fields and the places at which they intersect. In this article we present an overview of the research that was presented at a National…

  20. Information theoretic quantification of diagnostic uncertainty.

    Science.gov (United States)

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
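    The essay's core quantities are easy to sketch: Bayes' rule gives the post-test probability, and the binary entropy of the disease probability quantifies diagnostic uncertainty in bits before and after the test. The numbers below are illustrative, not taken from the paper:

```python
# Diagnostic uncertainty in bits: entropy of the disease probability before
# and after a test result, with the post-test probability from Bayes' rule.
import numpy as np

def H(p):
    """Binary entropy (bits) of a probability p."""
    return -sum(x * np.log2(x) for x in (p, 1 - p) if x > 0)

def post_test_probability(pre, sens, spec, positive):
    if positive:
        return sens * pre / (sens * pre + (1 - spec) * (1 - pre))
    return (1 - sens) * pre / ((1 - sens) * pre + spec * (1 - pre))

pre, sens, spec = 0.30, 0.90, 0.80       # illustrative values only
for positive in (True, False):
    post = post_test_probability(pre, sens, spec, positive)
    print(f"result={'+' if positive else '-'}  post={post:.3f}  "
          f"uncertainty: {H(pre):.3f} -> {H(post):.3f} bits")
```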

  1. Information-Theoretic Perspectives on Geophysical Models

    Science.gov (United States)

    Nearing, Grey

    2016-04-01

    …practice of science (except by Gong et al., 2013, whose fundamental insight is the basis for this talk), and here I offer two examples of practical methods that scientists might use to approximately measure ontological information. I place this practical discussion in the context of several recent and high-profile experiments that have found that simple out-of-sample statistical models typically (vastly) outperform our most sophisticated terrestrial hydrology models. I offer some perspective on several open questions about how to use these findings to improve our models and understanding of these systems. Cartwright, N. (1983) How the Laws of Physics Lie. New York, NY: Cambridge Univ Press. Clark, M. P., Kavetski, D. and Fenicia, F. (2011) 'Pursuing the method of multiple working hypotheses for hydrological modeling', Water Resources Research, 47(9). Cover, T. M. and Thomas, J. A. (1991) Elements of Information Theory. New York, NY: Wiley-Interscience. Cox, R. T. (1946) 'Probability, frequency and reasonable expectation', American Journal of Physics, 14, pp. 1-13. Csiszár, I. (1972) 'A Class of Measures of Informativity of Observation Channels', Periodica Mathematica Hungarica, 2(1), pp. 191-213. Davies, P. C. W. (1990) 'Why is the physical world so comprehensible', Complexity, entropy and the physics of information, pp. 61-70. Gong, W., Gupta, H. V., Yang, D., Sricharan, K. and Hero, A. O. (2013) 'Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach', Water Resources Research, 49(4), pp. 2253-2273. Jaynes, E. T. (2003) Probability Theory: The Logic of Science. New York, NY: Cambridge University Press. Nearing, G. S. and Gupta, H. V. (2015) 'The quantity and quality of information in hydrologic models', Water Resources Research, 51(1), pp. 524-538. Popper, K. R. (2002) The Logic of Scientific Discovery. New York: Routledge. Van Horn, K. S. (2003) 'Constructing a logic of plausible inference: a guide to Cox's theorem…

  2. Information Theoretic-Learning Auto-Encoder

    OpenAIRE

    Santana, Eder; Emigh, Matthew; Principe, Jose C

    2016-01-01

    We propose Information Theoretic-Learning (ITL) divergence measures for variational regularization of neural networks. We also explore ITL-regularized autoencoders as an alternative to variational autoencoding Bayes, adversarial autoencoders and generative adversarial networks for randomly generating sample data without explicitly defining a partition function. This paper also formalizes generative moment matching networks under the ITL framework.

  3. Systems information management: graph theoretical approach

    NARCIS (Netherlands)

    Temel, T.

    2006-01-01

    This study proposes a new method for characterising the underlying information structure of a multi-sector system. A complete characterisation is accomplished by identifying information gaps and cause-effect information pathways in the system, and formulating critical testable hypotheses.

  4. Information Ergonomics A theoretical approach and practical experience in transportation

    CERN Document Server

    Sandl, Peter

    2012-01-01

    The variety and increasing availability of hypermedia information systems, which are used in stationary applications like operators' consoles as well as in mobile systems, e.g. driver information and navigation systems in automobiles, form a foundation for the mediatization of society. From the human engineering point of view, this development and the ensuing increased importance of information systems for economic and private needs require careful deliberation on the derivation and application of ergonomics methods, particularly in the field of information systems. This book consists of two closely intertwined parts. The first, theoretical part defines the concept of an information system, followed by an explanation of action regulation as well as cognitive theories to describe man-information system interaction. A comprehensive description of information ergonomics concludes the theoretical approach. In the second, practically oriented part of this book, authors from industry as well as from academic institu…

  5. Information-theoretic lengths of Jacobi polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, A; Dehesa, J S [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, Granada (Spain); Sanchez-Moreno, P, E-mail: agmartinez@ugr.e, E-mail: pablos@ugr.e, E-mail: dehesa@ugr.e [Instituto ' Carlos I' de Fisica Teorica y Computacional, Universidad de Granada, Granada (Spain)

    2010-07-30

    The information-theoretic lengths of the Jacobi polynomials P_n^(α,β)(x), which are information-theoretic measures (Rényi, Shannon and Fisher) of their associated Rakhmanov probability density, are investigated. They quantify the spreading of the polynomials along the orthogonality interval [-1, 1] in a complementary but different way to the root-mean-square or standard deviation because, contrary to that measure, they do not refer to any specific point of the interval. The explicit expressions of the Fisher length are given. The Rényi lengths are found by the use of the combinatorial multivariable Bell polynomials in terms of the polynomial degree n and the parameters (α, β). The Shannon length, which cannot be exactly calculated because of its logarithmic functional form, is bounded from below by using sharp upper bounds to general densities on [-1, +1] given in terms of various expectation values; moreover, its asymptotics is also pointed out. Finally, several computational issues relative to these three quantities are carefully analyzed.
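    As a numerical illustration of these quantities (not the paper's closed-form results), one can build the Rakhmanov density of a Jacobi polynomial on a grid and evaluate a Shannon length exp(S[ρ]) and a Fisher length; the parameter choice α = β = 2 below is arbitrary:

```python
# Rakhmanov density rho(x) ∝ (1-x)^a (1+x)^b * P_n^{(a,b)}(x)^2 on [-1, 1],
# with Shannon length exp(S[rho]) and Fisher length 1/sqrt(Fisher information).
import numpy as np
from scipy.special import eval_jacobi

def rakhmanov_lengths(n, a, b, grid=20001):
    x = np.linspace(-1 + 1e-6, 1 - 1e-6, grid)
    dx = x[1] - x[0]
    rho = (1 - x) ** a * (1 + x) ** b * eval_jacobi(n, a, b, x) ** 2
    rho /= np.sum(rho) * dx                          # normalize numerically
    shannon_length = np.exp(-np.sum(rho * np.log(rho + 1e-300)) * dx)
    drho = np.gradient(rho, dx)
    fisher_info = np.sum(drho ** 2 / (rho + 1e-300)) * dx
    return shannon_length, 1 / np.sqrt(fisher_info)

for n in (0, 1, 2, 4):
    N, F = rakhmanov_lengths(n, a=2.0, b=2.0)
    print(f"n={n}  Shannon length={N:.3f}  Fisher length={F:.4f}")
```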

  6. Information-Theoretic Inference of Large Transcriptional Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Patrick E. Meyer

    2007-06-01

    Full Text Available The paper presents MRNET, an original method for inferring genetic networks from microarray data. The method is based on maximum relevance/minimum redundancy (MRMR), an effective information-theoretic technique for feature selection in supervised learning. The MRMR principle consists in selecting, among the least redundant variables, the ones that have the highest mutual information with the target. MRNET extends this feature selection principle to networks in order to infer gene-dependence relationships from microarray data. The paper assesses MRNET by benchmarking it against RELNET, CLR, and ARACNE, three state-of-the-art information-theoretic methods for large (up to several thousands of genes) network inference. Experimental results on thirty synthetically generated microarray datasets show that MRNET is competitive with these methods.
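    The MRMR step that MRNET builds on can be sketched as a greedy relevance-minus-redundancy selection over discrete (binned) expression data. The toy "genes" below are synthetic, and MRNET's full network-inference loop is not reproduced:

```python
# Greedy MRMR: pick the candidate maximizing mutual information with the
# target minus its mean mutual information with already selected variables.
import numpy as np

def entropy(*cols):
    counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)[1]
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mi(x, y):
    return entropy(x) + entropy(y) - entropy(x, y)

def mrmr(target, candidates, n_select):
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < n_select:
        def score(j):
            relevance = mi(candidates[j], target)
            redundancy = (np.mean([mi(candidates[j], candidates[i])
                                   for i in selected]) if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(4)
genes = [rng.integers(0, 2, 2000) for _ in range(6)]
genes[1] = genes[0].copy()                    # gene 1 duplicates gene 0
noise = (rng.random(2000) < 0.1).astype(int)
target = (genes[0] | genes[3]) ^ noise        # target driven by genes 0 and 3
print("selected genes:", mrmr(target, genes, 2))   # expect [0, 3]; 1 is redundant
```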

  7. Information-Theoretic Inference of Common Ancestors

    Directory of Open Access Journals (Sweden)

    Bastian Steudel

    2015-04-01

    Full Text Available A directed acyclic graph (DAG partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information-theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs, our result can be seen as a quantitative extension of Reichenbach’s principle of common cause to more than two variables. Our conclusions are valid also for non-probabilistic observations, such as binary strings, since we state the proof for an axiomatized notion of “mutual information” that includes the stochastic as well as the algorithmic version.

  8. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

    The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  9. Information-theoretic approach to uncertainty importance

    International Nuclear Information System (INIS)

    Park, C.K.; Bari, R.A.

    1985-01-01

    A method is presented for importance analysis in probabilistic risk assessments (PRA) for which the results of interest are characterized by full uncertainty distributions and not just point estimates. The method is based on information theory in which entropy is a measure of uncertainty of a probability density function. We define the relative uncertainty importance between two events as the ratio of the two exponents of the entropies. For the log-normal and log-uniform distributions the importance measure is comprised of the median (central tendency) and of the logarithm of the error factor (uncertainty). Thus, if accident sequences are ranked this way, and the error factors are not all equal, then a different rank order would result than if the sequences were ranked by the central tendency measure alone. As an illustration, the relative importance of internal events and in-plant fires was computed on the basis of existing PRA results
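    A minimal sketch for the log-normal case described above: since the exponentiated differential entropy of LogNormal(μ, σ) is proportional to the median times σ, the relative uncertainty importance of two events reduces to a ratio of median-times-spread terms. The 1.645 conversion from a 95th-percentile error factor to σ is a standard convention assumed here:

```python
# Relative uncertainty importance as the ratio exp(h1)/exp(h2) of
# exponentiated differential entropies of two log-normal distributions.
import numpy as np

def lognormal_exp_entropy(median, error_factor):
    sigma = np.log(error_factor) / 1.645   # EF = 95th percentile / median
    # differential entropy of LogNormal(mu, sigma): h = mu + 0.5*ln(2*pi*e*sigma^2)
    h = np.log(median) + 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
    return np.exp(h)

# Two accident sequences with the same median frequency but different spreads:
imp = lognormal_exp_entropy(1e-5, 10.0) / lognormal_exp_entropy(1e-5, 3.0)
print(f"relative uncertainty importance: {imp:.2f}")  # > 1: wider EF dominates
```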

  10. Nanoscale thermal transport: Theoretical method and application

    Science.gov (United States)

    Zeng, Yu-Jia; Liu, Yue-Yang; Zhou, Wu-Xing; Chen, Ke-Qiu

    2018-03-01

    With the size reduction of nanoscale electronic devices, the heat generated per unit area in integrated circuits increases exponentially, and consequently thermal management in these devices is a very important issue. In addition, the heat generated by electronic devices mostly diffuses into the air as waste heat, which also makes thermoelectric energy conversion an important issue nowadays. In recent years, the thermal transport properties of nanoscale systems have attracted increasing attention in both experiments and theoretical calculations. In this review, we discuss various theoretical simulation methods for investigating thermal transport properties and take a glance at several interesting thermal transport phenomena in nanoscale systems. Our emphasis lies on the advantages and limitations of the calculational methods, and on applications of nanoscale thermal transport and thermoelectric properties. Project supported by the National Key Research and Development Program of China (Grant No. 2017YFB0701602) and the National Natural Science Foundation of China (Grant No. 11674092).

  11. Information theoretic analysis of edge detection in visual communication

    Science.gov (United States)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end, information theory based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling, additive noise etc., that define the image gathering system. An edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods. There is no common tool that can be used to evaluate the performance of the different algorithms and to give people a guide for selecting the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, which allows us to compare different edge detection operators in a common environment.

  12. A group theoretic approach to quantum information

    CERN Document Server

    Hayashi, Masahito

    2017-01-01

    This textbook is the first to address quantum information from the viewpoint of group symmetry. Quantum systems have a group symmetrical structure, and this structure enables quantum information processing to be handled systematically. However, there is no other textbook focusing on group symmetry for quantum information, although there exist many textbooks on group representations. After the mathematical preparation of quantum information, this book discusses quantum entanglement and its quantification by using group symmetry. Group symmetry drastically simplifies the calculation of several entanglement measures, although these calculations are usually very difficult to handle. This book treats optimal information processes, including quantum state estimation, quantum state cloning, estimation of group actions and quantum channels, etc. Usually it is very difficult to derive optimal quantum information processes without the asymptotic setting of these topics. However, group symmetry allows one to derive these optimal solu…

  13. An Information Theoretic Characterisation of Auditory Encoding

    Science.gov (United States)

    Overath, Tobias; Cusack, Rhodri; Kumar, Sukhbinder; von Kriegstein, Katharina; Warren, Jason D; Grube, Manon; Carlyon, Robert P; Griffiths, Timothy D

    2007-01-01

    The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content. PMID:17958472

  14. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  15. Wireless Information-Theoretic Security in an Outdoor Topology with Obstacles: Theoretical Analysis and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Dagiuklas Tasos

    2011-01-01

    Full Text Available This paper presents a Wireless Information-Theoretic Security (WITS) scheme, which has recently been introduced as a robust physical layer-based security solution, especially for infrastructureless networks. An autonomic network of moving users was implemented via 802.11n nodes of an ad hoc network for an outdoor topology with obstacles. Obstructed-Line-of-Sight (OLOS) and Non-Line-of-Sight (NLOS) propagation scenarios were examined. Low-speed user movement was considered, so that Doppler spread could be discarded. A transmitter and a legitimate receiver exchanged information in the presence of a moving eavesdropper. Average Signal-to-Noise Ratio (SNR) values were acquired for both the main and the wiretap channel, and the Probability of Nonzero Secrecy Capacity was calculated based on a theoretical formula. Experimental results validate the theoretical findings, stressing the importance of user location and mobility schemes on the robustness of Wireless Information-Theoretic Security, and call for further theoretical analysis.
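    The headline quantity can be sketched from average SNRs alone. The closed form below is a standard quasi-static Rayleigh-fading result for the probability of nonzero secrecy capacity; it is assumed here as an illustration and may differ from the exact expression the paper evaluates for its OLOS/NLOS scenarios:

```python
# Probability of nonzero secrecy capacity from average SNRs of the main and
# wiretap channels, under the quasi-static Rayleigh-fading assumption.
def prob_nonzero_secrecy(snr_main_db, snr_eve_db):
    g_m = 10 ** (snr_main_db / 10)     # average SNR, main channel (linear)
    g_e = 10 ** (snr_eve_db / 10)      # average SNR, eavesdropper (linear)
    return g_m / (g_m + g_e)

for main, eve in [(20, 10), (15, 15), (10, 20)]:
    p = prob_nonzero_secrecy(main, eve)
    print(f"main={main} dB, eve={eve} dB -> P(Cs > 0) = {p:.3f}")
```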

  16. Vector-Quantization using Information Theoretic Concepts

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue; Hegde, Anant; Erdogmus, Deniz

    2005-01-01

    The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self Organizing Map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as the before-mentioned can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact…

  17. Information theoretic learning Renyi's entropy and Kernel perspectives

    CERN Document Server

    Principe, Jose C

    2010-01-01

    This book presents the first cohesive treatment of Information Theoretic Learning (ITL) algorithms to adapt linear or nonlinear learning machines in both supervised and unsupervised paradigms. ITL is a framework where the conventional concepts of second order statistics (covariance, L2 distances, correlation functions) are substituted by scalars and functions with information theoretic underpinnings, respectively entropy, mutual information and correntropy. ITL quantifies the stochastic structure of the data beyond second order statistics for improved performance without using full-blown Bayesi…

  18. Information theoretic resources in quantum theory

    Science.gov (United States)

    Meznaric, Sebastian

    Resource identification and quantification is an essential element of both classical and quantum information theory. Entanglement is one of these resources, arising when quantum communication and nonlocal operations are expensive to perform. In the first part of this thesis we quantify the effective entanglement when operations are additionally restricted to account both for fundamental restrictions on operations, such as those arising from superselection rules, and for experimental errors arising from imperfections in the apparatus. For an important class of errors we find a linear relationship between the usual and effective higher-dimensional generalization of concurrence, a measure of entanglement. Following the treatment of effective entanglement, we focus on a related concept of nonlocality in the presence of superselection rules (SSR). Here we propose a scheme that may be used to activate nongenuinely multipartite nonlocality, in that a single copy of a state is not multipartite nonlocal, while two or more copies exhibit nongenuinely multipartite nonlocality. The states used exhibit the more powerful genuinely multipartite nonlocality when SSR are not enforced, but not when they are, raising the question of what is needed for genuinely multipartite nonlocality. We show that whenever the number of particles is insufficient, the degrading of genuinely multipartite to nongenuinely multipartite nonlocality is necessary. While in the first few chapters we focus our attention on understanding the resources present in quantum states, in the final part we turn the picture around and instead treat operations themselves as a resource. We provide our observers with free access to classical operations, i.e. those that cannot detect or generate quantum coherence. We show that the operation of interest can then be used to either generate or detect quantum coherence if and only if it violates a particular commutation relation. Using the relative entropy, the…

  19. Set-theoretic methods in control

    CERN Document Server

    Blanchini, Franco

    2015-01-01

    The second edition of this monograph describes the set-theoretic approach for the control and analysis of dynamic systems, both from a theoretical and practical standpoint.  This approach is linked to fundamental control problems, such as Lyapunov stability analysis and stabilization, optimal control, control under constraints, persistent disturbance rejection, and uncertain systems analysis and synthesis.  Completely self-contained, this book provides a solid foundation of mathematical techniques and applications, extensive references to the relevant literature, and numerous avenues for further theoretical study. All the material from the first edition has been updated to reflect the most recent developments in the field, and a new chapter on switching systems has been added.  Each chapter contains examples, case studies, and exercises to allow for a better understanding of theoretical concepts by practical application. The mathematical language is kept to the minimum level necessary for the adequate for...

  1. Methods of information processing

    Energy Technology Data Exchange (ETDEWEB)

    Kosarev, Yu G; Gusev, V D

    1978-01-01

    Works are presented on automation systems for editing and publishing operations using methods for processing symbolic information and information contained in a training set (ranking of objects by promise, classification algorithms for tones and noise). The book will be of interest to specialists in the automated processing of textual information, programming, and pattern recognition.

  2. One-dimensional barcode reading: an information theoretic approach

    Science.gov (United States)

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-01

    In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and the most pervasive solution for new uses, especially within mobile commerce, bringing youth to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
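    As a toy version of this analysis, one can model the reading of a single barcode module (bar vs space) as a binary-input Gaussian channel and evaluate the average mutual information as a function of the blur/noise width. Collapsing the paper's optical channel model into one Gaussian parameter is a deliberate simplification of this sketch:

```python
# Average mutual information I(X;Y) of a binary-input (bar=0, space=1)
# channel with additive Gaussian blur/noise of width sigma, evaluated on a grid.
import numpy as np

def barcode_channel_mi(sigma, n_grid=4001):
    y = np.linspace(-8, 9, n_grid)
    def gauss(mu):  # p(y | module level mu)
        return np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    p0 = np.maximum(gauss(0.0), 1e-300)   # clip to avoid 0/0 in the far tails
    p1 = np.maximum(gauss(1.0), 1e-300)
    py = 0.5 * (p0 + p1)                  # equiprobable bars and spaces
    integrand = 0.5 * (p0 * np.log2(p0 / py) + p1 * np.log2(p1 / py))
    return float(np.sum(integrand) * (y[1] - y[0]))   # bits per module, at most 1

for sigma in (0.1, 0.3, 0.5, 1.0):
    print(f"sigma={sigma:.1f}  I(X;Y)={barcode_channel_mi(sigma):.3f} bits")
```

    As the blur/noise width grows, the information per module drops below one bit, which is the kind of degradation the theoretical depth of field and resolution are meant to bound.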

  3. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr

    2015-01-01

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics…

  4. THEORETICAL ASPECTS OF INFORMATIONAL SERVICES REGIONAL MARKET EFFECTIVE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    I.N. Korabejnikov

    2008-12-01

    Full Text Available The peculiarities and priorities of the formation of the regional market for informational services, as part of a network model of economic development, are described in this article. The authors present a classification of the factors that influence the effectiveness of the development of the regional market for informational services. Theoretical aspects of the effective development of this market are shown.

  5. Theoretical and simulation studies of seeding methods

    Energy Technology Data Exchange (ETDEWEB)

    Pellegrini, Claudio [Univ. of California, Los Angeles, CA (United States)

    2017-12-11

    We report the theoretical and experimental studies done with the support of DOE Grant DE-SC0009983 to increase X-ray FEL peak power from the present level of 20-40 GW to one or more TW by seeding, undulator tapering, and the new concept of the Double Bunch FEL.

  6. The Theoretical Principles of the Organization of Information Systems.

    Science.gov (United States)

    Kulikowski, Juliusz Lech

    A survey of the theoretical problems connected with the organization and design of systems for processing and transmitting information is presented in this article. It gives a definition of Information Systems (IS) and classifies them from various points of view. It discusses briefly the most important aspects of the organization of IS, such as…

  7. Almost Free Modules Set-Theoretic Methods

    CERN Document Server

    Eklof, PC

    1990-01-01

    This is an extended treatment of the set-theoretic techniques which have transformed the study of abelian group and module theory over the last 15 years. Part of the book is new work which does not appear elsewhere in any form. In addition, a large body of material which has appeared previously (in scattered and sometimes inaccessible journal articles) has been extensively reworked and in many cases given new and improved proofs. The set theory required is carefully developed with algebraists in mind, and the independence results are derived from explicitly stated axioms. The book contains exe…

  8. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are …
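    The AIC machinery the book teaches is compact enough to show directly. The sketch below compares two nested Gaussian models of the same data and converts AIC differences into Akaike weights; the models and data are illustrative only:

```python
# AIC = 2k - 2 ln(L_max) per candidate model; Akaike weights then quantify
# the relative support for each model given the data.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.3, 1.0, 200)                  # data with true mean 0.3

def gauss_loglik(x, mu, sigma):
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                        - (x - mu) ** 2 / (2 * sigma ** 2)))

# Model 1: mean fixed at 0 (k = 1 parameter: sigma). Model 2: free mean (k = 2).
models = {
    "mu fixed at 0": (1, gauss_loglik(x, 0.0, np.sqrt(np.mean(x ** 2)))),
    "mu estimated":  (2, gauss_loglik(x, x.mean(), x.std())),
}
aic = {name: 2 * k - 2 * ll for name, (k, ll) in models.items()}
best = min(aic.values())
weights = {name: np.exp(-0.5 * (a - best)) for name, a in aic.items()}
total = sum(weights.values())
for name in models:
    print(f"{name:14s} AIC={aic[name]:8.2f}  Akaike weight={weights[name] / total:.3f}")
```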

  9. Theoretical development of information science: A brief history

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This paper presents a brief history of information science (IS) as viewed by the author. The term ‘information science’ goes back to 1955 and evolved in the aftermath of Claude Shannon’s ‘information theory’ (1948), which also inspired research into problems in fields of library science and documentation. These subjects were a main focus of what became established as ‘information science’, which from 1964 onwards was often termed ‘library and information science’ (LIS). However, the usefulness of Shannon’s information theory as the theoretical foundation of the field has been challenged. Among the strongest “paradigms” in the field is a tradition derived from the Cranfield experiments in the 1960s and the bibliometric research following the publication of the Science Citation Index from 1963 and forward. Among the competing theoretical frameworks, ‘the cognitive view’ became influential from the 1970s…

  10. Theoretical-methodical Fundamentals of industrial marketing research

    OpenAIRE

    Butenko, N.

    2009-01-01

    The article proves the necessity of researching the theoretical and methodical fundamentals of industrial marketing and defines the main key aspects of relationship management with customers in the industrial market.

  11. Molecular physics. Theoretical principles and experimental methods

    International Nuclear Information System (INIS)

    Demtroeder, W.

    2005-01-01

    This advanced textbook comprehensively explains important principles of diatomic and polyatomic molecules and their spectra in two separate, distinct parts. The first part concentrates on the theoretical aspects of molecular physics, whereas the second part covers experimental techniques, i.e. laser, Fourier, NMR, and ESR spectroscopies, used in the fields of physics, chemistry, biology, and materials science. Appropriate for undergraduate and graduate students in physics and chemistry with a knowledge of atomic physics who are familiar with the basics of quantum mechanics. From the contents: - Electronic States of Molecules, - Rotation, Oscillation and Potential Curves of Diatomic Molecules, - The Spectra of Diatomic Molecules, - Molecule Symmetries and Group Theory, - Rotation and Oscillations of Polyatomic Molecules, - Electronic States of Polyatomic Molecules, - The Spectra of Polyatomic Molecules, - Collapse of the Born-Oppenheimer Approximation, Disturbances in Molecular Spectra, - Molecules in Disturbing Fields, - Van der Waals Molecules and Clusters, - Experimental Techniques in Molecular Physics. (orig.)

  12. A theoretical study on a convergence problem of nodal methods

    Energy Technology Data Exchange (ETDEWEB)

    Shaohong, Z.; Ziyong, L. [Shanghai Jiao Tong Univ., 1954 Hua Shan Road, Shanghai, 200030 (China); Chao, Y. A. [Westinghouse Electric Company, P. O. Box 355, Pittsburgh, PA 15230-0355 (United States)

    2006-07-01

    The effectiveness of modern nodal methods is largely due to their use of information from the analytical flux solution inside a homogeneous node. As a result, the nodal coupling coefficients depend explicitly or implicitly on the evolving eigenvalue of a problem during its solution iteration process. This poses an inherently non-linear matrix eigenvalue iteration problem. This paper points out analytically that, whenever the half wavelength of an evolving node-interior analytic solution becomes smaller than the size of that node, this non-linear iteration problem can become inherently unstable and theoretically can always fail to converge or converge to higher order harmonics. This phenomenon is confirmed, demonstrated and analyzed via the simplest 1-D problem solved by the simplest analytic nodal method, the Analytic Coarse Mesh Finite Difference (ACMFD, [1]) method. (authors)

  13. Distinguishing prognostic and predictive biomarkers: An information theoretic approach.

    Science.gov (United States)

    Sechidis, Konstantinos; Papangelou, Konstantinos; Metcalfe, Paul D; Svensson, David; Weatherall, James; Brown, Gavin

    2018-05-02

    The identification of biomarkers to support decision-making is central to personalised medicine, in both clinical and research scenarios. The challenge can be seen in two halves: identifying predictive markers, which guide the development/use of tailored therapies; and identifying prognostic markers, which guide other aspects of care and clinical trial planning, i.e. prognostic markers can be considered as covariates for stratification. Mistakenly assuming a biomarker to be predictive, when it is in fact largely prognostic (and vice versa), is highly undesirable, and can result in financial, ethical and personal consequences. We present a framework for data-driven ranking of biomarkers on their prognostic/predictive strength, using a novel information theoretic method. This approach provides a natural algebra to discuss and quantify the individual predictive and prognostic strength, in a self-consistent mathematical framework. Our contribution is a novel procedure, INFO+, which naturally distinguishes the prognostic vs predictive role of each biomarker and handles higher order interactions. In a comprehensive empirical evaluation INFO+ outperforms more complex methods, most notably when noise factors dominate and biomarkers are likely to be falsely identified as predictive when in fact they are just strongly prognostic. Furthermore, we show that our methods can be 1-3 orders of magnitude faster than competitors, making them useful for biomarker discovery in 'big data' scenarios. Finally, we apply our methods to identify predictive biomarkers in two real clinical trials, and introduce a new graphical representation that provides greater insight into the prognostic and predictive strength of each biomarker. R implementations of the suggested methods are available at https://github.com/sechidis. Supplementary data are available at Bioinformatics online.
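
    As a hedged sketch of the distinction the abstract draws (a simplified toy version, not the actual INFO+ procedure; all variable names and the simulated data are invented for illustration): a prognostic marker carries information about the outcome irrespective of treatment, whereas a predictive marker interacts with the treatment, which shows up as the conditional mutual information I(M;Y|T) exceeding the marginal I(M;Y).

        import numpy as np

        def entropy(*cols):
            """Joint Shannon entropy (bits) of one or more discrete columns."""
            joint = np.stack(cols, axis=1)
            _, counts = np.unique(joint, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def mi(x, y):
            return entropy(x) + entropy(y) - entropy(x, y)

        def cmi(x, y, z):
            """Conditional MI: I(X;Y|Z) from joint entropies."""
            return entropy(x, z) + entropy(y, z) - entropy(x, y, z) - entropy(z)

        rng = np.random.default_rng(0)
        n = 20000
        t = rng.integers(0, 2, n)            # randomised treatment arm
        m_prog = rng.integers(0, 2, n)       # hypothetical prognostic marker
        m_pred = rng.integers(0, 2, n)       # hypothetical predictive marker
        # Outcome depends on the prognostic marker directly, and on the
        # predictive marker only through its interaction with treatment.
        p_y = 0.2 + 0.3 * m_prog + 0.3 * t * m_pred
        y = (rng.random(n) < p_y).astype(int)

        for name, m in [("prognostic", m_prog), ("predictive", m_pred)]:
            print(name, round(mi(m, y), 3), round(cmi(m, y, t), 3))

    For the predictive marker the conditional value clearly exceeds the marginal one; for the prognostic marker the two are nearly equal, since conditioning on an independent treatment changes nothing.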

  14. Information-theoretic temporal Bell inequality and quantum computation

    International Nuclear Information System (INIS)

    Morikoshi, Fumiaki

    2006-01-01

    An information-theoretic temporal Bell inequality is formulated to contrast classical and quantum computations. Any classical algorithm satisfies the inequality, while quantum ones can violate it. Therefore, the violation of the inequality is an immediate consequence of the quantumness in the computation. Furthermore, this approach suggests a notion of temporal nonlocality in quantum computation.

  15. Biometric security from an information-theoretical perspective

    NARCIS (Netherlands)

    Ignatenko, T.; Willems, F.M.J.

    2012-01-01

    In this review, biometric systems are studied from an information theoretical point of view. In the first part biometric authentication systems are studied. The objective of these systems is, observing correlated enrollment and authentication biometric sequences, to generate or convey as large as

  16. Role of information theoretic uncertainty relations in quantum theory

    Energy Technology Data Exchange (ETDEWEB)

    Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz [FNSPE, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1 (Czech Republic); ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin (Germany); Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk [Department of Physics and Astronomy, University of Sussex, Falmer, Brighton, BN1 9QH (United Kingdom); Joo, Jaewoo, E-mail: j.joo@surrey.ac.uk [Advanced Technology Institute and Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom)

    2015-04-15

    Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.

  18. Quantum dynamic imaging theoretical and numerical methods

    CERN Document Server

    Ivanov, Misha

    2011-01-01

    Studying and using light or "photons" to image and then to control and transmit molecular information is among the most challenging and significant research fields to emerge in recent years. One of the fastest growing areas involves research in the temporal imaging of quantum phenomena, ranging from molecular dynamics in the femto (10⁻¹⁵ s) time regime for atomic motion to the atto (10⁻¹⁸ s) time scale of electron motion. In fact, the attosecond "revolution" is now recognized as one of the most important recent breakthroughs and innovations in the science of the 21st century. A major participant in the development of ultrafast femto and attosecond temporal imaging of molecular quantum phenomena has been theory and numerical simulation of the nonlinear, non-perturbative response of atoms and molecules to ultrashort laser pulses. Therefore, imaging quantum dynamics is a new frontier of science requiring advanced mathematical approaches for analyzing and solving spatial and temporal multidimensional partial differ...

  19. Theoretical and numerical method in aeroacoustics

    Directory of Open Access Journals (Sweden)

    Nicuşor ALEXANDRESCU

    2010-06-01

    Full Text Available The paper deals with the mathematical and numerical modeling of the aerodynamic noise generated by the fluid flow interaction with the solid structure of a rotor blade. Our analysis uses Lighthill's acoustic analogy. Lighthill's idea was to recast the fundamental equations of motion into a wave equation for acoustic fluctuation with a source term on the right-hand side. The obtained wave equation is solved numerically by spatial discretization. The method is applied in the case of a monopole source placed at different points of the blade surface to study the effect on noise propagation.

  20. Universality in an information-theoretic motivated nonlinear Schrodinger equation

    International Nuclear Information System (INIS)

    Parwani, R; Tabia, G

    2007-01-01

    Using perturbative methods, we analyse a nonlinear generalization of Schrodinger's equation that had previously been obtained through information-theoretic arguments. We obtain analytical expressions for the leading correction, in terms of the nonlinearity scale, to the energy eigenvalues of the linear Schrodinger equation in the presence of an external potential and observe some generic features. In one space dimension these are (i) for nodeless ground states, the energy shifts are subleading in the nonlinearity parameter compared to the shifts for the excited states; (ii) the shifts for the excited states are due predominantly to contribution from the nodes of the unperturbed wavefunctions, and (iii) the energy shifts for excited states are positive for small values of a regulating parameter and negative at large values, vanishing at a universal critical value that is not manifest in the equation. Some of these features hold true for higher dimensional problems. We also study two exactly solved nonlinear Schrodinger equations so as to contrast our observations. Finally, we comment on the possible significance of our results if the nonlinearity is physically realized.
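
    For orientation, the generic first-order result underlying such perturbative analyses (a textbook identity stated here as background, not the paper's specific expression) treats the weak nonlinear term as an effective perturbation \delta V and shifts each unperturbed level by its expectation value,

        \Delta E_n^{(1)} = \langle \psi_n^{(0)} \,|\, \delta V \,|\, \psi_n^{(0)} \rangle,
        \qquad
        E_n \approx E_n^{(0)} + \Delta E_n^{(1)},

    with the caveat that in the nonlinear setting \delta V itself depends on the wavefunction, which is consistent with the abstract's observation that contributions from the nodes of the unperturbed wavefunctions dominate the shifts for excited states.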

  1. Information theoretic bounds for compressed sensing in SAR imaging

    International Nuclear Information System (INIS)

    Jingxiong, Zhang; Ke, Yang; Jianzhong, Guo

    2014-01-01

    Compressed sensing (CS) is a new framework for sampling and reconstructing sparse signals from measurements significantly fewer than those prescribed by Nyquist rate in the Shannon sampling theorem. This new strategy, applied in various application areas including synthetic aperture radar (SAR), relies on two principles: sparsity, which is related to the signals of interest, and incoherence, which refers to the sensing modality. An important question in CS-based SAR system design concerns sampling rate necessary and sufficient for exact or approximate recovery of sparse signals. In the literature, bounds of measurements (or sampling rate) in CS have been proposed from the perspective of information theory. However, these information-theoretic bounds need to be reviewed and, if necessary, validated for CS-based SAR imaging, as there are various assumptions made in the derivations of lower and upper bounds on sub-Nyquist sampling rates, which may not hold true in CS-based SAR imaging. In this paper, information-theoretic bounds of sampling rate will be analyzed. For this, the SAR measurement system is modeled as an information channel, with channel capacity and rate-distortion characteristics evaluated to enable the determination of sampling rates required for recovery of sparse scenes. Experiments based on simulated data will be undertaken to test the theoretic bounds against empirical results about sampling rates required to achieve certain detection error probabilities
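
    For orientation, the bounds in question are of the same form as the standard compressed-sensing sampling condition (stated here as general background, not as this paper's result): a k-sparse scene of dimension n can be recovered from m random linear measurements, with high probability, provided

        m \;\geq\; C\, k \log(n/k)

    for a constant C that depends on the measurement ensemble. The paper's concern is whether the assumptions behind such bounds (e.g., on the randomness and incoherence of the sensing matrix) still hold for realistic SAR measurement models.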

  2. Applied Mathematical Methods in Theoretical Physics

    Science.gov (United States)

    Masujima, Michio

    2005-04-01

    All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.

  3. Theoretical information measurement in nonrelativistic time-dependent approach

    Science.gov (United States)

    Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.

    2018-02-01

    The information-theoretic measures of the time-dependent Schrödinger equation are investigated via the Shannon information entropy, variance and local Fisher quantities. In our calculations, we consider the first two states n = 0,1 and obtain the position Sx (t) and momentum Sp (t) Shannon entropies as well as the Fisher information Ix (t) in position and Ip (t) in momentum space. Using the Fourier-transformed wave function, we obtain the results in momentum space. Some interesting features of the information entropy densities ρs (x,t) and γs (p,t), as well as of the probability densities ρ (x,t) and γ (p,t) for time-dependent states, are demonstrated. We establish a general relation between variance and Fisher information. The Bialynicki-Birula-Mycielski inequality is tested and verified for the states n = 0,1.
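
    For reference, the quantities named in the abstract have the standard definitions (stated here for orientation, with the position density \rho(x,t) = |\psi(x,t)|^2 and the momentum density \gamma(p,t) obtained from the Fourier-transformed wavefunction):

        S_x(t) = -\int \rho(x,t)\,\ln\rho(x,t)\,dx, \qquad
        S_p(t) = -\int \gamma(p,t)\,\ln\gamma(p,t)\,dp,

        I_x(t) = \int \frac{[\partial_x \rho(x,t)]^2}{\rho(x,t)}\,dx,

    and the Bialynicki-Birula-Mycielski inequality in one dimension reads S_x(t) + S_p(t) \geq 1 + \ln\pi.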

  4. Information theoretic analysis of canny edge detection in visual communication

    Science.gov (United States)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

    In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission, and display processes that impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. An edge detection algorithm is considered here to achieve high performance only if the information rate from the scene to the edge image approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with that of other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.

  5. Information-theoretic signatures of biodiversity in the barcoding gene.

    Science.gov (United States)

    Barbosa, Valmir C

    2018-08-14

    Analyzing the information content of DNA, though holding the promise to help quantify how the processes of evolution have led to information gain throughout the ages, has remained an elusive goal. Paradoxically, one of the main reasons for this has been precisely the great diversity of life on the planet: if on the one hand this diversity is a rich source of data for information-content analysis, on the other hand there is so much variation as to make the task unmanageable. During the past decade or so, however, succinct fragments of the COI mitochondrial gene, which is present in all animal phyla and in a few others, have been shown to be useful for species identification through DNA barcoding. A few million such fragments are now publicly available through the BOLD systems initiative, thus providing an unprecedented opportunity for relatively comprehensive information-theoretic analyses of DNA to be attempted. Here we show how a generalized form of total correlation can yield distinctive information-theoretic descriptors of the phyla represented in those fragments. In order to illustrate the potential of this analysis to provide new insight into the evolution of species, we performed principal component analysis on standardized versions of the said descriptors for 23 phyla. Surprisingly, we found that, though based solely on the species represented in the data, the first principal component correlates strongly with the natural logarithm of the number of all known living species for those phyla. The new descriptors thus constitute clear information-theoretic signatures of the processes whereby evolution has given rise to current biodiversity, which suggests their potential usefulness in further related studies. Copyright © 2018 Elsevier Ltd. All rights reserved.
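
    As a hedged sketch of the descriptor family involved (plain total correlation on a toy alignment; the paper uses a generalized form on COI barcode fragments, and everything below is invented for illustration), total correlation is the sum of the per-position entropies minus the joint entropy, so it is zero for independent positions and grows with co-variation:

        import numpy as np

        def joint_entropy(cols):
            """Joint Shannon entropy (bits) of the rows of a 2-D symbol array."""
            _, counts = np.unique(cols, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def total_correlation(alignment):
            """TC = sum_i H(X_i) - H(X_1, ..., X_n) over aligned positions."""
            marginals = sum(joint_entropy(alignment[:, [i]])
                            for i in range(alignment.shape[1]))
            return marginals - joint_entropy(alignment)

        # Toy 'alignment': rows are sequences, columns are positions,
        # symbols 0..3 standing in for the nucleotides A, C, G, T.
        rng = np.random.default_rng(1)
        base = rng.integers(0, 4, size=(500, 1))
        alignment = np.hstack([base, base, rng.integers(0, 4, size=(500, 3))])
        print(round(total_correlation(alignment), 3))  # ~2 bits: two coupled columns

    The first two columns are exact copies of each other, contributing roughly H(X) = 2 bits of total correlation, while the three independent columns contribute essentially nothing.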

  6. Information-theoretic decomposition of embodied and situated systems.

    Science.gov (United States)

    Da Rold, Federico

    2018-07-01

    The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Information-Theoretic Bounded Rationality and ε-Optimality

    Directory of Open Access Journals (Sweden)

    Daniel A. Braun

    2014-08-01

    Full Text Available Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested to model bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple ε-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an ε-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of ε-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on ε-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.
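
    For reference, the free energy functional referred to above has the standard form used in this literature (stated as background; notation may differ from the paper): the decision maker maximizes expected utility minus an information cost relative to a prior policy p_0, with the inverse temperature β encoding the resource constraint,

        F[p] = \sum_a p(a)\,U(a) \;-\; \frac{1}{\beta}\, D_{\mathrm{KL}}(p \,\|\, p_0),
        \qquad
        p^{*}(a) = \frac{p_0(a)\, e^{\beta U(a)}}{\sum_{a'} p_0(a')\, e^{\beta U(a')}} .

    As β → ∞ the optimal policy concentrates on the utility-maximizing action; at finite β it remains stochastic, which is where the abstract's connection enters: such free-energy-optimal policies concentrate on actions within an ε-neighborhood of the optimal utility.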

  8. A comparison of SAR ATR performance with information theoretic predictions

    Science.gov (United States)

    Blacknell, David

    2003-09-01

    Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system is used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1m, (ii) single channel versus polarimetric (iii) targets in the open versus targets in scrubland and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.

  9. Methods for evaluating information sources

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2012-01-01

    The article briefly presents and discusses 12 different approaches to the evaluation of information sources (for example a Wikipedia entry or a journal article): (1) the checklist approach; (2) classical peer review; (3) modified peer review; (4) evaluation based on examining the coverage of controversial views; (5) evidence-based evaluation; (6) comparative studies; (7) author credentials; (8) publisher reputation; (9) journal impact factor; (10) sponsoring: tracing the influence of economic, political, and ideological interests; (11) book reviews and book reviewing; and (12) broader criteria. Reading a text is often not a simple process. All the methods discussed here are steps on the way to learning how to read, understand, and criticize texts. According to hermeneutics it involves the subjectivity of the reader, and that subjectivity is influenced, more or less, by different theoretical ...

  10. Information-Theoretical Analysis of EEG Microstate Sequences in Python

    Directory of Open Access Journals (Sweden)

    Frederic von Wegner

    2018-06-01

    Full Text Available We present an open-source Python package to compute information-theoretical quantities for electroencephalographic data. Electroencephalography (EEG) measures the electrical potential generated by the cerebral cortex, and the set of spatial patterns projected by the brain's electrical potential on the scalp surface can be clustered into a set of representative maps called EEG microstates. Microstate time series are obtained by competitively fitting the microstate maps back into the EEG data set, i.e., by substituting the EEG data at a given time with the label of the microstate that has the highest similarity with the actual EEG topography. As microstate sequences consist of non-metric random variables, e.g., the letters A–D, we recently introduced information-theoretical measures to quantify these time series. In wakeful resting state EEG recordings, we found new characteristics of microstate sequences such as periodicities related to EEG frequency bands. The algorithms used are here provided as an open-source package and their use is explained in a tutorial style. The package is self-contained and the programming style is procedural, focusing on code intelligibility and easy portability. Using a sample EEG file, we demonstrate how to perform EEG microstate segmentation using the modified K-means approach, and how to compute and visualize the recently introduced information-theoretical tests and quantities. The time-lagged mutual information function is derived as a discrete symbolic alternative to the autocorrelation function for metric time series, and confidence intervals are computed from Markov chain surrogate data. The software package provides an open-source extension to the existing implementations of the microstate transform and is specifically designed to analyze resting state EEG recordings.
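
    As a hedged sketch of one quantity the package computes (a simplified stand-alone version; the actual package API and its surrogate-data confidence intervals are not reproduced here), the time-lagged mutual information of a symbolic microstate sequence compares the joint distribution of labels at lag k with the product of the marginals:

        import math
        import random
        from collections import Counter

        def mutual_information(xs, ys):
            """MI (bits) between two equally long symbol sequences."""
            n = len(xs)
            pxy = Counter(zip(xs, ys))
            px, py = Counter(xs), Counter(ys)
            return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                       for (x, y), c in pxy.items())

        def lagged_mi(seq, max_lag):
            """Time-lagged MI: MI between seq[t] and seq[t+k] for k = 1..max_lag."""
            return [mutual_information(seq[:-k], seq[k:])
                    for k in range(1, max_lag + 1)]

        # Toy microstate sequence over the labels A-D with short-range persistence.
        random.seed(0)
        seq = ["A"]
        for _ in range(20000):
            seq.append(seq[-1] if random.random() < 0.8 else random.choice("ABCD"))
        print([round(v, 3) for v in lagged_mi(seq, 5)])  # decays with lag

    Like the autocorrelation function of a metric time series, the lagged MI decays as the persistence of the labels washes out, which is how memory effects and periodicities in microstate sequences can be read off.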

  11. Research methods in information

    CERN Document Server

    Pickard, Alison Jane

    2013-01-01

    The long-awaited 2nd edition of this best-selling research methods handbook is fully updated and includes brand new coverage of online research methods and techniques, mixed methodology and qualitative analysis. There is an entire chapter contributed by Professor Julie McLeod, Sue Childs and Elizabeth Lomas focusing on research data management, applying evidence from the recent JISC funded 'DATUM' project. The first to focus entirely on the needs of the information and communications community, it guides the would-be researcher through the variety of possibilities open to them under the heading "research" and provides students with the confidence to embark on their dissertations. The focus here is on the 'doing' and although the philosophy and theory of research is explored to provide context, this is essentially a practical exploration of the whole research process with each chapter fully supported by examples and exercises tried and tested over a whole teaching career. The book will take readers through eac...

  12. Optimal information transfer in enzymatic networks: A field theoretic formulation

    Science.gov (United States)

    Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.

    2017-07-01

    Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in

  13. Symbolic interactionism as a theoretical perspective for multiple method research.

    Science.gov (United States)

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  14. Information theoretical assessment of visual communication with wavelet coding

    Science.gov (United States)

    Rahman, Zia-ur

    1995-06-01

    A visual communication channel can be characterized by the efficiency with which it conveys information, and by the quality of the images restored from the transmitted data. Efficient data representation requires the use of the constraints of the visual communication channel. Our information theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission and for quantitatively assessing the visual quality of the restored image. These metrics are: a) the mutual information η between the radiance field and the restored image, and b) the efficiency of the channel, which can be roughly measured as the ratio η/H, where H is the average number of bits being used to transmit the data. Huck, et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize η also maximize ... Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.

  15. Exploring super-gaussianity towards robust information-theoretical time delay estimation

    DEFF Research Database (Denmark)

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos

    2013-01-01

    the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced...

  16. About possibilities using of theoretical calculation methods in radioecology

    International Nuclear Information System (INIS)

    Demoukhamedova, S.D.; Aliev, D.I.; Alieva, I.N.

    2002-01-01

    ... retaining the biological activity and changing the herbicide properties and selectivity is determined by the charge distribution in the naphthalene ring. The change of charge distribution in the naphthalene ring can be induced by the effect of ionizing radiation. Thus, theoretical calculation methods are capable of providing more detailed information concerning the radiation effect on the ecosystem

  17. Characterising Information Systems in Australia: A Theoretical Framework

    Directory of Open Access Journals (Sweden)

    Gail Ridley

    2006-11-01

    Full Text Available The study reported in this volume aims to investigate the state of the Information Systems academic discipline in Australia from a historical and current perspective, collecting evidence across a range of dimensions. To maximise the strategic potential of the study, the results need to be capable of integration, so that the relationships within and across the dimensions and geographical units are understood. A meaningful theoretical framework will help relate the results of the different dimensions of the study to characterise the discipline in the region, and assist in empowering the Australian IS research community. This paper reviewed literature on the development of disciplines, before deriving a theoretical framework for the broader study reported in this volume. The framework considered the current and past state of IS in Australian universities from the perspective of the development of a discipline. The components of the framework were derived and validated through a thematic analysis of both the IS and non-IS literature. This paper also presents brief vignettes of the development of two other related disciplines. The framework developed in this paper, which has been partly guided by Whitley’s Theory of Scientific Change, has been used to analyse data collated from the Australian states and the Australian Capital Territory. The degree of variation in Australian IS as an indication of its “professionalisation”, the nature of its body of knowledge and its mechanisms of control, will be used to frame the analysis. Research reported in several of the papers that follow in this volume has drawn upon the theoretical framework presented below.

  18. Informing Physics: Jacob Bekenstein and the Informational Turn in Theoretical Physics

    Science.gov (United States)

    Belfer, Israel

    2014-03-01

    In his PhD dissertation in the early 1970s, the Mexican-Israeli theoretical physicist Jacob Bekenstein developed the thermodynamics of black holes using a generalized version of the second law of thermodynamics. This work made it possible for physicists to describe and analyze black holes using information-theoretical concepts. It also helped to transform information theory into a fundamental and foundational concept in theoretical physics. The story of Bekenstein's work—which was initially opposed by many scientists, including Stephen Hawking—highlights the transformation within physics towards an information-oriented scientific mode of theorizing. This "informational turn" amounted to a mild-mannered revolution within physics, revolutionary without being rebellious.

  19. Methods of information geometry

    CERN Document Server

    Amari, Shun-Ichi

    2000-01-01

    Information geometry provides the mathematical sciences with a new framework of analysis. It has emerged from the investigation of the natural differential geometric structure on manifolds of probability distributions, which consists of a Riemannian metric defined by the Fisher information and a one-parameter family of affine connections called the α-connections. The duality between the α-connection and the (−α)-connection together with the metric play an essential role in this geometry. This kind of duality, having emerged from manifolds of probability distributions, is ubiquitous, appearing in a variety of problems which might have no explicit relation to probability theory. Through the duality, it is possible to analyze various fundamental problems in a unified perspective. The first half of this book is devoted to a comprehensive introduction to the mathematical foundation of information geometry, including preliminaries from differential geometry, the geometry of manifolds or probability d...

  20. Information theoretic approach to tactile encoding and discrimination

    OpenAIRE

    Saal, Hannes

    2011-01-01

    The human sense of touch integrates feedback from a multitude of touch receptors, but how this information is represented in the neural responses such that it can be extracted quickly and reliably is still largely an open question. At the same time, dexterous robots equipped with touch sensors are becoming more common, necessitating better methods for representing sequentially updated information and new control strategies that aid in extracting relevant features for object man...

  1. Physics Without Physics. The Power of Information-theoretical Principles

    Science.gov (United States)

    D'Ariano, Giacomo Mauro

    2017-01-01

    David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, has the paradigm concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole of physics over logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from the quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of quantum cellular automata and quantum walks. A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which

  2. Theoretical methods and models for mechanical properties of soft biomaterials

    Directory of Open Access Journals (Sweden)

    Zhonggang Feng

    2017-06-01

    Full Text Available We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.

  3. Information-theoretic limitations on approximate quantum cloning and broadcasting

    Science.gov (United States)

    Lemm, Marius; Wilde, Mark M.

    2017-07-01

    We prove quantitative limitations on any approximate simultaneous cloning or broadcasting of mixed states. The results are based on information-theoretic (entropic) considerations and generalize the well-known no-cloning and no-broadcasting theorems. We also observe and exploit the fact that the universal cloning machine on the symmetric subspace of n qudits and symmetrized partial trace channels are dual to each other. This duality manifests itself both in the algebraic sense of adjointness of quantum channels and in the operational sense that a universal cloning machine can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. The duality extends to give control of the performance of generalized universal quantum cloning machines (UQCMs) on subspaces more general than the symmetric subspace. This gives a way to quantify the usefulness of a priori information in the context of cloning. For example, we can control the performance of an antisymmetric analog of the UQCM in recovering from the loss of n -k fermionic particles.

  4. Information Theoretic Characterization of Physical Theories with Projective State Space

    Science.gov (United States)

    Zaopo, Marco

    2015-08-01

    Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information theoretic requirements: every non-completely-mixed state of a system is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more a state of the system is mixed, the less information can be stored in the system using that state as logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of the complex numbers. If the field of numbers is trivial and contains only one element, we obtain classical theory. This result tells us that the superposition principle is quite common among probabilistic theories, while its absence gives evidence of either classical theory or an implausible theory.

  5. Towards integrating control and information theories from information-theoretic measures to control performance limitations

    CERN Document Server

    Fang, Song; Ishii, Hideaki

    2017-01-01

    This book investigates the performance limitation issues in networked feedback systems. The fact that networked feedback systems consist of control and communication devices and systems calls for the integration of control theory and information theory. The primary contributions of this book lie in two aspects: the newly-proposed information-theoretic measures and the newly-discovered control performance limitations. We first propose a number of information notions to facilitate the analysis. Using those notions, classes of performance limitations of networked feedback systems, as well as state estimation systems, are then investigated. In general, the book presents a unique, cohesive treatment of performance limitation issues of networked feedback systems via an information-theoretic approach. This book is believed to be the first to treat the aforementioned subjects systematically and in a unified manner, offering a unique perspective differing from existing books.

  6. Nonlocal correlations as an information-theoretic resource

    International Nuclear Information System (INIS)

    Barrett, Jonathan; Massar, Serge; Pironio, Stefano; Linden, Noah; Popescu, Sandu; Roberts, David

    2005-01-01

    It is well known that measurements performed on spatially separated entangled quantum systems can give rise to correlations that are nonlocal, in the sense that a Bell inequality is violated. They cannot, however, be used for superluminal signaling. It is also known that it is possible to write down sets of 'superquantum' correlations that are more nonlocal than is allowed by quantum mechanics, yet are still nonsignaling. Viewed as an information-theoretic resource, superquantum correlations are very powerful at reducing the amount of communication needed for distributed computational tasks. An intriguing question is why quantum mechanics does not allow these more powerful correlations. We aim to shed light on the range of quantum possibilities by placing them within a wider context. With this in mind, we investigate the set of correlations that are constrained only by the no-signaling principle. These correlations form a polytope, which contains the quantum correlations as a (proper) subset. We determine the vertices of the no-signaling polytope in the case that two observers each choose from two possible measurements with d outcomes. We then consider how interconversions between different sorts of correlations may be achieved. Finally, we consider some multipartite examples

  7. Advanced Numerical and Theoretical Methods for Photonic Crystals and Metamaterials

    Science.gov (United States)

    Felbacq, Didier

    2016-11-01

    This book provides a set of theoretical and numerical tools useful for the study of wave propagation in metamaterials and photonic crystals. While concentrating on electromagnetic waves, most of the material can be used for acoustic (or quantum) waves. For each numerical method presented, numerical code written in MATLAB® is provided. The codes are limited to 2D problems and can be easily translated into Python or Scilab, and used directly with Octave as well.

  8. Theoretical foundations of information security investment security companies

    Directory of Open Access Journals (Sweden)

    G.V. Berlyak

    2015-03-01

    Full Text Available The article addresses methodological problems arising from the lack of guidance in the accounting provisions (standards) on how the research object should be reflected in accounting and financial reporting. In this connection, it is proposed to amend the accounting provisions (standards). This will bring consistency to the accounting treatment of operations with elements of investment activity. Based on an analysis of users' information needs, indicative blocks are suggested (a corporate finance block, a block assessing relationships with financial institutions, a block for the fulfillment of settlement obligations, an investment block, a science and innovation block, and investment security), and forms of internal accounting controls as well as improvements to the existing financial statement forms for the investment activities of the enterprise are developed. Using these enterprise reporting forms provides timely and reliable information on the identity and structure of investment security and enables the company to effectively plan and develop personnel policies for enterprise management.

  9. Information System Quality Assessment Methods

    OpenAIRE

    Korn, Alexandra

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, mainly process assessment. In this work the term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships are described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  10. Visual words assignment via information-theoretic manifold embedding.

    Science.gov (United States)

    Deng, Yue; Li, Yipeng; Qian, Yanjun; Ji, Xiangyang; Dai, Qionghai

    2014-10-01

    Codebook-based learning provides a flexible way to extract the contents of an image in a data-driven manner for visual recognition. One central task in such frameworks is codeword assignment, which allocates local image descriptors to the most similar codewords in the dictionary to generate a histogram for categorization. Nevertheless, existing assignment approaches, e.g., the nearest neighbors strategy (hard assignment) and Gaussian similarity (soft assignment), suffer from two problems: 1) an overly strong Euclidean assumption and 2) neglect of the label information of the local descriptors. To address these two challenges, we propose a graph assignment method with maximal mutual information (GAMI) regularization. GAMI harnesses the manifold structure to better reveal the relationships among a massive number of local features via a nonlinear graph metric. Meanwhile, the mutual information of descriptor-label pairs is ultimately optimized in the embedding space for the sake of enhancing the discriminant property of the selected codewords. According to this objective, two optimization models, i.e., inexact-GAMI and exact-GAMI, are respectively proposed in this paper. The inexact model can be efficiently solved with a closed-form solution. The stricter exact-GAMI nonparametrically estimates the entropy of descriptor-label pairs in the embedding space and thus leads to a relatively complicated but still tractable optimization. The effectiveness of the GAMI models is verified on both public and our own datasets.

  11. Multivariate information-theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity.

    Science.gov (United States)

    Lizier, Joseph T; Heinzle, Jakob; Horstmann, Annette; Haynes, John-Dylan; Prokopenko, Mikhail

    2011-02-01

    The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished in using asymmetric, multivariate, information-theoretical analysis, which captures not only directional and non-linear relationships, but also collective interactions. Importantly, the method is able to estimate multivariate information measures with only relatively little data. We demonstrate the method by analyzing functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Importantly, this results in a tiered structure, with known movement planning regions driving visual and motor control regions. Also, we examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning, and between motor cortex and the cerebellum, which is involved in the fine-tuning of motor control. It is likely these methods will find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.
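
    As a hedged illustration of the core quantity (a bivariate discrete toy case; the paper's measures are multivariate and estimated far more carefully), transfer entropy from Y to X asks how much the past of Y reduces uncertainty about the next value of X beyond what X's own past already provides, T(Y→X) = I(X_{t+1}; Y_t | X_t):

        import math
        import random
        from collections import Counter

        def h(samples):
            """Shannon entropy (bits) of a list of hashable samples."""
            n = len(samples)
            return -sum((c / n) * math.log2(c / n)
                        for c in Counter(samples).values())

        def transfer_entropy(x, y):
            """T(Y->X) = H(X+,X) + H(X,Y) - H(X+,X,Y) - H(X), history length 1."""
            x_next, x_now, y_now = x[1:], x[:-1], y[:-1]
            return (h(list(zip(x_next, x_now))) + h(list(zip(x_now, y_now)))
                    - h(list(zip(x_next, x_now, y_now))) - h(x_now))

        # Toy coupled binary processes: x copies y with a one-step delay, plus noise.
        random.seed(0)
        y = [random.randint(0, 1) for _ in range(20000)]
        x = [0] + [yi if random.random() < 0.9 else 1 - yi for yi in y[:-1]]
        print(round(transfer_entropy(x, y), 3))  # Y drives X: clearly positive
        print(round(transfer_entropy(y, x), 3))  # X does not drive Y: near zero

    The asymmetry of the two estimates is what makes the measure directed, in contrast to the symmetric mutual information.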

  12. THEORETICAL APPROACHES TO THE DEFINITION OF THE "INFORMATION RESOURCE"

    OpenAIRE

    I. Netreba

    2014-01-01

    Existing approaches to determining the nature of the category "information resource" are detailed and systematized. The relationships between the categories "information resource", "information technology", and "information management system" are revealed, and the importance of information resources for the production process at the enterprise is determined.

  13. Signal correlations in biomass combustion. An information theoretic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ruusunen, M.

    2013-09-01

    Increasing environmental and economic awareness is driving the development of combustion technologies toward efficient biomass use and clean burning. To accomplish these goals, quantitative information about combustion variables is needed. However, for small-scale combustion units the existing monitoring methods are often expensive or complex. This study aimed to quantify correlations between flue gas temperatures and combustion variables, namely typical emission components, heat output, and efficiency. For this, data acquired from four small-scale combustion units and a large circulating fluidised bed boiler were studied. The fuel range varied from wood logs, wood chips, and wood pellets to biomass residue. Original signals and a defined set of their mathematical transformations were applied to data analysis. In order to evaluate the strength of the correlations, a multivariate distance measure based on information theory was derived. The analysis further assessed time-varying signal correlations and relative time delays. Ranking of the analysis results was based on the distance measure. The uniformity of the correlations in the different data sets was studied by comparing the 10-quantiles of the measured signal. The method was validated with two benchmark data sets. The flue gas temperatures and the combustion variables measured carried similar information. The strongest correlations were mainly linear with the transformed signal combinations and explicable by the combustion theory. Remarkably, the results showed uniformity of the correlations across the data sets with several signal transformations. This was also indicated by simulations using a linear model with constant structure to monitor carbon dioxide in flue gas. Acceptable performance was observed according to three validation criteria used to quantify modelling error in each data set. In general, the findings demonstrate that the presented signal transformations enable real-time approximation of the studied

  14. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory.

    Science.gov (United States)

    Agres, Kat; Abdallah, Samer; Pearce, Marcus

    2018-01-01

    A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different types of statistical information affect listeners' memory for auditory stimuli. We used a combination of behavioral and computational methods to investigate memory for non-linguistic auditory sequences. Participants repeatedly heard tone sequences varying systematically in their information-theoretic properties. Expectedness ratings of tones were collected during three listening sessions, and a recognition memory test was given after each session. Information-theoretic measures of sequential predictability significantly influenced listeners' expectedness ratings, and variations in these properties had a significant impact on memory performance. Predictable sequences yielded increasingly better memory performance with increasing exposure. Computational simulations using a probabilistic model of auditory expectation suggest that listeners dynamically formed a new, and increasingly accurate, implicit cognitive model of the information-theoretic structure of the sequences throughout the experimental session. Copyright © 2017 Cognitive Science Society, Inc.
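
    A hedged sketch of the kind of predictability measure at stake (a bare-bones bigram model standing in for the probabilistic model of auditory expectation used in the study; the sequences below are invented): each tone's information content is its negative log probability given the preceding context, so statistically regular sequences carry fewer bits per tone than irregular ones.

        import math
        from collections import Counter

        def bigram_information_content(sequence):
            """IC(x_t) = -log2 P(x_t | x_{t-1}) under an add-one-smoothed bigram model."""
            alphabet = sorted(set(sequence))
            pair_counts = Counter(zip(sequence, sequence[1:]))
            ctx_counts = Counter(sequence[:-1])
            ics = []
            for prev, cur in zip(sequence, sequence[1:]):
                p = (pair_counts[(prev, cur)] + 1) / (ctx_counts[prev] + len(alphabet))
                ics.append(-math.log2(p))
            return ics

        predictable = list("CDECDECDECDECDECDE")   # repeating three-tone pattern
        irregular = list("CEGDFABEDGFACEBDAGF")    # no repeating structure
        for name, seq in [("predictable", predictable), ("irregular", irregular)]:
            ics = bigram_information_content(seq)
            print(name, round(sum(ics) / len(ics), 2), "bits/tone")

    On the account in the abstract, the low-information sequence is the one listeners come to expect, and remember, more accurately with repeated exposure.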

  15. Information-theoretical analysis of private content identification

    NARCIS (Netherlands)

    Voloshynovskiy, S.; Koval, O.; Beekhof, F.; Farhadzadeh, F.; Holotyak, T.

    2010-01-01

    In recent years, content identification based on digital fingerprinting has attracted considerable attention in different emerging applications. At the same time, the theoretical analysis of digital fingerprinting systems for the finite-length case remains an open issue. Additionally, privacy leaks caused by

  16. Theoretical physics 7 quantum mechanics: methods and applications

    CERN Document Server

    Nolting, Wolfgang

    2017-01-01

    This textbook offers a clear and comprehensive introduction to methods and applications in quantum mechanics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, thus developing the understanding of quantized states further on. The first part of the book introduces the quantum theory of angular momentum and approximation methods. More complex themes are covered in the second part of the book, which describes multiple particle systems and scattering theory. Ideally suited to undergraduate students with some grounding in the basics of quantum mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets.  About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this seri...

  17. Information density converges in dialogue: Towards an information-theoretic model.

    Science.gov (United States)

    Xu, Yang; Reitter, David

    2018-01-01

    The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and the topic shift mechanisms that are different from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contribution in information content displays a new converging pattern. We draw explanations for this pattern from multiple perspectives: first, casting dialogue as an information exchange system would mean that the pattern is the result of two interlocutors maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified.
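
    Testing entropy rate constancy hinges on estimating each sentence's information density: total surprisal under a language model, divided by sentence length, tracked against sentence position. The sketch below is a toy unigram version with made-up sentences; actual studies use trained n-gram or neural language models and regress density on position within topic episodes.

        import math
        from collections import Counter

        def information_density(sentences):
            """Mean per-word surprisal (bits/word) of each sentence under
            a unigram model estimated from the whole corpus."""
            freq = Counter(w for s in sentences for w in s)
            total = sum(freq.values())
            return [sum(-math.log2(freq[w] / total) for w in s) / len(s)
                    for s in sentences]

        dialogue = [
            "hello how are you".split(),
            "i am fine how are you".split(),
            "did you see the game yesterday".split(),
            "yes the game yesterday was really great".split(),
        ]
        # ERC predicts a rising trend of density with sentence position.
        print(information_density(dialogue))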

  18. Methods of Organizational Information Security

    Science.gov (United States)

    Martins, José; Dos Santos, Henrique

    The principal objective of this article is to present a literature review of the methods used for information security at the organizational level. Some of the principal problems are identified, and a first group of relevant dimensions is presented for efficient management of information security. The study is based on a review of some of the more relevant peer-reviewed articles on this theme, of international reports, and of the principal information security management standards. From these readings, we identified methods oriented towards risk management, certification standards, and good practices of information security. Some of the standards are oriented towards certification of the product or system, others towards the business processes. There are also studies proposing frameworks that integrate different approaches, founded on standards focused on technologies and on processes, and taking into consideration the organizational and human environment of organizations. In our perspective, the biggest contribution to information security is the development of an information security method for an organization operating in a conflicting environment. This should provide information security against the possible dimensions of attack that threats could exploit through the vulnerabilities of organizational assets. Such a method should support the concepts of "network centric warfare", "information superiority" and "information warfare" especially developed in this last decade, where information is seen simultaneously as a weapon and as a target.

  19. A Theoretical Approach to Information Needs Across Different Healthcare Stakeholders

    Science.gov (United States)

    Raitoharju, Reetta; Aarnio, Eeva

    Increased access to medical information can lead to information overload among both employees in the healthcare sector and healthcare consumers. Moreover, medical information can be hard to understand for consumers who lack the prerequisites for interpreting it. Information systems (e.g. electronic patient records) are normally designed to meet the demands of one professional group, for instance those of physicians. Therefore, the same information in the same form is presented to all users of the systems regardless of their actual needs or prerequisites. The purpose of this article is to illustrate the differences in information needs across different stakeholders in healthcare. A literature review was conducted to collect examples of these different information needs. Based on the findings, the role of more user-specific information systems is discussed.

  20. Experimental and Theoretical Methods in Algebra, Geometry and Topology

    CERN Document Server

    Veys, Willem; Bridging Algebra, Geometry, and Topology

    2014-01-01

    Algebra, geometry and topology cover a variety of different, but intimately related research fields in modern mathematics. This book focuses on specific aspects of this interaction. The present volume contains refereed papers which were presented at the International Conference “Experimental and Theoretical Methods in Algebra, Geometry and Topology”, held in Eforie Nord (near Constanta), Romania, during 20-25 June 2013. The conference was devoted to the 60th anniversary of the distinguished Romanian mathematicians Alexandru Dimca and Ştefan Papadima. The selected papers consist of original research work and a survey paper. They are intended for a large audience, including researchers and graduate students interested in algebraic geometry, combinatorics, topology, hyperplane arrangements and commutative algebra. The papers are written by well-known experts from different fields of mathematics, affiliated with universities from all over the world; they cover a broad range of topics and explore the research f...

  1. An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings

    Directory of Open Access Journals (Sweden)

    Siaw-Teng Liaw

    2014-10-01

    Full Text Available Introduction: Increasing investment in eHealth aims to improve cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. Methods: We searched PubMed, Medline, Web of Science, ABI Inform (ProQuest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. Findings: There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly ‘big-data’ environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. Conclusions: The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine the applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.

  2. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    Science.gov (United States)

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
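
    The similarity computation named here, Renyi's quadratic cross entropy via the cross information potential (CIP), is straightforward to sketch with Parzen windows and Gaussian kernels: CIP is the mean pairwise kernel value between the two neurons' data subsets, and the cross entropy is -log(CIP). The bandwidth and synthetic 2-D data below are illustrative assumptions; the SOM bookkeeping and single-linkage grouping are omitted.

        import numpy as np

        def cross_information_potential(x, y, sigma=0.5):
            """CIP between two sample sets via Gaussian Parzen windows;
            the kernel variance doubles under the convolution of the two
            density estimates. Renyi quadratic cross entropy = -log(CIP)."""
            sq = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
            d = x.shape[1]
            k = np.exp(-sq / (4 * sigma**2)) / (4 * np.pi * sigma**2) ** (d / 2)
            return k.mean()

        rng = np.random.default_rng(0)
        a = rng.normal(0.0, 1.0, (200, 2))   # data subset of one neuron
        b = rng.normal(0.2, 1.0, (200, 2))   # adjacent neuron, same cluster
        c = rng.normal(4.0, 1.0, (200, 2))   # neuron across a cluster gap
        print(-np.log(cross_information_potential(a, b)))  # lower value
        print(-np.log(cross_information_potential(a, c)))  # sharply higher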

  3. Human papillomavirus (HPV) information needs: a theoretical framework

    Science.gov (United States)

    Marlow, Laura A V; Wardle, Jane; Waller, Jo; Grant, Nina

    2009-01-01

    Background With the introduction of human papillomavirus (HPV) testing and vaccination in the UK, health professionals will start to receive questions about the virus from their patients. This study aimed to identify the key questions about HPV that British women will ask when considering having an HPV test or vaccination. Methods Face-to-face interviews were carried out with 21 women to discover what they wanted to know about HPV. A thematic framework approach was used to analyse the data and identify key themes in women's HPV knowledge requirements. Results Women's questions about HPV fell into six areas: identity (e.g. What are the symptoms?), cause (e.g. How do you get HPV?), timeline (e.g. How long does it last?), consequences (e.g. Does it always cause cervical cancer?) and control-cure (e.g. Can you prevent infection?). In addition, they asked procedural questions about testing and vaccination (e.g. Where do I get an HPV test?). These mapped well onto the dimensions identified in Leventhal's description of lay models of illness, called the 'Common Sense Model' (CSM). Discussion and conclusions These results indicated that the majority of the questions women asked about HPV fitted well into the CSM, which therefore provides a structure for women's information needs. The findings could help health professionals understand what questions they may be expected to answer. Framing educational materials using the CSM themes may also help health educators achieve a good fit with what the public want to know. PMID:19126314

  4. Theoretical studies of potential energy surfaces and computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, R. [Argonne National Laboratory, IL (United States)

    1993-12-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  5. Information-theoretic treatment of tripartite systems and quantum channels

    International Nuclear Information System (INIS)

    Coles, Patrick J.; Yu Li; Gheorghiu, Vlad; Griffiths, Robert B.

    2011-01-01

    A Holevo measure is used to discuss how much information about a given positive operator valued measure (POVM) on system a is present in another system b, and how this influences the presence or absence of information about a different POVM on a in a third system c. The main goal is to extend information theorems for mutually unbiased bases or general bases to arbitrary POVMs, and especially to generalize "all-or-nothing" theorems about information located in tripartite systems to the case of partial information, in the form of quantitative inequalities. Some of the inequalities can be viewed as entropic uncertainty relations that apply in the presence of quantum side information, as in recent work by Berta et al. [Nature Physics 6, 659 (2010)]. All of the results also apply to quantum channels: For example, if E accurately transmits certain POVMs, the complementary channel F will necessarily be noisy for certain other POVMs. While the inequalities are valid for mixed states of tripartite systems, restricting to pure states leads to the basis invariance of the difference between the information about a contained in b and c.
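
    For reference, the Holevo quantity underlying these bounds is standard; with POVM outcomes j occurring with probabilities p_j and leaving system b in conditional states rho_j^b, the information in b about the POVM on a is (notation adapted here, not copied from the paper)

        \chi(b) \;=\; S\Big(\sum_j p_j\,\rho_j^{\,b}\Big) \;-\; \sum_j p_j\, S\big(\rho_j^{\,b}\big),
        \qquad
        S(\rho) \equiv -\operatorname{Tr}\big(\rho \log \rho\big),

    and the basis-invariance result cited above concerns the difference between this quantity evaluated for b and for c when the tripartite state is pure.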

  6. Sentence Comprehension as Mental Simulation: An Information-Theoretic Perspective

    Directory of Open Access Journals (Sweden)

    Gabriella Vigliocco

    2011-11-01

    Full Text Available It has been argued that the mental representation resulting from sentence comprehension is not (just) an abstract symbolic structure but a “mental simulation” of the state-of-affairs described by the sentence. We present a particular formalization of this theory and show how it gives rise to quantifications of the amount of syntactic and semantic information conveyed by each word in a sentence. These information measures predict simulated word-processing times in a dynamic connectionist model of sentence comprehension as mental simulation. A quantitatively similar relation between information content and reading time is known to be present in human reading-time data.

  7. Theoretical Model of Development of Information Competence among Students Enrolled in Elective Courses

    Science.gov (United States)

    Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis

    2016-01-01

    The current study focuses on creating a theoretical model of the development of information competence among students enrolled in elective courses. In order to examine specific features of this model, we performed an analysis of…

  8. 31st International Colloquium in Group Theoretical Methods in Physics

    CERN Document Server

    Gazeau, Jean-Pierre; Faci, Sofiane; Micklitz, Tobias; Scherer, Ricardo; Toppan, Francesco

    2017-01-01

    This proceedings volume records the 31st International Colloquium on Group Theoretical Methods in Physics (“Group 31”). Plenary-invited articles propose new approaches to the moduli spaces in gauge theories (V. Pestun, 2016 Weyl Prize Awardee), the phenomenology of neutrinos in non-commutative space-time, the use of Hardy spaces in quantum physics, contradictions in the use of statistical methods on complex systems, and alternative models of supersymmetry. This volume’s survey articles broaden the colloquia’s scope out into Majorana neutrino behavior, the dynamics of radiating charges, statistical pattern recognition of amino acids, and a variety of applications of gauge theory, among others. This year’s proceedings further honor Bertram Kostant (2016 Wigner Medalist), as well as S.T. Ali and L. Boyle, for their life-long contributions to the math and physics communities. The aim of the ICGTMP is to provide a forum for physicists, mathematicians, and scientists of related disciplines who develop or apply ...

  9. An Information-Theoretic Approach for Indirect Train Traffic Monitoring Using Building Vibration

    Directory of Open Access Journals (Sweden)

    Susu Xu

    2017-05-01

    Full Text Available This paper introduces an indirect train traffic monitoring method to detect and infer real-time train events based on the vibration response of a nearby building. Monitoring and characterizing traffic events are important for cities to improve the efficiency of transportation systems (e.g., train passing, heavy trucks, and traffic). Most prior work falls into two categories: (1) methods that require intensive labor to manually record events or (2) systems that require deployment of dedicated sensors. These approaches are difficult and costly to execute and maintain. In addition, most prior work uses dedicated sensors designed for a single purpose, resulting in deployment of multiple sensor systems. This further increases costs. Meanwhile, with the increasing demands of structural health monitoring, many vibration sensors are being deployed in commercial buildings. Traffic events create ground vibration that propagates to nearby building structures inducing noisy vibration responses. We present an information-theoretic method for train event monitoring using commonly existing vibration sensors deployed for building health monitoring. The key idea is to represent the wave propagation in a building induced by train traffic as information conveyed in noisy measurement signals. Our technique first uses wavelet analysis to detect train events. Then, by analyzing information exchange patterns of building vibration signals, we infer the category of the events (i.e., southbound or northbound train). Our algorithm is evaluated with an 11-story building where trains pass by frequently. The results show that the method can robustly achieve a train event detection accuracy of up to a 93% true positive rate and an 80% true negative rate. For direction categorization, compared with the traditional signal processing method, our information-theoretic approach reduces categorization error from 32.1 to 12.1%, which is a 2.5× improvement.
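
    The detection stage can be sketched briefly: decompose the building vibration signal with a discrete wavelet transform and flag windows whose detail-band energy exceeds a threshold. The wavelet family, level, threshold rule, and synthetic signal below are illustrative assumptions, not the paper's tuned detector, and the direction-categorization stage via information exchange patterns is omitted.

        import numpy as np
        import pywt

        def detect_events(signal, wavelet="db4", level=4, k=4.0):
            """Indices (in the coarsest detail band) where wavelet detail
            energy exceeds mean + k*std, flagged as candidate train events."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            energy = coeffs[1] ** 2
            return np.where(energy > energy.mean() + k * energy.std())[0]

        rng = np.random.default_rng(1)
        vibration = rng.normal(0, 1, 4096)
        # Inject a burst mimicking a train-induced vibration response.
        vibration[2000:2100] += 6 * np.sin(np.linspace(0, 40 * np.pi, 100))
        print(detect_events(vibration))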

  10. Integrated information in discrete dynamical systems: motivation and theoretical framework.

    Directory of Open Access Journals (Sweden)

    David Balduzzi

    2008-06-01

    Full Text Available This paper introduces a time- and state-dependent measure of integrated information, phi, which captures the repertoire of causal states available to a system as a whole. Specifically, phi quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) phi varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) phi varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) phi varies as a function of network architecture. High phi values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high phi because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high phi but are inefficient. (iv) In Hopfield networks, phi is low for attractor states and neutral states, but increases if the networks are optimized

  11. Integrated information in discrete dynamical systems: motivation and theoretical framework.

    Science.gov (United States)

    Balduzzi, David; Tononi, Giulio

    2008-06-13

    This paper introduces a time- and state-dependent measure of integrated information, phi, which captures the repertoire of causal states available to a system as a whole. Specifically, phi quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) phi varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) phi varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) phi varies as a function of network architecture. High phi values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high phi because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high phi but are inefficient. (iv) In Hopfield networks, phi is low for attractor states and neutral states, but increases if the networks are optimized
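
    The headline quantity can be made concrete for a tiny deterministic network. When such a system enters a state, the uncertainty about the previous state drops from the uniform repertoire (n bits for n binary elements) to the preimage of that state under the update rule; the reduction is the information generated by the whole. The sketch below computes only that whole-system quantity (phi additionally discounts what the parts generate independently), and both update rules are hypothetical examples.

        from itertools import product
        import math

        def information_generated(update, state, n):
            """Bits generated when a deterministic n-node Boolean network
            enters `state`: n minus log2 of the preimage size."""
            preimage = [s for s in product((0, 1), repeat=n)
                        if update(s) == state]
            if not preimage:
                return None                    # unreachable state
            return n - math.log2(len(preimage))

        def copy_update(s):                    # each node copies a neighbour
            return (s[2], s[0], s[1])

        def mixed_update(s):                   # AND / OR / XOR of inputs
            return (s[0] & s[1], s[1] | s[2], s[0] ^ s[2])

        print(information_generated(copy_update, (1, 0, 1), 3))   # 3.0 bits
        print(information_generated(mixed_update, (0, 1, 1), 3))  # 2.0 bits

    Note that the copy network generates full information as a whole while its phi would be low, since each part accounts for its own bit independently, which is exactly the "merely copy or replay" caveat in the abstract.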

  12. Towards an Information Theoretic Analysis of Searchable Encryption (Extended Version)

    NARCIS (Netherlands)

    Sedghi, S.; Doumen, J.M.; Hartel, Pieter H.; Jonker, Willem

    2008-01-01

    Searchable encryption is a technique that allows a client to store data in encrypted form on a curious server, such that data can be retrieved while leaking a minimal amount of information to the server. Many searchable encryption schemes have been proposed and proved secure in their own

  13. Towards an Information Theoretic Analysis of Searchable Encryption

    NARCIS (Netherlands)

    Sedghi, S.; Doumen, J.M.; Hartel, Pieter H.; Jonker, Willem

    2008-01-01

    Searchable encryption is a technique that allows a client to store data in encrypted form on a curious server, such that data can be retrieved while leaking a minimal amount of information to the server. Many searchable encryption schemes have been proposed and proved secure in their own

  14. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.

    Science.gov (United States)

    Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A

    2018-06-01

    This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
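
    The core computation is a difference of entropy rates: a sensor's symbol stream is modeled as a Markov chain, then as an x-Markov chain conditioned on the network information state, and the entropy reduction is read as that sensor's information contribution. The sketch below uses binary symbols and a synthetic sticky network state; the paper instead symbolizes real time series and derives the network state from the largest principal component.

        import numpy as np

        def entropy_rate(seq, m):
            """Entropy rate (bits/symbol) of a first-order Markov chain
            estimated from `seq` over m symbols (Laplace smoothing)."""
            C = np.ones((m, m))
            for a, b in zip(seq[:-1], seq[1:]):
                C[a, b] += 1
            P = C / C.sum(axis=1, keepdims=True)
            pi = C.sum(axis=1) / C.sum()
            return -np.sum(pi[:, None] * P * np.log2(P))

        def x_entropy_rate(seq, cond, m):
            """Entropy rate of `seq` with transitions conditioned on the
            co-evolving network-state stream `cond` (an x-Markov machine)."""
            C = np.ones((m, m, m))
            for (a, b), c in zip(zip(seq[:-1], seq[1:]), cond[:-1]):
                C[c, a, b] += 1
            P = C / C.sum(axis=2, keepdims=True)
            w = C.sum(axis=2) / C.sum()
            return -np.sum(w[:, :, None] * P * np.log2(P))

        rng = np.random.default_rng(2)
        state = np.zeros(5000, dtype=int)          # sticky network state
        for t in range(1, state.size):
            state[t] = state[t - 1] if rng.random() < 0.9 else 1 - state[t - 1]
        informative = np.where(rng.random(state.size) < 0.1, 1 - state, state)
        noisy = rng.integers(0, 2, state.size)     # ignores the network
        for sensor in (informative, noisy):
            print(entropy_rate(sensor, 2) - x_entropy_rate(sensor, state, 2))
        # The entropy gap is clearly positive only for the informative sensor.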

  15. Theoretical and experimental investigation of multispectral photoacoustic osteoporosis detection method

    Science.gov (United States)

    Steinberg, Idan; Hershkovich, Hadas Sara; Gannot, Israel; Eyal, Avishay

    2014-03-01

    Osteoporosis is a widespread disorder, which has a catastrophic impact on patients' lives and overwhelming related healthcare costs. Recently, we proposed a multispectral photoacoustic technique for early detection of osteoporosis. Such a technique has great advantages over pure ultrasonic or optical methods as it allows the deduction of both bone functionality from the bone absorption spectrum and bone resistance to fracture from the characteristics of the ultrasound propagation. We demonstrated the propagation of multiple acoustic modes in animal bones in-vitro. To further investigate the effects of multiple wavelength excitations and of induced osteoporosis on the PA signal, a multispectral photoacoustic system is presented. The experimental investigation is based on measuring the interference of multiple acoustic modes. The performance of the system is evaluated and a simple two-mode theoretical model is fitted to the measured phase signals. The results show that such a PA technique is accurate and repeatable. Then a multiple wavelength excitation is tested. It is shown that the PA response due to different excitation wavelengths reveals that absorption by the different bone constituents has a profound effect on the mode generation. The PA response is measured at a single wavelength before and after induced osteoporosis. Results show that induced osteoporosis alters the measured amplitude and phase in a consistent manner which allows the detection of the onset of osteoporosis. These results suggest that a complete characterization of the bone over a region of both acoustic and optical frequencies might be used as a powerful tool for in-vivo bone evaluation.

  16. Number theoretic methods in cryptography: complexity lower bounds

    CERN Document Server

    Shparlinski, Igor

    1999-01-01

    The book introduces new techniques which imply rigorous lower bounds on the complexity of some number theoretic and cryptographic problems. These methods and techniques are based on bounds of character sums and numbers of solutions of some polynomial equations over finite fields and residue rings. It also contains a number of open problems and proposals for further research. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of polynomials, algebraic functions, Boolean functions, and linear recurring sequences coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^(1/2+ε)). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic...

  17. Information-theoretic characterization of dynamic energy systems

    Science.gov (United States)

    Bevis, Troy Lawson

    sources are compounded by the dynamics of the grid itself. Loads are constantly changing, as well as the sources; this can sometimes lead to a quick change in system states. There is a need for a metric that takes into consideration all of the factors detailed above: the amount of information that is available in the system and the rate at which that information loses its value. In a dynamic system, information is only valid for a length of time, and the controller must take into account the decay of currently held information. This thesis presents information theory metrics in a way that is useful for application to dynamic energy systems. A test case involving synchronization of several generators is presented for analysis and application of the theory. The objective is to synchronize all the generators and connect them to a common bus. As the phase shift of each generator is a random process, the effects of latency and information decay can be directly observed. The results of the experiments clearly show that the expected outcomes are observed and that entropy and information theory provide a valid metric for timing requirement extraction.

  18. An Information-Theoretic Approach to PMU Placement in Electric Power Systems

    OpenAIRE

    Li, Qiao; Cui, Tao; Weng, Yang; Negi, Rohit; Franchetti, Franz; Ilic, Marija D.

    2012-01-01

    This paper presents an information-theoretic approach to address the phasor measurement unit (PMU) placement problem in electric power systems. Different from the conventional 'topological observability' based approaches, this paper advocates a much more refined, information-theoretic criterion, namely the mutual information (MI) between the PMU measurements and the power system states. The proposed MI criterion can not only include the full system observability as a special case, but also ca...
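
    Under a linear-Gaussian measurement model, an MI criterion of this kind has a closed form, I(x; y_S) = (1/2) log det(I + H_S Σ H_S^T / σ²), and placements can be chosen greedily. The matrices and the greedy routine below are illustrative assumptions rather than the paper's formulation; log-det MI objectives of this form are submodular, which is what makes greedy selection attractive.

        import numpy as np

        def greedy_placement(H, Sigma, noise_var, k):
            """Greedily pick k rows of H (candidate PMU measurements)
            maximizing I(x; y_S) = 0.5*logdet(I + H_S Sigma H_S' / noise)."""
            chosen, remaining = [], list(range(H.shape[0]))
            for _ in range(k):
                def mi(S):
                    Hs = H[S]
                    M = np.eye(len(S)) + Hs @ Sigma @ Hs.T / noise_var
                    return 0.5 * np.linalg.slogdet(M)[1]
                best = max(remaining, key=lambda j: mi(chosen + [j]))
                chosen.append(best)
                remaining.remove(best)
            return chosen, mi(chosen)

        rng = np.random.default_rng(3)
        H = rng.normal(size=(10, 6))   # 10 candidate measurements, 6 states
        buses, info = greedy_placement(H, np.eye(6), noise_var=0.01, k=3)
        print(buses, info)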

  19. Theoretical Modelling Methods for Thermal Management of Batteries

    Directory of Open Access Journals (Sweden)

    Bahman Shabani

    2015-09-01

    Full Text Available The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether it is a back-up diesel generator or a connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS) to store the excess renewable energy and then reuse this energy when the renewable energy source is insufficient to meet the demand. The performance of an ESS, amongst other things, is affected by the design, the materials used and the operating temperature of the system. The operating temperature is critical since operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating it at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that would operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours of modelling thermal management systems for these systems. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS, highlighting the advantages and disadvantages of each approach.

  20. Multi-way Communications: An Information Theoretic Perspective

    KAUST Repository

    Chaaban, Anas

    2015-09-15

    Multi-way communication is a means to significantly improve the spectral efficiency of wireless networks. For instance, in a bi-directional (or two-way) communication channel, two users can simultaneously use the transmission medium to exchange information, thus achieving up to twice the rate that would be achieved had each user transmitted separately. Multi-way communications provides an overview of the developments in this research area since it was initiated by Shannon. The basic two-way communication channel is considered first, followed by the two-way relay channel obtained by the deployment of an additional cooperative relay node to improve the overall communication performance. This basic setup is then extended to multi-user systems. For all these setups, fundamental limits on the achievable rates are reviewed, thereby making use of a linear high-SNR deterministic channel model to provide valuable insights which are helpful when discussing the coding schemes for Gaussian channel models in detail. Several tools and communication strategies are used in the process, including (but not limited to) computation, signal-space alignment, and nested-lattice codes. Finally, extensions of multi-way communication channels to multiple antenna settings are discussed. © 2015 A. Chaaban and A. Sezgin.

  1. Information theoretical assessment of visual communication with subband coding

    Science.gov (United States)

    Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.

    1994-09-01

    A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally this role has been analyzed strictly in the digital domain, neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is 'suboptimal'. We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.

  2. Information geometric methods for complexity

    Science.gov (United States)

    Felice, Domenico; Cafaro, Carlo; Mancini, Stefano

    2018-03-01

    Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family and volumes of curved parameter manifolds, are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.
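
    A textbook instance of the machinery surveyed here (chosen for concreteness; it is not drawn from the article itself) is the Fisher-Rao metric on a parametric family, and its form for the univariate Gaussian family:

        g_{ij}(\theta) \;=\; \mathbb{E}\!\left[\,\partial_i \log p(x\mid\theta)\;\partial_j \log p(x\mid\theta)\,\right],
        \qquad
        ds^2 \;=\; \frac{d\mu^2 + 2\,d\sigma^2}{\sigma^2}
        \quad\text{for}\quad p(x\mid\theta) = \mathcal{N}(\mu, \sigma^2).

    This statistical manifold has constant negative scalar curvature, the same kind of curvature quantity the review uses to characterize phase transitions.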

  3. Information Retrieval Methods in Libraries and Information Centers ...

    African Journals Online (AJOL)

    The volumes of information created, generated and stored are immense that without adequate knowledge of information retrieval methods, the retrieval process for an information user would be cumbersome and frustrating. Studies have further revealed that information retrieval methods are essential in information centers ...

  4. METHODS OF POLYMODAL INFORMATION TRANSMISSION

    Directory of Open Access Journals (Sweden)

    O. O. Basov

    2015-03-01

    Full Text Available Research results on the application of existing information transmission methods in polymodal infocommunication systems are presented. An analysis of the existing switching methods and multiplexing schemes reveals that modern telecommunication systems are capable of delivering polymodal information with the required quality to the subscriber terminal. Achieving this in data transmission networks with static time multiplexing consumes substantial capacity, but modality synchronization is easier to provide within that kind of infrastructure. Data networks with statistical time multiplexing demand more sophisticated algorithms to guarantee the quality of data block delivery; moreover, because of stochastic data block delays, modality synchronization during off-line processing is more difficult to provide. Nowadays there are objective preconditions for realizing data networking that is invariant to the transmission technology applied. This capability is afforded by the wide (person-to-person) application of optical technologies in the transport infrastructure of polymodal infocommunication systems. If the operating modes of the subscriber terminal and the network are matched, it becomes possible to organize channels that adaptively select the most effective networking technology according to the current volume allocation and the modality types in the messages.

  5. Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics

    Energy Technology Data Exchange (ETDEWEB)

    Tsourtis, Anastasios, E-mail: tsourtis@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, Crete (Greece); Pantazis, Yannis, E-mail: pantazis@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Harmandaris, Vagelis, E-mail: harman@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, and Institute of Applied and Computational Mathematics (IACM), Foundation for Research and Technology Hellas (FORTH), GR-70013 Heraklion, Crete (Greece)

    2015-07-07

    In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous time and continuous space Markov processes represented by stochastic differential equations. Particularly, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method to two different molecular stochastic systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with parameter sensitivities on three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.
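
    Both quantities named here reduce to ergodic averages of force-field derivatives along a single trajectory, which is what makes the method tractable. As a minimal sketch (a 1-D overdamped Langevin model with a double-well potential; the potential, parameters, and prefactor are assumptions for illustration, with the constant only indicative), the pathwise Fisher information for a stiffness parameter can be estimated as follows:

        import numpy as np

        # Simulate overdamped Langevin dynamics dx = F(x)dt + sqrt(2/beta)dW
        # in the double-well potential U(x) = k*(x^2 - 1)^2 / 4.
        rng = np.random.default_rng(6)
        k, beta, dt, n = 1.0, 1.0, 1e-3, 200_000
        x, traj = 0.0, np.empty(n)
        for i in range(n):
            force = -k * x * (x ** 2 - 1)
            x += force * dt + np.sqrt(2 * dt / beta) * rng.normal()
            traj[i] = x

        # Pathwise FIM for parameter k: up to a model-dependent constant,
        # the ergodic average of the squared force sensitivity dF/dk.
        dF_dk = -traj * (traj ** 2 - 1)
        fim_k = beta / 4.0 * np.mean(dF_dk ** 2)
        print(fim_k)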

  6. Investigating nurse practitioners in the private sector: a theoretically informed research protocol.

    Science.gov (United States)

    Adams, Margaret; Gardner, Glenn; Yates, Patsy

    2017-06-01

    To report a study protocol and the theoretical framework, normalisation process theory, that informs this protocol for a case study investigation of private sector nurse practitioners. Most research evaluating nurse practitioner service is focused on public, mainly acute care environments where nurse practitioner service is well established with strong structures for governance and sustainability. Conversely, there is lack of clarity in governance for emerging models in the private sector. In a climate of healthcare reform, nurse practitioner service is extending beyond the familiar public health sector. Further research is required to inform knowledge of the practice, operational framework and governance of new nurse practitioner models. The proposed research will use a multiple exploratory case study design to examine private sector nurse practitioner service. Data collection includes interviews, surveys and audits. A sequential mixed method approach to analysis of each case will be conducted. Findings from within-case analysis will lead to a meta-synthesis across all four cases to gain a holistic understanding of the cases under study, private sector nurse practitioner service. Normalisation process theory will be used to guide the research process, specifically coding and analysis of data using theory constructs and the relevant components associated with those constructs. This article provides a blueprint for the research and describes a theoretical framework, normalisation process theory, in terms of its flexibility as an analytical framework. Consistent with the goals of best research practice, this study protocol will inform the research community in the field of primary health care about emerging research in this field. Publishing a study protocol ensures researcher fidelity to the analysis plan and supports research collaboration across teams. © 2016 John Wiley & Sons Ltd.

  7. Numerical Methods Application for Reinforced Concrete Elements - Theoretical Approach for Direct Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Sergiu Ciprian Catinas

    2015-07-01

    Full Text Available A detailed theoretical and practical investigation of the reinforced concrete elements is due to recent techniques and method that are implemented in the construction market. More over a theoretical study is a demand for a better and faster approach nowadays due to rapid development of the calculus technique. The paper above will present a study for implementing in a static calculus the direct stiffness matrix method in order capable to address phenomena related to different stages of loading, rapid change of cross section area and physical properties. The method is a demand due to the fact that in our days the FEM (Finite Element Method is the only alternative to such a calculus and FEM are considered as expensive methods from the time and calculus resources point of view. The main goal in such a method is to create the moment-curvature diagram in the cross section that is analyzed. The paper above will express some of the most important techniques and new ideas as well in order to create the moment curvature graphic in the cross sections considered.

  8. An information-theoretic approach to assess practical identifiability of parametric dynamical systems.

    Science.gov (United States)

    Pant, Sanjay; Lombardi, Damiano

    2015-10-01

    A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement-noise into account and is not restricted to any particular noise-structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature.
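
    In practice, the mutual information between a parameter and noisy observables can be estimated non-parametrically with k-nearest-neighbour estimators. The sketch below leans on scikit-learn's k-NN-based estimator rather than the paper's own Monte Carlo/k-NN construction, and the exponential-decay model, priors, and noise level are assumptions for illustration:

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        # Toy model: x(t) = exp(-a*t), observed once with Gaussian noise.
        # Parameter b is sampled but never enters the model, so it should
        # come out as practically unidentifiable (near-zero MI).
        rng = np.random.default_rng(4)
        n = 5000
        a = rng.uniform(0.1, 2.0, n)
        b = rng.uniform(0.1, 2.0, n)
        y = np.exp(-a * 1.0) + rng.normal(0.0, 0.05, n)
        mi = mutual_info_regression(np.column_stack([a, b]), y, random_state=0)
        print(mi)   # clearly positive for a, approximately zero for b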

  9. Information Retrieval Methods in Libraries and Information ...

    African Journals Online (AJOL)

    without adequate knowledge of information retrieval methods, the retrieval process for an ... discusses the concept of information retrieval, the various information ... Other advantages of automatic indexing are the maintenance of consistency.

  10. Theoretical framework for government information service delivery to deep rural communities in South Africa

    CSIR Research Space (South Africa)

    Mvelase, PS

    2009-10-01

    Full Text Available This paper reports on a study to determine the information requirements of communities in deep rural areas on government services and how this information can be made available to them. The study then proposes an e-government theoretical framework...

  11. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    A theoretical critical heat flux (CHF) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles in contact with the heated wall prevents bulk liquid from reaching the wall at near-CHF conditions. Comparisons between the model predictions and experimental data result in satisfactory agreement, within less than 9.73% root-mean-square error, with an appropriate choice of the critical void fraction in the bubbly layer. The present model shows comparable performance with the CHF look-up table of Groeneveld et al. 28 refs., 11 figs., 1 tab. (Author)

  12. Theoretical prediction method of subcooled flow boiling CHF

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A theoretical critical heat flux (CHF) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles in contact with the heated wall prevents bulk liquid from reaching the wall at near-CHF conditions. Comparisons between the model predictions and experimental data result in satisfactory agreement, within less than 9.73% root-mean-square error, with an appropriate choice of the critical void fraction in the bubbly layer. The present model shows comparable performance with the CHF look-up table of Groeneveld et al. 28 refs., 11 figs., 1 tab. (Author)

  13. Detecting Network Vulnerabilities Through Graph Theoretical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Cesarz, Patrick; Pomann, Gina-Maria; Torre, Luis de la; Villarosa, Greta; Flournoy, Tamara; Pinar, Ali; Meza, Juan

    2007-09-30

    Identifying vulnerabilities in power networks is an important problem, as even a small number of vulnerable connections can cause billions of dollars in damage to a network. In this paper, we investigate a graph theoretical formulation for identifying vulnerabilities of a network. We first try to find the most critical components in a network by finding an optimal solution for each possible cutsize constraint for the relaxed version of the inhibiting bisection problem, which aims to find loosely coupled subgraphs with significant demand/supply mismatch. Then we investigate finding critical components by finding a flow assignment that minimizes the maximum among flow assignments on all edges. We also report experiments on IEEE 30, IEEE 118, and WSCC 179 benchmark power networks.

  14. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    Energy Technology Data Exchange (ETDEWEB)

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro [Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Guzmán, Orlando [Departamento de Física, Universidad Autónoma Metropolitana, Iztapalapa, DF 09340, México (Mexico); Hernández-Ortiz, Juan P. [Departamento de Materiales y Minerales, Universidad Nacional de Colombia, Sede Medellín, Medellín (Colombia); Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Pablo, Juan J. de, E-mail: depablo@uchicago.edu [Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  15. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    Energy Technology Data Exchange (ETDEWEB)

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
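
    The sampling idea is easy to demonstrate on a scalar stand-in: discretize a Landau-type free energy on a grid and accept random local field moves with the Metropolis rule. The 1-D scalar field below replaces the alignment tensor of the Landau-de Gennes functional, and all coefficients are illustrative assumptions; the point is that random initial configurations can relax past metastable morphologies.

        import numpy as np

        def free_energy(phi, a=-1.0, c=1.0, kappa=1.0, h=1.0):
            """Discretized Landau-type functional: double-well bulk term
            plus a finite-difference square-gradient penalty (periodic)."""
            bulk = 0.5 * a * phi**2 + 0.25 * c * phi**4
            grad = 0.5 * kappa * ((np.roll(phi, -1) - phi) / h) ** 2
            return h * np.sum(bulk + grad)

        rng = np.random.default_rng(5)
        n, T, steps = 64, 0.05, 100_000
        phi = rng.normal(0, 0.1, n)          # random initial configuration
        F = free_energy(phi)
        for _ in range(steps):               # Metropolis sampling
            i = rng.integers(n)
            old = phi[i]
            phi[i] += rng.normal(0, 0.2)
            dF = free_energy(phi) - F
            if dF < 0 or rng.random() < np.exp(-dF / T):
                F += dF                      # accept the move
            else:
                phi[i] = old                 # reject and restore
        print(F)   # near a uniform well phi = +/-1, barring domain walls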

  16. Theoretical studies of densiometric methods using γ-radiation

    International Nuclear Information System (INIS)

    Luebbesmeyer, D.; Wesser, U.

    1975-10-01

    Some conclusions can be drawn from the calculations performed regarding the practical measuring method to be applied: 1) The incident-beam method for the density measurement of an inhomogeneous two-phase flow is subject to considerable errors. 2) If, for reasons of cost, only two detectors are used in the measuring chains, the scattered-beam method is more advantageous than the two-beam method. 3) If three detectors can be used, greater accuracy can be expected than with the scattered-beam method. 4) The accuracy of all methods increases if a certain homogeneity of part of the flow can be assumed. 5) The most favourable energy region differs between scattered-beam and multi-beam techniques: whereas the scattered-beam method performs best at energies of about 60 keV, owing to the enlarged scattering cross sections at low radiation energies, the energies for multi-beam methods should be above 100 keV. 6) If ease of calibration is important, the multi-beam method is preferable to the scattered-beam method. A good compromise between instrumental expenditure and the attainable accuracy is the three-beam method with, e.g., Cs-137 as a source. (orig./LH) [de]
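
    For orientation only, the following back-of-the-envelope Python sketch shows the attenuation relation underlying the beam methods compared above; the mass attenuation coefficient, path length and count rates are invented numbers, not values from the report.

        import math

        mu_m = 0.0077          # mass attenuation coefficient, m^2/kg (assumed, roughly Cs-137-like)
        x = 0.10               # beam path length through the flow channel, m
        I0, I = 1.0e5, 6.3e4   # hypothetical count rates without / with the medium

        # Beer-Lambert: I = I0 * exp(-mu_m * rho * x), so the chord-averaged density is
        rho = -math.log(I / I0) / (mu_m * x)
        print(f"chord-averaged density: {rho:.0f} kg/m^3")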

  17. Theoretical analysis and experimental study of spray degassing method

    International Nuclear Information System (INIS)

    Wu Ruizhi; Shu Da; Sun Baode; Wang Jun; Li Fei; Chen Haiyan; Lu YanLing

    2005-01-01

    A new hydrogen-removal method for aluminum melt, spray degassing, is presented, and the thermodynamic and kinetic analyses of the method are discussed. A comparison between the thermodynamics and kinetics of the spray degassing method and the rotary impeller degassing method is made. The thermodynamic analysis shows that the relationship between the final hydrogen content of the aluminum melt and the ratio of purge gas flow rate to melt flow rate (G/q) is linear. The thermodynamic calculation shows that, in spray degassing, when the ratio G/q is larger than 2.2 × 10⁻⁶, the final hydrogen content will be less than 0.1 ml/100 g Al. From the kinetic analysis, the degassing effect is determined both by the size of the melt droplets and by the time the droplets take to travel from the sprayer to the bottom of the treatment tank. In a numerical calculation, the hydrogen content of the aluminum melt is reduced from 0.2 ml/100 g Al to 0.05 ml/100 g Al in 0.02 s with the spray degassing method. Finally, water-model experiments with the spray degassing and rotary impeller degassing methods are presented, together with melt experiments. Both the water-model and the melt experiments show that the degassing effect of the spray degassing method is better than that of the rotary impeller method.

  18. Dynamical Systems Method and Applications Theoretical Developments and Numerical Examples

    CERN Document Server

    Ramm, Alexander G

    2012-01-01

    Demonstrates the application of DSM to solve a broad range of operator equations. The dynamical systems method (DSM) is a powerful computational method for solving operator equations. With this book as their guide, readers will master the application of DSM to solve a variety of linear and nonlinear problems as well as ill-posed and well-posed problems. The authors offer a clear, step-by-step, systematic development of DSM that enables readers to grasp the method's underlying logic and its numerous applications. Dynamical Systems Method and Applications begins with a general introduction and
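
    A minimal sketch in the spirit of DSM, not code from the book: the operator equation A u = f is solved by integrating the gradient flow du/dt = -A^T (A u - f) with explicit Euler steps; the well-posed test problem is made up.

        import numpy as np

        rng = np.random.default_rng(1)
        A = 2.0 * np.eye(8) + 0.1 * rng.standard_normal((8, 8))   # well-posed test operator
        u_true = rng.standard_normal(8)
        f = A @ u_true

        u, h = np.zeros(8), 0.1            # initial state and Euler step size
        for step in range(100000):
            r = A @ u - f                  # residual A u(t) - f
            if np.linalg.norm(r) < 1e-10:
                break                      # stop once the flow has converged
            u -= h * (A.T @ r)             # one Euler step along the gradient flow

        print(step, np.linalg.norm(u - u_true))

    For ill-posed problems the same flow is stopped early, e.g. by a discrepancy principle, rather than run to convergence.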

  19. Information technology equipment cooling method

    Science.gov (United States)

    Schultz, Mark D.

    2015-10-20

    According to one embodiment, a system for removing heat from a rack of information technology equipment may include a sidecar indoor air to liquid heat exchanger that cools air utilized by the rack of information technology equipment to cool the rack of information technology equipment. The system may also include a liquid to liquid heat exchanger and an outdoor heat exchanger. The system may further include configurable pathways to connect and control fluid flow through the sidecar heat exchanger, the liquid to liquid heat exchanger, the rack of information technology equipment, and the outdoor heat exchanger based upon ambient temperature and/or ambient humidity to remove heat generated by the rack of information technology equipment.

  20. Information-Theoretic Data Discarding for Dynamic Trees on Data Streams

    Directory of Open Access Journals (Sweden)

    Christoforos Anagnostopoulos

    2013-12-01

    Ubiquitous automated data collection at an unprecedented scale is making available streaming, real-time information flows in a wide variety of settings, transforming both science and industry. Learning algorithms deployed in such contexts often rely on single-pass inference, where the data history is never revisited. Learning may also need to be temporally adaptive to remain up-to-date against unforeseen changes in the data generating mechanism. Online Bayesian inference remains challenged by such transient, evolving data streams. Nonparametric modeling techniques can prove particularly ill-suited, as the complexity of the model is allowed to increase with the sample size. In this work, we take steps to overcome these challenges by porting information theoretic heuristics, such as exponential forgetting and active learning, into a fully Bayesian framework. We showcase our methods by augmenting a modern non-parametric modeling framework, dynamic trees, and illustrate its performance on a number of practical examples. The end product is a powerful streaming regression and classification tool, whose performance compares favorably to the state-of-the-art.
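
    To make the forgetting heuristic concrete, here is a minimal Beta-Bernoulli toy (not the paper's dynamic trees): before each update the posterior is shrunk toward the prior, so old observations are gradually discounted and the model tracks a drifting success probability; the forgetting factor and the drift are made up.

        import numpy as np

        rng = np.random.default_rng(2)
        lam, a0, b0 = 0.98, 1.0, 1.0      # forgetting factor and Beta(1, 1) prior
        a, b = a0, b0

        for t in range(2000):
            p_true = 0.2 if t < 1000 else 0.8      # abrupt drift halfway through
            x = rng.random() < p_true
            a = lam * a + (1 - lam) * a0 + x       # forget toward the prior, then assimilate x
            b = lam * b + (1 - lam) * b0 + (1 - x)
            if t in (999, 1999):
                print(t, "posterior mean:", round(a / (a + b), 3))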

  1. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH can be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even in noisy environments.

  2. Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jihun Hamm

    2014-01-01

    Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of expressivity, namely the ambiguity and distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied for delineating dynamic emotional expressivity in healthy and clinical populations.
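
    The two measures can be illustrated with a small plug-in computation; the confusion table below (counts[i, j] = frames in which emotion j was read off the face while emotion i was evoked) is invented, and this sketch is not the authors' pipeline.

        import numpy as np

        counts = np.array([[80, 10, 10],
                           [15, 70, 15],
                           [10, 20, 70]], dtype=float)
        p = counts / counts.sum()          # joint distribution p(evoked, expressed)
        p_evoked, p_expr = p.sum(axis=1), p.sum(axis=0)

        def H(q):                          # Shannon entropy in bits
            q = q[q > 0]
            return -(q * np.log2(q)).sum()

        ambiguity = H(p.ravel()) - H(p_evoked)                     # H(expressed | evoked)
        distinctiveness = H(p_evoked) + H(p_expr) - H(p.ravel())   # I(evoked; expressed)
        print(round(ambiguity, 3), round(distinctiveness, 3))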

  3. Theoretical method for determining particle distribution functions of classical systems

    International Nuclear Information System (INIS)

    Johnson, E.

    1980-01-01

    An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein-Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function

  4. A Game-Theoretic Approach to Information-Flow Control via Protocol Composition

    Directory of Open Access Journals (Sweden)

    Mário S. Alvim

    2018-05-01

    In the inference attacks studied in Quantitative Information Flow (QIF), the attacker typically tries to interfere with the system in the attempt to increase its leakage of secret information. The defender, on the other hand, typically tries to decrease leakage by introducing some controlled noise. This noise introduction can be modeled as a type of protocol composition, i.e., a probabilistic choice among different protocols, and its effect on the amount of leakage depends heavily on whether or not this choice is visible to the attacker. In this work, we consider operators for modeling visible and hidden choice in protocol composition, and we study their algebraic properties. We then formalize the interplay between defender and attacker in a game-theoretic framework adapted to the specific issues of QIF, where the payoff is information leakage. We consider various kinds of leakage games, depending on whether players act simultaneously or sequentially, and on whether or not the choices of the defender are visible to the attacker. In the case of sequential games, the choice of the second player is generally a function of the choice of the first player, and his/her probabilistic choice can be either over the possible functions (mixed strategy) or over the result of the function (behavioral strategy). We show that when the attacker moves first in a sequential game with a hidden choice, then behavioral strategies are more advantageous for the defender than mixed strategies. This contrasts with standard game theory, where the two types of strategies are equivalent. Finally, we establish a hierarchy of these games in terms of their information leakage and provide methods for finding optimal strategies (at the points of equilibrium) for both attacker and defender in the various cases.
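
    The visible/hidden distinction can be sketched numerically with Bayes vulnerability; the two channels below (rows: secrets, columns: observables) and the mixing probability are made up for illustration.

        import numpy as np

        pi = np.array([0.5, 0.5])                  # prior over two secrets
        C1 = np.array([[0.9, 0.1], [0.1, 0.9]])    # a leaky protocol
        C2 = np.array([[0.1, 0.9], [0.9, 0.1]])    # the same protocol with outputs relabelled
        q = 0.5                                    # defender picks C1 with probability q

        def post_vuln(pi, C):
            # posterior Bayes vulnerability V(pi, C) = sum_y max_x pi(x) C(x, y)
            return (pi[:, None] * C).max(axis=0).sum()

        hidden = post_vuln(pi, q * C1 + (1 - q) * C2)                   # output only
        visible = q * post_vuln(pi, C1) + (1 - q) * post_vuln(pi, C2)   # choice also seen
        print("hidden:", hidden, "visible:", visible)

    Here the hidden composition leaks nothing (vulnerability stays at 0.5) while the visible one leaks fully (0.9), which is the kind of gap the leakage games in this paper are built around.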

  5. A Theoretical Perspective on the Case Study Method

    Science.gov (United States)

    Çakmak, Zafer; Akgün, Ismail Hakan

    2018-01-01

    Ensuring that students reach the determined goals of the courses at the desired level is one of the primary goals of teaching. In order to achieve this purpose, educators use a variety of teaching strategies and methods, and teaching materials appropriate to the content and the subject of the courses in the teaching process. As a matter of fact,…

  6. Integral methods in science and engineering theoretical and practical aspects

    CERN Document Server

    Constanda, C; Rollins, D

    2006-01-01

    Presents a series of analytic and numerical methods of solution constructed for important problems arising in science and engineering, based on the powerful operation of integration. This volume is meant for researchers and practitioners in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students.

  7. Theoretical aspects of new options of sublevel caving methods

    Directory of Open Access Journals (Sweden)

    Ladislav Kačmár

    2008-12-01

    The article deals with a proposal concerning the exploitation problem of SMZ Jelšava a.s. The author refers to possible options for applying new methods that arise from the theory of gravity flow of loosened and blasted rock. With the mentioned methods, the safety of exploitation could be increased, especially at greater depths.

  8. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    Science.gov (United States)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving the theoretical computer science problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
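
    In the same spirit, though with a different invariant than the Ulam index, the sketch below distinguishes two C4H10 isomers by their carbon skeletons using the Weisfeiler-Lehman graph hash from networkx; equal hashes suggest, and unequal hashes refute, isomorphism.

        import networkx as nx

        butane = nx.path_graph(4)       # C-C-C-C carbon skeleton
        isobutane = nx.star_graph(3)    # C(C)(C)C carbon skeleton

        h1 = nx.weisfeiler_lehman_graph_hash(butane)
        h2 = nx.weisfeiler_lehman_graph_hash(isobutane)
        print(h1 == h2)                 # False: the isomers are told apart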

  9. Nonstationary Hydrological Frequency Analysis: Theoretical Methods and Application Challenges

    Science.gov (United States)

    Xiong, L.

    2014-12-01

    Because of its great implications for the design and operation of hydraulic structures under changing environments (due either to climate change or to anthropogenic changes), nonstationary hydrological frequency analysis has become essential. Two important methodological achievements have been made. Without adhering to the consistency assumption of traditional hydrological frequency analysis, the time-varying probability distribution of any hydrological variable can be established by linking the distribution parameters to covariates such as time or physical variables, with the help of powerful tools like the Generalized Additive Model of Location, Scale and Shape (GAMLSS). With the help of copulas, multivariate nonstationary hydrological frequency analysis has also become feasible. However, applying nonstationary hydrological frequency formulae to the design and operation of hydraulic structures under changing environments still faces many challenges in practice. First, formulae with time as the covariate can only be extrapolated for a very short period beyond the latest observation time, because they are not physically constrained and the extrapolated outcomes could be unrealistic. There are two physically reasonable alternatives for changing environments: one is to directly link the quantiles or the distribution parameters to measurable physical factors, and the other is to use derived probability distributions based on hydrological processes. However, both methods involve a certain degree of uncertainty. For the design and operation of hydraulic structures under changing environments, it is recommended that the design results of both stationary and nonstationary methods be presented together and compared with each other, to help us understand the potential risks of each method.
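
    A minimal sketch of the time-as-covariate approach mentioned above (much simpler than GAMLSS): annual maxima are fitted with a Gumbel distribution whose location drifts linearly in time, mu(t) = b0 + b1 t, by maximum likelihood; the synthetic flood record is made up.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import gumbel_r

        rng = np.random.default_rng(3)
        t = np.arange(50.0)                                    # 50 years of annual maxima
        x = gumbel_r.rvs(loc=100 + 0.5 * t, scale=20, random_state=rng)

        def nll(theta):                                        # negative log-likelihood
            b0, b1, log_s = theta
            return -gumbel_r.logpdf(x, loc=b0 + b1 * t, scale=np.exp(log_s)).sum()

        fit = minimize(nll, x0=[x.mean(), 0.0, np.log(x.std())], method="Nelder-Mead")
        b0, b1, log_s = fit.x
        print("fitted trend in location:", round(b1, 3), "per year")
        # The design quantile then varies with time, e.g. the 100-year level in year 49:
        print(round(gumbel_r.ppf(0.99, loc=b0 + b1 * 49, scale=np.exp(log_s)), 1))

    Extrapolating such a fit far beyond year 49 illustrates exactly the risk the record warns about.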

  10. Informality as a stepping stone: A search-theoretical assessment of informal sector and government policy

    Directory of Open Access Journals (Sweden)

    Semih Tümen

    2016-09-01

    This paper develops a model of sequential job search to understand the factors determining the effect of tax and enforcement policies on the size (i.e., employment share) of the informal sector. The focus is on the role of the informal sector as a stepping stone to formal jobs. I argue that the stepping-stone role of informal jobs is an important concept determining how strongly government policies affect the size of the informal sector. I measure the extent of the stepping-stone role by the intensity of skill accumulation in the informal sector. If informal jobs help workers acquire skills, gain expertise, and build professional networks that boost the chances of switching to a formal job, then the size of the informal sector is less sensitive to government policy. In this case, the option value of a job in the informal sector will be high, and a worker with an informal job will not rush to switch to a formal job when a policy encouraging formal employment is in effect. If, on the other hand, the informal sector does not provide satisfactory training opportunities, then its size becomes more sensitive to government policy. Calibrating the model to Brazilian data, I perform numerical exercises confirming that the effect of government policy on the size of the informal sector is a decreasing function of the intensity of skill acquisition in the informal sector.

  11. Theoretical and applied aerodynamics and related numerical methods

    CERN Document Server

    Chattot, J J

    2015-01-01

    This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: - The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...

  12. Group theoretical methods and wavelet theory: coorbit theory and applications

    Science.gov (United States)

    Feichtinger, Hans G.

    2013-05-01

    Before the invention of orthogonal wavelet systems by Yves Meyer [1] in 1986, Gabor expansions (viewed as discretized inversion of the Short-Time Fourier Transform [2] using the overlap-add method, OLA) and (what is now perceived as) wavelet expansions were treated more or less on an equal footing. The famous paper on painless expansions by Daubechies, Grossman and Meyer [3] is a good example of this situation. The description of atomic decompositions for functions in modulation spaces [4] (including the classical Sobolev spaces) given by the author [5] was directly modeled on the corresponding atomic characterizations by Frazier and Jawerth [6, 7], more or less with the idea of replacing the dyadic partitions of unity on the Fourier transform side by uniform partitions of unity (so-called BUPUs, first named as such in the author's early work on Wiener-type spaces in 1980 [8]). Watching the literature in the subsequent two decades one can observe that interest in wavelets "took over", because it became possible to construct orthonormal wavelet systems with compact support and of any given degree of smoothness [9], while in contrast the Balian-Low theorem prohibits the existence of corresponding Gabor orthonormal bases, even in the multi-dimensional case and for general symplectic lattices [10]. It is an interesting historical fact that Meyer's construction of band-limited orthonormal wavelets (the Meyer wavelet, see [11]) grew out of an attempt to prove the impossibility of the existence of such systems, and the final insight was that it was not impossible to have such systems; in fact, quite a variety of orthonormal wavelet systems can be constructed, as we know by now. Meanwhile it is established wisdom that wavelet theory and time-frequency analysis are two different ways of decomposing signals in orthogonal resp. non-orthogonal ways. The unifying theory, covering both cases, distilling from these two situations the common group theoretical background lead to the

  13. Information-theoretic discrepancy based iterative reconstructions (IDIR) for polychromatic x-ray tomography

    International Nuclear Information System (INIS)

    Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok

    2013-01-01

    Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction. Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction, which contains a double integral over the wavelength and the trajectory of incident x-rays, was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms. Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GID that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a

  14. Quantum information theoretical analysis of various constructions for quantum secret sharing

    NARCIS (Netherlands)

    Rietjens, K.P.T.; Schoenmakers, B.; Tuyls, P.T.

    2005-01-01

    Recently, an information theoretical model for quantum secret sharing (QSS) schemes was introduced. By using this model, we prove that pure state quantum threshold schemes (QTS) can be constructed from quantum MDS codes and vice versa. In particular, we consider stabilizer codes and give a

  15. An Everyday and Theoretical Reading of "Perezhivanie" for Informing Research in Early Childhood Education

    Science.gov (United States)

    Fleer, Marilyn

    2016-01-01

    The concept of "perezhivanie" has received increasing attention in recent years. However, a clear understanding of this term has not yet been established. Mostly what is highlighted is the need for more informed theoretical discussion. In this paper, discussions centre on what "perezhivanie" means for research in early…

  16. Phenomenological description of selected elementary chemical reaction mechanisms: An information-theoretic study

    International Nuclear Information System (INIS)

    Esquivel, R.O.; Flores-Gallegos, N.; Iuga, C.; Carrera, E.M.; Angulo, J.C.; Antolin, J.

    2010-01-01

    The information-theoretic description of the course of two elementary chemical reactions allows a phenomenological description of the chemical course of the hydrogenic abstraction and the SN2 identity reactions by use of Shannon entropic measures in position and momentum spaces. The analyses reveal their synchronous/asynchronous mechanistic behavior.

  17. On the information-theoretic approach to Gödel's incompleteness theorem

    OpenAIRE

    D'Abramo, Germano

    2002-01-01

    In this paper we briefly review and analyze three published proofs of Chaitin's theorem, the celebrated information-theoretic version of Gödel's incompleteness theorem. Then, we discuss our main perplexity concerning a key step common to all these demonstrations.

  18. Adaptive information-theoretic bounded rational decision-making with parametric priors

    OpenAIRE

    Grau-Moya, Jordi; Braun, Daniel A.

    2015-01-01

    Deviations from rational decision-making due to limited computational resources have been studied in the field of bounded rationality, originally proposed by Herbert Simon. There have been a number of different approaches to model bounded rationality, ranging from optimality principles to heuristics. Here we take an information-theoretic approach to bounded rationality, where information-processing costs are measured by the relative entropy between a posterior decision strategy and a given fixed prior strategy.
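
    The resulting decision rule has a convenient closed form: trading expected utility against the relative entropy to the prior yields a posterior p*(a) proportional to p0(a) exp(beta U(a)). The utilities, prior and resource parameter beta below are made up, and the sketch illustrates the general framework rather than this paper's adaptive scheme.

        import numpy as np

        U = np.array([1.0, 0.9, 0.2])          # utilities of three actions
        p0 = np.array([1 / 3, 1 / 3, 1 / 3])   # fixed prior decision strategy

        for beta in (0.1, 1.0, 10.0):          # low to high information-processing resources
            p = p0 * np.exp(beta * U)
            p /= p.sum()                       # bounded-rational posterior strategy
            kl = (p * np.log(p / p0)).sum()    # information cost in nats
            print(f"beta={beta:5.1f}  policy={np.round(p, 3)}  KL={kl:.3f}")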

  19. A short course in quantum information theory. An approach from theoretical physics

    International Nuclear Information System (INIS)

    Diosi, L.

    2007-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. (orig.)

  20. GROUPWARE - MODERN INFORMATION MANAGERIAL METHOD

    Directory of Open Access Journals (Sweden)

    Rozalia NISTOR

    2006-01-01

    The notion of groupware covers the information technologies that facilitate teamwork and that are intended for communication, collaboration and coordination within the organization. Built on software routines for teamwork, groupware technology has many applications in the management process of the organization. The notion of groupware refers to a special class of web packages connected to a network of personal computers: email, chat, video over IP, newsgroups, etc. Studies in the literature consider groupware to be a class of software programs that facilitate coordination, communication and cooperation among the members of a group. Just as the marketing mix is known in marketing as the "4P", in the area of groupware its characteristics are known as the "3C": communication within the group; coordination among the members of the group; collaboration among the members of the group. Among groupware software, the tools with relevance for managerial activity are: electronic mail, Internet meetings, time management, project management, and the management of dissimulated information. Groupware technologies can be divided into many categories based on two elements: time and space. The users of a groupware system either work together at the same time (real-time groupware) or at different times (offline groupware).

  1. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework.

    Science.gov (United States)

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to compare existing usability data to ideal goals or to data for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, but often fail to take into account the inherent uncertainties in these judgments in the evaluation process. This paper presents a universal method of usability evaluation combining the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation technique to characterize fuzzy human judgments. Then, with the use of AHP, the weights of the usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines vague judgments from multiple stages of a product evaluation process.
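
    The two ingredients named above can be combined in a few lines; the pairwise-comparison matrix and the membership grades below are invented, and the aggregation shown is the simple weighted-average fuzzy operator rather than the paper's full two-layer index.

        import numpy as np

        # AHP: pairwise comparisons of effectiveness, efficiency, satisfaction (Saaty scale).
        A = np.array([[1.0, 2.0, 3.0],
                      [1 / 2, 1.0, 2.0],
                      [1 / 3, 1 / 2, 1.0]])
        vals, vecs = np.linalg.eig(A)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        w /= w.sum()                       # principal eigenvector gives the weights

        # Fuzzy evaluation: membership grades over appraisals good/fair/poor,
        # one row per usability component, as elicited from a hypothetical panel.
        R = np.array([[0.6, 0.3, 0.1],
                      [0.4, 0.4, 0.2],
                      [0.5, 0.3, 0.2]])

        B = w @ R                          # comprehensive appraisal vector
        print("weights:", np.round(w, 3), "appraisal:", np.round(B, 3))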

  2. Toward theoretical understanding of the fertility preservation decision-making process: Examining information processing among young women with cancer

    Science.gov (United States)

    Hershberger, Patricia E.; Finnegan, Lorna; Altfeld, Susan; Lake, Sara; Hirshfeld-Cytron, Jennifer

    2014-01-01

    Background Young women with cancer now face the complex decision about whether to undergo fertility preservation. Yet little is known about how these women process information involved in making this decision. Objective The purpose of this paper is to expand theoretical understanding of the decision-making process by examining aspects of information processing among young women diagnosed with cancer. Methods Using a grounded theory approach, 27 women with cancer participated in individual, semi-structured interviews. Data were coded and analyzed using constant-comparison techniques that were guided by five dimensions within the Contemplate phase of the decision-making process framework. Results In the first dimension, young women acquired information primarily from clinicians and Internet sources. Experiential information, often obtained from peers, occurred in the second dimension. Preferences and values were constructed in the third dimension as women acquired factual, moral, and ethical information. Women desired tailored, personalized information that was specific to their situation in the fourth dimension; however, women struggled with communicating these needs to clinicians. In the fifth dimension, women offered detailed descriptions of clinician behaviors that enhance or impede decisional debriefing. Conclusion Better understanding of theoretical underpinnings surrounding women’s information processes can facilitate decision support and improve clinical care. PMID:24552086

  3. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian error distributions this approach is not optimal. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts), we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
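
    A minimal sketch of MEE training for a linear filter (the software view only, not the paper's FPGA design): Renyi's quadratic error entropy is reduced by raising the information potential, the mean Gaussian kernel value over pairs of errors in a sliding window; the toy system and all constants are made up.

        import numpy as np

        rng = np.random.default_rng(4)
        n, order, sigma, lr, win = 2000, 4, 1.0, 0.2, 50
        x = rng.standard_normal(n)
        w_true = np.array([0.5, -0.3, 0.2, 0.1])
        X = np.column_stack([np.roll(x, k) for k in range(order)])
        d = X @ w_true + 0.1 * rng.standard_normal(n)      # desired signal

        w = np.zeros(order)
        for t in range(order + win, n):
            Xw = X[t - win:t]
            e = d[t - win:t] - Xw @ w
            diff = e[:, None] - e[None, :]                 # pairwise error differences
            G = np.exp(-diff ** 2 / (2 * sigma ** 2))      # Gaussian kernel values
            # Ascend the gradient of the information potential w.r.t. the weights:
            grad = ((G * diff)[:, :, None]
                    * (Xw[:, None, :] - Xw[None, :, :])).mean(axis=(0, 1))
            w += lr * grad / sigma ** 2
        print(np.round(w, 2))                              # close to w_true

    Each window's update is a sum of independent pairwise terms, which is the parallel structure the record maps onto an FPGA.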

  4. Open source tools for the information theoretic analysis of neural data

    Directory of Open Access Journals (Sweden)

    Robin A. A Ince

    2010-05-01

    The recent and rapid development of open-source software tools for the analysis of neurophysiological datasets consisting of multiple simultaneous recordings of spikes, field potentials and other neural signals holds the promise for a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data and integrate the information obtained at different spatial and temporal scales. In this Review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons and field potential fluctuations in the encoding of sensory information. These information toolboxes, available both in Matlab and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information.

  5. Open source tools for the information theoretic analysis of neural data.

    Science.gov (United States)

    Ince, Robin A A; Mazzoni, Alberto; Petersen, Rasmus S; Panzeri, Stefano

    2010-01-01

    The recent and rapid development of open source software tools for the analysis of neurophysiological datasets consisting of simultaneous multiple recordings of spikes, field potentials and other neural signals holds the promise for a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data and for the integration of information obtained at different spatial and temporal scales. In this review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons, and field potential fluctuations in the encoding of sensory information. These information toolboxes, available both in MATLAB and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information.

  6. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity

    Directory of Open Access Journals (Sweden)

    Tuomas Eerola

    2016-07-01

    The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven different datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and could explain 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion. This suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms both of performance and simplicity.

  7. Theoretical frameworks informing family-based child and adolescent obesity interventions

    DEFF Research Database (Denmark)

    Alulis, Sarah; Grabowski, Dan

    2017-01-01

    into focus. However, the use of theoretical frameworks to strengthen these interventions is rare and very uneven. OBJECTIVE AND METHOD: To conduct a qualitative meta-synthesis of family-based interventions for child and adolescent obesity to identify the theoretical frameworks applied, thus understanding how...... inconsistencies and a significant void between research results and health care practice. Based on the analysis, this article proposes three themes to be used as focus points when designing future interventions and when selecting theories for the development of solid, theory-based frameworks for application...... cognitive, self-efficacy and Family Systems Theory appeared most frequently. The remaining 24 were classified as theory-related as theoretical elements of self-monitoring; stimulus control, reinforcement and modelling were used. CONCLUSION: The designs of family-based interventions reveal numerous...

  8. Theoretical aspects of cellular decision-making and information-processing.

    Science.gov (United States)

    Kobayashi, Tetsuya J; Kamimura, Atsushi

    2012-01-01

    Microscopic biological processes have extraordinary complexity and variety at the sub-cellular, intra-cellular, and multi-cellular levels. In dealing with such complex phenomena, conceptual and theoretical frameworks are crucial, which enable us to understand seemingly different intra- and inter-cellular phenomena from unified viewpoints. Decision-making is one such concept that has attracted much attention recently. Since a number of cellular behaviors can be regarded as processes of making specific actions in response to external stimuli, decision-making can cover and has been used to explain a broad range of different cellular phenomena [Balázsi et al. (Cell 144(6):910, 2011), Zeng et al. (Cell 141(4):682, 2010)]. Decision-making is also closely related to cellular information-processing because appropriate decisions cannot be made without exploiting the information that the external stimuli contain. The efficiency of information transduction and processing by intra-cellular networks determines the amount of information obtained, which in turn limits the efficiency of subsequent decision-making. Furthermore, information-processing itself can serve as another concept that is crucial for the understanding of biological processes other than decision-making. In this work, we review recent theoretical developments on cellular decision-making and information-processing by focusing on the relation between these two concepts.

  9. An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings.

    Science.gov (United States)

    Liaw, Siaw-Teng; Pearce, Christopher; Liyanage, Harshana; Liaw, Gladys S S; de Lusignan, Simon

    2014-01-01

    Increasing investment in eHealth aims to improve cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. We searched PubMed, Medline, Web of Science, ABI Inform (Proquest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly 'big-data' environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine the applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.

  10. Method Engineering: Engineering of Information Systems Development Methods and Tools

    NARCIS (Netherlands)

    Brinkkemper, J.N.; Brinkkemper, Sjaak

    1996-01-01

    This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e.

  11. LPI Optimization Framework for Target Tracking in Radar Network Architectures Using Information-Theoretic Criteria

    Directory of Open Access Journals (Sweden)

    Chenguang Shi

    2014-01-01

    Widely distributed radar network architectures can provide significant performance improvement for target detection and localization. For a fixed radar network, the achievable target detection performance may go beyond a predetermined threshold with full transmitted power allocation, which is extremely vulnerable in modern electronic warfare. In this paper, we study the problem of low probability of intercept (LPI) design for a radar network and propose two novel LPI optimization schemes based on information-theoretic criteria. For a predefined threshold of target detection, the Schleher intercept factor is minimized by optimizing transmission power allocation among netted radars in the network. Due to the lack of an analytical closed-form expression for the receiver operating characteristic (ROC), we employ two information-theoretic criteria, namely, Bhattacharyya distance and J-divergence, as the metrics for target detection performance. The resulting nonconvex and nonlinear LPI optimization problems associated with different information-theoretic criteria are cast under a unified framework, and the nonlinear programming based genetic algorithm (NPGA) is used to tackle the optimization problems in the framework. Numerical simulations demonstrate that our proposed LPI strategies are effective in enhancing the LPI performance for the radar network.
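
    The optimization pattern can be sketched with a deliberately simplified stand-in for the paper's model: total transmitted power (a proxy for Schleher's intercept factor) is minimized subject to a floor on the summed Bhattacharyya distance between the echo distributions under H0 and H1 at each netted radar; the gains, distance model and threshold are all made up.

        import numpy as np
        from scipy.optimize import minimize

        g = np.array([1.0, 0.6, 0.3])      # propagation gains of three netted radars
        B_min = 0.9                        # required summed Bhattacharyya distance

        def bhat(p):
            s = g * p                      # per-radar SNR under power allocation p
            # Bhattacharyya distance between N(0, 1) and N(0, 1 + s), summed over radars.
            return (0.5 * np.log((2 + s) / (2 * np.sqrt(1 + s)))).sum()

        res = minimize(lambda p: p.sum(), x0=20 * np.ones(3),
                       bounds=[(0, None)] * 3,
                       constraints=[{"type": "ineq", "fun": lambda p: bhat(p) - B_min}],
                       method="SLSQP")
        print("power allocation:", np.round(res.x, 2), "total:", round(res.x.sum(), 2))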

  12. Method and apparatus for information carrier authentication

    NARCIS (Netherlands)

    2015-01-01

    The present invention relates to a method of enabling authentication of an information carrier, the information carrier comprising a writeable part and a physical token arranged to supply a response upon receiving a challenge, the method comprising the following steps: applying a first challenge to

  13. Perspectives on Cybersecurity Information Sharing among Multiple Stakeholders Using a Decision-Theoretic Approach.

    Science.gov (United States)

    He, Meilin; Devine, Laura; Zhuang, Jun

    2018-02-01

    The government, private sectors, and others users of the Internet are increasingly faced with the risk of cyber incidents. Damage to computer systems and theft of sensitive data caused by cyber attacks have the potential to result in lasting harm to entities under attack, or to society as a whole. The effects of cyber attacks are not always obvious, and detecting them is not a simple proposition. As the U.S. federal government believes that information sharing on cybersecurity issues among organizations is essential to safety, security, and resilience, the importance of trusted information exchange has been emphasized to support public and private decision making by encouraging the creation of the Information Sharing and Analysis Center (ISAC). Through a decision-theoretic approach, this article provides new perspectives on ISAC, and the advent of the new Information Sharing and Analysis Organizations (ISAOs), which are intended to provide similar benefits to organizations that cannot fit easily into the ISAC structure. To help understand the processes of information sharing against cyber threats, this article illustrates 15 representative information sharing structures between ISAC, government, and other participating entities, and provide discussions on the strategic interactions between different stakeholders. This article also identifies the costs of information sharing and information security borne by different parties in this public-private partnership both before and after cyber attacks, as well as the two main benefits. This article provides perspectives on the mechanism of information sharing and some detailed cost-benefit analysis. © 2017 Society for Risk Analysis.

  14. The informal recycling in the international and local context: theoretical Elements

    International Nuclear Information System (INIS)

    Yepes P, Dora Luz

    2002-01-01

    This article is a synthesis of the theoretical aspects related to the urban problem of informal recycling in our setting, and it is framed within the research project entitled "Alternatives for the strengthening of informal recycling in Medellin", a degree thesis that seeks to strengthen informal recycling through the study of the factors associated with the labor productivity of informal recyclers. Specifically, the study will identify options for improving their work and aims to propose alternatives to dignify the labor of these people in an integral way, in the light of environmental, technical, normative, institutional, social and sustainability precepts. This document describes the theoretical elements on which the investigation is based, presenting informal recycling in an international context as well as its situation in the national and local environment. As a result of the bibliographical review carried out, it can be said that there is little interest at the international level in improving the working conditions of informal recyclers, apart from the strategies outlined by the International Labour Organization with regard to strengthening the informal economy. In Latin America it has not been possible to go beyond official rhetoric and the promotion of environmentalist groups, although in the matter of recovery, reuse and recycling policies for solid wastes there has been sustained progress. At the national level, clear strategies to improve the informal work of recyclers are being identified; however, many efforts are still lacking to carry out the actions associated with these strategies, even though recycler organizations are gradually being created.

  15. An Information Theoretic Analysis of Classification Sorting and Cognition by Ninth Grade Children within a Piagetian Setting.

    Science.gov (United States)

    Dunlop, David Livingston

    The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…

  16. Method Engineering: Engineering of Information Systems Development Methods and Tools

    OpenAIRE

    Brinkkemper, J.N.; Brinkkemper, Sjaak

    1996-01-01

    This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e. the configuration of a project approach that is tuned to the project at hand. A language and support tool for the engineering of situational methods are discussed.

  17. A short course in quantum information theory. An approach from theoretical physics. 2. ed.

    International Nuclear Information System (INIS)

    Diosi, Lajos

    2011-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the author has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapters to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error corrections. From the reviews of the first edition: ''The best things about this book are its brevity and clarity. In around 100 pages it provides a tutorial introduction to quantum information theory, including problems and solutions... it's worth a look if you want to quickly get up to speed with the language and central concepts of quantum information theory, including the background classical information theory.'' (Craig Savage, Australian Physics, Vol. 44 (2), 2007). (orig.)

  18. A short course in quantum information theory an approach from theoretical physics

    CERN Document Server

    Diosi, Lajos

    2011-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the author has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapters to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error corrections. From the reviews of the first edition...

  19. Methods for communicating technical information as public information

    International Nuclear Information System (INIS)

    Zara, S.A.

    1987-01-01

    Many challenges face the nuclear industry, especially in the waste management area. One of the biggest challenges is effective communication with the general public. Technical complexity, combined with the public's lack of knowledge and negative emotional response, complicate clear communication of radioactive waste management issues. The purpose of this session is to present and discuss methods for overcoming these obstacles and effectively transmitting technical information as public information. The methods presented encompass audio, visual, and print approaches to message transmission. To support these methods, the author also discusses techniques, based on current research, for improving the communication process

  20. An investigation on characterizing dense coal-water slurry with ultrasound: theoretical and experimental method

    Energy Technology Data Exchange (ETDEWEB)

    Xue, M.H.; Su, M.X.; Dong, L.L.; Shang, Z.T.; Cai, X.S. [Shanghai University of Science & Technology, Shanghai (China)

    2010-07-01

    Particle size distribution and concentration in particulate two-phase flow are important parameters in a wide variety of industrial areas. For online characterization of dense coal-water slurries, ultrasonic methods have several advantages: no dilution is required, measurements can be made in real time, and the testing is noninvasive; optical techniques, by contrast, often require the slurry to be diluted before they can provide information. In this article, the modified Urick equation including a temperature correction, which can be used to determine the concentration from a measurement of the ultrasonic velocity in a coal-water slurry, is evaluated on the basis of theoretical analysis and experimental study. A combination of the coupled-phase model and the Bouguer-Lambert-Beer law is employed in this work, and the attenuation spectrum is measured within the frequency region from 3 to 12 MHz. Particle size distributions of the coal-water slurry at different volume fractions are obtained with the optimum regularization technique. The ultrasonic technique presented in this work therefore opens the possibility of using ultrasound for online measurements of dense slurries.
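
    The Urick relation itself is compact enough to sketch (here without the temperature correction the article adds): the slurry's compressibility and density are volume-fraction averages of the two phases, c = 1/sqrt(kappa rho), and a measured sound speed can be inverted for the coal fraction; the phase properties and the measured speed below are rough assumed values.

        import numpy as np
        from scipy.optimize import brentq

        rho_w, c_w = 1000.0, 1480.0      # water: density (kg/m^3) and sound speed (m/s)
        rho_c, c_c = 1350.0, 2400.0      # coal particles: assumed values
        k_w = 1 / (rho_w * c_w ** 2)     # adiabatic compressibilities, 1/(rho c^2)
        k_c = 1 / (rho_c * c_c ** 2)

        def urick_speed(phi):            # phi = coal volume fraction
            k = phi * k_c + (1 - phi) * k_w
            rho = phi * rho_c + (1 - phi) * rho_w
            return 1 / np.sqrt(k * rho)

        c_measured = 1560.0              # hypothetical measured sound speed, m/s
        phi = brentq(lambda p: urick_speed(p) - c_measured, 0.0, 0.6)
        print(f"estimated coal volume fraction: {phi:.2f}")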

  1. Method for gathering and summarizing internet information

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  2. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
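
    A minimal sketch of the selection criterion (with a crude nearest-neighbour Gaussian plug-in rather than the paper's estimator): for each candidate embedding dimension p, predict x[t+1] from the p previous values on a training half and score the negative log-predictive likelihood (NLPL) on a test half; the noisy AR(2) test series is made up, so the minimum should fall at p = 2.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)
        x = np.zeros(3000)
        for t in range(2, 3000):                       # a noisy AR(2) process
            x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()

        def nlpl(p, k=30):
            Z = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
            y = x[p:]                                  # next-step targets
            m = len(y) // 2
            Ztr, ytr, Zte, yte = Z[:m], y[:m], Z[m:], y[m:]
            total = 0.0
            for z, target in zip(Zte, yte):
                idx = np.argsort(((Ztr - z) ** 2).sum(axis=1))[:k]
                mu, sd = ytr[idx].mean(), ytr[idx].std() + 1e-6
                total -= norm.logpdf(target, mu, sd)   # predictive log-likelihood
            return total / len(yte)

        for p in (1, 2, 3, 4):
            print(p, round(nlpl(p), 3))                # smallest NLPL picks the dimension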

  3. Theoretical Coalescence: A Method to Develop Qualitative Theory: The Example of Enduring.

    Science.gov (United States)

    Morse, Janice M

    Qualitative research is frequently context bound, lacks generalizability, and is limited in scope. The purpose of this article was to describe a method, theoretical coalescence, that provides a strategy for analyzing complex, high-level concepts and for developing generalizable theory. Theoretical coalescence is a method of theoretical expansion, inductive inquiry, of theory development, that uses data (rather than themes, categories, and published extracts of data) as the primary source for analysis. Here, using the development of the lay concept of enduring as an example, I explore the scientific development of the concept in multiple settings over many projects and link it within the Praxis Theory of Suffering. As comprehension emerges when conducting theoretical coalescence, it is essential that raw data from various different situations be available for reinterpretation/reanalysis and comparison to identify the essential features of the concept. The concept is then reconstructed, with additional inquiry that builds description, and evidence is conducted and conceptualized to create a more expansive concept and theory. By utilizing apparently diverse data sets from different contexts that are linked by certain characteristics, the essential features of the concept emerge. Such inquiry is divergent and less bound by context yet purposeful, logical, and with significant pragmatic implications for practice in nursing and beyond our discipline. Theoretical coalescence is a means by which qualitative inquiry is broadened to make an impact, to accommodate new theoretical shifts and concepts, and to make qualitative research applied and accessible in new ways.

  4. E-loyalty towards a cancer information website: applying a theoretical framework.

    Science.gov (United States)

    Crutzen, Rik; Beekers, Nienke; van Eenbergen, Mies; Becker, Monique; Jongen, Lilian; van Osch, Liesbeth

    2014-06-01

    To provide more insight into user perceptions related to e-loyalty towards a cancer information website. This is needed to assure adequate provision of high-quality information during the full process of cancer treatment (from diagnosis to aftercare) and is an important first step towards optimizing cancer information websites in order to promote e-loyalty. Participants were cancer patients (n = 63) and informal caregivers (n = 202) who visited a website providing regional information about cancer care for all types of cancer. Subsequently, they filled out a questionnaire assessing e-loyalty towards the website and user perceptions (efficiency, effectiveness, active trust and enjoyment) based on a theoretical framework derived from the field of e-commerce. A structural equation model was constructed to test the relationships between user perceptions and e-loyalty. Participants in general could find the information they were looking for (efficiency), thought it was relevant (effectiveness), thought that they could act upon it (active trust) and found the visit itself pleasant (enjoyment). Effectiveness and enjoyment were both positively related to e-loyalty, but this was mediated by active trust. Efficiency was positively related to e-loyalty. The explained variance of e-loyalty was high (R² = 0.70). This study demonstrates that the importance of user perceptions is not limited to fields such as e-commerce but extends to the context of cancer information websites. The high information need among participants might explain the positive relationship between efficiency and e-loyalty. Cancer information websites therefore need to foster easy search and access of the information provided. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Information decomposition method to analyze symbolical sequences

    International Nuclear Information System (INIS)

    Korotkov, E.V.; Korotkova, M.A.; Kudryashov, N.A.

    2003-01-01

    The information decomposition (ID) method to analyze symbolical sequences is presented. This method allows us to reveal a latent periodicity of any symbolical sequence. The ID method is shown to have advantages in comparison with application of the Fourier transformation, the wavelet transform and the dynamic programming method to look for latent periodicity. Examples of the latent periods for poetic texts, DNA sequences and amino acids are presented. Possible origin of a latent periodicity for different symbolical sequences is discussed
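    The ID method itself is not reproduced here, but the notion of latent periodicity it targets can be illustrated with a simple information measure: scan candidate periods p and compute the mutual information between a symbol and its position modulo p. The sequence, alphabet and noise level below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# A noisy symbolic sequence with latent period 7: position t is weakly
# biased towards symbol (t mod 7), otherwise a random symbol is emitted.
period, n, alphabet = 7, 7000, 7
seq = np.array([t % period if rng.random() < 0.4 else int(rng.integers(alphabet))
                for t in range(n)])

def mutual_information(seq, p, alphabet):
    """I(symbol ; position mod p) in bits, from a joint count table."""
    counts = np.zeros((alphabet, p))
    for t, s in enumerate(seq):
        counts[s, t % p] += 1
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

for p in range(2, 15):
    print(p, round(mutual_information(seq, p, alphabet), 4))
# The latent period (and its multiples) stands out as a clear peak,
# even though no single window of the sequence looks periodic.
```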

  6. THEORETICAL FRAMEWORK FOR INFORMATION AND EDUCATIONAL COMPLEX DEVELOPMENT OF AN ACADEMIC DISCIPLINE AT A HIGHER INSTITUTION

    Directory of Open Access Journals (Sweden)

    Evgeniia Nikolaevna Kikot

    2015-05-01

    Full Text Available The question of how to organize the contemporary educational process is becoming more important under conditions of widespread ICT (information and communication technologies) and e-learning usage. This defines one of the most important methodological and research directions in the university: the creation of an informational-educational course unit complex as the foundation of the e-University resource. The foundation of the complex is the concepts of openness, accessibility, clarity and personalisation, which allow a system of requirements for the complex and its substantive content to be built. The main functions of the informational-educational complex are identified: informational, educational, controlling and communicative. It is argued that the scientific justification for developing and introducing new structural elements of the informational-educational course unit complex must include the creation of e-workbooks and e-workshops in order to organize theoretical and practical e-conferences. The development of ICT in education that underpins e-learning presupposes the establishment of distance learning technologies for the implementation of educational programmes.

  7. Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Omitaomu, Olufemi A [ORNL; Ganguly, Auroop R [ORNL; Jiao, Yu [ORNL

    2009-01-01

    In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of the information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents on a desktop environment using aggregate data; the second, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, using fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on three leadership theories (legitimacy, coercion, and representative) and two social mobilization theories (social influence and repression). The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required, not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
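    A hedged sketch of the two measures as described, applied to invented stand-in series rather than the ORNL models: the entropy of a binned agent-count series as a regularity measure, and histogram mutual information between two class-count series as shared information:

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy(x, bins=20):
    """Shannon entropy (bits) of a histogram of an agent-count series."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(x, y, bins=20):
    """I(X;Y) in bits from a 2D histogram of two class-count series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy stand-ins for the number of agents in two behavioral classes over time:
t = np.arange(1000)
pro_government = 5000 + 300 * np.sin(t / 50) + rng.normal(0, 50, t.size)
pro_insurgent = 10000 - pro_government + rng.normal(0, 80, t.size)

print("regularity (entropy):", round(entropy(pro_government), 3))
print("shared info (MI):   ", round(mutual_info(pro_government, pro_insurgent), 3))
# Lower entropy indicates a more regular class size; higher MI indicates
# that the dynamics of the two classes are strongly coupled.
```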

  8. Experimental Verification of a Jarzynski-Related Information-Theoretic Equality by a Single Trapped Ion.

    Science.gov (United States)

    Xiong, T P; Yan, L L; Zhou, F; Rehan, K; Liang, D F; Chen, L; Yang, W L; Ma, Z H; Feng, M; Vedral, V

    2018-01-05

    Most nonequilibrium processes in thermodynamics are quantified only by inequalities; however, the Jarzynski relation presents a remarkably simple and general equality relating nonequilibrium quantities with the equilibrium free energy, and this equality holds in both the classical and quantum regimes. We report a single-spin test and confirmation of the Jarzynski relation in the quantum regime using a single ultracold ^{40}Ca^{+} ion trapped in a harmonic potential, based on a general information-theoretic equality for a temporal evolution of the system sandwiched between two projective measurements. By considering both initially pure and initially mixed states, we verify, in an exact and fundamental fashion, the nonequilibrium quantum thermodynamics relevant to the mutual information and the Jarzynski equality.
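    For reference, the equality being tested can be stated compactly. This is the standard Jarzynski relation, with W the work performed in a single nonequilibrium realization, ΔF the equilibrium free-energy difference, and the average taken over realizations of the process:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_B T}
```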

  9. Several foundational and information theoretic implications of Bell’s theorem

    Science.gov (United States)

    Kar, Guruprasad; Banik, Manik

    2016-08-01

    In 1935, Albert Einstein and two colleagues, Boris Podolsky and Nathan Rosen (EPR), developed a thought experiment to demonstrate what they felt was a lack of completeness in quantum mechanics (QM). EPR also postulated the existence of a more fundamental theory in which the physical reality of any system would be completely described by the variables/states of that fundamental theory. Such a variable is commonly called a hidden variable, and the theory is called a hidden variable theory (HVT). In 1964, John Bell proposed an empirically verifiable criterion to test for the existence of these HVTs. He derived an inequality which must be satisfied by any theory that fulfills the conditions of locality and reality, and showed that QM, as it violates this inequality, is incompatible with any local-realistic theory. It has since been shown that Bell's inequality (BI) can be derived from different sets of assumptions, and it also finds applications in useful information-theoretic protocols. In this review, we discuss various foundational as well as information-theoretic implications of BI. We also discuss the restricted nature of quantum nonlocality and elaborate on the roles of the uncertainty and complementarity principles in explaining this feature.
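    For concreteness, the inequality at the heart of this discussion, in its CHSH form, is the following; any local-realistic theory must satisfy it, while quantum mechanics can reach the Tsirelson bound of 2√2. Here E(a,b) denotes the correlation of outcomes for measurement settings a and b:

```latex
\left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \le 2
```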

  10. SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.

    Science.gov (United States)

    Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui

    2013-04-01

    Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
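    The entropy-based reformulation can be illustrated with a toy identity check (a sketch of the underlying decomposition, not the published SAIL algorithm): for a cluster whose centroid is the mean of its member distributions, the summed KL divergence equals the cluster size times the centroid entropy minus the members' entropies, so the objective can be evaluated from entropies of summed counts alone:

```python
import numpy as np

rng = np.random.default_rng(3)

def H(p):
    """Shannon entropy in nats; 0*log(0) is taken as 0."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def kl(p, q):
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / q[nz])).sum())

# A toy cluster of term-frequency vectors (rows sum to 1).
docs = rng.dirichlet(alpha=np.ones(30) * 0.5, size=8)
centroid = docs.mean(axis=0)

lhs = sum(kl(d, centroid) for d in docs)
rhs = len(docs) * H(centroid) - sum(H(d) for d in docs)
print(lhs, rhs)   # identical up to floating-point error

# The right-hand side involves only entropies, where zero probabilities
# contribute nothing; no KL(doc || centroid) terms that could blow up to
# infinity when a centroid has zero-valued features ever need computing.
```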

  11. A Theoretical Model of Health Information Technology Usage Behaviour with Implications for Patient Safety

    Science.gov (United States)

    Holden, Richard J.; Karsh, Ben-Tzion

    2009-01-01

    Primary objective: much research and practice related to the design and implementation of information technology in health care has been atheoretical. It is argued that using extant theory to develop testable models of health information technology (HIT) benefits both research and practice. Methods and procedures: several theories of motivation,…

  12. Methods of determining information needs for control

    Energy Technology Data Exchange (ETDEWEB)

    Borkowski, Z.

    1980-01-01

    Work has begun at the Main Data Center for the Polish mining industry on evaluating and improving the methods of determining the information requirements necessary for control. Existing methods are briefly surveyed and their shortcomings are shown. The complexity of characterizing this problem is pointed out.

  13. Synergy between experimental and theoretical methods in the exploration of homogeneous transition metal catalysis

    DEFF Research Database (Denmark)

    Lupp, Daniel; Christensen, Niels Johan; Fristrup, Peter

    2014-01-01

    In this Perspective, we will focus on the use of both experimental and theoretical methods in the exploration of reaction mechanisms in homogeneous transition metal catalysis. We briefly introduce the use of Hammett studies and kinetic isotope effects (KIE). Both of these techniques can be complemented by computational chemistry, in particular in cases where interpretation of the experimental results is not straightforward. The good correspondence between experiment and theory is only possible due to recent advances within the applied theoretical framework. We therefore also highlight...

  14. Exploring methods in information literacy research

    CERN Document Server

    Lipu, Suzanne; Lloyd, Annemaree

    2007-01-01

    This book provides an overview of approaches to assist researchers and practitioners to explore ways of undertaking research in the information literacy field. The first chapter provides an introductory overview of research by Dr Kirsty Williamson (author of Research Methods for Students, Academics and Professionals: Information Management and Systems) and this sets the scene for the rest of the chapters, where each author explores the key aspects of a specific method and explains how it may be applied in practice. The methods covered include those representing qualitative, quantitative and mixed methods.

  15. MAIA - Method for Architecture of Information Applied: methodological construct of information processing in complex contexts

    Directory of Open Access Journals (Sweden)

    Ismael de Moura Costa

    2017-04-01

    Full Text Available Introduction: This paper presents the evolution of MAIA, the Method for Architecture of Information Applied, its structure, the results obtained and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the configurations inherent to those spaces. Methodology: The argument is developed from theoretical research of an analytical character, using distinction as a way to express concepts. Phenomenology is adopted as the philosophical position, which considers the correlation between Subject↔Object. The research also considers the notion of interpretation as an integrating element for the definition of concepts. With these postulates, the steps to transform information spaces are formulated. Results: The method is structured to process information in its contexts, starting from a succession of evolutionary cycles, divided into moments, which in turn evolve into transformation acts. Conclusions: Besides presenting possible applications as a scientific method, the article shows the method's use as a configuration tool in information spaces and as a generator of ontologies. Finally, it presents a brief summary of the analysis made by researchers who have already evaluated the method with respect to these three aspects.

  16. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    Science.gov (United States)

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of
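    The authors' Ising-model estimator is not reproduced here; the following sketch only illustrates, on invented toy reads, the two information quantities named in the abstract: the Shannon entropy of the empirical distribution of methylation patterns in a small CpG window, and the Jensen-Shannon distance between two samples' pattern distributions:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

def pattern_distribution(reads):
    """Empirical distribution over binary methylation patterns in a window."""
    counts = Counter(tuple(r) for r in reads)
    total = sum(counts.values())
    return {pat: c / total for pat, c in counts.items()}

def entropy(dist):
    return -sum(p * np.log2(p) for p in dist.values())

def js_distance(d1, d2):
    """Jensen-Shannon distance (square root of JS divergence, in bits)."""
    support = sorted(set(d1) | set(d2))
    p = np.array([d1.get(s, 0.0) for s in support])
    q = np.array([d2.get(s, 0.0) for s in support])
    m = (p + q) / 2
    def kl(a, b):
        nz = a > 0
        return (a[nz] * np.log2(a[nz] / b[nz])).sum()
    return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))

# Toy WGBS reads over a 4-CpG window: "normal" reads are correlated
# (all-or-none methylation), "tumour" reads are closer to independent noise.
normal = [(1, 1, 1, 1) if rng.random() < 0.5 else (0, 0, 0, 0) for _ in range(200)]
tumour = [tuple(rng.integers(0, 2, 4)) for _ in range(200)]

print("entropy normal:", round(entropy(pattern_distribution(normal)), 3))
print("entropy tumour:", round(entropy(pattern_distribution(tumour)), 3))
print("JS distance:   ", round(js_distance(pattern_distribution(normal),
                                           pattern_distribution(tumour)), 3))
# The correlated sample has low pattern entropy despite a 50% marginal
# methylation level, which is exactly what marginal methods cannot see.
```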

  17. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively, and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes (low, medium and high complexity), then carries out image feature extraction, and finally establishes a function between the complexity value and the color-characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of the image from image features, and that its results are in good agreement with the complexity perceived by human vision, so the color-based image complexity measure has a certain reference value.

  18. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable-fidelity surrogate modeling.

  19. Method of and System for Information Retrieval

    DEFF Research Database (Denmark)

    2015-01-01

    This invention relates to a system for and a method (100) of searching a collection of digital information (150) comprising a number of digital documents (110), the method comprising receiving or obtaining (102) a search query, the query comprising a number of search terms, searching (103) an index (300) using the search terms thereby providing information (301) about which digital documents (110) of the collection of digital information (150) contain a given search term and one or more search related metrics (302; 303; 304; 305; 306), and ranking (105) at least a part of the search result... In this way, a method of and a system for information retrieval or searching is readily provided that enhances the searching quality (i.e. the number of relevant documents retrieved and such documents being ranked high) when (also) using queries containing many search terms.

  20. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far, but many of them lack a meaningful interpretation; for example, it is often unclear which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure, called the topological information content of a graph, and others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
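    As a hedged sketch of a partition-based measure of this kind (using the degree partition as an easy stand-in for the automorphism-orbit partition of the classical topological information content), one can compute the entropy of the vertex-class sizes:

```python
import numpy as np
from collections import Counter

def topological_information_content(adjacency):
    """Partition-based graph entropy in bits: vertices are grouped into
    classes (here by degree, a common proxy for the orbit partition used
    in the classical definition) and the Shannon entropy of the relative
    class sizes is returned."""
    degrees = adjacency.sum(axis=0)
    sizes = np.array(list(Counter(degrees.tolist()).values()), dtype=float)
    p = sizes / sizes.sum()
    return float(-(p * np.log2(p)).sum())

# A small "chemical" graph: the carbon skeleton of 2-methylbutane.
A = np.zeros((5, 5), int)
for i, j in [(0, 1), (1, 2), (2, 3), (1, 4)]:
    A[i, j] = A[j, i] = 1

print(round(topological_information_content(A), 3))
# Highly symmetric graphs give low values; heterogeneous ones give
# high values, which is the intuition behind descriptor "uniqueness".
```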

  1. Molecular acidity: An accurate description with information-theoretic approach in density functional reactivity theory.

    Science.gov (United States)

    Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin

    2018-01-15

    Molecular acidity is one of the important physicochemical properties of a molecular system, yet its accurate calculation and prediction are still an unresolved problem in the literature. In this work, we propose to make use of the quantities from the information-theoretic (IT) approach in density functional reactivity theory and provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and applicable to other systems as well. © 2017 Wiley Periodicals, Inc.

  2. A method for comparison of experimental and theoretical differential neutron spectra in the Zenith reactor

    International Nuclear Information System (INIS)

    Reed, D.L.; Symons, C.R.

    1965-01-01

    A method of calculation is given which assists the analyses of chopper measurements of spectra from ZENITH and enables complex multigroup theoretical calculations of the spectra to be put into a form which may be compared with experiment. In addition the theory of the cut-off function has been extended to give analytical expressions which take into account the effects of sub-collimators, off centre slits and of a rotor made of a material partially transparent to neutrons. The theoretical cut-off function suggested shows good agreement with experiment. (author)

  3. A method for comparison of experimental and theoretical differential neutron spectra in the Zenith reactor

    Energy Technology Data Exchange (ETDEWEB)

    Reed, D L; Symons, C R [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1965-01-15

    A method of calculation is given which assists the analyses of chopper measurements of spectra from ZENITH and enables complex multigroup theoretical calculations of the spectra to be put into a form which may be compared with experiment. In addition the theory of the cut-off function has been extended to give analytical expressions which take into account the effects of sub-collimators, off centre slits and of a rotor made of a material partially transparent to neutrons. The theoretical cut-off function suggested shows good agreement with experiment. (author)

  4. An information-theoretical approach to image resolution applied to neutron imaging detectors based upon individual discriminator signals

    International Nuclear Information System (INIS)

    Clergeau, Jean-Francois; Ferraton, Matthieu; Guerard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daulle, Thibault

    2013-06-01

    1D or 2D neutron imaging detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of resolution. We then apply this measure to quantify the resolving power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the resolution over best-wire algorithms, which are the standard way of treating these signals. (authors)

  5. An information-theoretical approach to image resolution applied to neutron imaging detectors based upon individual discriminator signals

    Energy Technology Data Exchange (ETDEWEB)

    Clergeau, Jean-Francois; Ferraton, Matthieu; Guerard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick [Institut Laue Langevin, Neutron Detector Service, Grenoble (France); Daulle, Thibault [PHELMA Grenoble - INP Grenoble (France)

    2013-06-15

    1D or 2D neutron imaging detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of resolution. We then apply this measure to quantify the resolving power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the resolution over best-wire algorithms, which are the standard way of treating these signals. (authors)
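    A toy version of this construction (the PSF width, wire pitch and separations below are invented, and the "algorithm" is simply binning the blurred position): treat the spot label as a one-bit source, the discretised readout as the channel output, and compute the mutual information as a function of spot separation:

```python
import numpy as np
from math import erf

def mi_two_spots(distance, sigma=1.0, pitch=0.5, n_wires=81):
    """I(spot ; wire) in bits for two equiprobable point sources separated
    by `distance`, blurred by a Gaussian PSF of width `sigma` and read out
    on a wire grid of the given pitch (a toy channel model)."""
    edges = (np.arange(n_wires + 1) - n_wires / 2) * pitch
    def bin_probs(mu):
        cdf = np.array([0.5 * (1 + erf((e - mu) / (sigma * np.sqrt(2))))
                        for e in edges])
        return np.diff(cdf)
    pa, pb = bin_probs(-distance / 2), bin_probs(+distance / 2)
    py = 0.5 * (pa + pb)
    def h(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    # I(X;Y) = H(Y) - H(Y|X) for an equiprobable binary source
    return float(h(py) - 0.5 * h(pa) - 0.5 * h(pb))

for d in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(d, round(mi_two_spots(d), 3))
# MI rises from 0 towards the 1 bit carried by the source as the spots
# separate; the separation at which it crosses a chosen threshold gives
# a calibration-independent resolution figure, as in the abstract.
```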

  6. Analysis of methods. [information systems evolution environment

    Science.gov (United States)

    Mayer, Richard J. (Editor); Ackley, Keith A.; Wells, M. Sue; Mayer, Paula S. D.; Blinn, Thomas M.; Decker, Louis P.; Toland, Joel A.; Crump, J. Wesley; Menzel, Christopher P.; Bodenmiller, Charles A.

    1991-01-01

    Information is one of an organization's most important assets. For this reason the development and maintenance of an integrated information system environment is one of the most important functions within a large organization. The Integrated Information Systems Evolution Environment (IISEE) project has as one of its primary goals a computerized solution to the difficulties involved in the development of integrated information systems. To develop such an environment a thorough understanding of the enterprise's information needs and requirements is of paramount importance. This document is the current release of the research performed by the Integrated Development Support Environment (IDSE) Research Team in support of the IISEE project. Research indicates that an integral part of any information system environment would be multiple modeling methods to support the management of the organization's information. Automated tool support for these methods is necessary to facilitate their use in an integrated environment. An integrated environment makes it necessary to maintain an integrated database which contains the different kinds of models developed under the various methodologies. In addition, to speed the process of development of models, a procedure or technique is needed to allow automatic translation from one methodology's representation to another while maintaining the integrity of both. The purpose for the analysis of the modeling methods included in this document is to examine these methods with the goal being to include them in an integrated development support environment. To accomplish this and to develop a method for allowing intra-methodology and inter-methodology model element reuse, a thorough understanding of multiple modeling methodologies is necessary. Currently the IDSE Research Team is investigating the family of Integrated Computer Aided Manufacturing (ICAM) DEFinition (IDEF) languages IDEF(0), IDEF(1), and IDEF(1x), as well as ENALIM, Entity

  7. Three-dimensionality of space and the quantum bit: an information-theoretic approach

    International Nuclear Information System (INIS)

    Müller, Markus P; Masanes, Lluís

    2013-01-01

    It is sometimes pointed out as a curiosity that the state space of quantum two-level systems, i.e. the qubit, and actual physical space are both three-dimensional and Euclidean. In this paper, we suggest an information-theoretic analysis of this relationship, by proving a particular mathematical result: suppose that physics takes place in d spatial dimensions, and that some events happen probabilistically (not assuming quantum theory in any way). Furthermore, suppose there are systems that carry ‘minimal amounts of direction information’, interacting via some continuous reversible time evolution. We prove that this uniquely determines spatial dimension d = 3 and quantum theory on two qubits (including entanglement and unitary time evolution), and that it allows observers to infer local spatial geometry from probability measurements. (paper)

  8. A Theoretical Framework for Soft-Information-Based Synchronization in Iterative (Turbo Receivers

    Directory of Open Access Journals (Sweden)

    Lottici Vincenzo

    2005-01-01

    Full Text Available This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters like the carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that such a "turbo-estimation" approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization that allows parameter estimation procedures to be derived for carrier phase and frequency offset, as well as for timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.
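    The EM view of turbo synchronization can be sketched for the simplest case, carrier-phase estimation of BPSK with unknown symbols (a minimal stand-in for the paper's turbo-coded 16-QAM setting); the soft symbols of the E-step play the role of the decoder's soft decisions:

```python
import numpy as np

rng = np.random.default_rng(5)

# BPSK symbols through an AWGN channel with an unknown carrier phase offset.
n, sigma, true_phase = 500, 0.7, 0.6    # phase in radians
symbols = rng.choice([-1.0, 1.0], size=n)
noise = sigma * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
y = symbols * np.exp(1j * true_phase) + noise

theta = 0.0
for it in range(10):
    # E-step: soft symbol estimates given the current phase; the posterior
    # mean of a +/-1 symbol under Gaussian noise is a tanh of the
    # matched-filter output.
    soft = np.tanh(2.0 * np.real(y * np.exp(-1j * theta)) / sigma**2)
    # M-step: phase maximising the expected complete-data log-likelihood.
    theta = np.angle(np.sum(y * soft))
    print(it, round(theta, 4))
# theta converges towards true_phase (up to the inherent 180-degree BPSK
# ambiguity), with the soft decisions feeding back exactly as decoder
# outputs would in a full turbo receiver.
```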

  9. Some Observations on the Concepts of Information-Theoretic Entropy and Randomness

    Directory of Open Access Journals (Sweden)

    Jonathan D.H. Smith

    2001-02-01

    Full Text Available Abstract: Certain aspects of the history, derivation, and physical application of the information-theoretic entropy concept are discussed. Pre-dating Shannon, the concept is traced back to Pauli. A derivation from first principles is given, without use of approximations. The concept depends on the underlying degree of randomness. In physical applications, this translates to dependence on the experimental apparatus available. An example illustrates how this dependence affects Prigogine's proposal for the use of the Second Law of Thermodynamics as a selection principle for the breaking of time symmetry. The dependence also serves to yield a resolution of the so-called "Gibbs Paradox." Extension of the concept from the discrete to the continuous case is discussed. The usual extension is shown to be dimensionally incorrect. Correction introduces a reference density, leading to the concept of Kullback entropy. Practical relativistic considerations suggest a possible proper reference density.
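    The dimensional point can be made explicit. The naive continuous extension, −∫ p ln p dx, places a dimensional quantity inside the logarithm whenever x carries units; introducing a reference density m(x) with the same dimensions as p(x) yields the Kullback form, whose logarithm argument is dimensionless:

```latex
H_K[p] = -\int p(x)\,\ln\!\frac{p(x)}{m(x)}\,dx
```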

  10. Research Investigation of Information Access Methods

    Science.gov (United States)

    Heinrichs, John H.; Sharkey, Thomas W.; Lim, Jeen-Su

    2006-01-01

    This study investigates the satisfaction of library users at Wayne State University who utilize alternative information access methods. The LibQUAL+[TM] desired and perceived that satisfaction ratings are used to determine the user's "superiority gap." By focusing limited library resources to address "superiority gap" issues identified by each…

  11. Agile Methods from the Viewpoint of Information

    Directory of Open Access Journals (Sweden)

    Eder Junior Alves

    2017-10-01

    Full Text Available Introduction: Since Paul M. G. Otlet highlighted the term documentation in 1934, proposing how to collect and organize the world's knowledge, much scientific research has directed its attention to the study of Information Science. Methods and techniques have emerged that view the world from the perspective of information, and agile methods follow this trend. Objective: The purpose is to analyze the relevance of information flow to organizations adopting agile methods, and to understand how the innovation process is influenced by this practice. Methodology: This is a bibliometric study grounded in Systematic Literature Review (SLR); the integration of the SLR technique with the Summarize tool is a new methodological proposal. Results: Scrum appears with the highest number of publications in SPELL. In comparison, results from Google Scholar pointed to the importance of practices and team behaviors, while in the Science Direct repository, critical success factors in project management and software development are highlighted. Conclusions: It was evident that agile methods are being used as process innovations, and their benefits and advantages are apparent in both internal and external information flows. Due to its prevalence in the literature, Scrum deserves attention from firms.

  12. Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification.

    Science.gov (United States)

    Wixted, John T; Mickes, Laura

    2018-01-01

    Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d' or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.

  13. An information theoretic model of information processing in the Drosophila olfactory system: the role of inhibitory neurons for system efficiency.

    Science.gov (United States)

    Faghihi, Faramarz; Kolodziejski, Christoph; Fiala, André; Wörgötter, Florentin; Tetzlaff, Christian

    2013-12-20

    Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study how several parameters of the fly's olfactory system and of the environment influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and Kenyon cells and of the firing thresholds of the Kenyon cells. On the other hand, we analyze the influence of inhibition on the mutual information between environment and mushroom body. Our simulations show an expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and the firing threshold of the Kenyon cells needed to obtain maximum mutual information for both low and high odor concentrations. However, contradicting everyday experience, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. But when inhibition on the mushroom body is included, mutual information remains at high levels independent of the other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which the system's efficiency would be substantially reduced.
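    A hedged toy rendition of the abstract model (the connectivity rate, thresholds and inhibition rule below are invented stand-ins, not the published parameters): estimate the mutual information between a discrete odor level and the number of active Kenyon cells, with and without a global inhibition term:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)

def mi_bits(pairs):
    """I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def run(odor_gain, with_inhibition, n_trials=3000, n_pn=50, n_kc=200):
    w = (rng.random((n_kc, n_pn)) < 0.1).astype(float)   # PN->KC connectivity
    samples = []
    for _ in range(n_trials):
        level = int(rng.integers(4))                     # odor concentration
        pn = np.clip(odor_gain * (level + 1) + rng.normal(0, 1, n_pn), 0, None)
        drive = w @ pn
        if with_inhibition:
            drive = drive - drive.mean()                 # global inhibition
        active = int((drive > 5.0).sum())                # KCs above threshold
        samples.append((level, active))
    return mi_bits(samples)

for gain in [0.5, 4.0]:                                  # low vs high odor
    print(f"gain={gain}: no inhibition {run(gain, False):.2f} bits, "
          f"with inhibition {run(gain, True):.2f} bits")
# In this toy setup, the uninhibited population saturates at high odor
# gain (every KC fires, so the count carries little information), while
# the inhibited population stays informative across gains.
```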

  14. Theoretical reflections on the connection between environmental assessment methods and conflict

    International Nuclear Information System (INIS)

    Persson, Jesper

    2006-01-01

    Today there is a great variety of methods for evaluating the environmental impact of plans, programs and projects. But which of these methods should planners and managers choose? This theoretical article explores the connection between conflicts, communication and rationality in assessment methods. It focuses on the form (rationality) and substance of communication, i.e. what we should communicate about. The outcome supports the view that environmental assessments should be based on value- and interest-focused thinking, following a teleological ethic, when goals, alternatives and compensations are to be developed and impacts evaluated

  15. Methods to determine stratification efficiency of thermal energy storage processes–Review and theoretical comparison

    DEFF Research Database (Denmark)

    Haller, Michel; Cruickshank, Chynthia; Streicher, Wolfgang

    2009-01-01

    This paper reviews different methods that have been proposed to characterize thermal stratification in energy storages from a theoretical point of view. Specifically, this paper focuses on the methods that can be used to determine the ability of a storage to promote and maintain stratification...... during charging, storing and discharging, and represent this ability with a single numerical value in terms of a stratification efficiency for a given experiment or under given boundary conditions. Existing methods for calculating stratification efficiencies have been applied to hypothetical storage...

  16. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)

    2016-11-01

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
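    A minimal sketch of the sequential design loop described (the grid posterior, Gaussian likelihood and "high-fidelity code" below are all hypothetical stand-ins): at each step, every candidate condition is scored by a Monte Carlo estimate of expected posterior entropy, and the expensive code is run only at the winner:

```python
import numpy as np

rng = np.random.default_rng(7)

# A cheap low-fidelity model y = a*x with unknown parameter a, to be
# calibrated against an expensive high-fidelity code (a stand-in here).
def high_fidelity(x):
    return 2.5 * x + rng.normal(0.0, 0.05)

grid = np.linspace(1.0, 4.0, 301)            # discretised support for a
post = np.ones_like(grid) / grid.size        # flat prior over a
candidates = np.linspace(0.1, 1.0, 10)       # possible design conditions
noise = 0.1                                  # assumed likelihood noise

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def update(post, x, y):
    like = np.exp(-0.5 * ((y - grid * x) / noise) ** 2)
    new = post * like
    return new / new.sum()

for step in range(4):
    # Expected posterior entropy for each candidate design, averaged over
    # the current predictive distribution by simple Monte Carlo.
    scores = []
    for x in candidates:
        h = 0.0
        for _ in range(100):
            a = rng.choice(grid, p=post)
            y = a * x + rng.normal(0.0, noise)
            h += entropy(update(post, x, y))
        scores.append(h / 100)
    best = candidates[int(np.argmin(scores))]
    post = update(post, best, high_fidelity(best))   # one expensive run
    print(f"step {step}: ran high-fidelity code at x={best:.2f}, "
          f"posterior entropy {entropy(post):.3f}")
# The loop concentrates the expensive evaluations where they reduce
# parameter uncertainty the most, the essence of the framework above.
```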

  17. A theoretical framework informing research about the role of stress in the pathophysiology of bipolar disorder.

    Science.gov (United States)

    Brietzke, Elisa; Mansur, Rodrigo Barbachan; Soczynska, Joanna; Powell, Alissa M; McIntyre, Roger S

    2012-10-01

    The staggering illness burden associated with Bipolar Disorder (BD) invites the need for primary prevention strategies. Before preventative strategies can be considered in individuals during a pre-symptomatic period (i.e., at risk), unraveling the mechanistic steps wherein external stress is transduced and interacts with genetic vulnerability in the early stages of BD will be a critical conceptual necessity. Herein we comprehensively review extant studies reporting on stress and bipolar disorder. The overarching aim is to propose a conceptual framework to inform research about the role of stress in the pathophysiology of BD. Computerized databases, i.e. PubMed, PsychInfo, Cochrane Library and Scielo, were searched using the following terms: "bipolar disorder" cross-referenced with "stress", "general reaction to stress", "resilience", "resistance", "recovery", "stress-diathesis", "allostasis", and "hormesis". Data from the literature indicate the existence of several theoretical models for understanding the influence of stress in the pathophysiology of BD, including the classical stress-diathesis model and newer models such as allostasis and hormesis. In addition, the molecular mechanisms involved in stress adaptation (resistance, resilience and recovery) can also be translated into research strategies to investigate the impact of stress in the pathophysiology of BD. Most studies are retrospective and/or cross-sectional, do not consider the period of development, assess brain function with only one or a few methodologies, and use animal models which are not always similar to human phenotypes. The interaction between stress and brain development is dynamic and complex. In this article we propose a theoretical model for investigating the role of stress in the pathophysiology of BD, based on the different kinds of stress adaptation response and their putative neurobiological underpinnings. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Game-theoretic methods for functional response and optimal foraging behavior

    Czech Academy of Sciences Publication Activity Database

    Cressman, R.; Křivan, Vlastimil; Brown, J. S.; Garay, J.

    2014-01-01

    Roč. 9, č. 2 (2014), e88773 E-ISSN 1932-6203 Grant - others:Hungarian National Research Fund(HU) K62000; Hungarian National Research Fund(HU) K67961 Institutional support: RVO:60077344 Keywords : game-theoretic methods Subject RIV: EH - Ecology, Behaviour Impact factor: 3.234, year: 2014 http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0088773

  19. Theoretical methods for the calculation of the multiphoton ionisation cross-section of atoms and molecules

    International Nuclear Information System (INIS)

    Moccia, R.

    1991-01-01

    Some of the available theoretical methods to compute the two-photon ionisation cross-section of many-electron systems are reviewed. In particular the problems concerning the computation of (i) reliable approximations for the transition matrix elements and the excitation energies; and (ii) accurate results pertaining to the electronic continuum by the use of L² basis functions are considered. (author). 29 refs., 6 figs., 1 tab

  20. Eclecticism as the foundation of meta-theoretical, mixed methods and interdisciplinary research in social sciences.

    Science.gov (United States)

    Kroos, Karmo

    2012-03-01

    This article examines the value of "eclecticism" as the foundation of meta-theoretical, mixed methods and interdisciplinary research in social sciences. On the basis of the analysis of the historical background of the concept, it is first suggested that eclecticism-based theoretical scholarship in social sciences could benefit from the more systematic research method that has been developed for synthesizing theoretical works under the name metatheorizing. Second, it is suggested that the mixed methods community could base its research approach on philosophical eclecticism instead of pragmatism because the basic idea of eclecticism is much more in sync with the nature of the combined research tradition. Finally, the Kuhnian frame is used to support the argument for interdisciplinary research and, hence, eclecticism in social sciences (rather than making an argument against multiple paradigms). More particularly, it is suggested that integrating the different (inter)disciplinary traditions and schools into one is not necessarily desirable at all in social sciences because of the complexity and openness of the research field. If it is nevertheless attempted, experience in economics suggests that paradigmatic unification comes at a high price.

  1. Consumers’ Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    Directory of Open Access Journals (Sweden)

    Saleh Alwahaishi

    2013-03-01

    Full Text Available The world has changed a lot in the past years. Rapid advances in technology and changing communication channels have changed the way people work and, for many, where they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT), are converging into one ubiquitous mobile Internet service, which will change our way of both doing business and dealing with our daily routine activities. As the use of ICT expands globally, there is a need for further research into the cultural aspects and implications of ICT. The acceptance of Information Technology (IT) has become a fundamental part of the research plan for most organizations (Igbaria 1993). In IT research, numerous theories are used to understand users' adoption of new technologies. Various models have been developed, including the Technology Acceptance Model, the Theory of Reasoned Action, the Theory of Planned Behavior, and, recently, the Unified Theory of Acceptance and Use of Technology. Each of these models has sought to identify the factors which influence a citizen's intention or actual use of information technology. Drawing on the UTAUT model and Flow Theory, this research composes a new hybrid theoretical framework to identify the factors affecting the acceptance and use of mobile Internet (as an ICT application) in a consumer context. The proposed model incorporates eight constructs: Performance Expectancy, Effort Expectancy, Facilitating Conditions, Social Influences, Perceived Value, Perceived Playfulness, Attention Focus, and Behavioral Intention. Data collected online from 238 respondents in Saudi Arabia were tested against the research model using the structural equation modeling approach. The proposed model was mostly supported by the empirical data. The findings of this study provide several crucial implications for ICT and, in particular, mobile Internet service practitioners and researchers.

  2. Practical Methods for Information Security Risk Management

    Directory of Open Access Journals (Sweden)

    Cristian AMANCEI

    2011-01-01

    Full Text Available The purpose of this paper is to present some directions for performing risk management for information security. The article works through practical methods: a questionnaire that assesses internal control, and an evaluation based on existing controls as part of a vulnerability assessment. The methods presented contain all the key elements that contribute to risk management, through the elements proposed for the evaluation questionnaire, the list of threats, resource classification and evaluation, the correlation between risks and controls, and residual risk computation.

  3. Theoretical and numerical investigations into the SPRT method for anomaly detection

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Interuniversitair Reactor Inst., Delft (Netherlands)

    1995-11-01

    The sequential probability ratio test developed by Wald is a powerful method of testing an alternative hypothesis against a null hypothesis. This makes the method applicable for anomaly detection. In this paper the method is used to detect a change of the standard deviation of a Gaussian distributed white noise signal. The false alarm probability, the alarm failure probability and the average time to alarm of the method, which are important parameters for anomaly detection, are determined by simulation and compared with theoretical results. Each of the three parameters is presented in dependence of the other two and the ratio of the standard deviation of the anomalous signal and that of the normal signal. Results show that the method is very well suited for anomaly detection. It can detect for example a 50% change in standard deviation within 1 second with a false alarm and alarm failure rate of less than once per month. (author).

  4. Theoretical and numerical investigations into the SPRT method for anomaly detection

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1995-01-01

    The sequential probability ratio test developed by Wald is a powerful method of testing an alternative hypothesis against a null hypothesis. This makes the method applicable for anomaly detection. In this paper the method is used to detect a change of the standard deviation of a Gaussian distributed white noise signal. The false alarm probability, the alarm failure probability and the average time to alarm of the method, which are important parameters for anomaly detection, are determined by simulation and compared with theoretical results. Each of the three parameters is presented in dependence of the other two and the ratio of the standard deviation of the anomalous signal and that of the normal signal. Results show that the method is very well suited for anomaly detection. It can detect for example a 50% change in standard deviation within 1 second with a false alarm and alarm failure rate of less than once per month. (author)
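    Wald's test for this exact detection problem is easy to state concretely; below is a minimal sketch, with the error probabilities chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(8)

def sprt_variance(x, sigma0, sigma1, alpha=1e-6, beta=1e-3):
    """Wald's SPRT for H1: std = sigma1 against H0: std = sigma0 on a
    zero-mean Gaussian white-noise signal. Returns (decision, n_used)."""
    upper = np.log((1 - beta) / alpha)        # accept H1 (raise alarm)
    lower = np.log(beta / (1 - alpha))        # accept H0 (no alarm)
    llr = 0.0
    for n, xi in enumerate(x, start=1):
        # log-likelihood ratio increment for one Gaussian sample
        llr += (np.log(sigma0 / sigma1)
                + xi**2 * (1 / (2 * sigma0**2) - 1 / (2 * sigma1**2)))
        if llr >= upper:
            return "alarm", n
        if llr <= lower:
            return "normal", n
    return "undecided", len(x)

# A signal whose standard deviation steps from 1.0 to 1.5 (a 50% change).
normal = rng.normal(0, 1.0, 10000)
anomalous = rng.normal(0, 1.5, 10000)

print(sprt_variance(normal, 1.0, 1.5))      # accepts H0 after a short run
print(sprt_variance(anomalous, 1.0, 1.5))   # raises an alarm quickly
# Tightening alpha lowers the false alarm rate at the cost of a longer
# average time to alarm, the trade-off quantified in the paper.
```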

  5. Adjusting Estimates of the Expected Value of Information for Implementation: Theoretical Framework and Practical Application.

    Science.gov (United States)

    Andronis, Lazaros; Barton, Pelham M

    2016-04-01

    Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. To present a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £ 25 million and £ 8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £ 17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.

  6. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    The vapor-compression refrigeration systems have been one of the essential energy conversion systems for humankind and nowadays consume huge amounts of energy. Many effective optimization methods exist for promoting the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes: heat transfer analysis for the condenser and evaporator, introduced through entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of the heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are obtained analytically. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases

  7. Theoretical Methods of Domain Structures in Ultrathin Ferroelectric Films: A Review

    Directory of Open Access Journals (Sweden)

    Jianyi Liu

    2014-09-01

    Full Text Available This review covers methods and recent developments in the theoretical study of domain structures in ultrathin ferroelectric films. The review begins with an introduction to some basic concepts and theories relevant to the study of domain structures in ultrathin ferroelectric films (e.g., polarization and its modern theory, ferroelectric phase transitions, domain formation, and finite-size effects). Basic techniques and recent progress of a variety of important approaches for domain structure simulation, including first-principles calculation, molecular dynamics, Monte Carlo simulation, the effective Hamiltonian approach and phase field modeling, as well as multiscale simulation, are then elaborated. For each approach, its important features and relative merits over other approaches for modeling domain structures in ultrathin ferroelectric films are discussed. Finally, we review recent theoretical studies on some important issues of domain structures in ultrathin ferroelectric films, with an emphasis on the effects of interfacial electrostatics, boundary conditions and external loads.

  8. Fuel cycle covariance of plutonium and americium separations to repository capacity using information theoretic measures

    International Nuclear Information System (INIS)

    Scopatz, Anthony; Schneider, Erich; Li, Jun; Yim, Man-Sung

    2011-01-01

    A light water reactor, fast reactor symbiotic fuel cycle scenario was modeled and parameterized with thirty independent inputs. By simultaneously and stochastically choosing values for each of these inputs and performing the associated fuel cycle mass-balance calculation, the fuel cycle itself underwent Monte Carlo simulation. A novel information theoretic metric is postulated as a measure of system-wide covariance: the coefficient of variation of the set of uncertainty coefficients generated from 2D slices of a 3D contingency table. This metric is applied to the fuel cycle, taking fast reactor used-fuel plutonium and americium separations as independent variables and the capacity of a fully loaded tuff repository as the response. This pair of parameters is known from prior studies to exhibit strong covariance. When ranked against all 435 possible input parameter pairs, the fast reactor plutonium and americium separations pair was found to be the second most covariant. This verifies that the coefficient of variation metric captures the desired covariance effects in the nuclear fuel cycle. (author)
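
    The metric itself is easy to prototype. The sketch below is a guess at the construction from the abstract alone: it computes Theil's uncertainty coefficient for each 2D slice of a 3D contingency table and then takes the coefficient of variation of those coefficients. The random table stands in for binned fuel cycle inputs and responses.

    ```python
    import numpy as np

    def uncertainty_coefficient(table):
        """Theil's U(Y|X) for a 2D contingency table of counts."""
        p = table / table.sum()
        px = p.sum(axis=1, keepdims=True)            # marginal of X (rows)
        py = p.sum(axis=0)                           # marginal of Y (columns)
        nz = p > 0
        h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
        h_y_given_x = -np.sum(p[nz] * np.log((p / px)[nz]))
        return (h_y - h_y_given_x) / h_y

    # 3D contingency table: two binned inputs on axes 0 and 1, response on axis 2
    rng = np.random.default_rng(0)
    table3d = rng.integers(1, 50, size=(8, 8, 8))

    # Uncertainty coefficient of each 2D slice taken along the first input axis,
    # then the coefficient of variation across those slices
    us = [uncertainty_coefficient(table3d[i]) for i in range(table3d.shape[0])]
    cov = np.std(us) / np.mean(us)
    print(f"CoV of uncertainty coefficients: {cov:.3f}")
    ```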

  9. Information-theoretic security proof for quantum-key-distribution protocols

    International Nuclear Information System (INIS)

    Renner, Renato; Gisin, Nicolas; Kraus, Barbara

    2005-01-01

    We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel

  10. Information-Theoretic Limits on Broadband Multi-Antenna Systems in the Presence of Mutual Coupling

    Science.gov (United States)

    Taluja, Pawandeep Singh

    2011-12-01

    Multiple-input, multiple-output (MIMO) systems have received considerable attention over the last decade due to their ability to provide high throughputs and mitigate multipath fading effects. While most of these benefits are obtained for ideal arrays with large separation between the antennas, practical devices are often constrained in physical dimensions. With smaller inter-element spacings, signal correlation and mutual coupling between the antennas start to degrade the system performance, thereby limiting the deployment of a large number of antennas. Various studies have proposed transceiver designs based on optimal matching networks to compensate for this loss. However, such networks are considered impractical due to their multiport structure and sensitivity to the RF bandwidth of the system. In this dissertation, we investigate two aspects of compact transceiver design. First, we consider simpler architectures that exploit coupling between the antennas, and second, we establish information-theoretic limits of broadband communication systems with closely-spaced antennas. We begin with a receiver model of a diversity antenna selection system and propose novel strategies that make use of inactive elements by virtue of mutual coupling. We then examine the limits on the matching efficiency of a single antenna system using broadband matching theory. Next, we present an extension to this theory for coupled MIMO systems to elucidate the impact of coupling on the RF bandwidth of the system, and derive optimal transceiver designs. Lastly, we summarize the main findings of this dissertation and suggest open problems for future work.

  11. Theoretically informed correlates of hepatitis B knowledge among four Asian groups: the health behavior framework.

    Science.gov (United States)

    Maxwell, Annette E; Stewart, Susan L; Glenn, Beth A; Wong, Weng Kee; Yasui, Yutaka; Chang, L Cindy; Taylor, Victoria M; Nguyen, Tung T; Chen, Moon S; Bastani, Roshan

    2012-01-01

    Few studies have examined theoretically informed constructs related to hepatitis B (HBV) testing, and comparisons across studies are challenging due to lack of uniformity in constructs assessed. The present analysis examined relationships among Health Behavior Framework factors across four Asian American groups to advance the development of theory-based interventions for HBV testing in at-risk populations. Data were collected from 2007-2010 as part of baseline surveys during four intervention trials promoting HBV testing among Vietnamese-, Hmong-, Korean- and Cambodian-Americans (n = 1,735). Health Behavior Framework constructs assessed included: awareness of HBV, knowledge of transmission routes, perceived susceptibility, perceived severity, doctor recommendation, stigma of HBV infection, and perceived efficacy of testing. Within each group we assessed associations between our intermediate outcome of knowledge of HBV transmission and other constructs, to assess the concurrent validity of our model and instruments. While the absolute levels for Health Behavior Framework factors varied across groups, relationships between knowledge and other factors were generally consistent. This suggests similarities rather than differences with respect to posited drivers of HBV-related behavior. Our findings indicate that Health Behavior Framework constructs are applicable to diverse ethnic groups and provide preliminary evidence for the construct validity of the Health Behavior Framework.

  12. Model-free information-theoretic approach to infer leadership in pairs of zebrafish.

    Science.gov (United States)

    Butail, Sachit; Mwaffo, Violet; Porfiri, Maurizio

    2016-04-01

    Collective behavior affords several advantages to fish in avoiding predators, foraging, mating, and swimming. Although fish schools have been traditionally considered egalitarian superorganisms, a number of empirical observations suggest the emergence of leadership in gregarious groups. Detecting and classifying leader-follower relationships is central to elucidate the behavioral and physiological causes of leadership and understand its consequences. Here, we demonstrate an information-theoretic approach to infer leadership from positional data of fish swimming. In this framework, we measure social interactions between fish pairs through the mathematical construct of transfer entropy, which quantifies the predictive power of a time series to anticipate another, possibly coupled, time series. We focus on the zebrafish model organism, which is rapidly emerging as a species of choice in preclinical research for its genetic similarity to humans and reduced neurobiological complexity with respect to mammals. To overcome experimental confounds and generate test data sets on which we can thoroughly assess our approach, we adapt and calibrate a data-driven stochastic model of zebrafish motion for the simulation of a coupled dynamical system of zebrafish pairs. In this synthetic data set, the extent and direction of the coupling between the fish are systematically varied across a wide parameter range to demonstrate the accuracy and reliability of transfer entropy in inferring leadership. Our approach is expected to aid in the analysis of collective behavior, providing a data-driven perspective to understand social interactions.
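
    Transfer entropy itself is straightforward to estimate for discretized trajectories. Below is a minimal plug-in estimator, assuming the standard one-step definition TE(X→Y) = Σ p(y_t, y_{t-1}, x_{t-1}) log [p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1})]; the bin count, series length, and the toy leader-follower dynamics are arbitrary choices, not the calibrated zebrafish model of the paper.

    ```python
    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y, bins=8):
        """Plug-in estimate of TE(x -> y): how much x's past helps predict y."""
        xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
        yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
        triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))    # (y_t, y_t-1, x_t-1)
        n = sum(triples.values())
        pair_yy, pair_yx, single_y = Counter(), Counter(), Counter()
        for (y1, y0, x0), c in triples.items():
            pair_yy[(y1, y0)] += c
            pair_yx[(y0, x0)] += c
            single_y[y0] += c
        return sum((c / n) * np.log2(c * single_y[y0]
                                     / (pair_yy[(y1, y0)] * pair_yx[(y0, x0)]))
                   for (y1, y0, x0), c in triples.items())

    # Synthetic leader-follower pair: the follower copies the leader with a lag.
    rng = np.random.default_rng(1)
    n = 5000
    leader = np.zeros(n + 1)
    for t in range(1, n + 1):
        leader[t] = 0.9 * leader[t - 1] + rng.standard_normal()
    follower = 0.7 * leader[:-1] + 0.5 * rng.standard_normal(n)
    leader = leader[1:]
    print(transfer_entropy(leader, follower), transfer_entropy(follower, leader))
    ```

    A clear asymmetry, TE(leader→follower) well above TE(follower→leader), is the signature used to classify which fish leads.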

  13. Information-theoretic security proof for quantum-key-distribution protocols

    Science.gov (United States)

    Renner, Renato; Gisin, Nicolas; Kraus, Barbara

    2005-07-01

    We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel.

  14. An information-theoretic machine learning approach to expression QTL analysis.

    Directory of Open Access Journals (Sweden)

    Tao Huang

    Full Text Available Expression Quantitative Trait Locus (eQTL) analysis is a powerful tool to study the biological mechanisms linking the genotype with gene expression. Such analyses can identify genomic locations where genotypic variants influence the expression of genes, both in close proximity to the variant (cis-eQTL) and on other chromosomes (trans-eQTL). Many traditional eQTL methods are based on a linear regression model. In this study, we propose a novel method to identify eQTL associations with information theory and machine learning approaches. Mutual Information (MI) is used to describe the association between a genetic marker and gene expression. MI can detect both linear and non-linear associations, and it can capture the heterogeneity of the population. Advanced feature selection methods, Maximum Relevance Minimum Redundancy (mRMR) and Incremental Feature Selection (IFS), were applied to optimize the selection of the genes affected by the genetic marker. When we applied our method to a study of apoE-deficient mice, we found that cis-acting eQTLs are stronger than trans-acting eQTLs, but that trans-acting eQTLs outnumber cis-acting eQTLs. We compared our results (mRMR.eQTL) with R/qtl and MatrixEQTL (modelLINEAR and modelANOVA). In female mice, 67.9% of mRMR.eQTL results could be confirmed by at least two other methods, while only 14.4% of R/qtl results could be confirmed by at least two other methods. In male mice, 74.1% of mRMR.eQTL results could be confirmed by at least two other methods, while only 18.2% of R/qtl results could be confirmed by at least two other methods. Our method provides a new way to identify the association between genetic markers and gene expression. Our software is available in the supporting information.
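
    A minimal sketch of the MI-plus-mRMR idea follows, using synthetic data; the binning scheme, gene count, and effect sizes are invented for illustration and are not the paper's pipeline.

    ```python
    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(0)
    genotype = rng.integers(0, 3, size=500)                 # marker coded 0/1/2
    genes = genotype[:, None] * rng.random(5) + rng.standard_normal((500, 5))

    def binned(x, bins=3):
        """Discretize a continuous expression profile at its empirical quantiles."""
        return np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))

    # Relevance: MI between the marker and each gene's (binned) expression
    relevance = [mutual_info_score(genotype, binned(genes[:, j])) for j in range(5)]

    # One greedy mRMR step: the next pick maximizes relevance to the marker
    # minus mean redundancy (MI) with the genes already selected.
    selected = [int(np.argmax(relevance))]
    rest = [j for j in range(5) if j not in selected]
    scores = {j: relevance[j] - np.mean([mutual_info_score(binned(genes[:, s]),
                                                           binned(genes[:, j]))
                                         for s in selected])
              for j in rest}
    print(selected, max(scores, key=scores.get))
    ```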

  15. Pathways from Trauma to Psychotic Experiences: A Theoretically Informed Model of Posttraumatic Stress in Psychosis

    Directory of Open Access Journals (Sweden)

    Amy Hardy

    2017-05-01

    Full Text Available In recent years, empirical data and theoretical accounts relating to the relationship between childhood victimization and psychotic experiences have accumulated. Much of this work has focused on co-occurring Posttraumatic Stress Disorder or putative causal mechanisms in isolation from each other. The complexity of posttraumatic stress reactions experienced in psychosis remains poorly understood. This paper therefore attempts to synthesize the current evidence base into a theoretically informed, multifactorial model of posttraumatic stress in psychosis. Three trauma-related vulnerability factors are proposed to give rise to intrusions and to affect how people appraise and cope with them. First, understandable attempts to survive trauma become habitual ways of regulating emotion, manifesting in cognitive-affective, behavioral and interpersonal responses. Second, event memories, consisting of perceptual and episodic representations, are impacted by emotion experienced during trauma. Third, personal semantic memory, specifically appraisals of the self and others, is shaped by event memories. It is proposed that these vulnerability factors have the potential to lead to two types of intrusions. The first type is anomalous experiences arising from emotion regulation and/or the generation of novel images derived from trauma memory. The second type is trauma memory intrusions reflecting, to varying degrees, the retrieval of perceptual, episodic and personal semantic representations. It is speculated that trauma memory intrusions may be experienced on a continuum from contextualized to fragmented, depending on memory encoding and retrieval. Personal semantic memory will then impact on how intrusions are appraised, with habitual emotion regulation strategies influencing people's coping responses to these. Three vignettes are outlined to illustrate how the model accounts for different pathways between victimization and psychosis, and implications for therapy are discussed.

  16. Transactive System: Part I: Theoretical Underpinnings of Payoff Functions, Control Decisions, Information Privacy, and Solution Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Lian, Jianming [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Wei [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Marinovici, Laurentiu D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kalsi, Karanjit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Widergren, Steven E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2018-01-17

    The aim of this work is a new transactive energy system design with demonstrable guarantees on stability and performance. Specifically, the goals are to (1) establish a theoretical basis for evaluating the performance of different transactive systems, (2) devise tools to address canonical problems that exemplify challenges and scenarios of transactive systems, and (3) provide guidelines for design of future transactive systems. This report, Part 1 of a two-part series, advances the above-listed research objectives by reviewing existing transactive systems and identifying a theoretical foundation that integrates payoff functions, control decisions, information privacy, and mathematical solution concepts.

  17. 3D nonrigid medical image registration using a new information theoretic measure

    Science.gov (United States)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. Registration thus amounts to optimizing an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; it is minimized with the limited-memory BFGS method to obtain the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed with the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, each 4D CT data set comprising ten 3D CT images covering an entire respiration cycle. The results were compared with the normalized cross correlation and the mutual information methods and show a slight but genuine improvement in registration accuracy.
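
    For intuition, here is a small sketch of the two ingredients, assuming the usual norm form of the Arimoto entropy, H_a(p) = (a/(1-a))((Σ_i p_i^a)^{1/a} - 1), and a Jensen difference construction analogous to Jensen-Shannon; the marginal-histogram image comparison below is a toy stand-in for the paper's Parzen-window joint estimation.

    ```python
    import numpy as np

    def arimoto_entropy(p, alpha=2.0):
        """Arimoto entropy H_a(p) = (a/(1-a)) * (||p||_a - 1); Shannon as a -> 1."""
        p = p / p.sum()
        return alpha / (1.0 - alpha) * ((p ** alpha).sum() ** (1.0 / alpha) - 1.0)

    def jensen_arimoto(p, q, alpha=2.0):
        """Jensen difference of the Arimoto entropy, used as a (dis)similarity."""
        m = 0.5 * (p / p.sum() + q / q.sum())
        return arimoto_entropy(m, alpha) - 0.5 * (arimoto_entropy(p, alpha)
                                                  + arimoto_entropy(q, alpha))

    # Toy image pair: the second is an intensity-remapped copy of the first.
    rng = np.random.default_rng(0)
    img1 = rng.random((64, 64))
    img2 = img1 ** 1.5
    h1, _ = np.histogram(img1, bins=32, range=(0.0, 1.0))
    h2, _ = np.histogram(img2, bins=32, range=(0.0, 1.0))
    print(jensen_arimoto(h1.astype(float), h2.astype(float)))
    ```

    Because the Arimoto entropy avoids logarithms of the probabilities, empty histogram bins need no special handling, one practical convenience of this family over the plain Shannon-based mutual information.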

  18. 3D nonrigid medical image registration using a new information theoretic measure

    International Nuclear Information System (INIS)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-01-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen–Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations are adopted as the transformation model and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. Registration thus amounts to optimizing an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; it is minimized with the limited-memory BFGS method to obtain the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed with the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, each 4D CT data set comprising ten 3D CT images covering an entire respiration cycle. The results were compared with the normalized cross correlation and the mutual information methods and show a slight but genuine improvement in registration accuracy. (paper)

  19. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    International Nuclear Information System (INIS)

    Shafii, Mohammad Ali; Meidianti, Rahma; Wildian,; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-01-01

    Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method uses a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with a spatial mesh that adopts a non-flat flux approach: the neutron flux is allowed to differ from point to point within the fuel cell, following a quadratic flux distribution. The result is presented in the form of the quadratic flux, which gives a better picture of the real conditions in the cell calculation and serves as a starting point for computational implementation.

  20. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Mohammad Ali, E-mail: mashafii@fmipa.unand.ac.id; Meidianti, Rahma, E-mail: mashafii@fmipa.unand.ac.id; Wildian,, E-mail: mashafii@fmipa.unand.ac.id; Fitriyani, Dian, E-mail: mashafii@fmipa.unand.ac.id [Department of Physics, Andalas University Padang West Sumatera Indonesia (Indonesia); Tongkukut, Seni H. J. [Department of Physics, Sam Ratulangi University Manado North Sulawesi Indonesia (Indonesia); Arkundato, Artoto [Department of Physics, Jember University Jember East Java Indonesia (Indonesia)

    2014-09-30

    Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method uses a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with a spatial mesh that adopts a non-flat flux approach: the neutron flux is allowed to differ from point to point within the fuel cell, following a quadratic flux distribution. The result is presented in the form of the quadratic flux, which gives a better picture of the real conditions in the cell calculation and serves as a starting point for computational implementation.

  1. When the Mannequin Dies, Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

    Science.gov (United States)

    Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W

    2016-06-01

    Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners, as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and the degree of presession anxiety; these results were used to refine the theoretical framework. Using this approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.

  2. Investigation of Means of Mitigating Congestion in Complex, Distributed Network Systems by Optimization Means and Information Theoretic Procedures

    Science.gov (United States)

    2008-02-01

    Mufalli, Frank; Nagi, Rakesh; Llinas, Jim; Mishra, Sumita (SUNY at Buffalo / CUBRC, Buffalo, NY)

  3. Structural, vibrational and nuclear magnetic resonance investigations of 4-bromoisoquinoline by experimental and theoretical DFT methods.

    Science.gov (United States)

    Arjunan, V; Thillai Govindaraja, S; Jayapraksh, A; Mohan, S

    2013-04-15

    Quantum chemical calculations of the energy, structural parameters and vibrational wavenumbers of 4-bromoisoquinoline (4BIQ) were carried out with the B3LYP method using the 6-311++G**, cc-pVTZ and LANL2DZ basis sets. The optimised geometrical parameters obtained by the DFT calculations are in good agreement with electron diffraction data. Interpretations of the experimental FTIR and FT-Raman spectra are reported with the aid of the theoretical wavenumbers. The differences between the observed and scaled wavenumber values of most of the fundamentals are very small. The thermodynamic parameters have also been computed. Electronic properties of the molecule are discussed through the molecular electrostatic potential surface, the HOMO-LUMO energy gap and NBO analysis. To provide precise assignments of the (1)H and (13)C NMR spectra, isotropic shieldings and chemical shifts were calculated with the Gauge-Invariant Atomic Orbital (GIAO) method. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Knowledge information management toolkit and method

    Science.gov (United States)

    Hempstead, Antoinette R.; Brown, Kenneth L.

    2006-08-15

    A system is provided for managing user entry and/or modification of knowledge information into a knowledge base file having an integrator support component and a data source access support component. The system includes processing circuitry, memory, a user interface, and a knowledge base toolkit. The memory communicates with the processing circuitry and is configured to store at least one knowledge base. The user interface communicates with the processing circuitry and is configured for user entry and/or modification of knowledge pieces within a knowledge base. The knowledge base toolkit is configured for converting knowledge in at least one knowledge base from a first knowledge base form into a second knowledge base form. A method is also provided.

  5. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    Science.gov (United States)

    Makowska, Joanna; Bagiñska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyñski, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two model peptides: Ac-K5-NHMe in a 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between successive dissociation constants.
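
    The qualitative behavior described here (smooth curves when site pKa values are close, jumps when they are well separated) can be reproduced in a few lines. The following is a toy independent-sites model, ignoring activity coefficients, electrostatic site-site coupling and conformational effects; the pKa values are invented.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    pKas = [4.5, 9.2, 10.5]   # hypothetical site pKa values

    def protons_bound(pH):
        """Average protons bound per molecule, independent-sites approximation."""
        return sum(1.0 / (1.0 + 10 ** (pH - pKa)) for pKa in pKas)

    # Invert: for each number of acid equivalents added, find the pH at which
    # the bound-proton count matches (a crude titration curve).
    for eq in np.linspace(0.05, len(pKas) - 0.05, 12):
        pH = brentq(lambda x: protons_bound(x) - eq, 0.0, 14.0)
        print(f"{eq:4.2f} equivalents -> pH {pH:5.2f}")
    ```

    With the well-separated pKa at 4.5 the curve shows a clear jump; pushing the three values close together flattens it into the smooth multifunctional-acid shape the abstract describes.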

  6. Theoretical simulation of the dual-heat-flux method in deep body temperature measurements.

    Science.gov (United States)

    Huang, Ming; Chen, Wenxi

    2010-01-01

    Deep body temperature reveals individual physiological states and is important in patient monitoring and chronobiological studies. The innovative dual-heat-flux method has been shown experimentally to be competitive with the conventional zero-heat-flow method in terms of measurement accuracy and step response to changes in the deep temperature. We have used a finite element method to model and simulate the dynamic behavior of a dual-heat-flux probe in deep body temperature measurements, to validate the fundamental principles of the dual-heat-flux method theoretically, and to acquire a detailed quantitative description of the thermal profile of the probe. The simulation results show that the estimated deep body temperature is influenced by the ambient temperature (linearly, at a maximum rate of 0.03 °C/°C) and the blood perfusion rate. The depth in the skin and subcutaneous tissue layer to which the estimated temperature corresponds is consistent when the dual-heat-flux probe is used. Insights into improving the performance of the dual-heat-flux method are discussed for further studies of dual-heat-flux probes, taking structural and geometric considerations into account.
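
    The estimation principle behind a dual-heat-flux probe can be written down from one-dimensional steady-state heat conduction: two sensor stacks with different known thermal resistances share the same unknown tissue resistance and deep temperature, giving two equations in two unknowns. The sketch below is that textbook reduction with invented readings; it is not the finite element model of the paper.

    ```python
    def deep_body_temperature(Ts1, Tt1, Ts2, Tt2, R1, R2):
        """Estimate deep temperature from two heat-flux channels with known
        probe thermal resistances R1, R2; the tissue resistance is eliminated."""
        q1 = (Ts1 - Tt1) / R1          # heat flux through channel 1
        q2 = (Ts2 - Tt2) / R2          # heat flux through channel 2
        Rt = (Ts1 - Ts2) / (q2 - q1)   # common tissue thermal resistance
        return Ts1 + q1 * Rt           # extrapolate back to the deep tissue

    # Toy readings (deg C): channel 2 is more insulated, so it runs warmer.
    print(deep_body_temperature(36.5, 35.8, 36.7, 36.3, R1=0.05, R2=0.10))
    ```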

  7. Theoretical model and experimental verification on the PID tracking method using liquid crystal optical phased array

    Science.gov (United States)

    Wang, Xiangru; Xu, Jianhua; Huang, Ziqiang; Wu, Liang; Zhang, Tianyi; Wu, Shuanghong; Qiu, Qi

    2017-02-01

    The liquid crystal optical phased array (LC-OPA) is considered a promising non-mechanical laser deflector because it is fabricated with the photolithographic patterning technology that has been well advanced by the electronics and display industries. As a vital application of the LC-OPA, free space laser communication has demonstrated its merits in communication bandwidth. Before data communication can begin, the ATP (acquisition, tracking and pointing) process takes a relatively long time, creating a bottleneck for free space laser communication; meanwhile, dynamic, real-time, accurate tracking is essential to maintain a stable communication link. Liquid crystal, an electro-optic medium with low driving voltage, can serve as the laser beam deflector. This paper presents a fast tracking method using a liquid crystal optical phased array as the beam deflector and a CCD as the beacon light detector. A PID (proportional-integral-derivative) control loop is introduced to generate the corresponding steering angle. Theoretical analysis and experimental verification demonstrate that the PID closed-loop system can suppress random attitude vibration. Theoretical analysis shows that the tracking accuracy can be better than 6.5 μrad, in reasonable agreement with the experimental result of better than 12.6 μrad obtained after 10 adjustments.
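
    As a flavor of the control law, here is a minimal discrete PID loop driving a beacon-spot offset toward zero; the gains, sample time, and the trivial plant (the commanded correction is applied exactly) are illustrative assumptions rather than the paper's calibrated system.

    ```python
    class PID:
        """Discrete PID loop for beacon-offset correction (illustrative gains)."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_err) / self.dt
            self.prev_err = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Toy plant: the commanded steering correction is applied exactly each cycle.
    pid = PID(kp=0.6, ki=0.2, kd=0.002, dt=0.01)
    offset = 50e-6                       # initial pointing error, rad (50 urad)
    for _ in range(200):
        offset -= pid.update(offset)
    print(f"residual error: {abs(offset) * 1e9:.3f} nrad")
    ```

    In the real system the LC-OPA phase profile, CCD latency and attitude vibration all enter the loop, which is why the gains must be tuned against the measured plant rather than chosen as here.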

  8. Data, Information, Knowledge, Wisdom (DIKW: A Semiotic Theoretical and Empirical Exploration of the Hierarchy and its Quality Dimension

    Directory of Open Access Journals (Sweden)

    Sasa Baskarada

    2013-03-01

    Full Text Available What exactly is the difference between data and information? What is the difference between data quality and information quality; is there any difference between the two? And, what are knowledge and wisdom? Are there such things as knowledge quality and wisdom quality? As these primitives are the most basic axioms of information systems research, it is somewhat surprising that consensus on exact definitions seems to be lacking. This paper presents a theoretical and empirical exploration of the sometimes directly quoted, and often implied Data, Information, Knowledge, Wisdom (DIKW hierarchy and its quality dimension. We first review relevant literature from a range of perspectives and develop and contextualise a theoretical DIKW framework through semiotics. The literature review identifies definitional commonalities and divergences from a scholarly perspective; the theoretical discussion contextualises the terms and their relationships within a semiotic framework and proposes relevant definitions grounded in that framework. Next, rooted in Wittgenstein’s ordinary language philosophy, we analyse 20 online news articles for their uses of the terms and present the results of an online focus group discussion comprising 16 information systems experts. The empirical exploration identifies a range of definitional ambiguities from a practical perspective.

  9. Systems and methods for enhancing optical information

    Science.gov (United States)

    DeVore, Peter Thomas Setsuda; Chou, Jason T.

    2018-01-02

    An Optical Information Transfer Enhancer System includes a first system for producing an information-bearing first optical wave that is impressed with a first information having a first information strength, wherein the first optical wave has a first shape. A second system produces a second optical wave. An information strength enhancer module receives the first and second optical waves and impresses the first optical wave upon the second optical wave via cross-phase modulation (XPM) to produce an information-strength-enhanced second optical wave having a second information strength that is greater than the first information strength of the first optical wave. Following an Optical Information Transfer Enhancer System with a center-wavelength changer improves its performance.

  10. The theoretical preconditions for problem situation realization while studying information technology at school

    Directory of Open Access Journals (Sweden)

    Ольга Александровна Прусакова

    2012-03-01

    Full Text Available Within the framework of modern pedagogy and educational practice, various theoretical conceptions, theories, and educational approaches have been developed and put into practice, including humanistic, personality-oriented, activity-oriented, and competence-oriented approaches. One such approach to education and personality development is the problem-solving approach.

  11. Studying Economic Space: Synthesis of Balance and Game-Theoretic Methods of Modelling

    Directory of Open Access Journals (Sweden)

    Natalia Gennadyevna Zakharchenko

    2015-12-01

    Full Text Available The article addresses the development of models used to study economic space. The author proposes a model that combines balance and game-theoretic methods for estimating the system effects of economic agents' interactions in a multi-level economic space. The model is applied to study interactions between spatially heterogeneous economic agents within the Russian Far East. In the model, the economic space of the region is considered in a territorial dimension (the first level of decomposing the space) and also in territorial and product dimensions (the second level of decomposing the space). The paper shows the mechanism of system effect formation in the regional economic space. The author estimates the system effects, analyses the actual allocation of these effects between economic agents, and identifies three types of local industrial markets: those with zero, positive and negative system effects.

  12. Group theoretical methods in physics. [Tuebingen, July 18-22, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, P; Rieckers, A

    1978-01-01

    This volume comprises the proceedings of the 6th International Colloquium on Group Theoretical Methods in Physics, held at Tuebingen in July 1977. Invited papers were presented on the following topics: supersymmetry and graded Lie algebras; concepts of order and disorder arising from molecular physics; symplectic structures and many-body physics; symmetry breaking in statistical mechanics and field theory; automata and systems as examples of applied (semi-) group theory; renormalization group; and gauge theories. Summaries are given of the contributed papers, which can be grouped as follows: supersymmetry, symmetry in particles and relativistic physics; symmetry in molecular and solid state physics; broken symmetry and phase transitions; structure of groups and dynamical systems; representations of groups and Lie algebras; and general symmetries, quantization. Those individual papers in scope for the TIC data base are being entered from ATOMINDEX tapes. (RWR)

  13. Valence and lowest Rydberg electronic states of phenol investigated by synchrotron radiation and theoretical methods

    Energy Technology Data Exchange (ETDEWEB)

    Limão-Vieira, P., E-mail: plimaovieira@fct.unl.pt; Ferreira da Silva, F.; Lange, E. [Laboratório de Colisões Atómicas e Moleculares, CEFITEC, Departamento de Física, Faculdade de Ciências e Tecnologia, Universidade NOVA de Lisboa, 2829-516 Caparica (Portugal); Duflot, D. [Univ. Lille, UMR 8523–Physique des Lasers Atomes et Molécules, F-59000 Lille (France); CNRS, UMR 8523, F-59000 Lille (France); Jones, N. C.; Hoffmann, S. V. [ISA, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C (Denmark); Śmiałek, M. A. [Department of Control and Power Engineering, Faculty of Ocean Engineering and Ship Technology, Gdańsk University of Technology, Gabriela Narutowicza 11/12, 80-233 Gdańsk (Poland); Department of Physical Sciences, The Open University, Walton Hall, MK7 6AA Milton Keynes (United Kingdom); Jones, D. B. [School of Chemical and Physical Sciences, Flinders University, GPO Box 2100, Adelaide, SA 5001 (Australia); Brunger, M. J. [School of Chemical and Physical Sciences, Flinders University, GPO Box 2100, Adelaide, SA 5001 (Australia); Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur (Malaysia)

    2016-07-21

    We present the experimental high-resolution vacuum ultraviolet (VUV) photoabsorption spectra of phenol covering for the first time the full 4.3–10.8 eV energy range, with absolute cross sections determined. Theoretical calculations of the vertical excitation energies and oscillator strengths were performed using time-dependent density functional theory and the equation-of-motion coupled cluster method at the singles and doubles excitation level. These have been used in the assignment of valence and Rydberg transitions of the phenol molecule. The VUV spectrum reveals several new features not previously reported in the literature, with particular reference to the 6.401 eV transition, which is here assigned to the 3sσ/σ*(OH)←3π(3a″) transition. The measured absolute photoabsorption cross sections have been used to calculate the photolysis lifetime of phenol in the Earth's atmosphere (0–50 km).

  14. Group-theoretical method in the many-beam theory of electron diffraction

    International Nuclear Information System (INIS)

    Kogiso, Motokazu; Takahashi, Hidewo.

    1977-01-01

    A group-theoretical method is developed for the many-beam dynamical theory of the symmetric Laue case. When the incident wave is directed so that the Laue point lies on a symmetric position in the reciprocal lattice, the dispersion matrix in the fundamental equation can be reduced to a block diagonal form. The transformation matrix is composed of column vectors belonging to irreducible representations of the group of the incident wave vector. Without performing reduction, the reduced form of the dispersion matrix is determined from characters of representations. Practical application is made to the case of symmorphic crystals, where general reduced forms and all solvable examples are given in terms of some geometrical factors of reciprocal lattice arrangements. (auth.)
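
    The key computational step, reducing the dispersion matrix to block diagonal form with symmetry-adapted beam combinations, can be illustrated on a toy example. The matrix below is invented (it merely commutes with a beam-swapping mirror symmetry), and the two-element group stands in for the full group of the incident wave vector.

    ```python
    import numpy as np

    # Toy "dispersion matrix" for four beams with a mirror symmetry that swaps
    # beams (0 <-> 3) and (1 <-> 2); the values are illustrative, not a crystal.
    H = np.array([[10., 2., 3., 4.],
                  [ 2., 8., 5., 3.],
                  [ 3., 5., 8., 2.],
                  [ 4., 3., 2., 10.]])
    P = np.eye(4)[[3, 2, 1, 0]]          # the swap permutation; P @ H @ P.T == H

    # Symmetric/antisymmetric beam combinations span the two irreducible
    # representations of the group {I, P}, so they decouple H.
    s = np.sqrt(0.5)
    T = np.array([[s, 0., 0.,  s],
                  [0., s,  s, 0.],
                  [s, 0., 0., -s],
                  [0., s, -s, 0.]])
    Hb = T @ H @ T.T                     # block diagonal: two 2x2 blocks
    print(np.round(Hb, 10))
    ```

    For a real symmetric Laue orientation the same construction is read off from the character table rather than guessed, which is the point of the paper's method.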

  15. Detection System of HTTP DDoS Attacks in a Cloud Environment Based on Information Theoretic Entropy and Random Forest

    Directory of Open Access Journals (Sweden)

    Mohamed Idhammad

    2018-01-01

    Full Text Available Cloud Computing services are often delivered through the HTTP protocol. This facilitates access to services and reduces costs for both providers and end-users. However, it also increases the exposure of Cloud services to HTTP DDoS attacks. HTTP request methods are often used to exploit web servers' vulnerabilities and to create multiple HTTP DDoS attack scenarios, such as Low-and-Slow or Flooding attacks. Existing HTTP DDoS detection systems are challenged by the large volumes of network traffic generated by these attacks, low detection accuracy, and high false positive rates. In this paper we present a detection system for HTTP DDoS attacks in a Cloud environment based on Information Theoretic Entropy and the Random Forest ensemble learning algorithm. A time-based sliding window algorithm is used to estimate the entropy of the network header features of the incoming network traffic. When the estimated entropy exceeds its normal range, the preprocessing and classification tasks are triggered. To assess the proposed approach, various experiments were performed on the CIDDS-001 public dataset. The approach achieves satisfactory results, with an accuracy of 99.54%, a FPR of 0.4%, and a running time of 18.5 s.
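
    A sliding-window entropy trigger is simple to prototype. The sketch below uses a count-based window over a single header feature and an assumed normal entropy band; the window size, thresholds, and choice of feature are illustrative, and the Random Forest stage is reduced to a boolean trigger.

    ```python
    import math
    import random
    from collections import Counter, deque

    def shannon_entropy(counts, total):
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    class EntropyMonitor:
        """Sliding-window entropy of one header feature (e.g., source IP)."""
        def __init__(self, window=500, low=4.0, high=9.5):
            self.buf = deque(maxlen=window)
            self.counts = Counter()
            self.low, self.high = low, high       # assumed normal band, bits

        def observe(self, value):
            if len(self.buf) == self.buf.maxlen:  # evict the oldest observation
                old = self.buf[0]
                self.counts[old] -= 1
                if self.counts[old] == 0:
                    del self.counts[old]
            self.buf.append(value)
            self.counts[value] += 1
            if len(self.buf) < self.buf.maxlen:   # wait for a full window
                return False
            h = shannon_entropy(self.counts, len(self.buf))
            return h < self.low or h > self.high  # True -> run the classifier

    # A flood from one source collapses the source-IP entropy and trips the monitor.
    random.seed(0)
    mon = EntropyMonitor()
    normal = [f"10.0.{random.randrange(256)}.{random.randrange(256)}" for _ in range(500)]
    flood = ["203.0.113.7"] * 500
    print(any(mon.observe(ip) for ip in normal), any(mon.observe(ip) for ip in flood))
    ```

    When `observe` returns True, the windowed traffic would be featurized and passed to the trained Random Forest classifier for the actual attack/benign decision.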

  16. Internet security information system implement method

    International Nuclear Information System (INIS)

    Liu Baoxu; Mei Jie; Xu Rongsheng; An Dehai; Yu Mingjian; Chen Xiangyang; Zheng Peng

    1999-01-01

    On the basis of an analysis of the key elements that affect an Internet Security Information System, the author takes the UNIX operating system as an example and presents the important stages that must be considered when implementing an Internet Security Information System. An implementation model of the Internet Security Information System is given.

  17. Intelligent Information Retrieval: Diagnosing Information Need. Part I. The Theoretical Framework for Developing an Intelligent IR Tool.

    Science.gov (United States)

    Cole, Charles

    1998-01-01

    Suggests that the principles underlying the procedure used by doctors to diagnose a patient's disease are useful in the design of intelligent information-retrieval systems because the task of the doctor is conceptually similar to the computer or human intermediary's task in information retrieval: to draw out the user's query/information need.…

  18. Toward theoretical understanding of the fertility preservation decision-making process: examining information processing among young women with cancer.

    Science.gov (United States)

    Hershberger, Patricia E; Finnegan, Lorna; Altfeld, Susan; Lake, Sara; Hirshfeld-Cytron, Jennifer

    2013-01-01

    Young women with cancer now face the complex decision about whether to undergo fertility preservation. Yet little is known about how these women process information involved in making this decision. The purpose of this article is to expand theoretical understanding of the decision-making process by examining aspects of information processing among young women diagnosed with cancer. Using a grounded theory approach, 27 women with cancer participated in individual, semistructured interviews. Data were coded and analyzed using constant-comparison techniques that were guided by 5 dimensions within the Contemplate phase of the decision-making process framework. In the first dimension, young women acquired information primarily from clinicians and Internet sources. Experiential information, often obtained from peers, occurred in the second dimension. Preferences and values were constructed in the third dimension as women acquired factual, moral, and ethical information. Women desired tailored, personalized information that was specific to their situation in the fourth dimension; however, women struggled with communicating these needs to clinicians. In the fifth dimension, women offered detailed descriptions of clinician behaviors that enhance or impede decisional debriefing. Better understanding of theoretical underpinnings surrounding women's information processes can facilitate decision support and improve clinical care.

  19. Decomposition of overlapping protein complexes: A graph theoretical method for analyzing static and dynamic protein associations

    Directory of Open Access Journals (Sweden)

    Guimarães Katia S

    2006-04-01

    Full Text Available Background: Most cellular processes are carried out by multi-protein complexes, groups of proteins that bind together to perform a specific task. Some proteins form stable complexes, while other proteins form transient associations and are part of several complexes at different stages of a cellular process. A better understanding of this higher-order organization of proteins into overlapping complexes is an important step towards unveiling functional and evolutionary mechanisms behind biological networks. Results: We propose a new method for identifying and representing overlapping protein complexes (or larger units called functional groups) within a protein interaction network. We develop a graph-theoretical framework that enables automatic construction of such a representation. We illustrate the effectiveness of our method by applying it to TNFα/NF-κB and pheromone signaling pathways. Conclusion: The proposed representation helps in understanding the transitions between functional groups and allows for tracking a protein's path through a cascade of functional groups. Therefore, depending on the nature of the network, our representation is capable of elucidating temporal relations between functional groups. Our results show that the proposed method opens a new avenue for the analysis of protein interaction networks.
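
    The paper's own graph-theoretical framework is not reproduced here, but the flavor of overlapping decomposition is easy to demonstrate with a standard alternative, k-clique percolation, in which a node shared by two dense regions belongs to both groups. The toy graph and protein names below are invented.

    ```python
    import networkx as nx
    from networkx.algorithms.community import k_clique_communities

    # Toy protein-interaction graph; node "p4" bridges two complexes.
    edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p2", "p4"), ("p3", "p4"),
             ("p4", "p5"), ("p4", "p6"), ("p5", "p6"), ("p5", "p7"), ("p6", "p7")]
    G = nx.Graph(edges)

    # k-clique percolation yields overlapping groups: "p4" appears in both,
    # tracing a protein's path from one functional group to the next.
    for group in k_clique_communities(G, 3):
        print(sorted(group))
    ```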

  20. A novel game theoretic approach for modeling competitive information diffusion in social networks with heterogeneous nodes

    Science.gov (United States)

    Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz

    2017-01-01

    Influence maximization deals with identification of the most influential nodes in a social network given an influence model. In this paper, a game theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved using Nash Equilibrium in a real-world dataset. It is shown that none of the well-known strategies are stable and at least one player has the incentive to deviate from the proposed strategy. Moreover, violation of Nash equilibrium strategy by each player leads to their reduced payoff. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures have an effect on the determination of the influential node in the network.
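
    The equilibrium check at the heart of such an analysis is mechanical once payoffs are in hand: a profile is a pure Nash equilibrium only if each player's strategy is a best response to the other's. The bimatrix below is invented for illustration; it is not derived from the paper's influence model.

    ```python
    import numpy as np

    # Hypothetical 3-strategy payoff bimatrix for two competing campaigns
    # (rows: player 1 seed strategies, columns: player 2); values illustrative.
    A = np.array([[4, 1, 2], [3, 3, 1], [2, 4, 3]])   # payoffs to player 1
    B = np.array([[2, 3, 1], [1, 2, 4], [3, 1, 2]])   # payoffs to player 2

    def pure_nash(A, B):
        """All pure-strategy profiles where neither player gains by deviating."""
        return [(i, j)
                for i in range(A.shape[0])
                for j in range(A.shape[1])
                if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]

    print(pure_nash(A, B))   # empty list -> no pure strategy profile is stable
    ```

    An empty result reproduces the qualitative finding that every candidate strategy leaves at least one player with an incentive to deviate.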

  1. The role of information systems in management decision making - a theoretical approach

    Directory of Open Access Journals (Sweden)

    Mihane Berisha-Namani (Department of Management & Informatics)

    2010-12-01

    Full Text Available In modern conditions of globalisation and the development of information technology, information processing activities have come to be seen as essential to the success of businesses and organisations. Information has become essential for decision making and a crucial asset in organisations, and information systems are the technology required for information processing. The application of information systems technology in businesses and organisations has opened up new possibilities for running and managing them, and has improved management decision making. The purpose of this paper is to explain the role that information systems play in management decision making and to discuss how managers of organisations can make the best use of information systems. The paper starts by identifying the functions of management and managerial roles and continues with information systems usage at three levels of decision making. It specifically addresses how information systems can help managers reduce uncertainty in decision making, and it includes some important implications of information systems usage for managers. The study thus provides a framework for the effective use of information systems in general and offers an alternative approach to investigating the impact that information systems technology has on management decision making in particular.

  2. A theoretical perspective to inform assessment and treatment strategies for animal hoarders.

    Science.gov (United States)

    Patronek, Gary J; Nathanson, Jane N

    2009-04-01

    Animal hoarding is a poorly understood, maladaptive, destructive behavior whose etiology and pathology are only beginning to emerge. We compare and contrast animal hoarding to the compulsive hoarding of objects and proceed to draw upon attachment theory, the literature of personality disorder and trauma, and our own clinical experience to propose a developmental trajectory. Throughout life, there is a persistent struggle to form a functional attachment style and achieve positive social integration. For some people, particularly those affected by a dysfunctional primary attachment experience in childhood, a protective, comforting relationship with animals may form an indelible imprint. In adulthood, when human attachment has been chronically problematic, compulsive caregiving of animals can become the primary means of maintaining or building a sense of self. Improving assessment and treatment of animal hoarders requires attention to contributing psychosocial conditions, while taking into account the centrality of the animals to the hoarder's identity, self-esteem and sense of control. It is our hope that the information presented will provide a basis upon which clinicians can focus their own counseling style, assessment, and methods of treatment.

  3. A comprehensive theoretical framework for personal information-related behaviors on the internet

    NARCIS (Netherlands)

    Beldad, Ardion Daroca; de Jong, Menno D.T.; Steehouder, M.F.

    2011-01-01

    Although there is near consensus on the need for privacy, the reality is that people's attitude toward their personal information privacy is complex. For instance, even when people claim that they value their information privacy, they often trade their personal information for tangible or intangible benefits.

  4. Research method of nuclear patent information

    International Nuclear Information System (INIS)

    Mo Dan; Gao An'na; Sun Chenglin; Wang Lei; You Xinfeng

    2010-01-01

    When faced with a huge amount of nuclear patent information, the keys to effective research include: (1) choosing a convenient way to search, for quick access to patents related to nuclear technology; (2) overcoming the language barrier to analyse the technical content of patent information; (3) organising retrieved patent documents by publication date to analyse the status and trends of nuclear technology development; (4) researching the patented technology of the main applicants; (5) always paying attention to the legal status of patent information, freely using invalid patents while avoiding patent infringement. In summary, patent information is an important source of the latest technical information, and patent information research is a comprehensive way to understand and master advanced nuclear technology. (authors)

  5. ISSLS prize winner: integrating theoretical and experimental methods for functional tissue engineering of the annulus fibrosus.

    Science.gov (United States)

    Nerurkar, Nandan L; Mauck, Robert L; Elliott, Dawn M

    2008-12-01

    This work integrates theoretical and experimental approaches for annulus fibrosus (AF) functional tissue engineering. The objectives were to apply a hyperelastic constitutive model to characterize the evolution of engineered AF via scalar model parameters, and to validate the model and predict the response of engineered constructs to physiologic loading scenarios. There is a need for a tissue engineered replacement for the degenerate AF. When evaluating engineered replacements for load-bearing tissues, it is necessary to evaluate mechanical function with respect to the native tissue, including nonlinearity and anisotropy. Aligned nanofibrous poly-epsilon-caprolactone scaffolds with prescribed fiber angles were seeded with bovine AF cells and analyzed over 8 weeks, using experimental (mechanical testing, biochemistry, histology) and theoretical methods (a hyperelastic fiber-reinforced constitutive model). The linear region modulus for phi = 0 degrees constructs increased by approximately 25 MPa, and for phi = 90 degrees by approximately 2 MPa, from 1 day to 8 weeks in culture. Infiltration and proliferation of AF cells into the scaffold and abundant deposition of s-GAG and aligned collagen were observed. The constitutive model fit the experimental data excellently, yielding matrix and fiber parameters that increased with time in culture. Correlations were observed between biochemical measures and model parameters. The model was successfully validated and used to simulate time-varying responses of engineered AF under shear and biaxial loading. AF cells seeded on nanofibrous scaffolds elaborated an organized, anisotropic AF-like extracellular matrix, resulting in improved mechanical properties. A hyperelastic fiber-reinforced constitutive model characterized the functional evolution of engineered AF constructs and was used to simulate physiologically relevant loading configurations. Model predictions demonstrated that fibers resist shear even when the shearing direction does not coincide with the fiber direction.

  6. Theoretical Mathematics

    Science.gov (United States)

    Stöltzner, Michael

    Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.

  7. Reference group theory with implications for information studies: a theoretical essay

    Directory of Open Access Journals (Sweden)

    E. Murell Dawson

    2001-01-01

    Full Text Available This article explores the role and implications of reference group theory for the field of library and information science. Reference group theory is based upon the principle that people take the standards of significant others as a basis for making self-appraisals, comparisons, and choices regarding their need for and use of information. Research that applies concepts of reference group theory to various sectors of library and information studies can provide data useful for enhancing areas such as information-seeking research, special populations, and uses of information. The prospects are promising that knowledge gained from such research can help information professionals better understand the role that reference groups play in the ways people manage their information and social worlds.

  8. TiO2 synthesized by microwave assisted solvothermal method: Experimental and theoretical evaluation

    International Nuclear Information System (INIS)

    Moura, K.F.; Maul, J.; Albuquerque, A.R.; Casali, G.P.; Longo, E.; Keyson, D.; Souza, A.G.; Sambrano, J.R.; Santos, I.M.G.

    2014-01-01

    In this study, a microwave assisted solvothermal method was used to synthesize TiO2 with anatase structure. The synthesis was done using Ti(IV) isopropoxide and ethanol without templates or alkalinizing agents. Changes in structural features were observed with increasing time of synthesis and evaluated using periodic quantum chemical calculations. The anatase phase was obtained after only 1 min of reaction, alongside a small amount of brookite phase. Experimental Raman spectra are in accordance with the theoretical ones. Micrometric spheres constituted of nanometric particles were obtained for syntheses from 1 to 30 min, while spheres and sticks were observed after 60 min. - Graphical abstract: FE-SEM images of anatase obtained with different periods of synthesis, associated with the order–disorder degree. - Highlights: • Anatase microspheres were obtained by the microwave assisted solvothermal method. • Only ethanol and titanium isopropoxide were used as precursors during the synthesis. • Raman spectra and XRD patterns were compared with quantum chemical calculations. • Time of synthesis increased the short-range disorder in one direction and decreased it in another

  9. Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response

    Science.gov (United States)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-06-01

    Since its introduction in 1954, the Soil Conservation Service curve number (SCS-CN) method has become the standard tool, in practice, for estimating an event-based rainfall-runoff response. However, because of its empirical origins, the SCS-CN method is restricted to certain geographic regions and land use types. Moreover, it does not describe the spatial variability of runoff. To move beyond these limitations, we present a new theoretical framework for spatially lumped, event-based rainfall-runoff modeling. In this framework, we describe the spatially lumped runoff model as a point description of runoff that is upscaled to a watershed area based on probability distributions that are representative of watershed heterogeneities. The framework accommodates different runoff concepts and distributions of heterogeneities, and in doing so, it provides an implicit spatial description of runoff variability. Heterogeneity in storage capacity and soil moisture are the basis for upscaling a point runoff response and linking ecohydrological processes to runoff modeling. For the framework, we consider two different runoff responses for fractions of the watershed area: "prethreshold" and "threshold-excess" runoff. These occur before and after infiltration exceeds a storage capacity threshold. Our application of the framework results in a new model (called SCS-CNx) that extends the SCS-CN method with the prethreshold and threshold-excess runoff mechanisms and an implicit spatial description of runoff. We show proof of concept in four forested watersheds and further that the resulting model may better represent geographic regions and site types that previously have been beyond the scope of the traditional SCS-CN method.
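    For context, the classical SCS-CN runoff equation that the SCS-CNx framework generalizes can be written in a few lines (a minimal illustration with our own variable names; the prethreshold and threshold-excess extensions described above are not reproduced here):

        def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
            """Classical SCS-CN event runoff (mm) for rainfall depth P_mm.

            S: potential maximum retention; Ia: initial abstraction.
            """
            S = 25400.0 / CN - 254.0          # retention (mm) from curve number
            Ia = ia_ratio * S                 # common Ia = 0.2 S assumption
            if P_mm <= Ia:
                return 0.0                    # all rainfall abstracted, no runoff
            return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

        # Example: a 50 mm storm on a watershed with CN = 75
        print(scs_cn_runoff(50.0, 75))        # about 9.3 mm of runoff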

  10. BRIEF INTRODUCTION TO THEORETICAL INTENTION OF "NEEDLING METHOD FOR TRANQUILLIZATION AND CALMING THE MIND" FOR TREATMENT OF INSOMNIA

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A set of scientific theories and an effective acupuncture therapy for insomnia, "the needling method for tranquillization and calming the mind", have gradually been formed through many years of theoretical and clinical studies. In this paper, the theoretical intention of "the needling method for tranquillization and calming the mind" for the treatment of insomnia is briefly introduced, mainly covering the cause of disease, pathogenesis, therapeutic method and characteristics of prescription composition, in order to provide a new train of thought and a new method for working out scientific and standard prescriptions in the treatment of insomnia.

  11. A Survey of Game Theoretic Approaches to Modelling Decision-Making in Information Warfare Scenarios

    Directory of Open Access Journals (Sweden)

    Kathryn Merrick

    2016-07-01

    Full Text Available Our increasing dependence on information technologies and autonomous systems has escalated international concern for information- and cyber-security in the face of politically, socially and religiously motivated cyber-attacks. Information warfare tactics that interfere with the flow of information can challenge the survival of individuals and groups. It is increasingly important that both humans and machines can make decisions that ensure the trustworthiness of information, communication and autonomous systems. Consequently, an important research direction is concerned with modelling decision-making processes. One approach to this involves modelling decision-making scenarios as games using game theory. This paper presents a survey of information warfare literature, with the purpose of identifying games that model different types of information warfare operations. Our contribution is a systematic identification and classification of information warfare games, as a basis for modelling decision-making by humans and machines in such scenarios. We also present a taxonomy of games that map to information warfare and cyber crime problems as a precursor to future research on decision-making in such scenarios. We identify and discuss open research questions, including the role of behavioural game theory in modelling human decision-making and the role of machine decision-making in information warfare scenarios.
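    As a toy illustration of the survey's subject matter, an attacker-defender interaction can be written as a small normal-form game and checked for pure-strategy Nash equilibria by enumeration (the payoffs below are entirely illustrative and are not drawn from the paper):

        import numpy as np

        # Rows: defender strategies, columns: attacker strategies.
        # Payoff matrices are (defender, attacker); values are made up.
        defender = np.array([[ 2, -3],   # defend network A
                             [-2,  1]])  # defend network B
        attacker = np.array([[-2,  3],   # attack network A
                             [ 2, -1]])  # attack network B

        def pure_nash(U1, U2):
            """Return (row, col) pairs where neither player gains by deviating."""
            eq = []
            for r in range(U1.shape[0]):
                for c in range(U1.shape[1]):
                    if U1[r, c] >= U1[:, c].max() and U2[r, c] >= U2[r, :].max():
                        eq.append((r, c))
            return eq

        print(pure_nash(defender, attacker))  # [] here: this game needs mixed strategies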

  12. Theoretical Study of Palladium Membrane Reactor Performance During Propane Dehydrogenation Using CFD Method

    Directory of Open Access Journals (Sweden)

    Kamran Ghasemzadeh

    2017-04-01

    Full Text Available This study presents a 2D-axisymmetric computational fluid dynamics (CFD) model to investigate the performance of a Pd membrane reactor (MR) during the propane dehydrogenation process for hydrogen production. The proposed CFD model provided local information on temperature and component concentration for the driving-force analysis. After investigating the mesh independency of the CFD model, the CFD results were validated against other modeling data, and good agreement between the CFD results and the theoretical data was achieved. In the present model, a tubular reactor with a length of 150 mm was considered, in which Pt-Sn-K/Al2O3 catalyst filled the reaction zone. The effects of an important operating parameter (reaction temperature) on the performance of the membrane reactor (MR) were studied in terms of propane conversion and hydrogen yield. The CFD results showed that the suggested MR system presents higher performance during the propane dehydrogenation reaction than that obtained in a conventional reactor (CR). In particular, by applying the Pd membrane, it was found that propane conversion can be increased from 41% to 49%. Moreover, the highest propane conversion (X = 91%) was reached in the case of a Pd-Ag MR. It was also established that the feed flow rate of the MR is one of the most important factors defining the efficiency of the propane dehydrogenation process.

  13. Theoretical reflections on the paradigmatic construction of Information Science: considerations about the cognitive and social paradigm(s)

    Directory of Open Access Journals (Sweden)

    Jonathas Luiz Carvalho Silva

    2013-07-01

    Full Text Available It presents research on the theoretical and epistemological processes that influence the formation of the cognitive paradigm of Information Science (IS), noting the emergence of the social paradigm within domain analysis and the hermeneutics of information. For this, we adopted the reflections of classical and contemporary authors, such as Thomas Kuhn, Boaventura Santos, Capurro, Hjørland and Albrechtsen. We conclude that the cognitive paradigm in IS is consolidated; the social paradigm, however, is still under construction, which will allow the creation of perceptions, interpretations and contributions in order to fill gaps left by other paradigms.

  14. A general information theoretical proof for the second law of thermodynamics

    International Nuclear Information System (INIS)

    Zhang, Qiren

    2008-01-01

    We show that the conservation and the non-additivity of information, together with the additivity of entropy, make entropy increase in an isolated system. The collapse of an entangled quantum state offers an example of the non-additivity of information. Nevertheless, the latter also holds in other fields in which the interaction information is important; examples are classical statistical mechanics, social statistics and financial processes. The second law of thermodynamics is thus proven in its most general form. It is exactly true, not only in quantum and classical physics but also in other processes in which information is conservative and non-additive. (author)
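    One standard way to make the additivity/non-additivity argument concrete (a sketch in our notation, not the paper's own derivation) is via the mutual information of a bipartite system:

        S(A) + S(B) = S(AB) + I(A{:}B), \qquad I(A{:}B) \ge 0 .

    If the joint entropy S(AB) is conserved (information conservation in the isolated system AB) while correlations between the parts build up, then the additively defined thermodynamic entropy S(A) + S(B) grows by exactly the mutual information I(A:B) that the subsystem sum fails to account for.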

  15. Theoretical and methodological basis of the comparative historical and legal method development

    Directory of Open Access Journals (Sweden)

    Д. А. Шигаль

    2015-05-01

    Full Text Available Problem setting. The development of any scientific method is always both a question of its structural and functional characteristics and of its place in the system of scientific methods, and a question of the practicability of such methodological work. This paper attempts to give a detailed response to the major comments and objections arising with respect to separating out the comparative historical and legal method as an independent means of special scientific knowledge. Recent research and publications analysis. Analyzing research and publications within the theme of this scientific article, it should be noted that attention to methodological issues of both general and legal science was paid in their time by such prominent foreign and domestic scholars as I. D. Andreev, Yu. Ya. Baskin, O. L. Bygych, M. A. Damirli, V. V. Ivanov, I. D. Koval'chenko, V. F. Kolomyitsev, D. V. Lukyanov, L. A. Luts, J. Maida, B. G. Mogilnytsky, N. M. Onishchenko, N. M. Parkhomenko, O. V. Petryshyn, S. P. Pogrebnyak, V. I. Synaisky, V. M. Syryh, O. F. Skakun, A. O. Tille, D. I. Feldman and others. It should be noted that, despite a large number of scientific papers in this field, the interest of the research community in the methodology of the history of state and law remains unfairly low. Paper objective. The purpose of this scientific paper is a theoretical and methodological rationale for the need to separate out and develop the comparative historical and legal method, in the form of answers to the most common questions and objections that arise in the research community in this regard. Paper main body. The development of comparative historical and legal means of knowledge is quite justified because it meets the requirements of scientific method efficiency, the criteria of which are the speed of achieving the set goal, the ease of use of one or another way of scientific knowledge, the universality of research methods, the convenience of the techniques that are used, and so on. Combining the

  16. Information systems research methods, epistemology, and applications

    National Research Council Canada - National Science Library

    Cater-Steel, Aileen; Al-Hakim, Latif

    2009-01-01

    ..., University of Dublin, Trinity College, Ireland. Chapter IV: A Critical Theory Approach to Information Technology Transfer to the Developing World and a Critique of Maintained Assumptions in the Lite...

  17. Theoretical study of the electronic structure of f-element complexes by quantum chemical methods

    International Nuclear Information System (INIS)

    Vetere, V.

    2002-09-01

    This thesis is related to comparative studies of the chemical properties of molecular complexes containing lanthanide or actinide trivalent cations, in the context of nuclear waste disposal. More precisely, our aim was a quantum chemical analysis of the metal-ligand bonding in such species. Various theoretical approaches were compared for the inclusion of correlation (density functional theory, multiconfigurational methods) and of relativistic effects (relativistic scalar and 2-component Hamiltonians, relativistic pseudopotentials). The performance of these methods was checked by comparing computed structural properties to published experimental data on small model systems: lanthanide and actinide tri-halides and X3M-L species (X = F, Cl; M = La, Nd, U; L = NH3, acetonitrile, CO). We have thus shown the good performance of density functionals combined with a quasi-relativistic method, as well as of gradient-corrected functionals associated with relativistic pseudopotentials. In contrast, functionals including some part of exact exchange are less reliable in reproducing experimental trends, and we have given a possible explanation for this result. Then, a detailed analysis of the bonding has allowed us to interpret the discrepancies observed in the structural properties of uranium and lanthanide complexes, based on a covalent contribution to the bonding in the case of uranium(III) which does not exist in the lanthanide(III) homologues. Finally, we have examined more sizeable systems, closer to experimental species, to analyse the influence of the coordination number, of the counter-ions and of the oxidation state of uranium on the metal-ligand bonding. (author)

  18. Theoretical prediction of hysteretic rubber friction in ball on plate configuration by finite element method

    Directory of Open Access Journals (Sweden)

    2009-11-01

    Full Text Available This paper investigates theoretically the influence of sliding speed and temperature on the hysteretic friction in the case of a smooth, reciprocating steel ball sliding on a smooth rubber plate, by the finite element method (FEM). Generalized Maxwell models combined with a Mooney-Rivlin model have been used to describe the material behaviour of the ethylene-propylene-diene monomer (EPDM) rubber studied. Additionally, the effects of the technique applied for parameter identification of the material model and of the number of Maxwell elements on the coefficient of friction (COF) were also investigated. Finally, the open parameter of the Greenwood-Tabor analytical model has been determined from a fit to the FE results. By fitting the Maxwell model, as usual, to the storage modulus master curve, the predicted COF will be underestimated over a broad frequency range, even in the case of a 40-term Maxwell model. To obtain a more accurate numerical prediction, or to provide an upper limit for the hysteretic friction in the frequency range of interest, the Maxwell parameters should be determined, as proposed, from a fit to the measured loss factor master curve. This conclusion can be generalized to all FE simulations in which hysteresis plays an important role.
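    The storage and loss moduli implied by a generalized Maxwell (Prony series) fit can be evaluated directly; the sketch below is our own minimal illustration with made-up parameters, showing the standard frequency-domain expressions behind the storage-modulus versus loss-factor fitting choice discussed above:

        import numpy as np

        def maxwell_moduli(omega, G_inf, G_i, tau_i):
            """Storage and loss moduli of a generalized Maxwell model.

            G'(w)  = G_inf + sum G_i (w tau_i)^2 / (1 + (w tau_i)^2)
            G''(w) =         sum G_i (w tau_i)   / (1 + (w tau_i)^2)
            """
            wt = np.outer(omega, tau_i)                    # shape (n_freq, n_terms)
            storage = G_inf + (G_i * wt**2 / (1 + wt**2)).sum(axis=1)
            loss = (G_i * wt / (1 + wt**2)).sum(axis=1)
            return storage, loss

        # Illustrative 3-term model (MPa, s); real fits use many more terms.
        omega = np.logspace(-2, 4, 7)                      # rad/s
        Gp, Gpp = maxwell_moduli(omega, 1.0, np.array([2.0, 5.0, 10.0]),
                                 np.array([1e-3, 1e-1, 1e1]))
        print(Gpp / Gp)                                    # loss factor tan(delta)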

  19. Spectroscopic information from different theoretical descriptions of (un)polarized (e,e sup ' p) reactions

    CERN Document Server

    Radici, M; Dickhoff, W H

    2003-01-01

    We analyze the unpolarized and polarized electron-induced proton knockout reactions on ¹⁶O in different kinematical settings using two theoretical approaches. The first one is based on a relativistic mean-field distorted-wave description of the bound and scattering states of the proton, including a fully relativistic electromagnetic current operator. The second approach adopts the same current operator, but describes the proton properties on the basis of microscopic calculations of the self-energy in ¹⁶O below the Fermi energy and final-state damping in nuclear matter above the Fermi energy, using the same realistic short-range and tensor correlations. Good agreement with all unpolarized data is obtained at low and high Q² by using the same spectroscopic factors fixed by the low-Q² analysis. A reasonable agreement is achieved for polarization observables. (orig.)

  20. THEORETICAL AND METHODICAL APPROACHES TO THE FORMATION AND EVALUATION OF THE QUALITY OF TOURIST SERVICES

    Directory of Open Access Journals (Sweden)

    Nataliya Vasylykha

    2017-12-01

    Full Text Available The study, the results of which are described in this article, is devoted to analysing and substantiating approaches to the assessment and quality assurance of tourism services, which form their competitiveness, namely the factors and indicators of quality. The integration and globalization of world society determine the development of tourism as a catalyst for these global processes, and world practice has proved that tourism can be an effective way to solve many socio-economic problems. The subject of the study is the peculiarities of assessing the quality of tourist services. Methodology. The methodological basis of the work is a system of general and special scientific methods: the system-analytical and dialectical methods are used for the theoretical generalization of the investigated material, and the structural-logical method for systematizing the factors and indicators of the quality of tourist services. The purpose of the article is a theoretical justification of approaches to the quality of tourist services and the optimization of their quality assessment. In the research, approaches to the interpretation of the concept of quality are presented and analysed, features of services in general and of tourism in particular are considered, and it is suggested to group and classify the factors and indicators of their quality. The interpretation of the notion of quality is ambiguous, both in Ukrainian and in foreign literary sources, and depends on the point of view taken on this notion. In our opinion, the most thorough definition characterizes the quality of products and services as a complex feature that determines their suitability to the needs of the consumer. Taking into account the specificity of the term “service”, peculiarities determining the approaches to their evaluation are studied; such a service can be considered a product dominated by intangible elements and also

  1. A Theoretical Framework for Turnover Intention of Air Force Enlisted Information Systems Personnel

    Science.gov (United States)

    2003-03-25

    removal of poor performers, advancement opportunities for talented replacements, and decreases in pre-turnover withdrawal behaviors such as absenteeism … in “Employee Turnover Intentions and Its Determinants Among Telecommuters and Non-Telecommuters,” Journal of Management Information Systems, 16: 147

  2. The success or failure of management information systems: A theoretical approach

    Energy Technology Data Exchange (ETDEWEB)

    Curlee, T.R.; Tonn, B.T.

    1987-03-01

    Work has been done by various disciplines to address the reasons why modern, computerized management information systems either succeed or fail. However, the studies are not based on a well-defined conceptual framework and the focus has been narrow. This report presents a comprehensive conceptual framework of how an information system is used within an organization. This framework not only suggests how the use of an information system may translate into productivity improvements for the implementing organization but also helps to identify why a system may succeed or fail. A major aspect of the model is its distinction between the objectives of the organization in its decision to implement an information system and the objectives of the individual employees who are to use the system. A divergence between these objectives can lead to system underutilization or misuse at the expense of the organization's overall productivity.

  3. Governance Methods Used in Externalizing Information Technology

    Science.gov (United States)

    Chan, Steven King-Lun

    2012-01-01

    Information technology (IT) is the largest capital expenditure in many firms and is an integral part of many organizations' strategies. However, the benefits that each company receives from its IT investments vary. One study by Weill (2004) found that the top performer in the sample was estimated to have as high as a 40% greater return on its…

  4. Human factors estimation methods using physiological informations

    International Nuclear Information System (INIS)

    Takano, Ken-ichi; Yoshino, Kenji; Nakasa, Hiroyasu

    1984-01-01

    To enhance the operational safety of the nuclear power plant, it is necessary to decrease abnormal phenomena due to human errors. In particular, it is essential to have a basic understanding of human behaviors in the work environment of plant maintenance workers, inspectors, and operators. From the above standpoint, this paper presents the results of a literature survey on the present status of human factors engineering technology applicable to the nuclear power plant, and also discusses the following items: (1) Application fields where ergonomical evaluation is needed for worker safety. (2) Basic methodology for investigating human performance. (3) Features of physiological information analysis among the various types of ergonomical techniques. (4) Necessary conditions for the application of in-situ physiological measurement to the nuclear power plant. (5) Availability of physiological information analysis. (6) Effectiveness of the human factors engineering methodology, especially physiological information analysis, in the case of application to the nuclear power plant. The above discussions demonstrate the high applicability of physiological information analysis to the nuclear power plant, in order to improve work performance. (author)

  5. Theoretical and Practical Studies on a Possible Genetic Method for Tsetse Fly Control

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, C. F. [Tsetse Research Laboratory, School of Veterinary Science, University Of Bristol, Langford, Bristol (United Kingdom); Hill, W. G. [Institute of Animal Genetics, Edinburgh (United Kingdom)

    1968-06-15

    Chromosome translocations may be useful in pest control because they are a common type of mutation in a variety of organisms and, frequently, the heterozygote is semi-sterile while the homozygote is fully fertile. It might be possible to induce such a translocation in a pest species, to breed from a selected ancestral pair of translocation homozygotes a large number of the homozygotes, and to release these into a wild population. This would cause the production of heterozygotes in the wild population and hence would reduce the fertility of the population. This reduction would persist for a number of generations. Calculations, based on simplified assumptions, showed that this method of fertility reduction might be more economical than the use of sterilized males. In the present paper a theoretical comparison is made of the translocation and sterilized-male methods for the control of tsetse flies (Glossina sp.). A computer model has been set up which simulates, as far as possible, the known facts about birth, mating and death in a wild tsetse population. The predicted effects of releases of sterilized males and of translocation homozygotes are described, and the modifications which would be caused by density-dependent mortality, migration and reduced viability of the translocation genotypes and sterilized males are indicated. It is concluded that to eradicate a well-isolated wild population the number of translocation homozygotes required might well be considerably less than the number of sterilized males required for the same task. However, immigration into the population would greatly reduce the efficiency of the translocation method. The progress so far in attempting to produce a suitable translocation in Glossina austeni is described. Males have been treated with 5-7 krad of gamma radiation and a number of semi-sterile individuals have been selected from among their progeny. The semi-sterility is inherited and, by analogy with the results in other organisms, is
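    The population-genetic logic of the release scheme can be sketched with a toy deterministic recursion (our illustrative model, far simpler than the computer simulation described above): heterozygotes are assigned fertility 0.5, both homozygotes fertility 1, and random mating is assumed.

        def next_gen(p, w_het=0.5):
            """One generation of random mating with semi-sterile heterozygotes.

            p: frequency of the translocation karyotype T; genotype fitnesses
            are TT = 1, T+ = w_het, ++ = 1 (classic underdominance).
            """
            q = 1 - p
            w_bar = p**2 + 2 * p * q * w_het + q**2        # mean fitness
            return (p**2 + p * q * w_het) / w_bar          # new frequency of T

        # A release raising T to 30% of the population: below the unstable 50%
        # threshold, T is eliminated, but fertility is depressed for generations.
        p = 0.30
        for gen in range(8):
            q = 1 - p
            print(gen, round(p, 3), round(p**2 + 2 * p * q * 0.5 + q**2, 3))
            p = next_gen(p)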

  6. PREFACE: XXXth International Colloquium on Group Theoretical Methods in Physics (ICGTMP) (Group30)

    Science.gov (United States)

    Brackx, Fred; De Schepper, Hennie; Van der Jeugt, Joris

    2015-04-01

    The XXXth International Colloquium on Group Theoretical Methods in Physics (ICGTMP), also known as the Group30 conference, took place in Ghent (Belgium) from Monday 14 to Friday 18 July 2014. The conference was organised by Ghent University (Department of Applied Mathematics, Computer Science and Statistics, and Department of Mathematical Analysis). The website http://www.group30.ugent.be is still available. The ICGTMP is one of the traditional conference series covering the most important topics of symmetry which are relevant to the interplay of present-day mathematics and physics. More than 40 years ago a group of enthusiasts, headed by H. Bacry of Marseille and A. Janner of Nijmegen, initiated a series of annual meetings with the aim to provide a common forum for scientists interested in group theoretical methods. At that time most of the participants belonged to two important communities: on the one hand solid state specialists, elementary particle theorists and phenomenologists, and on the other mathematicians eager to apply newly-discovered group and algebraic structures. The conference series has become a meeting point for scientists working at modelling physical phenomena through mathematical and numerical methods based on geometry and symmetry. It is considered as the oldest one among the conference series devoted to geometry and physics. It has been further broadened and diversified due to the successful applications of geometric and algebraic methods in life sciences and other areas. The first four meetings took place alternatively in Marseille and Nijmegen. Soon after, the conference acquired an international standing, especially following the 1975 colloquium in Nijmegen and the 1976 colloquium in Montreal. Since then it has been organized in many places around the world. It has become a bi-annual colloquium since 1990, the year it was organized in Moscow. This was the first time the colloquium took place in Belgium. There were 246 registered

  7. Computational Study of Chemical Reactivity Using Information-Theoretic Quantities from Density Functional Reactivity Theory for Electrophilic Aromatic Substitution Reactions.

    Science.gov (United States)

    Wu, Wenjie; Wu, Zemin; Rong, Chunying; Lu, Tian; Huang, Ying; Liu, Shubin

    2015-07-23

    The electrophilic aromatic substitution for nitration, halogenation, sulfonation, and acylation is a vastly important category of chemical transformation. Its reactivity and regioselectivity are predominantly determined by the nucleophilicity of carbon atoms on the aromatic ring, which in turn is immensely influenced by the group that is attached to the aromatic ring a priori. In this work, taking advantage of recent developments in quantifying nucleophilicity (electrophilicity) with descriptors from the information-theoretic approach in density functional reactivity theory, we examine the reactivity properties of this reaction system from three perspectives. These include scaling patterns of information-theoretic quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy and information gain at both molecular and atomic levels; quantitative predictions of the barrier height with both the Hirshfeld charge and information gain; and energetic decomposition analyses of the barrier height for the reactions. To that end, we focused in this work on the identity reaction of the monosubstituted-benzene molecule reacting with hydrogen fluoride using boron trifluoride as the catalyst in the gas phase. We also considered 19 substituting groups, 9 of which are ortho/para directing and the other 9 meta directing, besides the case of R = -H. Similar scaling patterns for these information-theoretic quantities found for stable species elsewhere were disclosed for these reaction systems. We also unveiled novel scaling patterns for information gain at the atomic level. The barrier height of the reactions can reliably be predicted by using both the Hirshfeld charge and information gain at the regioselective carbon atom. The ensuing energy decomposition analysis yields an unambiguous picture of the origin of the barrier height, where we showed that it is the electrostatic interaction that plays the dominant role, while the roles played by exchange-correlation and
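    To illustrate the kind of quantities involved, the Shannon entropy and information gain of a density can be evaluated on a numerical grid (a generic sketch; the paper computes these from actual DFT electron densities, which we replace here with toy 1D Gaussians):

        import numpy as np

        # Toy 1D "densities" standing in for molecular electron densities.
        x = np.linspace(-10, 10, 2001)
        dx = x[1] - x[0]
        rho = np.exp(-x**2 / 2);  rho /= rho.sum() * dx        # normalized density
        rho0 = np.exp(-x**2 / 8); rho0 /= rho0.sum() * dx      # reference density

        # Shannon entropy  S = -integral rho ln(rho) dx  (in nats)
        S = -(rho * np.log(rho)).sum() * dx

        # Information gain (Kullback-Leibler)  I = integral rho ln(rho/rho0) dx >= 0
        I_gain = (rho * np.log(rho / rho0)).sum() * dx

        print(S, I_gain)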

  8. Understanding intention to use electronic information resources: A theoretical extension of the technology acceptance model (TAM).

    Science.gov (United States)

    Tao, Donghua

    2008-11-06

    This study extended the Technology Acceptance Model (TAM) by examining the roles of two aspects of e-resource characteristics, namely, information quality and system quality, in predicting public health students' intention to use e-resources for completing research paper assignments. Both focus groups and a questionnaire were used to collect data. Descriptive analysis, data screening, and Structural Equation Modeling (SEM) techniques were used for data analysis. The study found that perceived usefulness played a major role in determining students' intention to use e-resources. Perceived usefulness and perceived ease of use fully mediated the impact that information quality and system quality had on behavioral intention. The research model enriches the existing technology acceptance literature by extending TAM. Representing two aspects of e-resource characteristics provides greater explanatory information for diagnosing problems of system design, development, and implementation.

  9. Principles and methods of quantum information technologies

    CERN Document Server

    Semba, Kouichi

    2016-01-01

    This book presents the research and development-related results of the “FIRST” Quantum Information Processing Project, which was conducted from 2010 to 2014 with the support of the Council for Science, Technology and Innovation of the Cabinet Office of the Government of Japan. The project supported 33 research groups and explored five areas: quantum communication, quantum metrology and sensing, coherent computing, quantum simulation, and quantum computing. The book is divided into seven main sections. Parts I through V, which consist of twenty chapters, focus on the system and architectural aspects of quantum information technologies, while Parts VI and VII, which consist of eight chapters, discuss the superconducting quantum circuit, semiconductor spin and molecular spin technologies.   Readers will be introduced to new quantum computing schemes such as quantum annealing machines and coherent Ising machines, which have now arisen as alternatives to standard quantum computers and are designed to successf...

  10. Information-theoretical approach to control of quantum-mechanical systems

    International Nuclear Information System (INIS)

    Kawabata, Shiro

    2003-01-01

    Fundamental limits on the controllability of quantum mechanical systems are discussed in the light of quantum information theory. It is shown that the amount of entropy reduction that can be extracted from a quantum system by a feedback controller is upper bounded by the sum of the entropy decrease achievable in open-loop control and the mutual information between the quantum system and the controller. This upper bound sets a fundamental limit on the performance of any quantum controller whose design is based on the possibility of attaining low-entropy states. An application of this approach pertaining to quantum error correction is also discussed
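    In symbols, the bound stated in the abstract can be written as (our notation):

        \Delta S_{\mathrm{fb}} \;\le\; \Delta S_{\mathrm{ol}} \;+\; I(\mathrm{sys} : \mathrm{ctrl}),

    where \Delta S_{\mathrm{fb}} is the entropy reduction achievable with feedback control, \Delta S_{\mathrm{ol}} the reduction achievable in open loop, and I(sys:ctrl) the mutual information acquired between the quantum system and the controller.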

  11. Locating sensors for detecting source-to-target patterns of special nuclear material smuggling: a spatial information theoretic approach.

    Science.gov (United States)

    Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong

    2010-01-01

    In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
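    A minimal version of the Kalman-filtering information-gain calculation reads as follows (a generic sketch under linear-Gaussian assumptions, with our own variable names; the paper's network flow model is far richer). The entropy of a Gaussian state scales with the log-determinant of its covariance, so the gain from one sensor reading is half the log-ratio of prior to posterior determinants:

        import numpy as np

        def kalman_info_gain(P, H, R):
            """Information gain (nats) of one linear-Gaussian measurement.

            P: prior state covariance; H: measurement matrix; R: noise covariance.
            Gain = 0.5 * ln( det(P_prior) / det(P_post) ).
            """
            S = H @ P @ H.T + R                     # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            P_post = (np.eye(P.shape[0]) - K @ H) @ P
            return 0.5 * np.log(np.linalg.det(P) / np.linalg.det(P_post)), P_post

        P = np.diag([4.0, 1.0])                     # uncertain SNM flow estimate
        H = np.array([[1.0, 0.0]])                  # a sensor observing component 0
        gain, P_post = kalman_info_gain(P, H, np.array([[0.5]]))
        print(gain)                                 # compare candidate sensor sites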

  12. Locating Sensors for Detecting Source-to-Target Patterns of Special Nuclear Material Smuggling: A Spatial Information Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Xuesong Zhou

    2010-08-01

    Full Text Available In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.

  13. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    Science.gov (United States)

    Edi Nugroho Soebandrija, Khristian; Pratama, Yogi

    2014-03-01

    This paper aims to examine innovation in information technology through both a theoretical and an empirical study. Specifically, both aspects relate to the Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. This paper discusses the major aspects of innovation, information technology, performance and competitive advantage. The empirical study of PT. Astra Honda Motor (AHM) refers to SMQR claims, communication systems, and systems analysis and design. Both the major aspects and the empirical study are briefly discussed in the Introduction section, and in more detail in the other sections of this paper, in particular in the Literature Review, in terms of classical and updated references of current research. The increase in SMQR claims and the communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still uses email, lengthen the claim settlement time and ultimately cause SMQR claims to be rejected by the supplier. Given this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The system that was analyzed and designed is expected to facilitate the claim communication process so that it runs in accordance with the procedure, fulfills the target claim settlement time, and eliminates the difficulties and problems of the previous manual email-based communication. The system was designed following the system development life cycle approach of Kendall & Kendall (2006), covering the SMQR problem communication process, the supplier judgment process, the claim process, the claim payment process and the claim monitoring process. After obtaining the appropriate system design for managing SMQR claims, the system was implemented, and an improvement in claim communication could be seen.

  14. Information-Theoretic Analysis of a Family of Improper Discrete Constellations

    Directory of Open Access Journals (Sweden)

    Ignacio Santamaria

    2018-01-01

    Full Text Available Non-circular or improper Gaussian signaling has proven beneficial in several interference-limited wireless networks. However, all implementable coding schemes are based on finite discrete constellations rather than Gaussian signals. In this paper, we propose a new family of improper constellations generated by widely linear processing of a square M-QAM (quadrature amplitude modulation) signal. This family of discrete constellations is parameterized by κ, the circularity coefficient, and a phase ϕ. For uncoded communication systems, this phase should be optimized as ϕ*(κ) to maximize the minimum Euclidean distance between points of the improper constellation, therefore minimizing the bit error rate (BER). For the more relevant case of coded communications, where the coded symbols are constrained to be in this family of improper constellations using ϕ*(κ), it is shown theoretically and further corroborated by simulations that, except for a shaping loss of 1.53 dB encountered at high signal-to-noise ratio (SNR), there is no rate loss with respect to the improper Gaussian capacity. In this sense, the proposed family of constellations can be viewed as the improper counterpart of the standard proper M-QAM constellations widely used in coded communication systems.
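    One concrete widely linear construction with circularity coefficient κ (our illustrative parameterization, not necessarily the paper's exact one) maps a proper unit-power M-QAM symbol s to x = cos(θ)s + sin(θ)e^{jϕ}s* with θ = ½·arcsin(κ), which gives E[|x|²] = 1 and |E[x²]| = κ:

        import numpy as np

        def improper_qam(M, kappa, phi, n_symbols=10000, seed=0):
            """Widely linear transform of square M-QAM with circularity kappa."""
            rng = np.random.default_rng(seed)
            m = int(np.sqrt(M))
            levels = np.arange(-(m - 1), m, 2)       # e.g. [-3,-1,1,3] for 16-QAM
            s = rng.choice(levels, n_symbols) + 1j * rng.choice(levels, n_symbols)
            s = s / np.sqrt(np.mean(np.abs(s) ** 2)) # unit average power, proper
            theta = 0.5 * np.arcsin(kappa)
            return np.cos(theta) * s + np.sin(theta) * np.exp(1j * phi) * np.conj(s)

        x = improper_qam(16, kappa=0.6, phi=np.pi / 8)
        print(np.abs(np.mean(x ** 2)) / np.mean(np.abs(x) ** 2))  # approx 0.6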

  15. Goal setting and action planning in the rehabilitation setting: development of a theoretically informed practice framework.

    Science.gov (United States)

    Scobbie, Lesley; Dixon, Diane; Wyke, Sally

    2011-05-01

    Setting and achieving goals is fundamental to rehabilitation practice but has been criticized for being a-theoretical and the key components of replicable goal-setting interventions are not well established. To describe the development of a theory-based goal setting practice framework for use in rehabilitation settings and to detail its component parts. Causal modelling was used to map theories of behaviour change onto the process of setting and achieving rehabilitation goals, and to suggest the mechanisms through which patient outcomes are likely to be affected. A multidisciplinary task group developed the causal model into a practice framework for use in rehabilitation settings through iterative discussion and implementation with six patients. Four components of a goal-setting and action-planning practice framework were identified: (i) goal negotiation, (ii) goal identification, (iii) planning, and (iv) appraisal and feedback. The variables hypothesized to effect change in patient outcomes were self-efficacy and action plan attainment. A theory-based goal setting practice framework for use in rehabilitation settings is described. The framework requires further development and systematic evaluation in a range of rehabilitation settings.

  16. Managing the risks of reputational disasters in Japan. Theoretical basis and need for information volunteers

    International Nuclear Information System (INIS)

    Itoh, Makoto

    2000-01-01

    This paper discusses how and why a disaster caused by a bad reputation (Fu-Hyo) occurs in Japan. We survey several cases of reputational disasters and develop a simple model of the process of how a reputational disaster occurs, lasts, and vanishes. We also show the necessity of third parties or information volunteers to reduce the damage of a reputational disaster. (author)

  17. Direction of coupling from phases of interacting oscillators: An information-theoretic approach

    Science.gov (United States)

    Paluš, Milan; Stefanovska, Aneta

    2003-05-01

    A directionality index based on conditional mutual information is proposed for application to the instantaneous phases of weakly coupled oscillators. Its abilities to distinguish unidirectional from bidirectional coupling, as well as to reveal and quantify asymmetry in bidirectional coupling, are demonstrated using numerical examples of quasiperiodic, chaotic, and noisy oscillators, as well as real human cardiorespiratory data.
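    A minimal numerical version of such a directionality index (our sketch; the paper develops the estimator and its calibration in much more detail) compares the conditional mutual information between one oscillator's phase and the other's future phase increment, in both directions:

        import numpy as np

        def cmi_binned(x, y, z, bins=8):
            """Conditional mutual information I(X;Y|Z) from histograms (nats)."""
            pxyz, _ = np.histogramdd(np.column_stack([x, y, z]), bins=bins)
            pxyz /= pxyz.sum()
            pxz = pxyz.sum(axis=1)          # marginal over y
            pyz = pxyz.sum(axis=0)          # marginal over x
            pz = pxyz.sum(axis=(0, 1))
            cmi = 0.0
            for i in range(bins):
                for j in range(bins):
                    for k in range(bins):
                        p = pxyz[i, j, k]
                        if p > 0:
                            cmi += p * np.log(p * pz[k] / (pxz[i, k] * pyz[j, k]))
            return cmi

        def directionality(phi1, phi2, tau=10):
            """Positive => phi1 drives phi2 more strongly than vice versa.

            phi1, phi2: instantaneous phases (wrapped to [0, 2*pi) for binning).
            """
            d12 = cmi_binned(phi1[:-tau], phi2[tau:] - phi2[:-tau], phi2[:-tau])
            d21 = cmi_binned(phi2[:-tau], phi1[tau:] - phi1[:-tau], phi1[:-tau])
            return (d12 - d21) / (d12 + d21)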

  18. A geo-information theoretical approach to inductive erosion modelling based on terrain mapping units

    NARCIS (Netherlands)

    Suryana, N.

    1997-01-01

    Three main aspects of the research, namely the concept of object orientation, the development of an Inductive Erosion Model (IEM) and the development of a framework for handling uncertainty in the data or information resulting from a GIS are interwoven in this thesis. The first and the second aspect

  19. The theoretical foundations of value-informed pricing in the service-dominant logic of marketing

    NARCIS (Netherlands)

    Ingenbleek, P.T.M.

    2014-01-01

    Purpose – In the mainstream normative pricing literature, value assessment is virtually non-existent. Although the resource-based literature recognizes that pricing is a competence, value-informed pricing practices are still weakly grounded in theory. The purpose of this paper is to strengthen the

  20. Managing the risks of reputational disasters in Japan. Theoretical basis and need for information volunteers

    Energy Technology Data Exchange (ETDEWEB)

    Itoh, Makoto [University of Electro-Communications, Chofu, Tokyo (Japan)

    2000-07-01

    This paper discusses how and why a disaster caused by a bad reputation (Fu-Hyo) occurs in Japan. We survey several cases of reputational disasters and develop a simple model of the process of how a reputational disaster occurs, lasts, and vanishes. We also show the necessity of third parties or information volunteers to reduce the damage of a reputational disaster. (author)

  1. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    Science.gov (United States)

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.

    2017-12-01

    Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed adopting a non-parametric information theory technique, the Transfer Entropy and its normalized variant. It distinctly quantifies actual information exchanged along with the directional flow of information between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information etc. Measurements of greenhouse gases: CO2, CH4 and N2O; volcanic aerosols; solar activity: UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR); El Niño Southern Oscillation (ENSO) and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. While UV (~9%) and ENSO (~12%) act as secondary drivers of variations in the GMTA, the remaining play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
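    Transfer entropy itself has a compact definition, TE(Y→X) = I(X_{t+1}; Y_t | X_t); a histogram-based estimate (a simplified sketch of the technique the study applies, not its actual pipeline or normalization) can be written as:

        import numpy as np

        def transfer_entropy(source, target, bins=6):
            """Histogram estimate of TE(source -> target), in nats."""
            x_next, x_now, y_now = target[1:], target[:-1], source[:-1]
            p3, _ = np.histogramdd(np.column_stack([x_next, x_now, y_now]), bins=bins)
            p3 /= p3.sum()
            p_xx = p3.sum(axis=2)           # p(x_next, x_now)
            p_xy = p3.sum(axis=0)           # p(x_now, y_now)
            p_x = p3.sum(axis=(0, 2))       # p(x_now)
            te = 0.0
            for i in range(bins):
                for j in range(bins):
                    for k in range(bins):
                        p = p3[i, j, k]
                        if p > 0:
                            te += p * np.log(p * p_x[j] / (p_xx[i, j] * p_xy[j, k]))
            return te

        # Toy check: a lagged copy of white noise receives information from it.
        rng = np.random.default_rng(1)
        y = rng.normal(size=5000)
        x = np.roll(y, 1) + 0.1 * rng.normal(size=5000)
        print(transfer_entropy(y, x), transfer_entropy(x, y))  # first is larger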

  2. Theoretical and methodological significance of Information and Communication Technology in educational practice.

    NARCIS (Netherlands)

    Mooij, Ton

    2016-01-01

    In September 1998 the Research Network ‘ICT in Education and Training’ was initiated at the conference of the European Educational Research Association (EERA). The new network reflected the recognition and growing importance of information and communication technology (ICT) with respect to education

  3. An Informational-Theoretical Formulation of the Second Law of Thermodynamics

    Science.gov (United States)

    Ben-Naim, Arieh

    2009-01-01

    This paper presents a formulation of the second law of thermodynamics couched in terms of Shannon's measure of information. This formulation has an advantage over other formulations of the second law. First, it shows explicitly what is the thing that changes in a spontaneous process in an isolated system, which is traditionally referred to as the…

  4. Identification of Dynamic Flow Stress Curves Using the Virtual Fields Methods: Theoretical Feasibility Analysis

    Science.gov (United States)

    Leem, Dohyun; Kim, Jin-Hwan; Barlat, Frédéric; Song, Jung Han; Lee, Myoung-Gyu

    2018-03-01

    An inverse approach based on the virtual fields method (VFM) is presented to identify material hardening parameters under dynamic deformation. This dynamic VFM (D-VFM) does not require load information for the parameter identification. Instead, it utilizes acceleration fields in a specimen's gage region. To investigate the feasibility of the proposed inverse approach for dynamic deformation, virtual experiments using dynamic finite element simulations were conducted. The simulations provided all the data necessary for the identification, such as displacement, strain, and acceleration fields. The accuracy of the identification results was evaluated by changing several parameters, such as specimen geometry, velocity, and traction boundary conditions. The analysis clearly shows that the D-VFM, which utilizes acceleration fields, can be a good alternative to the conventional identification procedure that uses load information. It was also found that proper deformation conditions are required to generate acceleration fields sufficient to ensure identification accuracy with the D-VFM.
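    The reason no load measurement is needed can be seen from the principle of virtual work (a schematic statement in our notation, neglecting body forces): with a virtual displacement field u* chosen to vanish on the loaded boundary, the dynamic equilibrium of the gage region reads

        \int_V \boldsymbol{\sigma} : \boldsymbol{\varepsilon}^*(\mathbf{u}^*)\, dV
        \;=\; -\int_V \rho\, \mathbf{a} \cdot \mathbf{u}^*\, dV ,

    so the full-field acceleration a plays the role of a distributed load cell, and the hardening parameters inside the stress \boldsymbol{\sigma} can be identified by enforcing this balance for several independent virtual fields.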

  5. Information-Theoretic Evidence for Predictive Coding in the Face-Processing System.

    Science.gov (United States)

    Brodski-Guerniero, Alla; Paasch, Georg-Friedrich; Wollstadt, Patricia; Özdemir, Ipek; Lizier, Joseph T; Wibral, Michael

    2017-08-23

    Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces showed as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity. Perception depends not only on the information our eyes/retina and other sensory organs receive from the outside world, but also strongly on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity.
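    For reference, active information storage is commonly defined as the mutual information between a process's length-k past and its next state (a standard definition from the AIS literature, stated here in our notation):

        A_X(k) \;=\; I\big(X_t^{(k)} ;\, X_{t+1}\big)
        \;=\; \left\langle \log_2 \frac{p\big(x_t^{(k)}, x_{t+1}\big)}{p\big(x_t^{(k)}\big)\, p(x_{t+1})} \right\rangle ,

    so that high AIS in a source time course indicates information that is actively maintained over time, as required of preactivated prior knowledge.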

  6. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  7. An Information-Theoretic Approach for Indirect Train Traffic Monitoring Using Building Vibration

    OpenAIRE

    Xu, Susu; Zhang, Lin; Zhang, Pei; Noh, Hae Young

    2017-01-01

    This paper introduces an indirect train traffic monitoring method to detect and infer real-time train events based on the vibration response of a nearby building. Monitoring and characterizing traffic events are important for cities to improve the efficiency of transportation systems (e.g., train passing, heavy trucks, and traffic). Most prior work falls into two categories: (1) methods that require intensive labor to manually record events or (2) systems that require deployment of dedicated ...

  8. Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    OpenAIRE

    Alwahaishi, Saleh; Snásel, Václav

    2013-01-01

    The world has changed a lot in the past years. The rapid advances in technology and the changing of the communication channels have changed the way people work and, for many, where do they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT) are converging into one ubiquitous mobile Internet service, which will change our way of both doing business and dealing with our daily routine activities. As th...

  9. Consumers’ Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    OpenAIRE

    Saleh Alwahaishi; Václav Snášel

    2013-01-01

    The world has changed a lot in the past years. The rapid advances in technology and the changing of the communication channels have changed the way people work and, for many, where do they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT) are converging into one ubiquitous mobile Internet service, which will change our way of both doing business and dealing with our daily routine activities. As th...

  10. An information-theoretic basis for uncertainty analysis: application to the QUASAR severe accident study

    International Nuclear Information System (INIS)

    Unwin, S.D.; Cazzoli, E.G.; Davis, R.E.; Khatib-Rahbar, M.; Lee, M.; Nourbakhsh, H.; Park, C.K.; Schmidt, E.

    1989-01-01

    The probabilistic characterization of uncertainty can be problematic in circumstances where there is a paucity of supporting data and limited experience on which to base engineering judgement. Information theory provides a framework in which to address this issue through reliance upon entropy-related principles of uncertainty maximization. We describe an application of such principles in the United States Nuclear Regulatory Commission-sponsored program QUASAR (Quantification and Uncertainty Analysis of Source Terms for Severe Accidents in Light Water Reactors). (author)
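    The entropy-maximization principle invoked here has a standard form: among all distributions consistent with the available constraints (for example, a known mean μ), choose the one of maximal Shannon entropy. A sketch in our notation:

        \max_{p}\; -\sum_i p_i \ln p_i
        \quad \text{s.t.} \quad \sum_i p_i = 1, \;\; \sum_i p_i x_i = \mu
        \;\;\Longrightarrow\;\; p_i = \frac{e^{-\lambda x_i}}{\sum_j e^{-\lambda x_j}},

    with the multiplier λ fixed by the mean constraint; with no moment constraint at all, the maximum-entropy distribution is simply uniform, encoding maximal noncommitment given the paucity of data.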

  11. Culture, agency and power: Theoretical reflections on informal economic networks and political process

    OpenAIRE

    Meagher, Kate

    2009-01-01

    Does network theory really offer a suitable concept for the theorization of informal processes of economic regulation and institutional change? This working paper challenges both essentialist and skeptical attitudes to networks through an examination of the positive and negative effects of network governance in contemporary societies in a range of regional contexts. The analysis focuses on three broad principles of non-state organization - culture, agency and power - and their role in shaping p...

  12. Information-theoretic analysis of the dynamics of an executable biological model.

    Directory of Open Access Journals (Sweden)

    Avital Sadot

    Full Text Available To facilitate analysis and understanding of biological systems, large-scale data are often integrated into models using a variety of mathematical and computational approaches. Such models describe the dynamics of the biological system and can be used to study the changes in the state of the system over time. For many model classes, such as discrete or continuous dynamical systems, there exist appropriate frameworks and tools for analyzing system dynamics. However, the heterogeneous information that encodes and bridges molecular and cellular dynamics, inherent to fine-grained molecular simulation models, presents significant challenges to the study of system dynamics. In this paper, we present an algorithmic information theory based approach for the analysis and interpretation of the dynamics of such executable models of biological systems. We apply a normalized compression distance (NCD) analysis to the state representations of a model that simulates the immune decision making and immune cell behavior. We show that this analysis successfully captures the essential information in the dynamics of the system, which results from a variety of events including proliferation, differentiation, or perturbations such as gene knock-outs. We demonstrate that this approach can be used for the analysis of executable models, regardless of the modeling framework, and for making experimentally quantifiable predictions.
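    NCD itself is easy to state and compute with any off-the-shelf compressor (a minimal sketch using zlib; the paper's choice of compressor and state encoding may differ):

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance between two byte strings.

            NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
            where C(.) is the compressed length; values near 0 mean similar.
            """
            cx = len(zlib.compress(x))
            cy = len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Two similar simulated-state snapshots vs. an unrelated byte pattern.
        a = b"Tcell:active Bcell:resting cytokine:IL2 " * 50
        b = b"Tcell:active Bcell:resting cytokine:IL4 " * 50
        c = bytes(range(256)) * 8
        print(ncd(a, b), ncd(a, c))   # the first distance is much smaller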

  13. Discovery and information-theoretic characterization of transcription factor binding sites that act cooperatively.

    Science.gov (United States)

    Clifford, Jacob; Adami, Christoph

    2015-09-02

    Transcription factor binding to the surface of DNA regulatory regions is one of the primary causes of regulating gene expression levels. A probabilistic approach to model protein-DNA interactions at the sequence level is through position weight matrices (PWMs) that estimate the joint probability of a DNA binding site sequence by assuming positional independence within the DNA sequence. Here we construct conditional PWMs that depend on the motif signatures in the flanking DNA sequence, by conditioning known binding site loci on the presence or absence of additional binding sites in the flanking sequence of each site's locus. Pooling known sites with similar flanking sequence patterns allows for the estimation of the conditional distribution function over the binding site sequences. We apply our model to the Dorsal transcription factor binding sites active in patterning the Dorsal-Ventral axis of Drosophila development. We find that those binding sites that cooperate with nearby Twist sites on average contain about 0.5 bits of information about the presence of Twist transcription factor binding sites in the flanking sequence. We also find that Dorsal binding site detectors conditioned on flanking sequence information make better predictions about what is a Dorsal site relative to background DNA than detection without information about flanking sequence features.
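    The basic PWM construction and its per-position information content (in bits, relative to a uniform background) can be sketched as follows (a generic illustration with made-up sites, not the paper's Dorsal data or its flanking-sequence conditioning):

        import numpy as np

        BASES = "ACGT"

        def pwm_information(sites, pseudocount=0.5):
            """Per-position information content (bits) of aligned binding sites.

            Assumes a uniform background, so IC = 2 - H(position).
            """
            L = len(sites[0])
            counts = np.full((L, 4), pseudocount)
            for site in sites:
                for pos, base in enumerate(site):
                    counts[pos, BASES.index(base)] += 1
            probs = counts / counts.sum(axis=1, keepdims=True)
            entropy = -(probs * np.log2(probs)).sum(axis=1)   # H per position
            return 2.0 - entropy                              # bits per position

        sites = ["GGGAAAACCC", "GGGTAAACCC", "GGGAATTCCC", "GGGAAATCCC"]
        print(pwm_information(sites).round(2))  # conserved flanking positions score highest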

  14. Information-theoretic approach to lead-lag effect on financial markets

    Science.gov (United States)

    Fiedor, Paweł

    2014-08-01

    Recently the interest of researchers has shifted from the analysis of synchronous relationships of financial instruments to the analysis of more meaningful asynchronous relationships. Both types of analysis are concentrated mostly on Pearson's correlation coefficient and consequently intraday lead-lag relationships (where one of the variables in a pair is time-lagged) are also associated with them. Under the Efficient-Market Hypothesis such relationships are not possible as all information is embedded in the prices, but in real markets we find such dependencies. In this paper we analyse lead-lag relationships of financial instruments and extend known methodology by using mutual information instead of Pearson's correlation coefficient. Mutual information is not only a more general measure, sensitive to non-linear dependencies, but also can lead to a simpler procedure of statistical validation of links between financial instruments. We analyse lagged relationships using New York Stock Exchange 100 data not only on an intraday level, but also for daily stock returns, which have usually been ignored.
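    A lagged mutual-information scan between two return series can be sketched as follows (a compact binned estimator; the statistical validation of links that the paper performs is omitted here):

        import numpy as np

        def mutual_information(x, y, bins=8):
            """Histogram estimate of I(X;Y) in nats."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            mask = pxy > 0
            return (pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum()

        def lead_lag_mi(r1, r2, max_lag=5):
            """I(r1_t ; r2_{t+lag}) per lag: a peak at lag > 0 suggests r1 leads r2."""
            return {lag: mutual_information(r1[:-lag], r2[lag:]) if lag else
                         mutual_information(r1, r2) for lag in range(max_lag + 1)}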

  15. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    Directory of Open Access Journals (Sweden)

    Marc Ebner

    2011-01-01

    Full Text Available Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized sub-network, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions forms a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of

  16. Maintenance and methods of forming theoretical knowledge and methodical and practical abilities in the area of physical culture for students, future specialists in social work

    Directory of Open Access Journals (Sweden)

    Leyfa A.V.

    2009-12-01

    Full Text Available The value of theoretical knowledge and of methodical and practical studies and skills in forming the physical activity of students is shown. The level of mastery of the components of physical activity is closely associated with the basic blocks of the professional preparation of students and their future professional activity. Theoretical knowledge in the discipline of «Physical culture» has a definite effect on the depth and breadth of the knowledge acquired in professional preparation.

  17. Versatile Formal Methods Applied to Quantum Information.

    Energy Technology Data Exchange (ETDEWEB)

    Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach that allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.

  18. Archaeological culture and medieval ethnic community: theoretical and methodical problems of correlation (the case of medieval Bulgaria

    Directory of Open Access Journals (Sweden)

    Izmaylov Iskander L.

    2014-09-01

    Full Text Available Problems related to archaeological culture and ethnos comparison in the case of medieval Bulgaria are discussed in the article. According to the author, in recent years it has become evident that the traditional concept and methodology of the study of the Bulgars’ ethnogenesis and ethnic history are in contradiction with the facts accumulated. The methods of “archaeological ethno-genetics”, which dictated solving problems of ethnogenesis of the ancient population belonging to an archaeological culture in direct correlation with ethnicity, are currently being criticized. According to modern ideas about ethnos and ethnicity, ethnicity is based upon identity with a complex hierarchical nature. Contemporary methodology requires proceeding with the integrated study of the problems of ethnogenesis on the basis of archaeology and ethnology. This kind of analysis is based upon the study of the medieval Bulgar mentality as a source of information on key aspects of ethno-political ideas. The analysis of authentic historical sources, historiographical tradition elements and folklore materials makes it possible to reconstruct the basic ideas that were significant for an ethnic group. The archaeological culture of the population of Bulgaria is characterized by two clearly distinguished and interconnected elements – the common Muslim culture and that of the elite military “druzhina” (squad. These elements directly characterize the Bulgar ethno-political community. These theoretical conclusions and empirical research concerning the case of the medieval Bulgars’ ethnogenesis attest to the productivity of ethnological synthesis techniques on an interdisciplinary basis.

  19. The quantization of the attention function under a Bayes information theoretic model

    International Nuclear Information System (INIS)

    Wynn, H.P.; Sebastiani, P.

    2001-01-01

    Bayes experimental design using entropy, or equivalently negative information, as a criterion is fairly well developed. The present work applies this model but at a primitive level in statistical sampling. It is assumed that the observer/experimenter is allowed to place a window over the support of a sampling distribution and only 'pay for' observations that fall in this window. The window can be modeled with an 'attention function', simply the indicator function of the window. The understanding is that the cost of the experiment is only the number of paid-for observations, n. For fixed n and under the information model it turns out that for standard problems the optimal structure for the window, in the limit amongst all types of window including disjoint regions, is discrete. That is to say, it is optimal to observe the world (in this sense) through discrete slits. It also follows that in this case Bayesians with different priors will receive different samples, because typically the optimal attention windows will be disjoint. This property we refer to as the quantization of the attention function.

  20. Information processing in network architecture of genome controlled signal transduction circuit. A proposed theoretical explanation.

    Science.gov (United States)

    Chakraborty, Chiranjib; Sarkar, Bimal Kumar; Patel, Pratiksha; Agoramoorthy, Govindasamy

    2012-01-01

    In this paper, Shannon information theory has been applied to elaborate cell signaling. It is proposed that in the cellular network architecture, four components are involved, viz. source (DNA), transmitter (mRNA), receiver (protein) and destination (another protein). The message transmits from source (DNA) to transmitter (mRNA) and then passes through a noisy channel, finally reaching the receiver (protein). The protein synthesis process is here considered as the noisy channel. Ultimately, the signal is transmitted from receiver to destination (another protein). The genome network architecture elements were compared with the genetic alphabet L = {A, C, G, T} in a biophysical model based on Shannon information theory. This study found the channel capacity to be maximal for zero error (sigma = 0); in this condition the transition matrix becomes a unit matrix with rank 4. As sigma increases the transition matrix becomes erroneous, and at sigma = 1 the channel capacity attains a local maximum of 0.415. A minimum exists at sigma = 0.75, where all transition probabilities become 0.25, uncertainty is maximal, and the channel capacity takes its minimum value of zero.
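
    The quoted endpoint values are consistent with a 4-ary symmetric channel over the genetic alphabet, for which the capacity has the closed form C = log2(4) - H(row of the transition matrix). The small sketch below is an interpretation of the abstract, not the authors' code, and reproduces the three cases:

        import math

        def capacity_4ary_symmetric(sigma):
            # 4-ary symmetric channel over {A, C, G, T}: a base survives with
            # probability 1 - sigma and mutates to each other base with
            # probability sigma / 3. For symmetric channels,
            # C = log2(4) - H(any row of the transition matrix).
            row = [1 - sigma] + [sigma / 3] * 3
            h = -sum(p * math.log2(p) for p in row if p > 0)
            return 2.0 - h

        for sigma in (0.0, 0.75, 1.0):
            print(sigma, round(capacity_4ary_symmetric(sigma), 3))
        # sigma = 0.0  -> 2.0   (noiseless: unit transition matrix of rank 4)
        # sigma = 0.75 -> 0.0   (all transition probabilities 0.25)
        # sigma = 1.0  -> 0.415 (the local maximum cited in the abstract)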

  1. The Methods of Information Security Based on Blurring of System

    Directory of Open Access Journals (Sweden)

    Mikhail Andreevich Styugin

    2016-03-01

    Full Text Available The paper presents a model of a system under investigation with known input, output and a set of discrete internal states. Two theoretical objects, a system absolutely protected from investigation and an absolutely indiscernible data transfer channel, are defined. A generalization of Shannon's principle of secrecy is made. The method of system blurring is defined. The theoretical cryptographic strength of the absolutely indiscernible data transfer channel is proved, and its practical unbreakability in the presence of an unreliable pseudo-random number generator is shown. The paper presents a system with channel blurring named Pseudo IDTC and shows the asymptotic complexity of breaking this system compared with AES and GOST.

  2. The value of private patient information in the physician-patient relationship: a game-theoretic account.

    Science.gov (United States)

    De Jaegher, Kris

    2012-01-01

    This paper presents a game-theoretical model of the physician-patient relationship. There is a conflict of interests between physician and patient, in that the physician prefers the patient to always obtain a particular treatment, even if the patient would not consider this treatment in his interest. The patient obtains imperfect cues of whether or not he needs the treatment. The effect of an increase in the quality of the patient's private information is studied, in the form of an improvement in the quality of his cues. It is shown that when the patient's information improves in this sense, he may either become better off or worse off. The precise circumstances under which either result is obtained are derived.

  3. Parenting around child snacking: development of a theoretically-guided, empirically informed conceptual model

    OpenAIRE

    Davison, Kirsten K.; Blake, Christine E.; Blaine, Rachel E.; Younginer, Nicholas A.; Orloski, Alexandria; Hamtil, Heather A.; Ganter, Claudia; Bruton, Yasmeen P.; Vaughn, Amber E; Fisher, Jennifer O.

    2015-01-01

    Background: Snacking contributes to excessive energy intakes in children. Yet factors shaping child snacking are virtually unstudied. This study examines food parenting practices specific to child snacking among low-income caregivers. Methods: Semi-structured interviews were conducted in English or Spanish with 60 low-income caregivers of preschool-aged children (18 non-Hispanic white, 22 African American/Black, 20 Hispanic; 92 % mothers). A structured interview guide was used to solicit care...

  4. INFORMATION AND COMMUNICATION TECHNOLOGIES (ICT) IN PHYSICAL EDUCATION. A THEORETICAL REVIEW

    Directory of Open Access Journals (Sweden)

    Mateo Rodríguez Quijada

    2015-01-01

    Full Text Available In this review we survey the treatment of Information and Communication Technologies (ICT) in the field of physical education by teachers and students. For this purpose we review the existing lines of research on the topic as well as the most notable works of different authors, with special attention to the situation in the autonomous community of Galicia. Finally, the main problems related to the use of these technologies in classrooms are analyzed. All this in order to shed light on a very topical issue regarding the education of our youth. Studies show that ICTs are increasingly present in the field of physical education, but much remains to be done to make effective use of them in education.

  5. An Information-theoretic Approach to Optimize JWST Observations and Retrievals of Transiting Exoplanet Atmospheres

    Science.gov (United States)

    Howe, Alex R.; Burrows, Adam; Deming, Drake

    2017-01-01

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.

  6. An Information-Theoretic Perspective on the Quantum Bit Commitment Impossibility Theorem

    Directory of Open Access Journals (Sweden)

    Marius Nagy

    2018-03-01

    Full Text Available This paper proposes a different approach to pinpoint the causes for which an unconditionally secure quantum bit commitment protocol cannot be realized, beyond the technical details on which the proof of Mayers’ no-go theorem is constructed. We have adopted the tools of quantum entropy analysis to investigate the conditions under which the security properties of quantum bit commitment can be circumvented. Our study has revealed that cheating the binding property requires the quantum system acting as the safe to harbor the same amount of uncertainty with respect to both observers (Alice and Bob), as well as the use of entanglement. Our analysis also suggests that the ability to cheat one of the two fundamental properties of bit commitment by either of the two participants depends on how much information is leaked from one side of the system to the other and how much remains hidden from the other participant.

  7. AN INFORMATION-THEORETIC APPROACH TO OPTIMIZE JWST OBSERVATIONS AND RETRIEVALS OF TRANSITING EXOPLANET ATMOSPHERES

    Energy Technology Data Exchange (ETDEWEB)

    Howe, Alex R.; Burrows, Adam [Department of Astronomy, University of Michigan, 1085 S. University, Ann Arbor, MI 48109 (United States); Deming, Drake, E-mail: arhowe@umich.edu, E-mail: burrows@astro.princeton.edu, E-mail: ddeming@astro.umd.edu [Department of Astronomy, University of Maryland College Park, MD 20742 (United States)

    2017-01-20

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.

  8. Theoretical Background for Predicting the Properties of Petroleum Fluids via Group Contribution Methods

    Czech Academy of Sciences Publication Activity Database

    Bogdanić, Grozdana; Pavlíček, Jan; Wichterle, Ivan

    2012-01-01

    Roč. 42, SI (2012), s. 1873-1878 E-ISSN 1877-7058. [International Congress of Chemical and Process Engineering CHISA 2012 and 15th Conference PRES 2012 /20./. Prague, 25.08.2012-29.08.2012] Institutional support: RVO:67985858 Keywords : petroleum fluids * prediction * physico-chemical properties Subject RIV: CF - Physical ; Theoretical Chemistry

  9. Transactors, Transformers and Beyond. A Multi-Method Development of a Theoretical Typology of Leadership.

    Science.gov (United States)

    Pearce, Craig L.; Sims, Henry P., Jr.; Cox, Jonathan F.; Ball, Gail; Schnell, Eugene; Smith, Ken A.; Trevino, Linda

    2003-01-01

    To extend the transactional-transformational model of leadership, four theoretical behavioral types of leadership were developed based on literature review and data from studies of executive behavior (n=253) and subordinate attitudes (n=208). Confirmatory factor analysis of a third data set (n=702) support the existence of four leadership types:…

  10. A decision-theoretic approach to collaboration: Principal description methods and efficient heuristic approximations

    NARCIS (Netherlands)

    Oliehoek, F.A.; Visser, A.; Babuška, R.; Groen, F.C.A

    2010-01-01

    This chapter gives an overview of the state of the art in decision-theoretic models to describe cooperation between multiple agents in a dynamic environment. Making (near-) optimal decisions in such settings gets harder when the number of agents grows or the uncertainty about the environment increases.

  11. Information in Our World: Conceptions of Information and Problems of Method in Information Science

    Science.gov (United States)

    Ma, Lai

    2012-01-01

    Many concepts of information have been proposed and discussed in library and information science. These concepts of information can be broadly categorized as empirical and situational information. Unlike nomenclatures in many sciences, however, the concept of information in library and information science does not bear a generally accepted…

  12. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework

    Directory of Open Access Journals (Sweden)

    French Simon D

    2012-04-01

    Full Text Available Background: There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. Methods: The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? Results: A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. Conclusions: We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be

  13. Anatomy of a Spin: The Information-Theoretic Structure of Classical Spin Systems

    Directory of Open Access Journals (Sweden)

    Vikram S. Vijayaraghavan

    2017-05-01

    Full Text Available Collective organization in matter plays a significant role in its expressed physical properties. Typically, it is detected via an order parameter, appropriately defined for each given system’s observed emergent patterns. Recent developments in information theory, however, suggest quantifying collective organization in a system- and phenomenon-agnostic way: decomposing the system’s thermodynamic entropy density into a localized entropy, that is solely contained in the dynamics at a single location, and a bound entropy, that is stored in space as domains, clusters, excitations, or other emergent structures. As a concrete demonstration, we compute this decomposition and related quantities explicitly for the nearest-neighbor Ising model on the 1D chain, on the Bethe lattice with coordination number k = 3, and on the 2D square lattice, illustrating its generality and the functional insights it gives near and away from phase transitions. In particular, we consider the roles that different spin motifs play (in cluster bulk, cluster edges, and the like) and how these affect the dependencies between spins.

  14. Parenting around child snacking: development of a theoretically-guided, empirically informed conceptual model.

    Science.gov (United States)

    Davison, Kirsten K; Blake, Christine E; Blaine, Rachel E; Younginer, Nicholas A; Orloski, Alexandria; Hamtil, Heather A; Ganter, Claudia; Bruton, Yasmeen P; Vaughn, Amber E; Fisher, Jennifer O

    2015-09-17

    Snacking contributes to excessive energy intakes in children. Yet factors shaping child snacking are virtually unstudied. This study examines food parenting practices specific to child snacking among low-income caregivers. Semi-structured interviews were conducted in English or Spanish with 60 low-income caregivers of preschool-aged children (18 non-Hispanic white, 22 African American/Black, 20 Hispanic; 92% mothers). A structured interview guide was used to solicit caregivers' definitions of snacking and strategies they use to decide what, when and how much snack their child eats. Interviews were audio-recorded, transcribed verbatim and analyzed using an iterative theory-based and grounded approach. A conceptual model of food parenting specific to child snacking was developed to summarize the findings and inform future research. Caregivers' descriptions of food parenting practices specific to child snacking were consistent with previous models of food parenting developed based on expert opinion [1, 2]. A few noteworthy differences however emerged. More than half of participants mentioned permissive feeding approaches (e.g., my child is the boss when it comes to snacks). As a result, permissive feeding was included as a higher-order feeding dimension in the resulting model. In addition, a number of novel feeding approaches specific to child snacking emerged, including child-centered provision of snacks (i.e., responding to a child's hunger cues when making decisions about snacks), parent unilateral decision making (i.e., making decisions about a child's snacks without any input from the child), and excessive monitoring of snacks (i.e., monitoring all snacks provided to and consumed by the child). The resulting conceptual model includes four higher-order feeding dimensions (autonomy support, coercive control, structure, and permissiveness) and 20 sub-dimensions. This study formulates a language around food parenting practices specific to child snacking.

  15. Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.

    Science.gov (United States)

    Waheed, Khuram; Salem, Fathi M

    2005-07-01

    Code division multiple access (CDMA) is based on the spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require a lot of a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling the highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA) that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding the linear minimum mean squared error (LMMSE) detectors for both Direct Sequence CDMA (DS-CDMA) and wide-band CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation and unmodeled multiple access interference (MAI).
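
    For orientation only, the sketch below shows a generic natural (Riemannian) gradient blind source separation update (Amari's rule), the adaptation principle this family of detectors builds on; the CDMA-specific structure of BMUD, RAKE-BSR, and RAKE-PCA is not reproduced here, and all signals are synthetic.

        import numpy as np

        rng = np.random.default_rng(4)
        n, T = 3, 50000
        S = np.sign(rng.standard_normal((n, T)))   # +/-1 "symbol" streams (sub-Gaussian)
        A = rng.standard_normal((n, n))            # unknown mixing (channel) matrix
        X = A @ S                                  # observed mixtures

        W = np.eye(n)                              # separating matrix estimate
        mu = 1e-4                                  # small step size for stability
        for t in range(T):
            y = W @ X[:, t:t + 1]
            g = y ** 3                             # cubic score, suited to sub-Gaussian sources
            W += mu * (np.eye(n) - g @ y.T) @ W    # natural-gradient update

        Y = W @ X                                  # recovered up to permutation/sign/scale
        print(np.round(np.corrcoef(np.vstack([S, Y]))[:n, n:], 2))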

  16. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    Science.gov (United States)

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better descriptions of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
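
    In practice such a comparison reduces to fitting each candidate distribution by maximum likelihood and ranking the fits with an information criterion such as AIC = 2k - 2 ln L. A hedged sketch on synthetic overdispersed counts (not the paper's insect data), comparing a Poisson fit against a negative binomial fit with scipy:

        import numpy as np
        from scipy import stats, optimize

        # Toy overdispersed, zero-heavy counts (synthetic stand-in data).
        rng = np.random.default_rng(1)
        counts = rng.negative_binomial(n=0.8, p=0.25, size=200)

        # Poisson: the MLE of the rate is simply the sample mean.
        lam = counts.mean()
        aic_pois = 2 * 1 - 2 * stats.poisson.logpmf(counts, lam).sum()

        # Negative binomial: MLE of (n, p) by numerical optimization.
        def nb_negloglik(params):
            n, p = params
            return -stats.nbinom.logpmf(counts, n, p).sum()

        res = optimize.minimize(nb_negloglik, x0=[1.0, 0.5],
                                bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
        aic_nb = 2 * 2 + 2 * res.fun   # res.fun is the negative log-likelihood

        # The lower AIC indicates the better-supported model; for data like
        # these the negative binomial typically wins by a wide margin.
        print(f"AIC Poisson: {aic_pois:.1f}, AIC negative binomial: {aic_nb:.1f}")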

  17. Application of geo-information science methods in ecotourism exploitation

    Science.gov (United States)

    Dong, Suocheng; Hou, Xiaoli

    2004-11-01

    The application of geo-information science methods in ecotourism development is discussed in this article. Since the 1990s, geo-information science methods, which take the 3S technologies (Geographic Information System, Global Positioning System, and Remote Sensing) as core techniques, have played an important role in resources reconnaissance, data management, environment monitoring, and regional planning. Geo-information science methods can easily analyze and convert geographic spatial data. The application of 3S methods is helpful to sustainable development in tourism. Various assignments are involved in the development of ecotourism, such as reconnaissance of ecotourism resources, drawing of tourism maps, dealing with mass data, and also tourism information inquiry, employee management, and quality management of products. The utilization of geo-information methods in ecotourism can make development more efficient by promoting the sustainable development of tourism and the protection of the eco-environment.

  18. Derivation of Human Chromatic Discrimination Ability from an Information-Theoretical Notion of Distance in Color Space.

    Science.gov (United States)

    da Fonseca, María; Samengo, Inés

    2016-12-01

    The accuracy with which humans detect chromatic differences varies throughout color space. For example, we are far more precise when discriminating two similar orange stimuli than two similar green stimuli. In order for two colors to be perceived as different, the neurons representing chromatic information must respond differently, and the difference must be larger than the trial-to-trial variability of the response to each separate color. Photoreceptors constitute the first stage in the processing of color information; many more stages are required before humans can consciously report whether two stimuli are perceived as chromatically distinguishable. Therefore, although photoreceptor absorption curves are expected to influence the accuracy of conscious discriminability, there is no reason to believe that they should suffice to explain it. Here we develop information-theoretical tools based on the Fisher metric that demonstrate that photoreceptor absorption properties explain about 87% of the variance of human color discrimination ability, as tested by previous behavioral experiments. In the context of this theory, the bottleneck in chromatic information processing is determined by photoreceptor absorption characteristics. Subsequent encoding stages modify only marginally the chromatic discriminability at the photoreceptor level.

  19. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    Science.gov (United States)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize “hot spots” representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure as well as audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow motion replay, scene cut density, and motion activity as features. Detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
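
    One way to read the proposed measure is as a rarity score under the joint density of the segmental features. The sketch below illustrates only that idea, with a kernel density estimate on made-up features standing in for audio energy, speech excitement, replay, cut density, and motion activity; the paper's additional "exciting region" constraint is not modeled.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Rank segments by negative log-likelihood under the joint feature
        # density, so segments in rare regions of the joint pdf rank highest.
        rng = np.random.default_rng(5)
        features = rng.standard_normal((4, 500))   # 4 features x 500 segments
        kde = gaussian_kde(features)               # joint pdf estimate
        rarity = -np.log(kde(features))            # higher = rarer segment
        top10 = np.argsort(rarity)[-10:]           # candidate highlight segments
        print(top10)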

  20. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework.

    Science.gov (United States)

    French, Simon D; Green, Sally E; O'Connor, Denise A; McKenzie, Joanne E; Francis, Jill J; Michie, Susan; Buchbinder, Rachelle; Schattner, Peter; Spike, Neil; Grimshaw, Jeremy M

    2012-04-24

    There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a

  1. How do small groups make decisions? : A theoretical framework to inform the implementation and study of clinical competency committees.

    Science.gov (United States)

    Chahine, Saad; Cristancho, Sayra; Padgett, Jessica; Lingard, Lorelei

    2017-06-01

    In the competency-based medical education (CBME) approach, clinical competency committees are responsible for making decisions about trainees' competence. However, we currently lack a theoretical model for group decision-making to inform this emerging assessment phenomenon. This paper proposes an organizing framework to study and guide the decision-making processes of clinical competency committees. This is an explanatory, non-exhaustive review, tailored to identify relevant theoretical and evidence-based papers related to small group decision-making. The search was conducted using Google Scholar, Web of Science, MEDLINE, ERIC, and PsycINFO for relevant literature. Using a thematic analysis, two researchers (SC & JP) met four times between April-June 2016 to consolidate the literature included in this review. Three theoretical orientations towards group decision-making emerged from the review: schema, constructivist, and social influence. Schema orientations focus on how groups use algorithms for decision-making. Constructivist orientations focus on how groups construct their shared understanding. Social influence orientations focus on how individual members influence the group's perspective on a decision. Moderators of decision-making relevant to all orientations include: guidelines, stressors, authority, and leadership. Clinical competency committees are the mechanisms by which groups of clinicians will be in charge of interpreting multiple assessment data points and coming to a shared decision about trainee competence. The way in which these committees make decisions can have huge implications for trainee progression and, ultimately, patient care. Therefore, there is a pressing need to build the science of how such group decision-making works in practice. This synthesis suggests a preliminary organizing framework that can be used in the implementation and study of clinical competency committees.

  2. Spatial memory: Theoretical basis and comparative review on experimental methods in rodents.

    Science.gov (United States)

    Carrillo-Mora, Paul; Giordano, Magda; Santamaría, Abel

    2009-11-05

    The assessment of learning and memory in animal models has been widely employed in scientific research for a long time. Among these models, those representing diseases with primary processes of affected memory - such as amnesia, dementia, brain aging, etc. - studies dealing with the toxic effects of specific drugs, and other exploring neurodevelopment, trauma, epilepsy and neuropsychiatric disorders, are often called on to employ these tools. There is a diversity of experimental methods assessing animal learning and memory skills. Overall, mazes are the devices mostly used today to test memory in rodents; there are several types of them, but their real usefulness, advantages and applications remain to be fully established and depend on the particular variant selected by the experimenter. The aims of the present article are first, to briefly review the accumulated knowledge in regard to spatial memory tasks; second, to bring the reader information on the different types of rodent mazes available to test spatial memory; and third, to elucidate the usefulness and limitations of each of these devices.

  3. Implementation of 2D Discrete Wavelet Transform by Number Theoretic Transform and 2D Overlap-Save Method

    Directory of Open Access Journals (Sweden)

    Lina Yang

    2014-01-01

    Full Text Available To reduce the computation complexity of the wavelet transform, this paper presents a novel approach to its implementation. It consists of two key techniques: (1) the fast number theoretic transform (FNTT), in which linear convolution is replaced by circular convolution, speeding up the computation of the 2D discrete wavelet transform; (2) the 2D overlap-save method. Directly applying the FNTT to the whole input sequence meets two difficulties: a big modulus obstructs the effective implementation of the FNTT, and a long input sequence slows the computation of the FNTT down. To overcome these deficiencies, a new technique referred to as the 2D overlap-save method is developed. Experiments have been conducted. The fast number theoretic transform and the 2D overlap-save method have been used to implement the dyadic wavelet transform and applied to contour extraction in pattern recognition.
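
    A number theoretic transform is a DFT over the integers modulo a prime, so pointwise products in the transform domain give exact circular convolutions with no floating-point rounding. Below is a self-contained toy sketch with illustrative parameters (p = 337 and an order-8 root of unity g = 85); real implementations choose p and the root to match the transform length and the dynamic range of the data, and use fast O(N log N) butterflies rather than the naive loops shown here.

        P = 337   # prime modulus (illustrative)
        G = 85    # 85 has multiplicative order 8 mod 337: 85**4 = -1, 85**8 = 1
        N = 8     # transform length divides the order of G

        def ntt(a, root=G, p=P):
            # Naive O(N^2) number theoretic transform (a DFT over Z_p).
            return [sum(a[j] * pow(root, i * j, p) for j in range(N)) % p
                    for i in range(N)]

        def intt(A, root=G, p=P):
            # Inverse NTT: inverse root and division by N, both mod p
            # (inverses exist by Fermat's little theorem since p is prime).
            inv_root = pow(root, p - 2, p)
            inv_n = pow(N, p - 2, p)
            a = [sum(A[j] * pow(inv_root, i * j, p) for j in range(N)) % p
                 for i in range(N)]
            return [(x * inv_n) % p for x in a]

        # Exact circular convolution via pointwise products, the operation
        # that replaces linear convolution inside the transform.
        x = [1, 2, 3, 4, 0, 0, 0, 0]
        h = [1, 1, 0, 0, 0, 0, 0, 0]
        y = intt([(a * b) % P for a, b in zip(ntt(x), ntt(h))])
        print(y)  # [1, 3, 5, 7, 4, 0, 0, 0]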

  4. 48 CFR 2905.101 - Methods of disseminating information.

    Science.gov (United States)

    2010-10-01

    ... information. 2905.101 Section 2905.101 Federal Acquisition Regulations System DEPARTMENT OF LABOR ACQUISITION PLANNING PUBLICIZING CONTRACT ACTIONS Dissemination of Information 2905.101 Methods of disseminating... dissemination of information concerning procurement actions. The Division of Acquisition Management Services...

  5. Axiomatic Evaluation Method and Content Structure for Information Appliances

    Science.gov (United States)

    Guo, Yinni

    2010-01-01

    Extensive studies have been conducted to determine how best to present information in order to enhance usability, but not what information is needed to be presented for effective decision making. Hence, this dissertation addresses the factor structure of the nature of information needed for presentation and proposes a more effective method than…

  6. Information-theoretic analysis of rotational distributions from quantal and quasiclassical computations of reactive and nonreactive scattering

    International Nuclear Information System (INIS)

    Bernstein, R.B.

    1976-01-01

    An information-theoretic approach to the analysis of rotational excitation cross sections was developed by Levine, Bernstein, Johnson, Procaccia, and coworkers and applied to state-to-state cross sections available from numerical computations of reactive and nonreactive scattering (for example, by Wyatt and Kuppermann and their coworkers and by Pack and Pattengill and others). The rotational surprisals are approximately linear in the energy transferred, thereby accounting for the so-called “exponential gap law” for rotational relaxation discovered experimentally by Polanyi, Woodall, and Ding. For the “linear surprisal” case the unique relation between the surprisal parameter θ_R and the first moment of the rotational energy distribution provides a link between the pattern of the rotational state distribution and those features of the potential surface which govern the average energy transfer.
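
    In surprisal analysis, the surprisal of a rotational state is I(f_R) = -ln[P(f_R)/P0(f_R)], the deviation of the observed distribution P from the statistical prior P0, and "linear surprisal" means I grows linearly in the fraction f_R of the energy in rotation, with slope θ_R. A toy sketch (synthetic distributions, not computed cross sections) recovering θ_R by a straight-line fit:

        import numpy as np

        f_R = np.linspace(0.05, 0.6, 10)               # fraction of energy in rotation
        P0 = np.exp(-3.0 * f_R); P0 /= P0.sum()        # toy statistical prior
        theta_R = 5.0                                  # toy surprisal parameter
        P = P0 * np.exp(-theta_R * f_R); P /= P.sum()  # "observed" distribution

        surprisal = -np.log(P / P0)                    # I(f_R)
        slope, _ = np.polyfit(f_R, surprisal, 1)
        print(f"fitted theta_R = {slope:.2f}")         # recovers ~5.0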

  7. Fuzzy Search Method for Higher Education Information Security

    Directory of Open Access Journals (Sweden)

    Grigory Grigorevich Novikov

    2016-03-01

    Full Text Available The main purpose of the research is to show how to use the fuzzy search method for information security in higher education and for similar purposes. Many sensitive information leaks occur through the legal publishing of non-classified documents. That is why many intelligence services are so fond of the «mosaic» information collection method. This article is about how to prevent it.

  8. Geometrical Fuzzy Search Method for the Business Information Security Systems

    Directory of Open Access Journals (Sweden)

    Grigory Grigorievich Novikov

    2014-12-01

    Full Text Available The main purpose of the article is to show how to use a new fuzzy search method for information security in business and for other purposes. Many sensitive information leaks occur through the legal publishing of non-classified documents. That is why many intelligence services like to use the “mosaic” information collection method so much. This article is about how to prevent it.

  9. A Theoretical Model of Resource-Oriented Music Therapy with Informal Hospice Caregivers during Pre-Bereavement.

    Science.gov (United States)

    Potvin, Noah; Bradt, Joke; Ghetti, Claire

    2018-03-09

    Over the past decade, caregiver pre-bereavement has received increased scholarly and clinical attention across multiple healthcare fields. Pre-bereavement represents a nascent area for music therapy to develop best practices in and an opportunity to establish clinical relevancy in the interdisciplinary team. This study was an exploratory inquiry into the role of music therapy with pre-bereaved informal hospice caregivers. This study intended to articulate (a) what pre-bereavement needs are present for informal hospice caregivers, (b) which of those needs were addressed in music, and (c) the process by which music therapy addressed those needs. A constructivist grounded theory methodology using situational analysis was used. We interviewed 14 currently bereaved informal hospice caregivers who had participated in music therapy with the care recipient. Analysis resulted in a theoretical model of resource-oriented music therapy promoting caregiver resilience. The resource, caregivers' stable caring relationships with care recipients through their pre-illness identities (i.e., spouse, parent, or child), is amplified through music therapy. Engagement with this resource mediates the risk of increased care burden and results in resilience fostering purposefulness and value in caregiving. Resource-oriented music therapy provides a unique clinical avenue for supporting caregivers through pre-bereavement, and was acknowledged by caregivers as a unique and integral hospice service. Within this model, caregivers are better positioned to develop meaning from the experience of providing care through the death of a loved one.

  10. The theoretical study of full spectrum analysis method for airborne gamma-ray spectrometric data

    International Nuclear Information System (INIS)

    Ni Weichong

    2011-01-01

    The spectrum measured in airborne gamma-ray spectrometry is found to be a synthesis of the spectral components of the radioelement sources, by analyzing the constitution of the radioactive sources in an airborne gamma-ray spectrometric survey and establishing models of the gamma-ray measurement. The mathematical equation for analysing airborne gamma-ray full-spectrum data can be expressed in matrix form, and related expansions were developed for mineral resources exploration, environmental radiation measurement, nuclear emergency monitoring, and so on. The theoretical study showed that atmospheric radon can be computed directly from airborne gamma-ray spectrometric data by full spectrum analysis, without the use of additional upward-looking detectors. (authors)
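
    In matrix form, the full-spectrum equation models the measured spectrum y as y = S c, where the columns of S are unit spectral components (potassium, uranium and thorium series, atmospheric radon, background) and c holds the unknown source strengths; with calibrated component shapes, c, including the radon term, follows from non-negative least squares. The sketch below substitutes synthetic Gaussian peaks for calibrated standard spectra:

        import numpy as np
        from scipy.optimize import nnls

        channels = np.arange(256)

        def peak(center, width):
            # Synthetic unit spectral component (placeholder shape).
            s = np.exp(-0.5 * ((channels - center) / width) ** 2)
            return s / s.sum()

        S = np.column_stack([peak(120, 10),    # "K" component
                             peak(180, 12),    # "U" component
                             peak(220, 14),    # "Th" component
                             peak(160, 25)])   # "radon" component

        true_c = np.array([50.0, 20.0, 10.0, 35.0])
        y = S @ true_c + np.random.default_rng(2).normal(0, 0.05, channels.size)

        c_hat, _ = nnls(S, y)          # non-negative source strengths
        print(np.round(c_hat, 1))      # approximately recovers true_c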

  11. Current and future prospects for the application of systematic theoretical methods to the study of problems in physical oceanography

    Energy Technology Data Exchange (ETDEWEB)

    Constantin, A., E-mail: adrian.constantin@kcl.ac.uk [Department of Mathematics, King's College London, Strand, London WC2R 2LS (United Kingdom); Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna (Austria); Johnson, R.S., E-mail: r.s.johnson@ncl.ac.uk [School of Mathematics & Statistics, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)

    2016-09-07

    Highlights: • Systematic theoretical methods in studies of equatorial ocean dynamics. • Linear wave-current interactions in stratified flows. • Exact solutions – Kelvin waves, azimuthal non-uniform currents. • Three-dimensional nonlinear currents. • Hamiltonian formulation for the governing equations and for structure-preserving/enhancing approximations. - Abstract: This essay is a commentary on the pivotal role of systematic theoretical methods in physical oceanography. At some level, there will always be a conflict between theory and experiment/data collection: Which is pre-eminent? Which should come first? This issue appears to be particularly marked in physical oceanography, to the extreme detriment of the development of the subject. It is our contention that the classical theory of fluids, coupled with methods from the theory of differential equations, can play a significant role in carrying the subject, and our understanding, forward. We outline the philosophy behind a systematic theoretical approach, highlighting some aspects of equatorial ocean dynamics where these methods have already been successful, paving the way for much more in the future and leading, we expect, to the better understanding of this and many other types of ocean flow. We believe that the ideas described here promise to reveal a rich and beautiful dynamical structure.

  12. Development of methods for theoretical analysis of nuclear reactors (Phase II), I-V, Part IV, Fuel depletion

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1962-10-01

    This report includes the analysis of the plutonium isotopes in the U-238 depletion chain. Two theoretical approaches for solving the depletion of fuel are shown. One results in a system of differential equations that can be solved only by using electronic calculators; the second, the Machinari-Goto method, enables obtaining analytical equations for approximate values for particular nuclides. In addition, differential equations are given for different approximation levels in calculating Pu-239, as well as relations between the released energy and irradiation.

  13. Studying collaborative information seeking: Experiences with three methods

    DEFF Research Database (Denmark)

    Hyldegård, Jette Seiden; Hertzum, Morten; Hansen, Preben

    2015-01-01

    Studies of collaborative information seeking (CIS) can, however, benefit from a discussion of methodological issues. This chapter describes the application of three methods for collecting and analyzing data in three CIS studies. The three methods are Multidimensional Exploration, used in a CIS study of students’ information behavior during a group assignment; Task-structured Observation, used in a CIS study of patent engineers; and Condensed Observation, used in a CIS study of information-systems development. The three methods are presented in the context of the studies for which they were devised, and the experiences gained using the methods are discussed. The chapter shows that different methods can be used for collecting and analyzing data about CIS incidents. Two of the methods focused on tasks and events in work settings, while the third was applied in an educational setting. Commonalities and differences among the methods are discussed to inform decisions…

  14. IDEF method for designing seismic information system in CTBT verification

    International Nuclear Information System (INIS)

    Zheng Xuefeng; Shen Junyi; Jin Ping; Zhang Huimin; Zheng Jiangling; Sun Peng

    2004-01-01

    Seismic information systems are of great importance for improving the capability of CTBT verification. A large amount of money has been appropriated for research in this field in the U.S. and some other countries in recent years. However, designing and developing a seismic information system involves various technologies of complex system design. This paper discusses the IDEF0 method for constructing function models and the IDEF1x method for making information models systematically, as well as how they are used in designing a seismic information system for CTBT verification. (authors)

  15. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and verify their applicability to early design stages. Several methods were evaluated on their support to synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected...

  16. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    JongHyup Lee

    2016-08-01

    Full Text Available For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost differs between cellular systems, we devise two game models for TDMA/FDMA and CDMA systems employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined on the assumption of an ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections.

  17. A game-theoretic method for cross-layer stochastic resilient control design in CPS

    Science.gov (United States)

    Shen, Jiajun; Feng, Dongqin

    2018-03-01

    In this paper, the cross-layer security problem of cyber-physical systems (CPS) is investigated from the game-theoretic perspective. The physical dynamics of the plant are captured by a stochastic differential game with the cyber-physical influence being considered. The sufficient and necessary condition for the existence of state-feedback equilibrium strategies is given. The attack-defence cyber interactions are formulated by a Stackelberg game intertwined with the stochastic differential game in the physical layer. The condition under which the Stackelberg equilibrium is unique, and the corresponding analytical solutions, are both provided. An algorithm is proposed for obtaining a hierarchical security strategy by solving coupled games, which ensures the operational normalcy and cyber security of the CPS subject to uncertain disturbance and unexpected cyberattacks. Simulation results are given to show the effectiveness and performance of the proposed algorithm.

  18. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks

    Science.gov (United States)

    Lee, JongHyup; Pak, Dohyun

    2016-01-01

    For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost differs between cellular systems, we devise two game models for TDMA/FDMA and CDMA systems employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined on the assumption of an ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections. PMID:27589743

  19. Theoretical treatment of molecular photoionization based on the R-matrix method

    International Nuclear Information System (INIS)

    Tashiro, Motomichi

    2012-01-01

    The R-matrix method has been implemented to treat the molecular photoionization problem based on the UK R-matrix codes. The method was formulated to treat the photoionization process long ago; however, its application has been mostly limited to the photoionization of atoms. Application of the method to valence photoionization as well as inner-shell photoionization processes will be presented.

  20. Electron transfer driven decomposition of adenine and selected analogs as probed by experimental and theoretical methods

    Science.gov (United States)

    Cunha, T.; Mendes, M.; Ferreira da Silva, F.; Eden, S.; García, G.; Bacchus-Montabonel, M.-C.; Limão-Vieira, P.

    2018-04-01

    We report on a combined experimental and theoretical study of electron-transfer-induced decomposition of adenine (Ad) and a selection of analog molecules in collisions with potassium (K) atoms. Time-of-flight negative ion mass spectra have been obtained in a wide collision energy range (6-68 eV in the centre-of-mass frame), providing a comprehensive investigation of the fragmentation patterns of purine (Pu), adenine (Ad), 9-methyl adenine (9-mAd), 6-dimethyl adenine (6-dimAd), and 2-D adenine (2-DAd). Following our recent communication about selective hydrogen loss from the transient negative ions (TNIs) produced in these collisions [T. Cunha et al., J. Chem. Phys. 148, 021101 (2018)], this work focuses on the production of smaller fragment anions. In the low-energy part of the present range, several dissociation channels that are accessible in free electron attachment experiments are absent from the present mass spectra, notably NH2 loss from adenine and 9-methyl adenine. This can be understood in terms of a relatively long transit time of the K+ cation in the vicinity of the TNI tending to enhance the likelihood of intramolecular electron transfer. In this case, the excess energy can be redistributed through the available degrees of freedom inhibiting fragmentation pathways. Ab initio theoretical calculations were performed for 9-methyl adenine (9-mAd) and adenine (Ad) in the presence of a potassium atom and provided a strong basis for the assignment of the lowest unoccupied molecular orbitals accessed in the collision process.

  1. Information theoretic measures of network coordination in high-frequency scalp EEG reveal dynamic patterns associated with seizure termination.

    Science.gov (United States)

    Stamoulis, Catherine; Schomer, Donald L; Chang, Bernard S

    2013-08-01

    How a seizure terminates is still under-studied and, despite its clinical importance, remains an obscure phase of seizure evolution. Recent studies of seizure-related scalp EEGs at frequencies >100 Hz suggest that neural activity, in the form of oscillations and/or neuronal network interactions, may play an important role in preictal/ictal seizure evolution (Andrade-Valenca et al., 2011; Stamoulis et al., 2012). However, the role of high-frequency activity in seizure termination is unknown, if it exists at all. Using information theoretic measures of network coordination, this study investigated ictal and immediate postictal neurodynamic interactions encoded in scalp EEGs from a relatively small sample of 8 patients with focal epilepsy and multiple seizures originating in temporal and/or frontal brain regions, at frequencies ≤ 100 Hz and >100 Hz, respectively. Despite some heterogeneity in the dynamics of these interactions, consistent patterns were also estimated. Specifically, in several seizures, a linear or non-linear increase in high-frequency neuronal coordination during ictal intervals coincided with a corresponding decrease in coordination at frequencies ≤ 100 Hz, a trend which continued during the postictal interval. This may be one of several possible mechanisms that facilitate seizure termination. In fact, inhibition of pairwise interactions between EEGs by other signals in their spatial neighborhood, quantified by negative interaction information, was estimated at frequencies ≤ 100 Hz, at least in some seizures. Copyright © 2013 Elsevier B.V. All rights reserved.
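
    The "negative interaction information" the study estimates has a standard entropy-based estimator. The sketch below is a hedged illustration for discretized signals, not the authors' code; under the convention used here (I(X;Y|Z) - I(X;Y)), negative values indicate that a third signal accounts for, or "inhibits", part of the pairwise coordination. The toy signals and binning scheme are made up.

    import numpy as np
    from collections import Counter

    def entropy(*cols):
        """Joint Shannon entropy (bits) of one or more discrete sequences."""
        counts = Counter(zip(*cols))
        n = sum(counts.values())
        p = np.array(list(counts.values()), dtype=float) / n
        return float(-np.sum(p * np.log2(p)))

    def interaction_information(x, y, z):
        """I(X;Y|Z) - I(X;Y): negative when Z explains away X-Y coordination."""
        return (entropy(x, y) + entropy(x, z) + entropy(y, z)
                - entropy(x) - entropy(y) - entropy(z) - entropy(x, y, z))

    rng = np.random.default_rng(0)
    z = rng.normal(size=4000)
    x = z + 0.5 * rng.normal(size=4000)   # x and y coordinate through z
    y = z + 0.5 * rng.normal(size=4000)
    edges = np.quantile(z, [0.25, 0.5, 0.75])  # quartile binning
    xd, yd, zd = (np.digitize(v, edges) for v in (x, y, z))
    print(interaction_information(xd, yd, zd))  # < 0: redundant ("inhibited") coupling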

  2. Neutron thermalization in absorbing infinite homogeneous media: theoretical methods

    Energy Technology Data Exchange (ETDEWEB)

    Cadilhac, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-11-15

    After a general survey of the theory of neutron thermalization in homogeneous media, a proper formulation is used to introduce a simplified model generalizing both the Horowitz model (generalized heavy free gas approximation) and the proton gas model. When this model is used, the calculation of spectra reduces to the solution of linear second-order differential equations. Since it depends on two arbitrary functions, the model gives a good approximation of any usual moderator for reactor physics purposes. The choice of these functions is discussed from a theoretical point of view; a method based on the consideration of the first two moments of the scattering law is investigated. Finally, the possibility of discriminating between models by using experimental information is considered. (author)

  3. Method of Improving Personal Name Search in Academic Information Service

    Directory of Open Access Journals (Sweden)

    Heejun Han

    2012-12-01

    Full Text Available All academic information on the web or elsewhere has its creator, that is, a subject who has created the information. The subject can be an individual, a group, or an institution, and can even be a nation depending on the nature of the relevant information. Most information is composed of a title, an author, and contents. An essay in the academic information category has metadata including a title, an author, keywords, an abstract, publication data, place of publication, ISSN, and the like. A patent has metadata including the title, an applicant, an inventor, an attorney, IPC, the application number, and the claims of the invention. Most web-based academic information services enable users to search this information by processing the meta-information. An important element is searching by the author field, which corresponds to a personal name. This study suggests a method of efficient indexing, together with an adjacent-operation result ranking algorithm to which phrase-search-based boosting elements are applied, thus improving the accuracy of personal name search results. It also describes a method for providing co-author and related-researcher results when searching personal names. This method can be effectively applied to provide accurate and additional search results in academic information services.
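
    The combination of token matching with a phrase-adjacency boost can be shown in a few lines. The weights and the adjacency rule below are hypothetical stand-ins, not the paper's algorithm: a record scores if all query tokens occur in the author field, and is boosted when they occur adjacently and in order.

    def name_score(query, field, phrase_boost=2.0):
        """Score a name field: 1.0 if all query tokens occur, boosted when
        they occur adjacently and in order (a phrase-style match)."""
        q = query.lower().split()
        f = field.lower().replace(",", "").split()
        if not all(tok in f for tok in q):
            return 0.0
        pos = [f.index(tok) for tok in q]
        adjacent_in_order = all(b - a == 1 for a, b in zip(pos, pos[1:]))
        return phrase_boost if adjacent_in_order else 1.0

    records = ["Heejun Han", "Han Heejun", "Han, Heejun Kim"]
    for r in sorted(records, key=lambda r: -name_score("heejun han", r)):
        print(name_score("heejun han", r), r)   # exact phrase ranks first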

  4. Self-informant Agreement for Personality and Evaluative Person Descriptors: Comparing Methods for Creating Informant Measures.

    Science.gov (United States)

    Simms, Leonard J; Zelazny, Kerry; Yam, Wern How; Gros, Daniel F

    2010-05-01

    Little attention typically is paid to the way self-report measures are translated for use in self-informant agreement studies. We studied two possible methods for creating informant measures: (a) the traditional method in which self-report items were translated from the first- to the third-person and (b) an alternative meta-perceptual method in which informants were directed to rate their perception of the targets' self-perception. We hypothesized that the latter method would yield stronger self-informant agreement for evaluative personality dimensions measured by indirect item markers. We studied these methods in a sample of 303 undergraduate friendship dyads. Results revealed mean-level differences between methods, similar self-informant agreement across methods, stronger agreement for Big Five dimensions than for evaluative dimensions, and incremental validity for meta-perceptual informant rating methods. Limited power reduced the interpretability of several sparse acquaintanceship effects. We conclude that traditional informant methods are appropriate for most personality traits, but meta-perceptual methods may be more appropriate when personality questionnaire items reflect indirect indicators of the trait being measured, which is particularly likely for evaluative traits.

  5. EXPLANATORY METHODS OF MARKETING DATA ANALYSIS – THEORETICAL AND METHODOLOGICAL CONSIDERATIONS

    Directory of Open Access Journals (Sweden)

    Rozalia GABOR

    2010-01-01

    Full Text Available Explanatory methods of data analysis, also called supervised learning methods by some authors, enable researchers to identify and analyse configurations of relations between two or several variables, most of them with high accuracy, as it is possible to test statistical significance by calculating the confidence level associated with validating the relation concerned across the entire population, not only the surveyed sample. The paper presents several of these methods: analysis of variance, analysis of covariance, segmentation, and discriminant analysis, noting for each method its area of applicability in marketing research.

  6. Classifying and Designing the Educational Methods with Information Communications Technologies

    Directory of Open Access Journals (Sweden)

    I. N. Semenova

    2013-01-01

    Full Text Available The article describes the conceptual apparatus for implementing Information Communications Technologies (ICT) in education. The authors suggest classification variants of the related teaching methods according to the following combinations of components: types of student work with information, goals of ICT incorporation into the training process, degrees of individualization, contingent involvement, activity levels and pedagogical field targets, ideology of informational didactics, etc. Each classification can solve educational tasks in the context of a partial paradigm of modern didactics; each kind of method implies a particular combination of activities in the educational environment. The whole spectrum of classifications provides the informational functional basis for adequately selecting the necessary teaching methods in accordance with the specified goals and planned results. Potential variants of ICT implementation methods are given for different teaching models.

  7. Methods for Measuring Productivity in Libraries and Information Centres

    OpenAIRE

    Mohammad Alaaei

    2009-01-01

      Within Information centers, productivity is the result of optimal and effective use of information resources, service quality improvement, increased user satisfaction, pleasantness of working environment, increased motivation and enthusiasm of staff to work better. All contribute to the growth and development of information centers. Thus these centers would need to be familiar with methods employed in productivity measurement. Productivity is one of the criteria for evaluating system perfor...

  8. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of nodes that gather integrated information, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMNs that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  9. State-of-the-Art: Research Theoretical Framework of Information Systems Implementation Research in the Health Sector in Sub-Saharan Africa

    DEFF Research Database (Denmark)

    Tetteh, Godwin Kofi

    2014-01-01

    This study is about the state of the art of the reference theories and theoretical frameworks of information systems implementation research in the health industry in Sub-Saharan countries, from a process perspective. A process-variance framework (Poole et al., 2000; Markus & Robey, 1988...; Shaw & Jarvenpaa, 1997) is employed to examine reference theories in research on information systems implementation in the health sector in the Sub-Saharan region published between 2003 and 2013. Using a number of key words and searching a number of databases, EBSCO, CSA... the process theoretical framework to enhance our insight into successful information systems implementation in the region. We are optimistic that the process-based theoretical framework will be useful for information system practitioners, organisational managers and researchers in the health sector...

  10. Theoretical study of electron transfer mechanism in biological systems with a QM (MRSCI+DFT)/MM method

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Toshikazu [Research Program for Computational Science, RIKEN 2-1, Hirosawa, Wako, Saitama 351-0198 (Japan)

    2007-07-15

    The goal of this project is to understand the charge separation mechanisms in biological systems using molecular orbital theories. Specifically, we focus on charge separation in the photosynthetic reaction center, since its efficiency in using solar energy is extraordinary and the reason for it remains unknown. Here, a QM/MM theoretical scheme is employed to take into account the effects of the surrounding proteins on the pigments. To describe the excited electronic structures involved, a unified MRSCI+DFT theory is newly developed. For atoms in the MM space, a new sampling method based on statistical physics has also been created. Using this theoretical framework, the excited and positively charged states of the special pair, the chlorophyll dimer, are planned to be calculated this year.

  11. Theoretical study of electron transfer mechanism in biological systems with a QM (MRSCI+DFT)/MM method

    International Nuclear Information System (INIS)

    Takada, Toshikazu

    2007-01-01

    The goal of this project is to understand the charge separation mechanisms in biological systems using molecular orbital theories. Specifically, we focus on charge separation in the photosynthetic reaction center, since its efficiency in using solar energy is extraordinary and the reason for it remains unknown. Here, a QM/MM theoretical scheme is employed to take into account the effects of the surrounding proteins on the pigments. To describe the excited electronic structures involved, a unified MRSCI+DFT theory is newly developed. For atoms in the MM space, a new sampling method based on statistical physics has also been created. Using this theoretical framework, the excited and positively charged states of the special pair, the chlorophyll dimer, are planned to be calculated this year.

  12. Unified Theoretical Frame of a Joint Transmitter-Receiver Reduced Dimensional STAP Method for an Airborne MIMO Radar

    Directory of Open Access Journals (Sweden)

    Guo Yiduo

    2016-10-01

    Full Text Available The unified theoretical frame of a joint transmitter-receiver reduced dimensional Space-Time Adaptive Processing (STAP) method is studied for an airborne Multiple-Input Multiple-Output (MIMO) radar. First, based on the diverse characteristics of the transmitted waveform of the airborne MIMO radar, a uniform theoretical frame structure for reduced dimensional joint adaptive STAP is constructed. Based on it, three reduced dimensional STAP fixed structures are established. Finally, three reduced rank STAP algorithms, suitable for a MIMO system, are presented, corresponding to the three reduced dimensional STAP fixed structures. The simulations indicate that the joint adaptive algorithms have preferable clutter suppression and anti-interference performance.

  13. Deriving harmonised forest information in Europe using remote sensing methods

    DEFF Research Database (Denmark)

    Seebach, Lucia Maria

    the need for harmonised forest information can be satisfied using remote sensing methods. In conclusion, the study showed that it is possible to derive harmonised forest information of high spatial detail in Europe with remote sensing. The study also highlighted the imperative provision of accuracy...

  14. 48 CFR 1205.101 - Methods of disseminating information.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Methods of disseminating information. 1205.101 Section 1205.101 Federal Acquisition Regulations System DEPARTMENT OF TRANSPORTATION... disseminating information. (b) The DOT Office of Small and Disadvantaged Business Utilization (S-40), 400 7th...

  15. Barriers and facilitators to preventing pressure ulcers in nursing home residents: A qualitative analysis informed by the Theoretical Domains Framework.

    Science.gov (United States)

    Lavallée, Jacqueline F; Gray, Trish A; Dumville, Jo; Cullum, Nicky

    2018-06-01

    Pressure ulcers are areas of localised damage to the skin and underlying tissue; they can cause pain and immobility and delay recovery, impacting on health-related quality of life. The individuals most at risk of developing a pressure ulcer are those who are seriously ill, elderly, have impaired mobility and/or poor nutrition; thus, many nursing home residents are at risk. The aims were to understand the context of pressure ulcer prevention in nursing homes and to explore the potential barriers and facilitators to evidence-informed practices. Semi-structured interviews were conducted with nursing home nurses, healthcare assistants and managers, National Health Service community-based wound specialist nurses (known in the UK as tissue viability nurses) and a nurse manager in the North West of England. The interview guide was developed using the Theoretical Domains Framework to explore the barriers and facilitators to pressure ulcer prevention in nursing home residents. Data were analysed using a framework analysis, and domains were identified as salient based on their frequency and the potential strength of their impact. 25 participants (nursing home: 2 managers, 7 healthcare assistants, 11 qualified nurses; National Health Service community services: 4 tissue viability nurses, 1 manager) were interviewed. Depending upon the behaviours reported and the context, the same domain could be classified as both a barrier and a facilitator. We identified seven domains as relevant in the prevention of pressure ulcers in nursing home residents, mapping to four "barrier" domains and six "facilitator" domains. The four "barrier" domains were knowledge, physical skills, social influences, and environmental context and resources; the six "facilitator" domains were interpersonal skills, environmental context and resources, social influences, beliefs about capabilities, beliefs about consequences, and social/professional role and identity. Knowledge and insight into these barriers and

  16. Does Macaulay Duration Provide The Most Cost-Effective Immunization Method – A Theoretical Approach

    Directory of Open Access Journals (Sweden)

    Zaremba Leszek

    2017-02-01

    Full Text Available In the following, we offer a theoretical approach that attempts to explain (Comments 1-3) why and when the Macaulay duration concept happens to be a good approximation of a bond's price sensitivity. We are concerned with the basic immunization problem with a single liability to be discharged at a future time q. Our idea is to divide the class K of all shifts a(t) of a term structure of interest rates s(t) into many classes and then to find a sufficient and necessary condition a given bond portfolio, dependent on a class of shifts, must satisfy to secure immunization at time q against all shifts a(t) from that class. For this purpose, we introduce the notions of dedicated duration and dedicated convexity. For each class of shifts, we show how to choose from a bond market under consideration a portfolio with maximal dedicated convexity among all immunizing portfolios. We demonstrate that the portfolio yields the maximal unanticipated rate of return and appears to be uniquely determined as a barbell strategy (a portfolio built of two zero-coupon bonds with maximal and, respectively, minimal dedicated durations). Finally, an open problem addressed to researchers performing empirical studies is formulated.
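
    As a hedged numerical aside, the sketch below illustrates the classic Macaulay construction, not the paper's dedicated duration and convexity measures: it computes a bond's Macaulay duration under a flat yield, and the value weights of a two-zero-coupon barbell whose portfolio duration matches the liability horizon q, with maturities t1 < q < t2 assumed.

    def macaulay_duration(cashflows, times, y):
        """Duration of a bond paying cashflows[i] at times[i], flat yield y."""
        pv = [c / (1 + y) ** t for c, t in zip(cashflows, times)]
        price = sum(pv)
        return sum(t * v for t, v in zip(times, pv)) / price

    def barbell_weights(t1, t2, q):
        """Value weights (w1, w2) on zeros maturing at t1 and t2 so that the
        portfolio duration equals q; a zero's duration equals its maturity."""
        w1 = (t2 - q) / (t2 - t1)
        return w1, 1.0 - w1

    print(macaulay_duration([5, 5, 105], [1, 2, 3], 0.05))  # 3-year 5% par bond, ~2.86
    print(barbell_weights(t1=1.0, t2=10.0, q=4.0))          # (2/3, 1/3)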

  17. Quantitative fluorescence lifetime spectroscopy in turbid media: comparison of theoretical, experimental and computational methods

    International Nuclear Information System (INIS)

    Vishwanath, Karthik; Mycek, Mary-Ann; Pogue, Brian

    2002-01-01

    A Monte Carlo model developed to simulate time-resolved fluorescence propagation in a semi-infinite turbid medium was validated against previously reported theoretical and computational results. Model simulations were compared to experimental measurements of fluorescence spectra and lifetimes on tissue-simulating phantoms for single and dual fibre-optic probe geometries. Experiments and simulations using a single probe revealed that scattering-induced artefacts appeared in fluorescence emission spectra, while fluorescence lifetimes were unchanged. Although fluorescence lifetime measurements are generally more robust to scattering artefacts than are measurements of fluorescence spectra, in the dual-probe geometry scattering-induced changes in apparent lifetime were predicted both from diffusion theory and via Monte Carlo simulation, as well as measured experimentally. In all cases, the recovered apparent lifetime increased with increasing scattering and increasing source-detector separation. Diffusion theory consistently underestimated the magnitude of these increases in apparent lifetime (predicting a maximum increase of ∼15%), while Monte Carlo simulations and experiment were closely matched (showing increases as large as 30%). These results indicate that quantitative simulations of time-resolved fluorescence propagation in turbid media will be important for accurate recovery of fluorophore lifetimes in biological spectroscopy and imaging applications. (author)

  18. Illumination of interior spaces by bended hollow light guides: Application of the theoretical light propagation method

    Energy Technology Data Exchange (ETDEWEB)

    Darula, Stanislav; Kocifaj, Miroslav; Kittler, Richard [ICA, Slovak Academy of Sciences, Bratislava (Slovakia); Kundracik, Frantisek [Department of Experimental Physics, FMPI, Comenius University, Bratislava (Slovakia)

    2010-12-15

    To ensure comfort and healthy conditions in interior spaces, the thermal, acoustic and daylight factors of the environment have to be considered in building design. To achieve effective energy performance in buildings, new technologies and applications are also sought in daylight engineering, such as tubular light guides, which allow the transport of natural light into the building core, reducing energy consumption. Many installations with various geometrical and optical properties can be applied in real buildings. The simplest tubular light guide consists of a transparent cupola, a straight tube with a highly reflective inner surface, and a ceiling cover or diffuser redistributing light into the interior. Such a vertical tubular guide is often used on flat roofs. When the roof construction is inclined, a bend has to be installed in the light guide system. In this case the cupola is set on the sloped roof and collects sunlight and skylight from the visible part of the sky hemisphere, as well as light reflected from the ground and opposite facades. In comparison with a vertical tube, additional light losses and distortions of the propagated light have to be expected in bent tubular light guides. A theoretical model of light propagation was recently published; this study applies it to solve the illuminance distribution on the ceiling cover interface and, further, the illuminance distribution on the working plane in the interior. (author)

  19. Theoretical investigation of dielectric corona pre-ionization TEA nitrogen laser based on transmission line method

    Science.gov (United States)

    Bahrampour, Alireza; Fallah, Robabeh; Ganjovi, Alireza A.; Bahrampour, Abolfazl

    2007-07-01

    This paper models the dielectric corona pre-ionization, capacitor-transfer type of flat-plane transmission line travelling-wave transverse-excited atmospheric pressure nitrogen laser by a non-linear lumped RLC electric circuit. The flat-plane transmission line and the pre-ionizer dielectric are modeled by a lumped linear RLC circuit and a time-dependent non-linear RC circuit, respectively. The main discharge region is treated as a time-dependent non-linear RLC circuit whose resistance also depends on the radiated pre-ionization ultraviolet (UV) intensity; the UV radiation is emitted by the resistance due to the surface plasma on the pre-ionizer dielectric. The theoretical predictions are in very good agreement with the experimental observations. The electric circuit equations (including the ionization rate equations), the equations for the laser level population densities, and the propagation equation for the laser intensity are solved numerically. As a result, the effects of the pre-ionizer dielectric parameters on the electrical behavior and the output laser intensity are obtained.
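
    To make the lumped-circuit idea concrete, here is a hedged sketch integrating a series RLC discharge with constant components; the paper's discharge resistance is non-linear and UV-dependent, and the component values and initial voltage below are illustrative only.

    import numpy as np
    from scipy.integrate import solve_ivp

    R, L, C = 0.5, 50e-9, 10e-9        # ohms, henries, farads (illustrative)

    def rlc(t, y):
        q, i = y                        # capacitor charge and loop current
        return [i, -(R * i + q / C) / L]   # L di/dt + R i + q/C = 0

    # capacitor initially charged to 10 kV, then discharged through R and L
    sol = solve_ivp(rlc, (0, 2e-6), [C * 10e3, 0.0], max_step=1e-9)
    print("peak current (A):", float(np.abs(sol.y[1]).max()))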

  20. Analysis of acquisition patterns : A theoretical and empirical evaluation of alternative methods

    NARCIS (Netherlands)

    Paas, LJ; Molenaar, IW

    The order in which consumers acquire nonconsumable products, such as durable and financial products, provides key information for marketing activities, for example, cross-sell lead generation. This paper advocates the desirable features of nonparametric scaling for analyzing acquisition patterns. We

  1. Analysis of acquisition patterns: A theoretical and empirical evaluation of alternative methods

    NARCIS (Netherlands)

    Paas, L.J.; Molenaar, I.W.

    2005-01-01

    The order in which consumers acquire nonconsumable products, such as durable and financial products, provides key information for marketing activities, for example, cross-sell lead generation. This paper advocates the desirable features of nonparametric scaling for analyzing acquisition patterns. We

  2. Theoretical study on new bias factor methods to effectively use critical experiments for improvement of prediction accuracy of neutronic characteristics

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Takeda, Toshikazu

    2007-01-01

    Extended bias factor methods are proposed with two new concepts, the LC method and the PE method, in order to effectively use critical experiments and to enhance the applicability of the bias factor method for the improvement of the prediction accuracy of neutronic characteristics of a target core. Both methods utilize a number of critical experimental results and produce a semifictitious experimental value with them. The LC and PE methods define the semifictitious experimental values by a linear combination of experimental values and the product of exponentiated experimental values, respectively, and the corresponding semifictitious calculation values by those of calculation values. A bias factor is defined by the ratio of the semifictitious experimental value to the semifictitious calculation value in both methods. We formulate how to determine weights for the LC method and exponents for the PE method in order to minimize the variance of the design prediction value obtained by multiplying the design calculation value by the bias factor. From a theoretical comparison of these new methods with the conventional method which utilizes a single experimental result and the generalized bias factor method which was previously proposed to utilize a number of experimental results, it is concluded that the PE method is the most useful method for improving the prediction accuracy. The main advantages of the PE method are summarized as follows. The prediction accuracy is necessarily improved compared with the design calculation value even when experimental results include large experimental errors. This is a special feature that the other methods do not have. The prediction accuracy is most effectively improved by utilizing all the experimental results. From these facts, it can be said that the PE method effectively utilizes all the experimental results and has a possibility to make a full-scale-mockup experiment unnecessary with the use of existing and future benchmark
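
    The PE combination is easy to state concretely: the bias factor is prod((E_i/C_i)**w_i), the ratio of the semifictitious experimental value to the semifictitious calculation value. In the hedged sketch below, the k_eff values and exponents are made-up illustrations; the paper derives the exponents by minimizing the variance of the design prediction.

    import math

    def pe_bias_factor(experiments, calculations, exponents):
        """Product-of-exponentiated-values bias factor, prod((E/C)**w)."""
        return math.prod((e / c) ** w for e, c, w in
                         zip(experiments, calculations, exponents))

    E = [1.002, 0.995, 1.010]   # measured k_eff from three critical experiments
    C = [0.998, 1.001, 1.006]   # corresponding calculated values
    w = [0.5, 0.3, 0.2]         # illustrative exponents (would be optimized)
    design_calc = 1.004
    print(design_calc * pe_bias_factor(E, C, w))  # bias-corrected prediction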

  3. Theoretical study of the F2 molecule using the variational cellular method

    International Nuclear Information System (INIS)

    Lima, M.A.P.; Leite, J.R.; Fazzio, A.

    1981-02-01

    Variational Cellular Method calculations for F2 have been carried out at several internuclear distances. The ground and excited state potential curves are presented. The overall agreement between the VCM results and ab initio calculations is fairly good. (Author) [pt

  4. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    Directory of Open Access Journals (Sweden)

    Jean-Louis Dornstetter

    2002-12-01

    Full Text Available This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  5. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    OpenAIRE

    Jean-Louis Dornstetter; Daniel Krob; Jean-Yves Thibon; Ekaterina A. Vassilieva

    2002-01-01

    This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  6. Collecting Information for Rating Global Assessment of Functioning (GAF): Sources of Information and Methods for Information Collection.

    Science.gov (United States)

    I H, Monrad Aas

    2014-11-01

    Global Assessment of Functioning (GAF) is an assessment instrument that is known worldwide. It is widely used for rating the severity of illness. Results from evaluations in psychiatry should characterize the patients. Rating of GAF is based on collected information. The aim of the study is to identify the factors involved in collecting information that is relevant for rating GAF, and the gaps in knowledge where further development would likely improve scoring. A literature search was conducted with a combination of thorough hand search and search in the bibliographic databases PubMed, PsycINFO, Google Scholar, and the Campbell Collaboration Library of Systematic Reviews. Collection of information for rating GAF depends on two fundamental factors: the sources of information and the methods for information collection. Sources of information are patients, informants, health personnel, medical records, letters of referral, and police records about violence and substance abuse. Methods for information collection include the many different types of interview - unstructured, semi-structured, structured, interviews for Axis I and II disorders, semi-structured interviews for rating GAF, and interviews of informants - as well as instruments for rating symptoms and functioning, and observation. The different sources of information, and methods for collection, frequently result in inconsistencies in the information collected. The variation in collected information, and the lack of a generally accepted algorithm for combining it, is likely to be important for rated GAF values, but there is a fundamental lack of knowledge about the degree of importance. Research to improve GAF has not reached a high level. Rated GAF values are likely to be influenced by both the sources of information used and the methods employed for information collection, but research-based knowledge about these influences is fundamentally lacking. Further development of

  7. Theoretical study of fiber Raman amplifiers by broadband pumps through moment method

    International Nuclear Information System (INIS)

    Teimorpour, M. H.; Pourmoghadas, A.; Rahimi, L.; Farman, F.; Bahrampour, A.

    2007-01-01

    The governing equations of a Raman optical fiber amplifier with broadband pumps in the steady state form an uncountable system of nonlinear ordinary differential equations. In this paper, the moment method is used to reduce this uncountable system to a system with a finite number of nonlinear ordinary differential equations, which is then solved numerically. It is shown that the moment method is a precise and fast technique for the analysis of optical fiber Raman amplifiers with broadband pumps.

  8. Constructing the principles: Method and metaphysics in the progress of theoretical physics

    Science.gov (United States)

    Glass, Lawrence C.

    This thesis presents a new framework for the philosophy of physics focused on methodological differences found in the practice of modern theoretical physics. The starting point for this investigation is the longstanding debate over scientific realism. Some philosophers have argued that it is the aim of science to produce an accurate description of the world including explanations for observable phenomena. These scientific realists hold that our best confirmed theories are approximately true and that the entities they propose actually populate the world, whether or not they have been observed. Others have argued that science achieves only frameworks for the prediction and manipulation of observable phenomena. These anti-realists argue that truth is a misleading concept when applied to empirical knowledge. Instead, focus should be on the empirical adequacy of scientific theories. This thesis argues that the fundamental distinction at issue, a division between true scientific theories and ones which are empirically adequate, is best explored in terms of methodological differences. In analogy with the realism debate, there are at least two methodological strategies. Rather than focusing on scientific theories as wholes, this thesis takes as units of analysis physical principles which are systematic empirical generalizations. The first possible strategy, the conservative, takes the assumption that the empirical adequacy of a theory in one domain serves as good evidence for such adequacy in other domains. This then motivates the application of the principle to new domains. The second strategy, the innovative, assumes that empirical adequacy in one domain does not justify the expectation of adequacy in other domains. New principles are offered as explanations in the new domain. The final part of the thesis is the application of this framework to two examples. On the first, Lorentz's use of the aether is reconstructed in terms of the conservative strategy with respect to

  9. Theoretical Proof and Empirical Confirmation of a Continuous Labeling Method Using Naturally 13C-Depleted Carbon Dioxide

    Institute of Scientific and Technical Information of China (English)

    Weixin Cheng; Feike A. Dijkstra

    2007-01-01

    Continuous isotope labeling and tracing is often needed to study the transformation, movement, and allocation of carbon in plant-soil systems. However, existing labeling methods have numerous limitations. The present study introduces a new continuous labeling method using naturally 13C-depleted CO2. We theoretically proved that a stable level of 13C-CO2 abundance in a labeling chamber can be maintained by controlling the rate of CO2-free air injection and the rate of ambient airflow, coupled with automatic control of the CO2 concentration using a CO2 analyzer. The theoretical results were tested and confirmed in a 54 day experiment in a plant growth chamber. This new continuous labeling method avoids the use of the radioactive 14C or expensive 13C-enriched CO2 required by existing methods, and therefore eliminates issues of radiation safety or unaffordable isotope cost, as well as creating new opportunities for short- or long-term labeling experiments under a controlled environment.
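
    The steady-state reasoning admits a one-line isotope mass balance: the chamber's 13C signature is the CO2-mass-weighted mix of the injected depleted stream and the ambient stream. The sketch below is a hedged illustration of that balance only; the flow rates, concentrations and delta values are hypothetical, not the paper's operating conditions.

    def chamber_delta13c(flow_tank, conc_tank, delta_tank,
                         flow_ambient, conc_ambient, delta_ambient):
        """delta13C (permil) of chamber CO2 from two mixing streams at steady state."""
        m_tank = flow_tank * conc_tank        # CO2 supplied by the depleted tank
        m_amb = flow_ambient * conc_ambient   # CO2 supplied by ambient airflow
        return (m_tank * delta_tank + m_amb * delta_ambient) / (m_tank + m_amb)

    # e.g. tank CO2 at -35 permil mixed ~1:1 (by CO2 mass) with ambient at -8 permil
    print(chamber_delta13c(1.0, 400e-6, -35.0, 1.0, 400e-6, -8.0))  # about -21.5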

  10. Evaluation of information-theoretic similarity measures for content-based retrieval and detection of masses in mammograms

    International Nuclear Information System (INIS)

    Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y.; Floyd, Carey E.

    2007-01-01

    The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses
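
    As a hedged illustration of this family of entropy-based similarity measures (one plausible member, not the exact IT-CAD measures), mutual information between the gray levels of two regions of interest can be computed from their joint histogram:

    import numpy as np

    def mutual_information(roi_a, roi_b, bins=32):
        """Mutual information (bits) between gray levels of two equal-size ROIs."""
        hist, _, _ = np.histogram2d(roi_a.ravel(), roi_b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                          # avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(1)
    query = rng.integers(0, 256, (64, 64)).astype(float)
    noisy_copy = query + rng.normal(0, 10, query.shape)
    other = rng.integers(0, 256, (64, 64)).astype(float)
    print(mutual_information(query, noisy_copy))  # high: similar ROIs
    print(mutual_information(query, other))       # low: unrelated ROIs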

  11. An Information Theoretical Analysis of Human Insulin-Glucose System Toward the Internet of Bio-Nano Things.

    Science.gov (United States)

    Abbasi, Naveed A; Akan, Ozgur B

    2017-12-01

    Molecular communication is an important tool to understand biological communications with many promising applications in Internet of Bio-Nano Things (IoBNT). The insulin-glucose system is of key significance among the major intra-body nanonetworks, since it fulfills metabolic requirements of the body. The study of biological networks from information and communication theoretical (ICT) perspective is necessary for their introduction in the IoBNT framework. Therefore, the objective of this paper is to provide and analyze for the first time in the literature, a simple molecular communication model of the human insulin-glucose system from ICT perspective. The data rate, channel capacity, and the group propagation delay are analyzed for a two-cell network between a pancreatic beta cell and a muscle cell that are connected through a capillary. The results point out a correlation between an increase in insulin resistance and a decrease in the data rate and channel capacity, an increase in the insulin transmission rate, and an increase in the propagation delay. We also propose applications for the introduction of the system in the IoBNT framework. Multi-cell insulin glucose system models may be based on this simple model to help in the investigation, diagnosis, and treatment of insulin resistance by means of novel IoBNT applications.
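
    The channel-capacity analysis rests on standard discrete-channel machinery. As an illustrative sketch, the Blahut-Arimoto algorithm below computes the capacity of any discrete memoryless channel; the 2x2 transition matrix is a toy stand-in (a "burst/no burst" channel with 10% confusion), not the paper's insulin-glucose channel model.

    import numpy as np

    def blahut_arimoto(P, iters=500):
        """Capacity (bits/use) of a channel given as a row-stochastic P[y|x]."""
        nx = P.shape[0]
        p = np.full(nx, 1.0 / nx)                 # input distribution
        for _ in range(iters):
            q = p[:, None] * P
            q /= q.sum(axis=0, keepdims=True)     # posterior q(x|y)
            r = np.exp(np.sum(P * np.log(q + 1e-300), axis=1))
            p = r / r.sum()                       # Blahut-Arimoto update
        joint = p[:, None] * P
        py = joint.sum(axis=0)
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(p, py)[nz])))

    P = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    print(blahut_arimoto(P))   # ~0.531 bits/use = 1 - H2(0.1)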

  12. On the Adaptation of an Agile Information Systems Development Method

    NARCIS (Netherlands)

    Aydin, M.N.; Harmsen, F.; van Slooten, C.; Stegwee, R.A.

    2005-01-01

    Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This article presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. Two forms

  13. Adaptation of an Agile Information System Development Method

    NARCIS (Netherlands)

    Aydin, M.N.; Harmsen, A.F.; van Hillegersberg, Jos; Stegwee, R.A.; Siau, K.

    2007-01-01

    Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This chapter presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. The

  14. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  15. Theoretical methods for determination of core parameters in uranium-plutonium lattices

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.; Bosevski, T.; Matausek, M.; Stefanovic, D.; Strugar, P.

    1972-01-01

    The prediction of plutonium production in power reactors depends essentially on how the change of neutron energy spectra in a reactor cell during burn-up is determined. In the epithermal region, where the build-up of plutonium occurs, the slowing down effects are particularly important, whereas, on the other hand, the thermal neutron spectrum is strongly influenced by the low-lying plutonium resonances. For accurate analysis, multi-group numerical methods are required, which, applied to burn-up prediction, are extremely laborious and time consuming even for large computers. This paper contains a comprehensive review of the methods of core parameter determination in the uranium-plutonium lattices developed in Yugoslavia during the last few years. Faced with the problem of using small computers, the authors had to find new approaches combining physical evidence and mathematical elegance. The main feature of these approaches is the tendency to proceed with analytical treatment as far as possible and then to include suitable numerical improvements. With this philosophy, which is generally overlooked when using large computers, fast and reasonably accurate methods were developed. The methods include original means for adequate treatment of neutron spectra and cell geometry effects, especially suitable for U-Pu systems. In particular, procedures based on the energy dependent boundary conditions, the discrete energy representation, the improved collision probabilities and the Green function slowing down solutions were developed and applied. Results obtained with these methods are presented and compared with those of the experiments and those obtained with other methods. (author)

  16. Theoretical methods for determination of core parameters in uranium-plutonium lattices

    Energy Technology Data Exchange (ETDEWEB)

    Pop-Jordanov, J.; Bosevski, T.; Matausek, M.; Stefanovic, D.; Strugar, P. [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)

    1972-07-01

    The prediction of plutonium production in power reactors depends essentially on how the change of neutron energy spectra in a reactor cell during burn-up is determined. In the epithermal region, where the build-up of plutonium occurs, the slowing down effects are particularly important, whereas, on the other hand, the thermal neutron spectrum is strongly influenced by the low-lying plutonium resonances. For accurate analysis, multi-group numerical methods are required, which, applied to burn-up prediction, are extremely laborious and time consuming even for large computers. This paper contains a comprehensive review of the methods of core parameter determination in the uranium-plutonium lattices developed in Yugoslavia during the last few years. Faced with the problem of using small computers, the authors had to find new approaches combining physical evidence and mathematical elegance. The main feature of these approaches is the tendency to proceed with analytical treatment as far as possible and then to include suitable numerical improvements. With this philosophy, which is generally overlooked when using large computers, fast and reasonably accurate methods were developed. The methods include original means for adequate treatment of neutron spectra and cell geometry effects, especially suitable for U-Pu systems. In particular, procedures based on the energy dependent boundary conditions, the discrete energy representation, the improved collision probabilities and the Green function slowing down solutions were developed and applied. Results obtained with these methods are presented and compared with those of the experiments and those obtained with other methods. (author)

  17. Sharing information: Mixed-methods investigation of brief experiential interprofessional

    Science.gov (United States)

    Cocksedge, Simon; Barr, Nicky; Deakin, Corinne

    In UK health policy, ‘sharing good information’ is pivotal to improving care quality, safety, and effectiveness. Nevertheless, educators often neglect this vital communication skill. The consequences of brief communication education interventions for healthcare workers are not yet established. This study investigated a three-hour interprofessional experiential workshop (group work, theoretical input, rehearsal) training healthcare staff in sharing information using a clear structure (PARSLEY). Staff in one UK hospital participated. Questionnaires were completed before, immediately after, and eight weeks after training, with semi-structured interviews seven weeks after training. Participants (n=76) were from assorted healthcare occupations (26% non-clinical). Knowledge significantly increased immediately after training. Self-efficacy, outcome expectancy, and motivation to use the structure taught were significantly increased immediately following training and at eight weeks. Respondents at eight weeks (n=35) reported that their practice in sharing information had changed within seven days of training. Seven weeks after training, most interviewees (n=13) reported confidently using the PARSLEY structure regularly in varied settings. All had re-evaluated their communication practice. Brief training altered the self-reported communication behaviour of healthcare staff, with sustained changes in everyday work. As sharing information is central to communication curricula, health policy, and shared decision-making, the effectiveness of brief teaching interventions has economic and educational implications.

  18. The Army Method Revisited: The Historical and Theoretical Backgrounds of the Military Intensive Language Programs.

    Science.gov (United States)

    Bayuk, Milla; Bayuk, Barry S.

    A program currently in use by the military that gives instruction in the so-called "sensitive" languages is based on the "Army Method" which was initiated in military language programs during World War II. Attention to the sensitive language program initiated a review of the programs, especially those conducted by the military intelligence schools…

  19. Measuring subjective meaning structures by the laddering method: Theoretical considerations and methodological problems

    DEFF Research Database (Denmark)

    Grunert, Klaus G.; Grunert, Suzanne C.

    1995-01-01

    Starting from a general model of measuring cognitive structures for predicting consumer behaviour, we discuss laddering as a possible method to obtain estimates of consumption-relevant cognitive structures which will have predictive validity. Four criteria for valid measurement are derived and ap...

  20. Theoretical Significance in Q Methodology: A Qualitative Approach to a Mixed Method

    Science.gov (United States)

    Ramlo, Susan

    2015-01-01

    Q methodology (Q) has offered researchers a unique scientific measure of subjectivity since William Stephenson's first article in 1935. Q's focus on subjectivity includes self-referential meaning and interpretation. Q is most often identified with its technique (Q-sort) and its method (factor analysis to group people); yet, it consists of a…

  1. Systems identification: a theoretical method applied to tracer kinetics in aquatic microcosms

    International Nuclear Information System (INIS)

    Halfon, E.; Georgia Univ., Athens

    1974-01-01

    A mathematical model of radionuclide kinetics in a laboratory microcosm was built and the transfer parameters estimated by multiple regression and system identification techniques. Insight into the functioning of the system was obtained from analysis of the model. Methods employed have allowed movements of radioisotopes not directly observable in the experimental systems to be distinguished. Results are generalized to whole ecosystems

  2. Theoretical study (ab initio and DFT methods) on the acidic dissociation constant of xylenol orange in aqueous solution

    Directory of Open Access Journals (Sweden)

    F. Kiani

    2017-07-01

    Full Text Available Analytical measurement of materials requires exact knowledge of their acid dissociation constant (pKa) values. In recent years, quantum mechanical calculations have been used extensively to study acidities in aqueous solutions, and the results have been compared with experimental values. In this study, a theoretical study of xylenol orange in water solution was carried out by ab initio methods. We calculated the pKa values of xylenol orange in water using high-level ab initio (PM3), DFT (HF, B3LYP/6-31+G(d)), and SCRF methods. The experimental determination of these pKa values is a challenge because xylenol orange has a low solubility in water. We considered several ionization reactions and equilibria in water that constitute the indispensable theoretical basis for calculating the pKa values of xylenol orange. The results show that the calculated pKa values are in reasonable agreement with the experimentally determined values. Therefore, this method can be used to predict such properties for indicators, drugs and other important molecules.

  3. Theoretical and methodological peculiarities of football training for female students

    Directory of Open Access Journals (Sweden)

    Galuza S.S.

    2012-09-01

    Full Text Available General principles of planning and conducting football training with students are considered. The anatomical and physiological features of the female organism are summarized. Directions for improving students' health are considered. It is noted that in Ukrainian higher educational establishments, 70 to 90% of all students have some deviation in their state of health. The necessity of improving health indicators is emphasized, on the basis of forming a stable motivation for regular physical exercise and the use of appropriate effective methods. The most promising and optimal approach is the use of such methods in extracurricular time. It is shown that the complex positive influence of football training on the different systems of the organism improves the physical condition of female students.

  4. Theoretical studies on CH+ ion molecule using configuration interaction method and its spectroscopic properties

    International Nuclear Information System (INIS)

    Machado, F.B.C.

    1985-01-01

    The use of the configuration interaction (CI) method for the calculation of very accurate potential energy curves and dipole moment functions, and their use in understanding the spectroscopic properties of diatomic molecules, is presented. The spectroscopic properties of CH+ and CD+, such as vibrational levels, spectroscopic constants, dipole moments averaged over all vibrational levels, radiative transition probabilities for emission and absorption, and radiative lifetimes, are examined. (M.J.C.) [pt

  5. What is new in the study of differential equations by group theoretical methods

    International Nuclear Information System (INIS)

    Winternitz, P.

    1986-11-01

    Several recent developments have made the application of group theory to the solving of differential equations more powerful than it used to be. Those discussed here are: 1. the advent of symbol-manipulating computer languages that greatly simplify the construction of the symmetry group of an equation; 2. methods of finding all subgroups of a given Lie symmetry group; 3. the theory of infinite dimensional Lie algebras; 4. the combination of group theory and singularity analysis.

  6. A critical assessment of theoretical methods for finding reaction pathways and transition states of surface processes

    International Nuclear Information System (INIS)

    Klimes, Jiri; Michaelides, Angelos; Bowler, David R

    2010-01-01

    The performance of a variety of techniques for locating transition states on potential energy surfaces is evaluated within the density functional theory framework. Diffusion of a water molecule across NaCl(001) and HCl bond breaking on the same surface are treated as general test cases; the former is an example of a low barrier diffusion process and the latter an example of a relatively high barrier covalent bond rupture event. The methods considered include the nudged elastic band (NEB), Dewar, Healy and Stewart (DHS), dimer, constrained optimization (CO), activation-relaxation technique (ART) and one-side growing string (OGS) as well as novel combinations of the DHS with growing string (DHS + GS) and DHS plus climbing image (CI-DHS). A key conclusion to come from this study is that the NEB method is relatively fast, especially when just a single (climbing) image is used. Indeed, using more images represents an unnecessary computational burden for our set of processes. The dimer method exhibits variable performance; being poor for the water diffusion processes, which have small activation energies, but much more efficient for the HCl bond breaking process which has a higher barrier. When only a poor initial guess of the transition state geometry is available, the CI-DHS scheme is one of the most efficient techniques considered. And as a means to quickly establish an approximate minimum energy pathway the DHS + GS scheme offers some potential.
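
    For readers unfamiliar with the NEB mechanics the study evaluates, the following is a minimal sketch on a 2D double-well potential (minima at x = ±1, saddle at the origin with barrier 1). It uses plain gradient descent with spring forces along the band and is illustrative only, far from the paper's DFT setting; the potential, spring constant and step size are all made up.

    import numpy as np

    def potential(r):
        x, y = r
        return (1 - x**2)**2 + y**2

    def gradient(r):
        x, y = r
        return np.array([-4 * x * (1 - x**2), 2 * y])

    def neb(n_images=9, k=1.0, step=0.02, iters=2000):
        # straight-line initial band between the two minima, kicked off-axis
        band = np.linspace([-1.0, 0.0], [1.0, 0.0], n_images)
        band[1:-1, 1] += 0.3
        for _ in range(iters):
            for i in range(1, n_images - 1):
                tau = band[i + 1] - band[i - 1]
                tau /= np.linalg.norm(tau)                 # local tangent
                g = gradient(band[i])
                f_perp = -(g - g.dot(tau) * tau)           # true force, perpendicular part
                f_spring = k * (np.linalg.norm(band[i + 1] - band[i])
                                - np.linalg.norm(band[i] - band[i - 1])) * tau
                band[i] += step * (f_perp + f_spring)
        return band

    band = neb()
    print("barrier estimate:", max(potential(r) for r in band))  # ~1.0 at the saddle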

  7. Experimental and Theoretical Structural Investigation of AuPt Nanoparticles Synthesized Using a Direct Electrochemical Method.

    Science.gov (United States)

    Lapp, Aliya S; Duan, Zhiyao; Marcella, Nicholas; Luo, Long; Genc, Arda; Ringnalda, Jan; Frenkel, Anatoly I; Henkelman, Graeme; Crooks, Richard M

    2018-05-11

    In this report, we examine the structure of bimetallic nanomaterials prepared by an electrochemical approach known as hydride-terminated (HT) electrodeposition. It has been shown previously that this method can lead to deposition of a single Pt monolayer on bulk-phase Au surfaces. Specifically, under appropriate electrochemical conditions and using a solution containing PtCl4(2-), a monolayer of Pt atoms electrodeposits onto bulk-phase Au immediately followed by a monolayer of H atoms. The H atom capping layer prevents deposition of Pt multilayers. We applied this method to ∼1.6 nm Au nanoparticles (AuNPs) immobilized on an inert electrode surface. In contrast to the well-defined, segregated Au/Pt structure of the bulk-phase surface, we observe that HT electrodeposition leads to the formation of AuPt quasi-random alloy NPs rather than the core@shell structure anticipated from earlier reports relating to deposition onto bulk phases. The results provide a good example of how the phase behavior of macro materials does not always translate to the nano world. A key component of this study was the structure determination of the AuPt NPs, which required a combination of electrochemical methods, electron microscopy, X-ray absorption spectroscopy, and theory (DFT and MD).

  8. A diffusion-theoretical method to calculate the neutron flux distribution in multisphere configurations

    International Nuclear Information System (INIS)

    Schuerrer, F.

    1980-01-01

    For characterizing heterogeneous configurations of pebble-bed reactors, both the fine structure of the flux distribution and the determination of the macroscopic neutron-physical quantities are of interest. When calculating system parameters of Wigner-Seitz cells, the usual codes for neutron spectrum calculation neglect the modulation of the neutron flux by the influence of neighbouring spheres. To judge the error arising from that procedure, it is necessary to determine the flux distribution in the surroundings of a spherical fuel element. In the present paper an approximation method to calculate the flux distribution in the two-sphere model is developed. This method is based on the exactly solvable problem of determining the flux of a point source of neutrons in an infinite medium which contains a spherical perturbation zone eccentric to the point source. An iteration method, which superposes secondary fields and alternately satisfies the continuity conditions on the surface of each of the two fuel elements, allows continually improving approximations. (orig.) [de

  9. Impact source identification in finite isotropic plates using a time-reversal method: theoretical study

    International Nuclear Information System (INIS)

    Chen, Chunlin; Yuan, Fuh-Gwo

    2010-01-01

    This paper aims to identify impact sources on plate-like structures based on the synthetic time-reversal (T-R) concept using an array of sensors. The impact source characteristics, namely the impact location and the impact loading time history, are reconstructed using the invariance of the time-reversal concept, reciprocal theory, and signal processing algorithms. Numerical verification for two finite isotropic plates under low and high velocity impacts is performed to demonstrate the versatility of the synthetic T-R method for impact source identification. The results show that the impact location and the time history of the impact force, with various shapes and frequency bands, can be readily obtained with only four sensors distributed around the impact location. The effects of time duration and of inaccuracy in the estimated impact location on the accuracy of the reconstructed force time history using the T-R method are investigated. Since the T-R technique retraces all the multi-paths of reflected waves from the geometrical boundaries back to the impact location, it is well suited to quantifying the impact characteristics of complex structures. In addition, this method is robust against noise, and it is suggested that a small number of sensors is sufficient to quantify the impact source characteristics through simple computation; thus it holds promise for the development of passive structural health monitoring (SHM) systems for impact monitoring in near real-time.

  10. Justification of computational methods to ensure information management systems

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. Owing to the diversity and complexity of the organizational management tasks of a large enterprise, the construction of an information management system requires the establishment of an interconnected complex of means that implements, in the most efficient way, the collection, transfer, accumulation and processing of the information needed by decision makers of different ranks in the governance process. The main trends in the construction of integrated logistics management information systems can be considered to be: the creation of integrated data processing systems by centralizing the storage and processing of data arrays; the organization of computer systems that realize time-sharing; an aggregate-block principle for the integrated logistics; and the use of a wide range of peripheral devices with unified information and hardware interfaces. The main attention is paid to the systematic study of the complex of technical support, in particular the definition of quality criteria for the operation of the technical complex, the development of methods for analysing the information base of management information systems, the definition of requirements for technical means, and methods for the structural synthesis of the major subsystems of the integrated logistics. Thus, the aim is to study, on the basis of a systematic approach, the integrated logistics management information system and to develop a number of methods for the analysis and synthesis of the logistics complex that are suitable for use in the practice of engineering systems design. The objective function of the complex logistics management information system is to gather, transmit and process specified amounts of information in the regulated time intervals with the required degree of accuracy while minimizing the reduced costs of establishing and operating the technical complex. Achieving this objective function requires a certain organization of the interaction of information

  11. THE THEORETICAL AND METHODICAL APPROACH TO AN ASSESSMENT OF A LEVEL OF DEVELOPMENT OF THE ENTERPRISE IN CONDITIONS OF GLOBALIZATION

    Directory of Open Access Journals (Sweden)

    Tatiana Shved

    2016-11-01

    Full Text Available The subject of this article is the theoretical, methodical and practical aspects of enterprise development in conditions of globalization. The purpose of this research is to provide a theoretical and methodical approach to assessing the level of development of an enterprise, based on the relationship between factors and their influence, illustrating the effect of the internal and external environment in which enterprises function, and indicating the level of development of the enterprise. Methodology. The theoretical basis of the study was the examination and rethinking of the main achievements of world and domestic science on the development of enterprises. To achieve the objectives of the research, the following methods were used: systemic and structural analysis, for the formation of methodical approaches to the selection of the factors influencing the development of enterprises; abstract and logical methods, for the formulation of conclusions and proposals; and the method of valuation and expert assessment, for the implementation of the proposed theoretical and methodical approach to assessing the level of development of the enterprise in conditions of globalization. The result of the research is the proposed theoretical and methodical approach to assessing the level of development of the enterprise in conditions of globalization, which is associated with the idea of the development of the enterprise as a system with inputs (factors influencing the development) and outputs (indicators of the level of enterprise development within these factors). The chosen factors are resources, financial-economic activity, innovation and investment activities, competition, government influence, and foreign trade. Indicators that express these factors are capital productivity, labour productivity and material efficiency within the first factor; and the profitability of the activity, the coefficient of current assets, the total liquidity coefficient, financial stability

  12. Oxygen termination of homoepitaxial diamond surface by ozone and chemical methods: An experimental and theoretical perspective

    Science.gov (United States)

    Navas, Javier; Araujo, Daniel; Piñero, José Carlos; Sánchez-Coronilla, Antonio; Blanco, Eduardo; Villar, Pilar; Alcántara, Rodrigo; Montserrat, Josep; Florentin, Matthieu; Eon, David; Pernot, Julien

    2018-03-01

    Phenomena related to the diamond surface govern the global behaviour of both power electronic and biosensor devices. In particular, H- or O-terminations lead to wide variations in their characteristics. To study the origins of such effects in greater depth, different methods to achieve oxygen-terminated diamond were investigated following a multi-technique approach. DFT calculations were then performed to understand the different configurations between the C and O atoms. Three methods for O-terminating the diamond surface were applied: two physical methods with ozone at different pressures, and an acid chemical treatment. X-ray photoelectron spectroscopy, spectroscopic ellipsometry, HRTEM, and EELS were used to characterize the oxygenated surface. Periodic-DFT calculations were undertaken to understand the effect of the different ways in which the oxygen atoms are bonded to carbon atoms on the diamond surface. XPS results showed the presence of hydroxyl or ether groups, composed of single C-O bonds, and the acid treatment resulted in the highest amount of O on the diamond surface. In turn, ellipsometry showed that the different treatments led to the surface having different optical properties, such as a greater refractive index and extinction coefficient in the case of the sample subjected to acid treatment. TEM analysis showed that applying a temperature treatment improved the distribution of the oxygen atoms at the interface, giving a thinner oxygen layer at each position and higher interfacial coverage. Finally, DFT calculations showed both an increase in the number of preferential electron transport pathways when π bonds and ether groups appear in the system, and the presence of states in the middle of the band gap when there are π bonds, C=C or C=O.

  13. Usability Evaluation Methods for Special Interest Internet Information Services

    Directory of Open Access Journals (Sweden)

    Eva-Maria Schön

    2014-06-01

    Full Text Available The internet provides a wide range of scientific information for different areas of research, used by the related scientific communities. Often the design or architecture of these web pages does not correspond to the mental model of their users. As a result, the desired information is difficult to find. Methods established by Usability Engineering and User Experience can help to increase the appeal of scientific internet information services by analyzing the users' requirements. This paper describes a procedure to analyze and optimize scientific internet information services that can be accomplished with relatively low effort. It consists of a combination of methods that have already been successfully applied in practice: Personas, usability inspections, Online Questionnaire, Kano model and Web Analytics.

  14. Interface methods for using intranet portal organizational memory information system.

    Science.gov (United States)

    Ji, Yong Gu; Salvendy, Gavriel

    2004-12-01

    In this paper, an intranet portal is considered as an information infrastructure (organizational memory information system, OMIS) supporting organizational learning. The properties and the hierarchical structure of information and knowledge in an intranet portal OMIS were identified as a problem for the navigation tools of an intranet portal interface. The problem relates to the navigation and retrieval functions of an intranet portal OMIS and is expected to adversely affect user performance, satisfaction, and usefulness. To solve the problem, a conceptual model for navigation tools of an intranet portal interface was proposed, and an experiment using a crossover design was conducted with 10 participants. In the experiment, a separate access method (tabbed tree tool) was compared to a unified access method (single tree tool). The results indicate that each information/knowledge repository for which a user has different structural knowledge should be handled separately, with separate access, to increase user satisfaction and the usefulness of the OMIS and to improve user performance in navigation.

  15. Theoretical methods for creep and stress relaxation studies of SSC coil

    International Nuclear Information System (INIS)

    McAdams, J.; Markley, F.

    1992-04-01

    Extrapolation of laboratory measurements of SSC coil properties to the actual construction of SSC magnets requires mathematical models of the experimental data. A variety of models were used to approximate the data collected from creep and stress relaxation experiments performed on Kapton film and SSC coil samples. The coefficients for these mathematical models were found by performing a least-squares fit via the program MINUIT. Once the semiempirical expressions for the creep data were found, they were converted to expressions for stress relaxation using an approximate inversion of the Laplace integral relating the two processes. The data sets from creep experiments were also converted directly to stress relaxation data by numerical integration. Both of these methods allow comparison of data from two different ways of measuring viscoelastic properties. Three companion papers presented at this conference will cover: stress relaxation in the SSC 50 mm dipole coil; measurement of the elastic modulus of Kapton perpendicular to the plane of the film at room and cryogenic temperatures; and the temperature dependence of the viscoelastic properties of SSC coil insulation (Kapton).

  16. Method of experimental and theoretical modeling for multiple pressure tube rupture for RBMK reactor

    International Nuclear Information System (INIS)

    Medvedeva, N.Y.; Goldstein, R.V.; Burrows, J.A.

    2001-01-01

    The rupture of single RBMK reactor channels has occurred at a number of stations with a variety of initiating events. It is assumed in RBMK Safety Cases that the force of the escaping fluid will not cause neighbouring channels to break. This assumption has not been justified. A chain reaction of tube breaks could over-pressurise the reactor cavity, leading to catastrophic failure of the containment. To validate the claims of the RBMK Safety Cases, the Electrogorsk Research and Engineering Centre, in collaboration with experts from the Institute of Mechanics of RAS, has developed a method of interacting multiscale physical and mathematical modelling for the coupled thermophysical, hydro- and gas-dynamic processes and the deformation and failure processes causing and/or accompanying potential failures and design and beyond-design RBMK reactor accidents. To realise the method, a set of rigs, physical and mathematical models and specialized computer codes is being created. This article sets out an experimental philosophy and programme for achieving this objective, to resolve the question of the credibility of multiple fuel channel rupture in RBMK. (author)

  17. Development and implementation of theoretical methods for the description of electronically core-excited states

    Energy Technology Data Exchange (ETDEWEB)

    Wenzel, Jan

    2016-03-23

    My PhD project mainly consists of two important parts. One was to enhance and develop variants of the core-valence-separation algebraic-diagrammatic-construction (CVS-ADC) method and implement all approaches efficiently in the adcman program, which is part of the Q-chem program package. Secondly, I benchmarked these implementations and simulated X-ray absorption spectra of small- and medium-sized molecules from different fields. In this thesis, I present my implementations, as well as the results and applications obtained with the CVS-ADC methods, and give a general introduction to quantum chemical methods. At first, I implemented the CVS-ADC approach up to the extended second order in an efficient way. The program is able to deal with systems of up to 500 basis functions in an adequate computational time, which allows for accurate calculations of medium-sized closed-shell molecules, e.g. acenaphthenequinone (ANQ). Afterwards, the CVS-ADC implementation was extended for the first time to deal with open-shell systems, i.e. ions and radicals, which implies a treatment of unrestricted wave functions and spin-orbitals. The resulting method is denoted as CVS-UADC(2)-x. For the first time, I applied the CVS approximation to the third-order ADC scheme, derived the working equations, and implemented the CVS-ADC(3) method in adcman. As the last step, I applied the CVS formalism for the first time to the ISR approach to enable calculations of core-excited state properties and densities. To benchmark all restricted and unrestricted CVS-ADC/CVS-ISR methods up to third order in perturbation theory, I chose a set of small molecules, e.g. carbon monoxide (CO). The calculated values of core-excitation energies, transition moments and static dipole moments are compared with experimental data or other approaches, thereby estimating complete basis set (CBS) limits. Furthermore, a comprehensive study of different basis sets is performed. In combination with the CBS limit of the aug

  18. The theoretical study of passive and active optical devices via planewave based transfer (scattering) matrix method and other approaches

    Energy Technology Data Exchange (ETDEWEB)

    Zhuo, Ye [Iowa State Univ., Ames, IA (United States)

    2011-01-01

    In this thesis, we theoretically study electromagnetic wave propagation in several passive and active optical components and devices, including 2-D photonic crystals, straight and curved waveguides, and organic light-emitting diodes (OLEDs). Several optical designs are also presented, such as organic photovoltaic (OPV) cells and solar concentrators. The first part of the thesis focuses on theoretical investigation. First, the plane-wave-based transfer (scattering) matrix method (TMM) is briefly described, with a short review of photonic crystals and of other numerical methods used to study them (Chapters 1 and 2). Next, the numerical method itself is investigated in detail and further developed to deal with more complex optical systems. In Chapter 3, TMM is extended to curvilinear coordinates to study curved nanoribbon waveguides; the problem of a curved structure is transformed into an equivalent one of a straight structure with spatially dependent tensors of dielectric constant and magnetic permeability. In Chapter 4, a new set of localized basis orbitals is introduced to locally represent the electromagnetic field in photonic crystals as an alternative to the plane-wave basis. The second part of the thesis focuses on the design of optical devices. First, two examples of TMM applications are given: the design of metal grating structures as replacements for ITO to enhance the optical absorption in OPV cells (Chapter 6), and the design of the same structure to enhance the light extraction of OLEDs (Chapter 7). Next, two design examples by the ray tracing method are given, including applying a microlens array to enhance the light extraction of OLEDs (Chapter 5) and an all-angle wide-wavelength design of a solar concentrator (Chapter 8). In summary, this dissertation has extended TMM, making it capable of treating complex optical systems. Several optical designs by TMM and the ray tracing method are also given as a full complement of this

  19. Finite element methods for engineering sciences. Theoretical approach and problem solving techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chaskalovic, J. [Ariel University Center of Samaria (Israel); Pierre and Marie Curie (Paris VI) Univ., 75 (France). Inst. Jean le Rond d' Alembert

    2008-07-01

    This self-tutorial offers a concise yet thorough grounding in the mathematics necessary for successfully applying FEMs to practical problems in science and engineering. The unique approach first summarizes and outlines the finite-element mathematics in general and then, in the second and major part, formulates problem examples that clearly demonstrate the techniques of functional analysis via numerous and diverse exercises. The solutions of the problems are given directly afterwards. Using this approach, the author motivates and encourages the reader to actively acquire knowledge of finite-element methods instead of passively absorbing the material, as in most standard textbooks. The enlarged English-language edition, based on the original French, also contains a chapter on the approximation steps derived from the description of nature with differential equations, applied to the specific model to be used. Furthermore, an introduction to tensor calculus using distribution theory offers further insight for readers with different mathematical backgrounds. (orig.)

  20. Theoretical treatment of photodissociation of water by time-dependent quantum mechanical methods

    International Nuclear Information System (INIS)

    Weide, K.

    1993-01-01

    An algorithm for wavepacket propagation, based on Kosloff's method of expanding the time evolution operator in terms of Chebyshev polynomials, and some details of its implementation are described. With the programs developed, quantum-mechanical calculations for up to three independent molecular coordinates are possible and feasible, and therefore photodissociation of non-rotating triatomic molecules can be treated exactly. The angular degree of freedom here is handled by expansion in terms of free diatomic rotor states. The time-dependent wave packet picture is compared with the more traditional view of stationary wave functions, and both are used to interpret computational results where appropriate. Two-dimensional calculations have been performed to explain several experimental observations about water photodissociation. All calculations are based on ab initio potential energy surfaces, and it is explained in each case why it is reasonable to neglect the third degree of freedom. Many experimental results are reproduced quantitatively. (orig.)
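
    The Chebyshev expansion at the heart of this propagation scheme can be demonstrated compactly. In the sketch below a random Hermitian matrix stands in for the molecular Hamiltonian (a placeholder; real wavepacket codes apply H on a grid and only estimate its spectral bounds rather than diagonalizing): exp(-iHt), with hbar = 1, is expanded in Chebyshev polynomials of the rescaled Hamiltonian with Bessel-function coefficients, and the result is checked against exact diagonalization.

```python
import numpy as np
from scipy.special import jv

def chebyshev_propagate(H, psi, t, n_terms=None):
    # Apply exp(-i*H*t) to psi via the expansion
    #   exp(-i*H*t) = exp(-i*e_bar*t) * sum_n (2 - d_n0) (-i)^n J_n(a) T_n(Hn)
    # where H = de*Hn + e_bar*I has been rescaled so Hn has spectrum [-1, 1]
    # and a = de*t.  (Full diagonalization here is only for the demo bounds.)
    evals = np.linalg.eigvalsh(H)
    e_bar = 0.5 * (evals[-1] + evals[0])
    de = 0.5 * (evals[-1] - evals[0])
    H_norm = (H - e_bar * np.eye(len(H))) / de
    a = de * t
    if n_terms is None:
        n_terms = int(a) + 40            # J_n(a) decays super-fast once n > a
    phi_prev, phi = psi, H_norm @ psi    # T_0 psi and T_1 psi
    acc = jv(0, a) * phi_prev + 2.0 * (-1j) * jv(1, a) * phi
    for n in range(2, n_terms):
        phi_prev, phi = phi, 2.0 * (H_norm @ phi) - phi_prev   # recurrence
        acc = acc + 2.0 * (-1j) ** n * jv(n, a) * phi
    return np.exp(-1j * e_bar * t) * acc

# Accuracy check against exact diagonalization on a random Hermitian matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
H = 0.5 * (A + A.T)
psi = rng.normal(size=50) + 0j
psi /= np.linalg.norm(psi)
w, V = np.linalg.eigh(H)
exact = V @ (np.exp(-1j * w * 2.0) * (V.conj().T @ psi))
print(np.linalg.norm(chebyshev_propagate(H, psi, 2.0) - exact))  # ~1e-12
```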

  1. Field-theoretic methods in strongly-coupled models of general gauge mediation

    International Nuclear Information System (INIS)

    Fortin, Jean-François; Stergiou, Andreas

    2013-01-01

    An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current-current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry breaking arises both from a hidden sector and dynamically. Our results suggest that strongly-coupled models of supersymmetry breaking are naturally split.

  2. Field-theoretic Methods in Strongly-Coupled Models of General Gauge Mediation

    CERN Document Server

    Fortin, Jean-Francois

    2013-01-01

    An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current-current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry breaking arises both from a hidden sector and dynamically.

  3. The Theoretical and Methodical Foundations of Formation and Development of the Managerial Knowledge of Enterprise

    Directory of Open Access Journals (Sweden)

    Denysiuk Olga V.

    2017-05-01

    Full Text Available The article defines the relationship between the concepts of «managerial competency» and «managerial knowledge of enterprise». By generalizing the conceptual provisions, a typology of the enterprise's competencies has been developed. In order to clarify the contents of the concept of managerial competency, the classification attributes of the managerial knowledge of enterprise have been allocated. The need to use management standards (management of business processes, staff, quality, projects, and production in the processes of formation and development of the managerial competencies of the enterprise has been substantiated. The composition of the methodical support for the formation and development of the managerial competencies of enterprise has been provided.

  4. INVESTIGATIONS OF THE FLOW INTO A STORAGE TANK BY MEANS OF ADVANCED EXPERIMENTAL AND THEORETICAL METHODS

    DEFF Research Database (Denmark)

    Jordan, Ulrike; Shah, Louise Jivan; Furbo, Simon

    2003-01-01

    The aim of the investigations is to study the influence of the inlet device geometry and of the operating conditions (the flow rate, draw-off volume, and temperatures) on the thermal stratification in the tank. Measurements of the flow and temperature fields were carried out with two visualization techniques. To visualize the flow field, a method called Particle Image Velocimetry (PIV) was applied: particles with a size of 1 to 10 μm were seeded in the water and then illuminated by a laser within a narrow plane. In order to measure the three velocity components of the flow within the plane, the particle displacements between laser pulses... Because the luminescence intensity depends on the water temperature, the temperature fields in the tank can be visualized and also recorded with a camera. The measurements were compared with calculations of the flow and temperature fields carried out with the Computational Fluid Dynamics (CFD) tool Fluent. In future...

  5. An Improved Information Hiding Method Based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Minghai Yao

    2015-01-01

    Full Text Available A novel biometric authentication information hiding method based on sparse representation is proposed for enhancing the security of biometric information transmitted in the network. In order to make good use of the abundant information in the cover image, the sparse representation method is adopted to exploit the correlation between the cover and biometric images. Thus, the biometric image is divided into two parts: the first part is the reconstructed image, and the other part is the residual image. The biometric authentication image cannot be restored from either part alone. The residual image and the sparse representation coefficients are embedded into the cover image. Then, to attract less attention from attackers, a visual attention mechanism is employed to select the embedding location and embedding sequence of the secret information. Finally, a reversible watermarking algorithm based on the histogram is utilized for embedding the secret information. To verify the validity of the algorithm, the PolyU multispectral palmprint and the CASIA iris databases are used as biometric information. The experimental results show that the proposed method exhibits good security, invisibility, and high capacity.
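
    The abstract names a histogram-based reversible watermarking step without specifying the variant. The sketch below assumes the classic histogram-shifting construction (a peak/zero bin pair, Ni et al. style), which is one standard way to realize such a step; the cover image and payload are toy placeholders.

```python
import numpy as np

def hs_embed(img, bits):
    # Histogram-shifting reversible embedding: shift the grey levels between
    # the histogram peak and an empty bin up by one to open a gap, then code
    # each payload bit into the pixels that sit at the peak value.
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    empty = np.where(hist[peak + 1:] == 0)[0]
    if empty.size == 0:
        raise ValueError("no empty bin above the peak")
    zero = peak + 1 + int(empty[0])
    if hist[peak] < len(bits):
        raise ValueError("payload exceeds embedding capacity")
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1       # open the gap at peak + 1
    flat = out.ravel()                          # view; edits write through
    k = 0
    for i in range(flat.size):                  # bit 1 -> peak + 1, bit 0 -> peak
        if flat[i] == peak and k < len(bits):
            flat[i] += bits[k]
            k += 1
    return out.astype(np.uint8), peak, zero     # (peak, zero) needed to extract

rng = np.random.default_rng(1)
cover = rng.integers(0, 200, size=(64, 64)).astype(np.uint8)
stego, peak, zero = hs_embed(cover, [1, 0, 1, 1, 0, 1])
print("peak/zero bins:", peak, zero)
```

    Extraction reverses the steps: pixels at the peak value decode 0, pixels at peak + 1 decode 1, and the shifted range is moved back down, recovering the cover image exactly.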

  6. System and method for acquisition management of subject position information

    Science.gov (United States)

    Carrender, Curt

    2005-12-13

    A system and method for acquisition management of subject position information that utilizes radio frequency identification (RF ID) to store position information in position tags. Tag programmers receive position information from external positioning systems, such as the Global Positioning System (GPS), from manual inputs, such as keypads, or other tag programmers. The tag programmers program each position tag with the received position information. Both the tag programmers and the position tags can be portable or fixed. Implementations include portable tag programmers and fixed position tags for subject position guidance, and portable tag programmers for collection sample labeling. Other implementations include fixed tag programmers and portable position tags for subject route recordation. Position tags can contain other associated information such as destination address of an affixed subject for subject routing.

  7. System and method for acquisition management of subject position information

    Energy Technology Data Exchange (ETDEWEB)

    Carrender, Curt [Morgan Hill, CA

    2007-01-23

    A system and method for acquisition management of subject position information that utilizes radio frequency identification (RF ID) to store position information in position tags. Tag programmers receive position information from external positioning systems, such as the Global Positioning System (GPS), from manual inputs, such as keypads, or other tag programmers. The tag programmers program each position tag with the received position information. Both the tag programmers and the position tags can be portable or fixed. Implementations include portable tag programmers and fixed position tags for subject position guidance, and portable tag programmers for collection sample labeling. Other implementations include fixed tag programmers and portable position tags for subject route recordation. Position tags can contain other associated information such as destination address of an affixed subject for subject routing.

  8. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets, and we report results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results reveal how the similarity between the markets differs across time periods and show that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, which show that the method can distinguish the markets of different areas from the resulting phylogenetic trees. These results demonstrate that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiological time series but also to financial time series.
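
    The abstract does not reproduce the information-categorization distance itself, so the sketch below substitutes a generic correlation-based distance purely to illustrate the downstream step: turning a matrix of pairwise market distances into a phylogenetic-style tree by hierarchical clustering. The return series and index names are synthetic placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Toy daily-return series for four hypothetical indices: two share a common
# factor ("US_*"), two are independent of it ("CN_*").
rng = np.random.default_rng(7)
common = rng.normal(0.0, 0.01, 500)
series = {
    "US_A": common + rng.normal(0.0, 0.004, 500),
    "US_B": common + rng.normal(0.0, 0.004, 500),
    "CN_A": rng.normal(0.0, 0.01, 500),
    "CN_B": rng.normal(0.0, 0.01, 500),
}
names = list(series)

# Stand-in distance d = sqrt(2 * (1 - rho)); the paper's method would
# replace this with the distance produced by information categorization.
n = len(names)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        rho = np.corrcoef(series[names[i]], series[names[j]])[0, 1]
        D[i, j] = D[j, i] = np.sqrt(2.0 * (1.0 - rho))

# Average-linkage clustering on the condensed distance matrix; the merge
# order mirrors the phylogenetic-tree grouping by region.
print(linkage(squareform(D), method="average"))
```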

  9. Fundamental energy limits of SET-based Brownian NAND and half-adder circuits. Preliminary findings from a physical-information-theoretic methodology

    Science.gov (United States)

    Ercan, İlke; Suyabatmaz, Enes

    2018-06-01

    The saturation in the efficiency and performance scaling of conventional electronic technologies has brought about the development of novel computational paradigms. Brownian circuits are among the promising alternatives that can exploit fluctuations to increase the efficiency of information processing in nanocomputing. A Brownian cellular automaton, in which signals propagate randomly and are driven by local transition rules, can be made computationally universal by embedding arbitrary asynchronous circuits on it. One potential realization of such circuits is via single electron tunneling (SET) devices, since SET technology enables the simulation of noise and fluctuations in a fashion similar to Brownian search. In this paper, we perform a physical-information-theoretic analysis of the efficiency limitations of Brownian NAND and half-adder circuits implemented using SET technology. The method we employ establishes a solid ground for studying the computational and physical features of this emerging technology on an equal footing, and yields fundamental lower bounds that provide valuable insight into how far its efficiency can be improved in principle. To provide a basis for comparison, we also analyze a NAND gate and half-adder circuit implemented in complementary metal oxide semiconductor technology, showing how the fundamental bounds of the Brownian circuits compare against a conventional paradigm.

  10. Theoretical comparison of performance using transfer functions for reactivity meters based on inverse kinetic method and simple feedback method

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro; Tashiro, Shoichi; Tojo, Masayuki

    2017-01-01

    The performance of two digital reactivity meters, one based on the conventional inverse kinetic method and the other based on simple feedback theory, is compared analytically using their respective transfer functions. The latter meter was proposed by one of the authors. It is shown that the performance of the two reactivity meters becomes almost identical when proper system parameters are selected for each. A new correlation between the system parameters of the two reactivity meters is found. With this correlation, filter designers can easily determine the system parameters of the respective reactivity meters that yield identical performance. (author)
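
    For reference, here is a minimal sketch of the conventional inverse-kinetics computation that the first meter is based on, assuming the textbook point-kinetics equations. The six-group delayed-neutron constants and the prompt generation time below are typical illustrative values, not those of any reactor analysed in the paper.

```python
import numpy as np

# Typical six-group delayed-neutron data (illustrative placeholders).
beta_i = np.array([2.11e-4, 1.40e-3, 1.26e-3, 2.53e-3, 7.40e-4, 2.70e-4])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # decay, 1/s
beta = beta_i.sum()
LAM = 2.0e-5                                                   # gen. time, s

def inverse_kinetics(t, n):
    # Reactivity history from a neutron-density history n(t) by inverting
    # the point-kinetics equations:
    #   rho = beta + LAM*(dn/dt)/n - (LAM/n) * sum_i lam_i * C_i,
    # with precursors advanced by dC_i/dt = (beta_i/LAM) n - lam_i C_i.
    dt = t[1] - t[0]
    C = beta_i / (LAM * lam_i) * n[0]        # equilibrium precursors at t = 0
    dndt = np.gradient(n, dt)
    rho = np.empty_like(n)
    for k in range(t.size):
        rho[k] = beta + LAM * dndt[k] / n[k] - LAM * np.dot(lam_i, C) / n[k]
        C += dt * (beta_i / LAM * n[k] - lam_i * C)     # explicit Euler step
    return rho

t = np.linspace(0.0, 10.0, 10001)
n = 1.0 + 0.02 * t                           # slowly ramping neutron density
print(inverse_kinetics(t, n)[::2500])        # small positive reactivity (dk/k)
```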

  11. Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities

    KAUST Repository

    Nielsen, Frank

    2016-12-09

    Information-theoretic measures, such as the entropy, the cross-entropy and the Kullback-Leibler divergence between two mixture models, are core primitives in many signal processing tasks. Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on the entropy, the cross-entropy, the Kullback-Leibler and the α-divergences of mixtures. We illustrate the versatility of the method by reporting our experiments for approximating the Kullback-Leibler and the α-divergences between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures and Gamma mixtures.
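
    For context, the costly Monte Carlo estimation that such closed-form bounds are designed to avoid or bracket looks roughly like this for two univariate Gaussian mixtures (all mixture parameters here are arbitrary placeholders):

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def mixture_logpdf(x, weights, mus, sigmas):
    # Log-density of a 1-D Gaussian mixture via numerically stable logsumexp.
    return logsumexp(np.log(weights) + norm.logpdf(x[:, None], mus, sigmas),
                     axis=1)

def kl_monte_carlo(p, q, n=200_000, seed=0):
    # KL(p || q) = E_p[log p(X) - log q(X)], estimated by sampling from p.
    rng = np.random.default_rng(seed)
    w, mu, sig = p
    comp = rng.choice(len(w), size=n, p=w)      # draw component labels,
    x = rng.normal(mu[comp], sig[comp])         # then draw from each component
    return float(np.mean(mixture_logpdf(x, *p) - mixture_logpdf(x, *q)))

p = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
q = (np.array([0.3, 0.7]), np.array([-0.5, 1.5]), np.array([0.7, 0.7]))
print(kl_monte_carlo(p, q))   # stochastic; deterministic bounds bracket this
```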

  12. Stacking interactions between carbohydrate and protein quantified by combination of theoretical and experimental methods.

    Directory of Open Access Journals (Sweden)

    Michaela Wimmerová

    Full Text Available Carbohydrate-receptor interactions are an integral part of biological events. They play an important role in many cellular processes, such as cell-cell adhesion, cell differentiation and in-cell signaling. Carbohydrates can interact with a receptor through several types of intermolecular interactions. One of the most important is the interaction of a carbohydrate's apolar part with aromatic amino acid residues, known as the dispersion or CH/π interaction. In the study presented here, we attempted for the first time to quantify how the CH/π interaction contributes to a more general carbohydrate-protein interaction. We combined an experimental approach, creating single and double point mutants, with high-level computational methods, and applied both to Ralstonia solanacearum (RSL) lectin complexes with α-L-Me-fucoside. Experimentally measured binding affinities were compared with computed carbohydrate-aromatic amino acid residue interaction energies. Experimental binding affinities for the RSL wild type and the phenylalanine and alanine mutants were -8.5, -7.1 and -4.1 kcal/mol, respectively. These affinities agree with the computed dispersion interaction energies between carbohydrate and aromatic amino acid residues for the RSL wild type and the phenylalanine mutant, with values of -8.8 and -7.9 kcal/mol, but not for the alanine mutant, where the interaction energy was -0.9 kcal/mol. Molecular dynamics simulations show that the discrepancy can be caused by the creation of a new hydrogen bond between the α-L-Me-fucoside and RSL. The observed results suggest that in this and similar cases the carbohydrate-receptor interaction can be driven mainly by the dispersion interaction.

  13. Discussion of a method for providing general risk information by linking with the nuclear information

    International Nuclear Information System (INIS)

    Shobu, Nobuhiro; Yokomizo, Shirou; Umezawa, Sayaka

    2004-06-01

    'Risk information navigator (http://www.ricotti.jp/risknavi/)', an internet tool for arousing public interest and fostering people's risk literacy, has been developed as content for the official website of Techno Community Square 'RICOTTI' (http://www.ricotti.jp) at TOKAI village. In this report we classified the risk information into the fields 'Health/Daily Life', 'Society/Crime/Disaster' and 'Technology/Environment/Energy' for the internet tool contents. According to these categories we discussed a method for providing various risk information on general fields by linking it with information on the nuclear field. The web contents are attached to this report on CD-R media. (author)

  14. Theoretical and Experimental Studies of Dissimilar Secondary Metallurgy Methods for Improving Steel Cleanliness

    Science.gov (United States)

    Pitts-Baggett, April

    Due to continually increasing industry demand for clean steels, a multi-depth sampling approach was developed to gain a more detailed depiction of the reactions occurring in the ladle throughout Ladle Metallurgy Furnace (LMF) processing. This sampling technique allows samples to be taken at depths that have not been captured before, approximately 1.5 m below the slag layer. These samples were taken in conjunction with samples taken just under the slag layer, as well as at intermediate depths. Additional samples, including multi-point slag samples, were also taken during processing. The heats were divided into five key processing steps: start of heat (S), after alloying (A), after desulfurization/start of pre-rinse (R), prior to Ca treatment (C), and end of heat (E). Sampling sets were collected to compare the effects of silicon, desulfurization rates, slag emulsification, slag evolution and inclusion evolution. By sampling at multiple depths, it was determined that slag emulsification can follow the flow pattern of the ladle deeper than previously reported in the literature. Inclusion evolution has been shown by numerous researchers; however, this study showed differences in the inclusion grouping and distribution at different depths of the ladle through Automated Feature Analysis (AFA). Also, the inclusion path was seen to change depending on both the silicon content and the sulfur content of the steel. This method was applied to develop a desulfurization model at Nucor Steel Tuscaloosa, Inc. (NSTI). In addition to the desulfurization model, a calcium (Ca) model was also developed. The Ca model was applied to target a finished inclusion region based on the conditions up to the wire treatment, including time, silicon content, and sulfur concentration. Because this model cannot handle every process variable, a new procedure was created to

  15. Aligning professional skills and active learning methods: an application for information and communications technology engineering

    Science.gov (United States)

    Llorens, Ariadna; Berbegal-Mirabent, Jasmina; Llinàs-Audet, Xavier

    2017-07-01

    Engineering education is facing new challenges to effectively provide the appropriate skills to future engineering professionals according to market demands. This study proposes a model based on active learning methods, which is expected to facilitate the acquisition of the professional skills most highly valued in the information and communications technology (ICT) market. The theoretical foundations of the study are based on the specific literature on active learning methodologies. The Delphi method is used to establish the fit between learning methods and generic skills required by the ICT sector. An innovative proposition is therefore presented that groups the required skills in relation to the teaching method that best develops them. The qualitative research suggests that a combination of project-based learning and the learning contract is sufficient to ensure a satisfactory skills level for this profile of engineers.

  16. Theoretical Frameworks, Methods, and Procedures for Conducting Phenomenological Studies in Educational Settings

    Directory of Open Access Journals (Sweden)

    Pelin Yüksel

    2015-01-01

    Full Text Available The main purposes of phenomenological research are to seek reality in individuals' narratives of their experiences and feelings, and to produce in-depth descriptions of the phenomenon. Phenomenological research studies in educational settings generally embody the lived experience, perceptions, and feelings of participants about a phenomenon. This study aims to provide a general framework for researchers who are interested in phenomenological studies, especially in educational settings. Additionally, the study provides a guide for researchers on how to conduct phenomenological research and how to collect and analyze phenomenal data. The first part of the paper explains the underpinnings of the research methodology, consisting of the methodological framework and key phenomenological concepts. The second part provides guidance for phenomenological research in educational settings, focusing particularly on the phenomenological data collection procedure and phenomenological data analysis methods. Keywords: phenomenology, phenomenological inquiry, phenomenological data analysis

  17. Research on a Method of Geographical Information Service Load Balancing

    Science.gov (United States)

    Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao

    2018-05-01

    With the development of geographical information service technologies, achieving intelligent scheduling and highly concurrent access to geographical information service resources through load balancing is a focal point of current study. This paper presents a dynamic load balancing algorithm. In the algorithm, each type of geographical information service is matched with a corresponding server group, the RED algorithm is combined with a double-threshold method to judge the load state of each server node, and the service is then scheduled on a weighted probabilistic basis over a given period. Finally, an experimental system is built on a server cluster, which demonstrates the effectiveness of the method presented in this paper.
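
    The abstract gives the ingredients (a double threshold on node load and weighted probabilistic dispatch) without formulas, so the following sketch is one plausible reading, not the paper's implementation: nodes are classified by two utilization thresholds, overloaded nodes are excluded, and requests go to the remaining nodes with probability proportional to their spare capacity. All names and threshold values are assumptions.

```python
import random

T_LOW, T_HIGH = 0.4, 0.8          # assumed double thresholds on utilization

def load_state(util):
    # Classify a server node by its utilization using the two thresholds.
    if util < T_LOW:
        return "light"
    if util < T_HIGH:
        return "normal"
    return "overloaded"

def schedule(nodes):
    # Weighted probabilistic dispatch: overloaded nodes are excluded and the
    # rest are chosen with probability proportional to spare capacity.
    eligible = [(name, 1.0 - util) for name, util in nodes.items()
                if load_state(util) != "overloaded"]
    if not eligible:
        raise RuntimeError("all nodes overloaded; queue or shed the request")
    names, weights = zip(*eligible)
    return random.choices(names, weights=weights, k=1)[0]

nodes = {"wms-1": 0.35, "wms-2": 0.72, "wms-3": 0.91}
print(load_state(nodes["wms-3"]))            # 'overloaded' -> never selected
print([schedule(nodes) for _ in range(8)])   # mostly wms-1
```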

  18. Enablers and barriers to physical activity in overweight and obese pregnant women: an analysis informed by the theoretical domains framework and COM-B model.

    Science.gov (United States)

    Flannery, C; McHugh, S; Anaba, A E; Clifford, E; O'Riordan, M; Kenny, L C; McAuliffe, F M; Kearney, P M; Byrne, M

    2018-05-21

    Obesity during pregnancy is associated with increased risk of gestational diabetes mellitus (GDM) and other complications. Physical activity is a modifiable lifestyle factor that may help to prevent these complications, but many women reduce their physical activity levels during pregnancy. Interventions targeting physical activity in pregnancy are ongoing, but few identify the underlying behaviour change mechanisms by which the intervention is expected to work. To enhance intervention effectiveness, recent tools in behavioural science such as the Theoretical Domains Framework (TDF) and the COM-B model (capability, opportunity, motivation and behaviour) have been employed to understand behaviours for intervention development. Using these behaviour change methods, this study aimed to identify the enablers of and barriers to physical activity in overweight and obese pregnant women. Semi-structured interviews were conducted with a purposive sample of overweight and obese women at different stages of pregnancy attending a public antenatal clinic in a large academic maternity hospital in Cork, Ireland. Interviews were recorded and transcribed into NVivo V.10 software. Data analysis followed the framework approach, drawing on the TDF and the COM-B model. Twenty-one themes were identified, and these mapped directly onto the COM-B model of behaviour change and ten of the TDF domains. Having the social opportunity to engage in physical activity was identified as an enabler; pregnant women suggested being active was easier when supported by their partners. Knowledge was a commonly reported barrier, with women lacking information on safe activities during pregnancy and describing the information received from their midwife as 'limited'. Having the physical capability and physical opportunity to carry out physical activity were also identified as barriers; experiencing pain, a lack of time, having other children, and working prevented women from being active. A wide range of barriers

  19. Theoretical study of the dependence of single impurity Anderson model on various parameters within distributional exact diagonalization method

    Science.gov (United States)

    Syaina, L. P.; Majidi, M. A.

    2018-04-01

    The single impurity Anderson model describes a system of non-interacting conduction electrons coupled to a localized orbital with strongly interacting electrons at a particular site. The model has proven successful in explaining the phenomenon of metal-insulator transition through Anderson localization. Despite the well-understood behaviour of the model, little theoretical work has explored how the model properties gradually evolve as functions of the hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we propose a theoretical study of those aspects of the single impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distributions of conduction electron energy levels with various numbers of occupying electrons. The resulting eigenvalues and eigenstates are then used to define the local single-particle Green function for each sampled electron energy distribution using the Lehmann representation. We then extract the corresponding self-energy of each distribution, average over all the distributions, and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of the controllable parameters, and discuss our results in connection with the criteria for the occurrence of a metal-insulator transition in this system.
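
    In the non-interacting limit the local Green function that such a calculation targets has a closed form, which makes a compact reference sketch possible. The bath discretization and parameters below are illustrative only; the full distributional exact diagonalization replaces this expression with Lehmann sums over many randomly sampled bath configurations.

```python
import numpy as np

def impurity_dos(omega, eps_d=0.0, V=0.3, bath=None, eta=0.02):
    # Local spectral function of the non-interacting Anderson impurity:
    #   G(w) = 1 / (w + i*eta - eps_d - Delta(w)),
    #   Delta(w) = sum_k V^2 / (w + i*eta - eps_k)   (hybridization function)
    if bath is None:
        bath = np.linspace(-2.0, 2.0, 40)       # discretized bath levels
    w = omega[:, None] + 1j * eta
    delta = (V**2 / (w - bath)).sum(axis=1)
    G = 1.0 / (omega + 1j * eta - eps_d - delta)
    return -G.imag / np.pi

omega = np.linspace(-3.0, 3.0, 601)
dos = impurity_dos(omega)
print("spectral weight ~", dos.sum() * (omega[1] - omega[0]))  # close to 1
```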

  20. Experimental and theoretical analysis of the rate of solvent equilibration in the hanging drop method of protein crystal growth

    Science.gov (United States)

    Fowlis, William W.; Delucas, Lawrence J.; Twigg, Pamela J.; Howard, Sandra B.; Meehan, Edward J.

    1988-01-01

    The principles of the hanging-drop method of crystal growth are discussed, and the rate of water evaporation in a water droplet (containing protein, buffer, and a precipitating agent) suspended above a well containing a double concentration of precipitating agent is investigated theoretically. It is shown that, on Earth, the rate of evaporation may be determined from diffusion theory and the colligative properties of solutions. The parameters affecting the rate of evaporation include the temperature, the vapor pressure of water, the ionization constant of the salt, the volume of the drop, the contact angle between the droplet and the coverslip, the number of moles of salt in the droplet, the number of moles of water and salt in the well, the molar volumes of water and salt, the distance from the droplet to the well, and the coefficient of diffusion of water vapor through air. To test the theoretical equations, hanging-drop experiments were conducted using various reagent concentrations in 25-microliter droplets, and the evaporation times were measured at 4 °C and 25 °C. The results showed good agreement with the theory.
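
    A back-of-the-envelope version of the diffusion estimate can be written in a few lines. The sketch below is a deliberately crude one-dimensional, quasi-steady simplification with Raoult-like activity scaling of the vapour pressure; every numerical value is a placeholder rather than a parameter from the paper.

```python
# Quasi-steady diffusive transport of water vapour from drop to well.
D = 0.25e-4       # diffusivity of water vapour in air, m^2/s (~25 C)
p_sat = 3.17e3    # saturation vapour pressure of water at 25 C, Pa
R, T = 8.314, 298.15
L = 5.0e-3        # assumed drop-to-well distance, m
A = 1.0e-6        # assumed effective transport cross-section, m^2

def molar_flux(a_drop, a_well):
    # Water activities a scale the saturation vapour concentration at each
    # surface (Raoult-like); a 1-D gradient over distance L drives the flux.
    c_drop = a_drop * p_sat / (R * T)      # mol/m^3 at the drop surface
    c_well = a_well * p_sat / (R * T)      # mol/m^3 at the well surface
    return D * A * (c_drop - c_well) / L   # mol/s

# Drop at half the well's precipitant concentration -> higher water activity.
n_dot = molar_flux(a_drop=0.99, a_well=0.98)
moles_per_uL = 1e-9 / 18e-6                # 1 uL of liquid water, in moles
print(f"{n_dot:.2e} mol/s; ~{moles_per_uL / n_dot / 3600:.0f} h per microlitre")
```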

  1. Control method for biped locomotion robots based on ZMP information

    International Nuclear Information System (INIS)

    Kume, Etsuo

    1994-01-01

    The Human Acts Simulation Program (HASP) started as a ten-year program of the Computing and Information Systems Center (CISC) at the Japan Atomic Energy Research Institute (JAERI) in 1987. A mechanical design study of biped locomotion robots for patrol and inspection in nuclear facilities is being performed as an item of the research scope. One of the goals of our research is to design a biped locomotion robot for practical use in nuclear facilities. So far, we have been studying several dynamic walking patterns. Conventional control methods for biped locomotion robots use program control based on preset walking patterns, so they lack robustness against dynamic changes of walking pattern. Therefore, a real-time control method based on dynamic information about the robot states is necessary for high walking performance. In this study a new control method based on Zero Moment Point (ZMP) information is proposed as one such real-time control method. The proposed method is discussed and validated through numerical simulation. (author)
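
    For concreteness, the quantity being fed back on has a simple closed form for a collection of point masses when rotational inertia is neglected. The two-mass model below is a hypothetical illustration, not the HASP robot.

```python
import numpy as np

def zmp_x(m, x, z, ddx, ddz, g=9.81):
    # x-coordinate of the Zero Moment Point for point masses i (rotational
    # inertia neglected):
    #   x_zmp = sum_i m_i[(z''_i + g) x_i - x''_i z_i] / sum_i m_i (z''_i + g)
    return np.sum(m * ((ddz + g) * x - ddx * z)) / np.sum(m * (ddz + g))

# Hypothetical two-mass model: torso plus swing leg, torso accelerating
# forward.
m = np.array([40.0, 10.0])      # kg
x = np.array([0.00, 0.15])      # m, horizontal positions
z = np.array([0.90, 0.40])      # m, heights
ddx = np.array([1.50, 0.00])    # m/s^2, horizontal accelerations
ddz = np.array([0.00, 0.00])    # m/s^2, vertical accelerations

print(f"ZMP x = {zmp_x(m, x, z, ddx, ddz):.3f} m")
# A ZMP-based controller adjusts the gait in real time so that this point
# stays inside the support polygon of the stance foot.
```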

  2. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as a lack of reliability in system design specifications. To overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. An experiment has shown that the method reduces development effort by more than 30%.

  3. Information loss method to measure node similarity in networks

    Science.gov (United States)

    Li, Yongli; Luo, Peng; Wu, Chong

    2014-09-01

    Similarity measurement for network nodes has received increasing attention in the field of statistical physics. In this paper, we propose an entropy-based information loss method to measure node similarity. The model is built on the idea that the more similar two nodes are, the less information is lost by treating them as the same. The proposed method has relatively low algorithmic complexity, making it less time-consuming and more efficient for dealing with large-scale real-world networks. To demonstrate its applicability and accuracy, the new approach was compared with several selected approaches on two artificial examples and on synthetic networks. Furthermore, the proposed method is successfully applied to predict network evolution and to predict the attributes of unknown nodes in two application examples.
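
    The abstract does not spell out the loss functional, so the sketch below is one concrete reading of "information lost by seeing two nodes as the same", chosen for illustration: the Jensen-Shannon divergence between the nodes' normalized neighbourhood distributions, subtracted from one to give a similarity. This is an assumption, not the paper's exact formula.

```python
import numpy as np

def js_divergence(p, q):
    # Jensen-Shannon divergence (base-2 logs, so the value lies in [0, 1]).
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def node_similarity(adj, i, j):
    # Information lost by merging nodes i and j, measured on their
    # normalized neighbourhood distributions; similarity = 1 - loss.
    p = adj[i] / adj[i].sum()
    q = adj[j] / adj[j].sum()
    return 1.0 - js_divergence(p, q)

# Toy graph: nodes 0 and 1 have identical neighbourhoods, node 4 does not.
adj = np.array([[0, 0, 1, 1, 0],
                [0, 0, 1, 1, 0],
                [1, 1, 0, 0, 0],
                [1, 1, 0, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
print(node_similarity(adj, 0, 1))   # 1.0 (identical neighbourhoods)
print(node_similarity(adj, 0, 4))   # < 1
```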

  4. Hybrid methods to represent incomplete and uncertain information

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, C. [NASA Goddard Space Flight Center, Greenbelt, MD (United States)

    1996-12-31

    Decision making is cast in the semiotic context of perception, decision, and action loops. Towards the goal of properly grounding hybrid representations of information and uncertainty from this semiotic perspective, we consider the roles of and relations among the mathematical components of General Information Theory (GIT), particularly among fuzzy sets, possibility theory, probability theory, and random sets. We do so by using a clear distinction between the syntactic, mathematical formalism and the semantic domains of application of each of these fields, placing the emphasis on available measurement and action methods appropriate for each formalism, to which and from which the decision-making process flows.

  5. Development and content validation of the information assessment method for patients and consumers.

    Science.gov (United States)

    Pluye, Pierre; Granikov, Vera; Bartlett, Gillian; Grad, Roland M; Tang, David L; Johnson-Lafleur, Janique; Shulha, Michael; Barbosa Galvão, Maria Cristiane; Ricarte, Ivan Lm; Stephenson, Randolph; Shohet, Linda; Hutsul, Jo-Anne; Repchinsky, Carol A; Rosenberg, Ellen; Burnand, Bernard; Légaré, France; Dunikowski, Lynn; Murray, Susan; Boruff, Jill; Frati, Francesca; Kloda, Lorie; Macaulay, Ann; Lagarde, François; Doray, Geneviève

    2014-02-18

    Online consumer health information addresses health problems, self-care, disease prevention, and health care services and is intended for the general public. Using this information, people can improve their knowledge, participation in health decision-making, and health. However, there are no comprehensive instruments to evaluate the value of health information from a consumer perspective. We collaborated with information providers to develop and validate the Information Assessment Method for all (IAM4all) that can be used to collect feedback from information consumers (including patients), and to enable a two-way knowledge translation between information providers and consumers. Content validation steps were followed to develop the IAM4all questionnaire. The first version was based on a theoretical framework from information science, a critical literature review and prior work. Then, 16 laypersons were interviewed on their experience with online health information and specifically their impression of the IAM4all questionnaire. Based on the summaries and interpretations of interviews, questionnaire items were revised, added, and excluded, thus creating the second version of the questionnaire. Subsequently, a panel of 12 information specialists and 8 health researchers participated in an online survey to rate each questionnaire item for relevance, clarity, representativeness, and specificity. The result of this expert panel contributed to the third, current, version of the questionnaire. The current version of the IAM4all questionnaire is structured by four levels of outcomes of information seeking/receiving: situational relevance, cognitive impact, information use, and health benefits. Following the interviews and the expert panel survey, 9 questionnaire items were confirmed as relevant, clear, representative, and specific. To improve readability and accessibility for users with a lower level of literacy, 19 items were reworded and all inconsistencies in using a

  6. Method of accounting and suppressing the instability of dosimetric information

    International Nuclear Information System (INIS)

    Fejtek, Ya.

    1977-01-01

    To account for the instability of dosimetric information, differential and integral correcting factors are proposed. The differential factor converts the signals of dosimeters irradiated over short but different periods of time into equivalent signals related to a fixed reference period; it excludes the effect of signal instability in the case of short exposures. The integral factor generalizes the differential one to prolonged exposures. The statistical integral factor is derived. An example of processing experimental data using the analytical method developed is presented. It is noted that the method has been introduced into the state personal dosimetry service in Czechoslovakia.

  7. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. A number of factors, such as freshwater spawning and rearing habitat, could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and the maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits per spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasts with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high-impact managed forest. That is, our model predicted that the average maximum recruitment of chinook salmon would decrease by at least 247 fish for every 33% increase in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
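
    To make the two-stage setup concrete: a Ricker stock-recruitment curve R = a S exp(-b S) linearizes to log(R/S) = log(a) - b S, so the first stage reduces to ordinary least squares, and candidate models can then be ranked with an information criterion. The sketch below uses fabricated placeholder data and a Gaussian AICc merely as a stand-in for the paper's information-theoretic machinery.

```python
import numpy as np

# Fabricated spawner (S) and recruit (R) observations for one stock.
S = np.array([120.0, 300.0, 450.0, 600.0, 820.0, 1000.0])
R = np.array([260.0, 520.0, 610.0, 640.0, 560.0, 470.0])

# Ricker model R = a*S*exp(-b*S)  <=>  log(R/S) = log(a) - b*S.
y = np.log(R / S)
X = np.column_stack([np.ones_like(S), -S])
coef, rss, _, _ = np.linalg.lstsq(X, y, rcond=None)
log_a, b = coef

def aicc(rss, n, k):
    # Gaussian AICc up to an additive constant; k counts the fitted
    # parameters including the error variance.
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

n = len(S)
rss_ricker = float(rss[0])
rss_const = float(np.sum((y - y.mean()) ** 2))    # density-independent rival
print("Ricker   AICc:", round(aicc(rss_ricker, n, 3), 2))
print("Constant AICc:", round(aicc(rss_const, n, 2), 2))
print(f"a = {np.exp(log_a):.2f}; recruitment peaks at S = 1/b = {1.0 / b:.0f}")
```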

  8. A review of Web information seeking research: considerations of method and foci of interest

    Directory of Open Access Journals (Sweden)

    Konstantina Martzoukou

    2005-01-01

    Introduction. This review shows that Web information seeking research suffers from inconsistencies in method and a lack of homogeneity in research foci. Background. Qualitative and quantitative methods are needed to produce a comprehensive view of information seeking. Studies also recommend observation as one of the most fundamental ways of gaining direct knowledge of behaviour. User-centred research emphasises the importance of holistic approaches, which incorporate physical, cognitive, and affective elements. Problems. Comprehensive studies are limited; many approaches are problematic and a consistent methodological framework has not been developed. Research has often failed to draw samples that provide both quantitative validity and qualitative consistency. Typically, observation has been based on simulated rather than real information needs, and most studies show little attempt to examine holistically different characteristics of users in the same research schema. Research also deals with various aspects of cognitive style and ability under variant definitions of expertise and different layers of user experience. Finally, the effect of social and cultural elements has not been extensively investigated. Conclusion. The existing limitations in method and the plethora of different approaches allow little progress and few comparisons across studies. There is an urgent need to establish a theoretical framework on which future studies can be based, so that information seeking behaviour can be understood more holistically and results can be generalised.

  9. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguard's standard reference of the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  10. A human-machine interface evaluation method: A difficulty evaluation method in information searching (DEMIS)

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2009-01-01

    A human-machine interface (HMI) evaluation method, named the 'difficulty evaluation method in information searching (DEMIS)', is proposed and demonstrated with an experimental study. The DEMIS is based on a human performance model and two measures of attentional-resource effectiveness in monitoring and detection tasks in nuclear power plants (NPPs). Operator competence and HMI design are modeled as the most significant factors affecting human performance. One of the two effectiveness measures is the fixation-to-importance ratio (FIR), which represents the attentional resource (eye fixations) spent on an information source relative to the importance of that source. The other is selective attention effectiveness (SAE), which incorporates the FIRs of all information sources. The underlying principle of both measures is that an information source should be selectively attended to according to its informational importance. In this study, poor performance in information searching tasks is modeled as being coupled with difficulties caused by poor operator mental models and/or poor HMI design. Human performance in information searching tasks is evaluated by analyzing the FIR and the SAE. Operator mental models are evaluated by a questionnaire-based method. Difficulties caused by a poor HMI design are then evaluated by a focused interview based on the FIR evaluation, and root causes leading to poor performance are identified in a systematic way.
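
    A minimal sketch of the two attentional-effectiveness measures described above. The FIR definition follows the record (fixation share divided by importance share); the SAE aggregation rule used here (importance-weighted deviation of FIR from its ideal value of 1) is an assumption for illustration, not necessarily the authors' exact formula.

        import numpy as np

        def fixation_to_importance_ratio(fixations, importance):
            """FIR_i: share of eye fixations on source i divided by its
            normalized informational importance (ideal value is 1.0)."""
            f = np.asarray(fixations, float) / np.sum(fixations)
            w = np.asarray(importance, float) / np.sum(importance)
            return f / w

        def selective_attention_effectiveness(fixations, importance):
            """Illustrative SAE: 1 minus the importance-weighted mean absolute
            deviation of FIR from 1 (an assumed aggregation, see lead-in)."""
            fir = fixation_to_importance_ratio(fixations, importance)
            w = np.asarray(importance, float) / np.sum(importance)
            return 1.0 - float(np.sum(w * np.abs(fir - 1.0)))

        fixations = [120, 40, 15, 25]       # hypothetical counts per source
        importance = [0.5, 0.3, 0.1, 0.1]   # hypothetical importance weights
        print(fixation_to_importance_ratio(fixations, importance))
        print(selective_attention_effectiveness(fixations, importance))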

  11. Prediction of stress- and strain-based forming limits of automotive thin sheets by numerical, theoretical and experimental methods

    Science.gov (United States)

    Béres, Gábor; Weltsch, Zoltán; Lukács, Zsolt; Tisza, Miklós

    2018-05-01

    Forming limit is a complex concept of limit values related to the onset of local necking in sheet metal. In cold sheet metal forming, major and minor limit strains are influenced by the sheet thickness, strain path (deformation history) as well as material parameters and microstructure. Forming Limit Curves are plotted in the ɛ1 - ɛ2 coordinate system, providing the classic strain-based Forming Limit Diagram (FLD). Using an appropriate constitutive model, the limit strains can be converted into the stress-based Forming Limit Diagram (SFLD), irrespective of the strain path. This study examines the effect of the hardening model parameters on the determination of limit stress values during Nakazima tests for automotive dual-phase (DP) steels. Five limit strain pairs were specified experimentally by loading five different sheet geometries, which produced different strain paths from pure shear (-2ɛ2=ɛ1) up to biaxial stretching (ɛ2=ɛ1). The earlier work of Hill, Levy-Tyne and Keeler-Brazier also allowed a theoretical strain determination. This was followed by stress calculation based on the experimental and theoretical strain data. Since the n exponent in the Nádai expression varies with strain for some DP steels, we applied the least-squares method to fit other hardening model parameters (Ludwik, Voce, Hockett-Sherby) and to calculate the stress fields belonging to each limit strain. The results showed that the individual model parameters could produce discrepancies between the limit stress states in the range of equivalent strains higher than uniaxial stretching. The calculated hardening models were imported into an FE code to extend and validate the results by numerical simulations.
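
    A brief sketch of the least-squares fitting step described above, using the Voce law as the hardening model; the flow-curve data points are hypothetical, not measurements from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def voce(eps, sigma0, sigma_sat, m):
            """Voce hardening law: flow stress saturating at sigma_sat."""
            return sigma_sat - (sigma_sat - sigma0) * np.exp(-m * eps)

        # Hypothetical true-stress/true-strain points for a DP steel
        eps = np.array([0.01, 0.03, 0.06, 0.10, 0.15, 0.20])
        sig = np.array([420.0, 520.0, 610.0, 680.0, 730.0, 760.0])

        popt, _ = curve_fit(voce, eps, sig, p0=(400.0, 800.0, 15.0))
        print("sigma0 = %.0f MPa, sigma_sat = %.0f MPa, m = %.1f" % tuple(popt))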

  12. Promoting physical therapists’ use of research evidence to inform clinical practice: part 1 - theoretical foundation, evidence, and description of the PEAK program

    Science.gov (United States)

    2014-01-01

    Background There is a need for theoretically grounded and evidence-based interventions that enhance the use of research evidence in physical therapist practice. This paper and its companion paper introduce the Physical therapist-driven Education for Actionable Knowledge translation (PEAK) program, an educational program designed to promote physical therapists’ integration of research evidence into clinical decision-making. The pedagogical foundations for the PEAK educational program include Albert Bandura’s social cognitive theory and Malcolm Knowles’s adult learning theory. Additionally, two complementary frameworks of knowledge translation, the Promoting Action on Research Implementation in Health Services (PARiHS) and Knowledge to Action (KTA) Cycle, were used to inform the organizational elements of the program. Finally, the program design was influenced by evidence from previous attempts to facilitate the use of research in practice at the individual and organizational levels. Discussion The 6-month PEAK program consisted of four consecutive and interdependent components. First, leadership support was secured and electronic resources were acquired and distributed to participants. Next, a two-day training workshop consisting of didactic and small group activities was conducted that addressed the five steps of evidence based practice. For five months following the workshop, participants worked in small groups to review and synthesize literature around a group-selected area of common clinical interest. Each group contributed to the generation of a “Best Practices List” - a list of locally generated, evidence-based, actionable behaviors relevant to the groups’ clinical practice. Ultimately, participants agreed to implement the Best Practices List in their clinical practice. Summary This, first of two companion papers, describes the underlying pedagogical theories, knowledge translation frameworks, and research evidence used to derive the PEAK program

  13. Promoting physical therapists' use of research evidence to inform clinical practice: part 1--theoretical foundation, evidence, and description of the PEAK program.

    Science.gov (United States)

    Tilson, Julie K; Mickan, Sharon

    2014-06-25

    There is a need for theoretically grounded and evidence-based interventions that enhance the use of research evidence in physical therapist practice. This paper and its companion paper introduce the Physical therapist-driven Education for Actionable Knowledge translation (PEAK) program, an educational program designed to promote physical therapists' integration of research evidence into clinical decision-making. The pedagogical foundations for the PEAK educational program include Albert Bandura's social cognitive theory and Malcolm Knowles's adult learning theory. Additionally, two complementary frameworks of knowledge translation, the Promoting Action on Research Implementation in Health Services (PARiHS) and Knowledge to Action (KTA) Cycle, were used to inform the organizational elements of the program. Finally, the program design was influenced by evidence from previous attempts to facilitate the use of research in practice at the individual and organizational levels. The 6-month PEAK program consisted of four consecutive and interdependent components. First, leadership support was secured and electronic resources were acquired and distributed to participants. Next, a two-day training workshop consisting of didactic and small group activities was conducted that addressed the five steps of evidence based practice. For five months following the workshop, participants worked in small groups to review and synthesize literature around a group-selected area of common clinical interest. Each group contributed to the generation of a "Best Practices List" - a list of locally generated, evidence-based, actionable behaviors relevant to the groups' clinical practice. Ultimately, participants agreed to implement the Best Practices List in their clinical practice. This, first of two companion papers, describes the underlying pedagogical theories, knowledge translation frameworks, and research evidence used to derive the PEAK program - an educational program designed to

  14. Fish stock assessment under data limitations developing a new method based on a size-structured theoretical ecology framework

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros

    Fish stock assessment is an integral part of every fisheries management system. Modern assessment methods require data about the fishery and the stock, such as catches, survey estimates, aging information and life history parameters, all of which are difficult and expensive to gather. However, the majority of global fish catches comes from species that lack an official assessment due to lack of data; that is especially true for small-scale fisheries and fisheries in developing countries. New methods are needed that require a small amount of easily attainable data and provide scientific advice… catch is known, important quantities about the stock (e.g. biomass of spawners, recruitment) can be quantified. The method is tested using simulated data and validated using a subset of available data from data-rich fish stocks. The implementation of the method as a software package in the R programming…

  15. THEORETICAL AND METHODOLOGICAL APPROACHES TO THE STUDY OF THE IMPACT OF INFORMATION TECHNOLOGY ON SOCIAL CONNECTIONS AMONG YOUTH

    Directory of Open Access Journals (Sweden)

    Sofia Alexandrovna Zverkova

    2015-11-01

    The urgency of this topic is due to the virtualization of communication in modern society, especially among young people, which affects social relations and social support services. The need for a more in-depth study of the network virtualization of society's social relations is stressed, given the ambiguous consequences of this phenomenon among youth. Purpose: to analyze classic and contemporary theoretical and methodological approaches to the study of social ties and social support in the context of technological progress. Results: the article presents a sociological analysis of theoretical and methodological approaches to the study of problems of interaction and social support among youth through strong and weak social ties, in cyberspace and in the real world. Practical implications: the analysis opens opportunities for examining social relations in various fields of sociology, such as the sociology of youth and the sociology of communications.

  16. A direct vulnerable atherosclerotic plaque elasticity reconstruction method based on an original material-finite element formulation: theoretical framework

    Science.gov (United States)

    Bouvier, Adeline; Deleaval, Flavien; Doyley, Marvin M.; Yazdani, Saami K.; Finet, Gérard; Le Floc'h, Simon; Cloutier, Guy; Pettigrew, Roderic I.; Ohayon, Jacques

    2013-12-01

    The peak cap stress (PCS) amplitude is recognized as a biomechanical predictor of vulnerable plaque (VP) rupture. However, quantifying PCS in vivo remains a challenge since the stress depends on the plaque mechanical properties. In response, an iterative material finite element (FE) elasticity reconstruction method using strain measurements has been implemented for the solution of these inverse problems. Although this approach could resolve the mechanical characterization of VPs, it suffers from major limitations since (i) it is not adapted to characterize VPs exhibiting high material discontinuities between inclusions, and (ii) it does not permit real-time elasticity reconstruction for clinical use. The present theoretical study was therefore designed to develop a direct material-FE algorithm for elasticity reconstruction problems which accounts for material heterogeneities. We originally modified and adapted the extended FE method (Xfem), used mainly in crack analysis, to model material heterogeneities. This new algorithm was successfully applied to six coronary lesions of patients imaged in vivo with intravascular ultrasound. The results demonstrated that the mean relative absolute errors of the reconstructed Young's moduli obtained for the arterial wall, fibrosis, necrotic core, and calcified regions of the VPs decreased from 95.3±15.56%, 98.85±72.42%, 103.29±111.86% and 95.3±10.49%, respectively, to values smaller than 2.6 × 10⁻⁸ ± 5.7 × 10⁻⁸% (i.e. close to the exact solutions) when including the modified-Xfem method in our direct elasticity reconstruction method.

  17. Informed consent in colonoscopy: A comparative analysis of 2 methods.

    Science.gov (United States)

    Sanguinetti, J M; Lotero Polesel, J C; Iriarte, S M; Ledesma, C; Canseco Fuentes, S E; Caro, L E

    2015-01-01

    The manner in which informed consent is obtained varies. The aim of this study was to evaluate the level of knowledge about colonoscopy and to compare 2 methods of obtaining informed consent. A comparative, cross-sectional, observational study was conducted on patients that underwent colonoscopy in a public hospital (Group A) and in a private hospital (Group B). Group A received information verbally from a physician, as well as in the form of printed material, and Group B only received printed material. A telephone survey was carried out one or two weeks later. The study included a total of 176 subjects (Group A [n=55] and Group B [n=121]). As regards education level, 69.88% (n=123) of the patients had completed university education, 23.29% (n=41) secondary level, 5.68% (n=10) primary level, and the remaining subjects (n=2) had not completed any level of education. All (100%) of the subjects knew the characteristics of the procedure, and 99.43% were aware of its benefits. A total of 97.7% received information about complications, 93.7% named some of them, and 25% (n=44) remembered major complications. All the subjects received, read, and signed the informed consent statement before the study. There were no differences between the groups with respect to knowledge of the characteristics and benefits of the procedure, or the receipt and reading of the consent form. Group B responded better in relation to complications (P=.0027) and Group A had a better recollection of the major complications (P<.0001). Group A had a higher number of affirmative answers (P<.0001). The combination of verbal and written information provides the patient with a more comprehensive level of knowledge about the procedure.

  18. Methods for Measuring Productivity in Libraries and Information Centres

    Directory of Open Access Journals (Sweden)

    Mohammad Alaaei

    2009-04-01

    Within information centers, productivity is the result of the optimal and effective use of information resources, service quality improvement, increased user satisfaction, a pleasant working environment, and increased staff motivation and enthusiasm to work better. All contribute to the growth and development of information centers, so these centers need to be familiar with the methods employed in productivity measurement. Productivity is one of the criteria for evaluating system performance. In the past decades particular emphasis has been placed on the measurement and improvement of human resources, creativity, innovation and expert analysis. Reflection on, and efforts towards, identifying problems and finding new means of more useful and better resource management is the very essence of productivity. Simply put, productivity is the relationship between system outputs and the elements garnered to produce those outputs. The causality between the variables and factors impacting on productivity is very complex. In information centers, given the large number of elements involved, increasing efficiency and productivity seems necessary.

  19. A Task-Oriented Disaster Information Correlation Method

    Science.gov (United States)

    Linyao, Q.; Zhiqiang, D.; Qing, Z.

    2015-07-01

    With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulated data, and disaster products. However, current data management and service systems have become increasingly inefficient due to task variety and heterogeneous data. For emergency task-oriented applications, data searches rely primarily on human experience with simple metadata indices, whose high time consumption and low accuracy cannot satisfy the speed and veracity requirements for disaster products. In this paper, a task-oriented correlation method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward a disaster task ontology and data ontology to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple data sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the correlation between each data set and a certain task. The method goes beyond traditional static management of disaster data and establishes a basis for intelligent retrieval and active dissemination of disaster information. The case study presented in this paper illustrates the use of the method on an example flood emergency relief task.

  20. Information processing systems, reasoning modules, and reasoning system design methods

    Science.gov (United States)

    Hohimer, Ryan E; Greitzer, Frank L; Hampton, Shawn D

    2014-03-04

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
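
    A minimal sketch of the dispatch pattern this abstract describes: reasoning modules registered per ontology classification type, each processing only the abstractions of its type. All names here are hypothetical, and the semantic graph is simplified to a list of abstractions.

        from collections import defaultdict

        class ReasoningSystem:
            """Sketch of per-classification-type dispatch: each reasoning
            module processes abstractions of one ontology type."""

            def __init__(self):
                self._modules = defaultdict(list)

            def register(self, classification_type, module):
                self._modules[classification_type].append(module)

            def process(self, semantic_graph):
                # 'semantic_graph' is simplified to a list of abstractions.
                for abstraction in semantic_graph:
                    for module in self._modules[abstraction["type"]]:
                        module(abstraction)

        rs = ReasoningSystem()
        rs.register("Person", lambda a: print("person module saw", a["id"]))
        rs.register("Event", lambda a: print("event module saw", a["id"]))
        rs.process([{"type": "Person", "id": "p1"}, {"type": "Event", "id": "e7"}])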

  1. Information processing systems, reasoning modules, and reasoning system design methods

    Energy Technology Data Exchange (ETDEWEB)

    Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.

    2016-08-23

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.

  2. Information processing systems, reasoning modules, and reasoning system design methods

    Energy Technology Data Exchange (ETDEWEB)

    Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.

    2015-08-18

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.

  3. The Linear Quadratic Gaussian Multistage Game with Nonclassical Information Pattern Using a Direct Solution Method

    Science.gov (United States)

    Clemens, Joshua William

    Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios, nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we will focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.

  4. Risk-Informed SSCs Categorization: Elicitation Method of Expert's Opinion

    International Nuclear Information System (INIS)

    Hwang, Mee Jeong; Yang, Joon Eon; Kim, Kil Yoo

    2005-01-01

    Regulation has been carried out in a deterministic way since nuclear power plants began operating. However, some SSCs identified as safety-significant by the deterministic approach turned out to be of low or no safety significance, while some SSCs identified as non-safety-significant turned out to be highly safety-significant according to the results of PSA. Considering these risk insights, Regulatory Guide 1.174 and 10CFR50.69 were drawn up, allowing SSCs to be re-categorized according to their safety significance. Study of, and interest in, risk-informed SSC re-categorization and treatment has therefore continued. The objective of this regulatory initiative is to adjust the scope of equipment subject to special regulatory treatment to better focus licensee and regulatory attention and resources on equipment that has safety significance. Most current regulations define the plant equipment necessary to meet deterministic regulatory bases as 'safety-related'; this equipment is subject to special treatment regulations. Other plant equipment is categorized as 'non-safety-related' and is not subject to a select number of special treatment requirements, or a subset of those requirements. However, risk information is not a magic tool for making decisions but a supporting tool for categorizing SSCs, because only small parts of a plant are modeled in the PSA model. Thus, engineering and deterministic judgments are also used for risk-informed SSC categorization, and expert opinion elicitation is very important. Therefore, a rational method to elicit experts' opinions is needed, and in this study we developed a systematic expert-elicitation method for categorizing nuclear power plant SSCs. The current state of SSC categorization in the USA and existing methods for expert elicitation were surveyed, and a more systematic way of eliciting and combining expert opinions was developed. To validate the developed method

  5. Photocatalytical Properties and Theoretical Analysis of N, Cd-Codoped TiO2 Synthesized by Thermal Decomposition Method

    Directory of Open Access Journals (Sweden)

    Hongtao Gao

    2012-01-01

    N, Cd-codoped TiO2 has been synthesized by the thermal decomposition method. The products were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-visible diffuse reflectance spectra (DRS), X-ray photoelectron spectroscopy (XPS), and Brunauer-Emmett-Teller (BET) specific surface area analysis, respectively. The products showed good performance in the photocatalytic degradation of methyl orange. The effect of the incorporation of N and Cd on the electronic structure and optical properties of TiO2 was studied by first-principles calculations on the basis of density functional theory (DFT). The impurity states, introduced by N 2p or Cd 5d, lay between the valence band and the conduction band. Due to the dopants, the band gap of N, Cd-codoped TiO2 narrowed, and the electronic transition from the valence band to the conduction band became easier, which could account for the observed photocatalytic performance of N, Cd-codoped TiO2. The theoretical analysis may provide a useful reference for the synthesis of element-doped TiO2.

  6. Methods and Systems for Advanced Spaceport Information Management

    Science.gov (United States)

    Fussell, Ronald M. (Inventor); Ely, Donald W. (Inventor); Meier, Gary M. (Inventor); Halpin, Paul C. (Inventor); Meade, Phillip T. (Inventor); Jacobson, Craig A. (Inventor); Blackwell-Thompson, Charlie (Inventor)

    2007-01-01

    Advanced spaceport information management methods and systems are disclosed. In one embodiment, a method includes coupling a test system to the payload and transmitting one or more test signals that emulate an anticipated condition from the test system to the payload. One or more responsive signals are received from the payload into the test system and are analyzed to determine whether one or more of the responsive signals comprises an anomalous signal. At least one of the steps of transmitting, receiving, analyzing and determining includes transmitting at least one of the test signals and the responsive signals via a communications link from a payload processing facility to a remotely located facility. In one particular embodiment, the communications link is an Internet link from a payload processing facility to a remotely located facility (e.g. a launch facility, university, etc.).

  7. Using a Theoretical Framework to Investigate Whether the HIV/AIDS Information Needs of the AfroAIDSinfo Web Portal Members Are Met: A South African eHealth Study

    Directory of Open Access Journals (Sweden)

    Hendra Van Zyl

    2014-03-01

    eHealth has been identified as a useful approach to disseminating HIV/AIDS information. Together with Consumer Health Informatics (CHI), the Web-to-Public Knowledge Transfer Model (WPKTM) has been applied as a theoretical framework to identify consumer needs for AfroAIDSinfo, a South African Web portal. As part of the CHI practice, regular eSurveys are conducted to determine whether these needs are changing and are continually being met. eSurveys show high rates of satisfaction with the content as well as the modes of delivery, and the information is regarded as reliable for reuse, both for education and for referencing. Using CHI and the WPKTM as a theoretical framework ensures that the needs of consumers are being met and that they find the tailored methods of presenting the information agreeable. Combining ICTs and theories in eHealth interventions, this approach can be expanded to deliver information in other sectors of public health.

  8. Information-Theoretic Approach May Shed a Light to a Better Understanding and Sustaining the Integrity of Ecological-Societal Systems under Changing Climate

    Science.gov (United States)

    Kim, J.

    2016-12-01

    Considering high levels of uncertainty, epistemological conflicts over facts and values, and a sense of urgency, normal paradigm-driven science will be insufficient to mobilize people and nations toward sustainability. The conceptual framework to bridge the dynamics of societal systems with those of the natural ecosystems in which humanity operates remains deficient. The key to understanding their coevolution is to understand 'self-organization.' An information-theoretic approach may shed light on this and provide a potential framework that makes it possible not only to bridge human and natural systems but also to generate useful knowledge for understanding and sustaining the integrity of ecological-societal systems. How can information theory help understand the interface between ecological systems and social systems? How can self-organizing processes be delineated and ensured to fulfil sustainability? How should the flow of information from data through models to decision-makers be evaluated? These are the core questions posed by sustainability science, in which visioneering (i.e., the engineering of vision) is an essential framework. Yet visioneering has neither a quantitative measure nor an information-theoretic framework to work with and teach. This presentation is an attempt to accommodate the framework of self-organizing hierarchical open systems with visioneering into a common information-theoretic framework. A case study is presented with the UN/FAO's communal vision of climate-smart agriculture (CSA), which pursues a trilemma of efficiency, mitigation, and resilience. Challenges of delineating and facilitating self-organizing systems are discussed using transdisciplinary tools such as complex systems thinking, dynamic process network analysis and multi-agent systems modeling. Acknowledgments: This study was supported by the Korea Meteorological Administration Research and Development Program under Grant KMA-2012-0001-A (WISE project).

  9. Probabilistic methods applied to the safety of nuclear power plant: annual report - 1980. Part. 1: theoretical fundaments

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Hesles, J.B.S.; Milidiu, R.L.; Maciel, C.C.; Gibelli, S.M.O.; Oliveira, L.C.; Fleming, P.V.; Rivera, R.R.J.

    1981-02-01

    The Probabilistic Safety Analysis Group at COPPE was founded in 1980. This first part of the report presents the theoretical fundaments used for the reliability analysis of some safety systems of Angra-1.

  10. An Information Theoretic Framework and Self-organizing Agent- based Sensor Network Architecture for Power Plant Condition Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Loparo, Kenneth [Case Western Reserve Univ., Cleveland, OH (United States); Kolacinski, Richard [Case Western Reserve Univ., Cleveland, OH (United States); Threeanaew, Wanchat [Case Western Reserve Univ., Cleveland, OH (United States); Agharazi, Hanieh [Case Western Reserve Univ., Cleveland, OH (United States)

    2017-01-30

    A central goal of the work was to enable both the extraction of all relevant information from sensor data, and the application of information gained from appropriate processing and fusion at the system level to operational control and decision-making at various levels of the control hierarchy through: 1. exploiting the deep connection between information theory and the thermodynamic formalism, and 2. deployment using distributed intelligent agents with testing and validation in a hardware-in-the-loop simulation environment. Enterprise architectures are the organizing logic for key business processes and IT infrastructure and, while the generality of current definitions provides sufficient flexibility, current architecture frameworks do not inherently provide the appropriate structure. Of particular concern is that existing architecture frameworks often do not make a distinction between "data" and "information." This work defines an enterprise architecture for health and condition monitoring of power plant equipment and further provides the appropriate foundation for addressing shortcomings in current architecture definition frameworks through the discovery of the information connectivity between the elements of a power generation plant; that is, to identify the correlative structure between available observation streams using informational measures. The principal focus here is on the implementation and testing of an emergent, agent-based algorithm, based on the foraging behavior of ants, for eliciting this structure, and on measures for characterizing differences between communication topologies. The elicitation algorithms are applied to data streams produced by a detailed numerical simulation of Alstom's 1000 MW ultra-super-critical boiler and steam plant. The elicitation algorithm and topology characterization can be based on different informational metrics for detecting connectivity, e.g. mutual information and linear correlation.
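
    A small sketch of the final point above: estimating informational connectivity between observation streams with a plug-in (histogram) mutual-information estimate. The stream names and data are hypothetical, and the estimator is a generic stand-in rather than the project's implementation.

        import numpy as np

        def mutual_information(x, y, bins=16):
            """Plug-in (histogram) estimate of mutual information in bits;
            a generic stand-in for an informational connectivity measure."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1)
            py = pxy.sum(axis=0)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

        rng = np.random.default_rng(0)
        t = rng.normal(size=5000)
        streams = {                      # hypothetical plant observation streams
            "steam_temp": t,
            "drum_pressure": t + 0.3 * rng.normal(size=5000),
            "ambient": rng.normal(size=5000),
        }
        names = list(streams)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                mi = mutual_information(streams[a], streams[b])
                print(f"{a} <-> {b}: {mi:.3f} bits")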

  11. Uranium theoretical speciation for drinking water from private drilled wells in Sweden – Implications for choice of removal method

    International Nuclear Information System (INIS)

    Norrström, Ann Catrine; Löv, Åsa

    2014-01-01

    Highlights: • Neutral-charge uranium complexes dominated in the pH range 6.7–7.8. • The Ca concentration influences which calcium-UO₂ carbonate complex is formed. • In the acidic pH range several different U complexes can comprise a large fraction of total complexes. • It is crucial to include all relevant chemical compounds in the model. • Before a removal method is selected, some crucial parameters should be measured. - Abstract: Elevated concentrations of uranium (U) from natural sources have been measured in drinking water from private drilled wells in Sweden and many other countries worldwide. Although U is a radioactive element, radioactivity is not the main concern, but rather chemical toxicity, e.g. kidney damage. Uranium chemistry is complex, and U in water has a very high tendency to form complexes with other compounds. Since speciation is crucial for the properties of U, and therefore the removal efficiency, this study determined theoretical U species in drinking water from private drilled wells using the geochemical model Visual MINTEQ. The drinking water samples used in the modelling were from two datasets: (1) 76 water samples selected from a previous survey of 722 wells; and (2) samples of drinking water from 21 private wells sampled in May 2013. The results showed that neutrally charged U complexes dominated in the pH range 6.7–7.8, which is common in private drilled wells. This has important implications for the choice of removal method, since charge is an important factor for U removal efficiency. In the alkaline pH range, one of two calcium-UO₂ carbonate complexes dominated, and the calcium (Ca) concentration proved to be a key factor determining which Ca-UO₂ carbonate complex formed: the neutral Ca₂UO₂(CO₃)₃⁰(aq) or the negative CaUO₂(CO₃)₃²⁻. Complexes with organic carbon (C) varied greatly in the acidic range, indicating that it is crucial to measure the organic C content of the water since it is critical for the dissolved organic matter

  12. Advancing the Direction of Health Information Management in Greek Public Hospitals: Theoretical Directions and Methodological Implications for Sharing Information in order to Obtain Decision-Making

    Directory of Open Access Journals (Sweden)

    Evagelia Lappa

    2016-08-01

    Although consultants have long placed the use of research information at the centre of their activity, the extent to which physicians use this information varies widely. Despite this research and its recommendations, there is still a gap between the functions of a manager and the use of the associated information, and decision-making procedures vary according to the organization in which managers work. The cost of IT remains the largest barrier, while some current IT solutions are not user friendly and are out of date, particularly in public hospitals in Greece. Knowledge management is concerned not only with the facts and figures of production, but also with the know-how of staff. The information-needs protocol should refer not only to those who comply with formal computer-based information systems, but also to those who take into account other informal information and its flow within the organization. In a field such as medicine, where out-of-date information may be positively dangerous, doctors make heavy use of journals and several texts from the web. Decision-making is a complex process, particularly in human diagnostic and therapeutic applications. Therefore, it is very important to set priorities in the health information management sector and to promote education and training in information and communication technology (ICT).

  13. The Effect of Health Information Technology on Health Care Provider Communication: A Mixed-Method Protocol.

    Science.gov (United States)

    Manojlovich, Milisa; Adler-Milstein, Julia; Harrod, Molly; Sales, Anne; Hofer, Timothy P; Saint, Sanjay; Krein, Sarah L

    2015-06-11

    Communication failures between physicians and nurses are one of the most common causes of adverse events for hospitalized patients, as well as a major root cause of all sentinel events. Communication technology (ie, the electronic medical record, computerized provider order entry, email, and pagers), which is a component of health information technology (HIT), may help reduce some communication failures but increase others because of an inadequate understanding of how communication technology is used. Increasing use of health information and communication technologies is likely to affect communication between nurses and physicians. The purpose of this study is to describe, in detail, how health information and communication technologies facilitate or hinder communication between nurses and physicians with the ultimate goal of identifying how we can optimize the use of these technologies to support effective communication. Effective communication is the process of developing shared understanding between communicators by establishing, testing, and maintaining relationships. Our theoretical model, based in communication and sociology theories, describes how health information and communication technologies affect communication through communication practices (ie, use of rich media; the location and availability of computers) and work relationships (ie, hierarchies and team stability). Therefore we seek to (1) identify the range of health information and communication technologies used in a national sample of medical-surgical acute care units, (2) describe communication practices and work relationships that may be influenced by health information and communication technologies in these same settings, and (3) explore how differences in health information and communication technologies, communication practices, and work relationships between physicians and nurses influence communication. This 4-year study uses a sequential mixed-methods design, beginning with a

  14. Study on the Reduced Traffic Congestion Method Based on Dynamic Guidance Information

    Science.gov (United States)

    Li, Shu-Bin; Wang, Guang-Min; Wang, Tao; Ren, Hua-Ling; Zhang, Lin

    2018-05-01

    This paper studies how to generate reasonable guidance information for travelers' decisions in a real network. This problem is complex because travelers' decisions are constrained by various aspects of human behavior. Network conditions can be predicted using advanced dynamic origin-destination (OD) estimation techniques. Based on an improved mesoscopic traffic model, predictive dynamic traffic guidance information can be obtained accurately. A consistency algorithm is designed to investigate travelers' decisions by simulating their dynamic response to guidance information. The simulation results show that the proposed method can provide the best guidance information. Further, a case study is conducted to verify the theoretical results and to draw managerial insights into the potential of a dynamic guidance strategy for improving traffic performance. Supported by National Natural Science Foundation of China under Grant Nos. 71471104, 71771019, 71571109, and 71471167; The University Science and Technology Program Funding Projects of Shandong Province under Grant No. J17KA211; The Project of Public Security Department of Shandong Province under Grant No. GATHT2015-236; The Major Social and Livelihood Special Project of Jinan under Grant No. 20150905

  15. Information-seeking behavior during residency is associated with quality of theoretical learning, academic career achievements, and evidence-based medical practice: a STROBE-compliant article.

    Science.gov (United States)

    Oussalah, Abderrahim; Fournier, Jean-Paul; Guéant, Jean-Louis; Braun, Marc

    2015-02-01

    Data regarding knowledge acquisition during residency training are sparse. Predictors of theoretical learning quality, academic career achievements and evidence-based medical practice during residency are unknown. We performed a cross-sectional study on residents and attending physicians across several residency programs in 2 French faculties of medicine. We comprehensively evaluated the information-seeking behavior (I-SB) during residency using a standardized questionnaire and looked for independent predictors of theoretical learning quality, academic career achievements, and evidence-based medical practice among I-SB components using multivariate logistic regression analysis. Between February 2013 and May 2013, 338 fellows and attending physicians were included in the study. Textbooks and international medical journals were reported to be used on a regular basis by 24% and 57% of the respondents, respectively. Among the respondents, 47% refer systematically (4.4%) or frequently (42.6%) to published guidelines from scientific societies upon their publication. The median self-reported theoretical learning quality score was 5/10 (interquartile range, 3-6; range, 1-10). A high theoretical learning quality score (upper quartile) was independently and strongly associated with the following I-SB components: systematic reading of clinical guidelines upon their publication (odds ratio [OR], 5.55; 95% confidence interval [CI], 1.77-17.44); having access to a library that offers the leading textbooks of the specialty in the medical department (OR, 2.45, 95% CI, 1.33-4.52); knowledge of the specialty leading textbooks (OR, 2.12; 95% CI, 1.09-4.10); and PubMed search skill score ≥5/10 (OR, 1.94; 95% CI, 1.01-3.73). Research Master (M2) and/or PhD thesis enrolment were independently and strongly associated with the following predictors: PubMed search skill score ≥5/10 (OR, 4.10; 95% CI, 1.46-11.53); knowledge of the leading medical journals of the specialty (OR, 3.33; 95
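
    For readers unfamiliar with the analysis, the sketch below reproduces the general shape of a multivariate logistic regression reported as odds ratios with 95% confidence intervals, as used in the study; the simulated predictors and effect sizes are invented for illustration and are not the study's data.

        import numpy as np
        import statsmodels.api as sm

        # Simulated binary outcome: does systematic guideline reading predict
        # a high theoretical-learning score? (Invented data.)
        rng = np.random.default_rng(1)
        n = 338
        reads_guidelines = rng.integers(0, 2, n)
        pubmed_skill_ge5 = rng.integers(0, 2, n)
        logit = -1.0 + 1.7 * reads_guidelines + 0.7 * pubmed_skill_ge5
        high_score = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

        X = sm.add_constant(np.column_stack([reads_guidelines, pubmed_skill_ge5]))
        fit = sm.Logit(high_score, X).fit(disp=0)
        print("odds ratios:", np.exp(fit.params[1:]))
        print("95% CIs:", np.exp(fit.conf_int()[1:]))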

  16. Information-seeking Behavior During Residency Is Associated With Quality of Theoretical Learning, Academic Career Achievements, and Evidence-based Medical Practice

    Science.gov (United States)

    Oussalah, Abderrahim; Fournier, Jean-Paul; Guéant, Jean-Louis; Braun, Marc

    2015-01-01

    Data regarding knowledge acquisition during residency training are sparse. Predictors of theoretical learning quality, academic career achievements and evidence-based medical practice during residency are unknown. We performed a cross-sectional study on residents and attending physicians across several residency programs in 2 French faculties of medicine. We comprehensively evaluated the information-seeking behavior (I-SB) during residency using a standardized questionnaire and looked for independent predictors of theoretical learning quality, academic career achievements, and evidence-based medical practice among I-SB components using multivariate logistic regression analysis. Between February 2013 and May 2013, 338 fellows and attending physicians were included in the study. Textbooks and international medical journals were reported to be used on a regular basis by 24% and 57% of the respondents, respectively. Among the respondents, 47% refer systematically (4.4%) or frequently (42.6%) to published guidelines from scientific societies upon their publication. The median self-reported theoretical learning quality score was 5/10 (interquartile range, 3–6; range, 1–10). A high theoretical learning quality score (upper quartile) was independently and strongly associated with the following I-SB components: systematic reading of clinical guidelines upon their publication (odds ratio [OR], 5.55; 95% confidence interval [CI], 1.77–17.44); having access to a library that offers the leading textbooks of the specialty in the medical department (OR, 2.45, 95% CI, 1.33–4.52); knowledge of the specialty leading textbooks (OR, 2.12; 95% CI, 1.09–4.10); and PubMed search skill score ≥5/10 (OR, 1.94; 95% CI, 1.01–3.73). Research Master (M2) and/or PhD thesis enrolment were independently and strongly associated with the following predictors: PubMed search skill score ≥5/10 (OR, 4.10; 95% CI, 1.46–11.53); knowledge of the leading medical journals of the

  17. System and Method for RFID-Enabled Information Collection

    Science.gov (United States)

    Fink, Patrick W. (Inventor); Lin, Gregory Y. (Inventor); Kennedy, Timothy F. (Inventor); Ngo, Phong H. (Inventor); Byerly, Diane (Inventor)

    2016-01-01

    Methods, apparatuses and systems for radio frequency identification (RFID)-enabled information collection are disclosed, including an enclosure, a collector coupled to the enclosure, an interrogator, a processor, and one or more RFID field sensors, each having an individual identification, disposed within the enclosure. In operation, the interrogator transmits an incident signal to the collector, causing the collector to generate an electromagnetic field within the enclosure. The electromagnetic field is affected by one or more influences. RFID sensors respond to the electromagnetic field by transmitting reflected signals containing the individual identifications of the responding RFID sensors to the interrogator. The interrogator receives the reflected signals, measures one or more returned signal strength indications ("RSSI") of the reflected signals and sends the RSSI measurements and identification of the responding RFID sensors to the processor to determine one or more facts about the influences. Other embodiments are also described.

  18. A new template matching method based on contour information

    Science.gov (United States)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular kind of template matching. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the model construction process, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image; the rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by distance transform: each point on the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Because similarity between triangles is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model can be found, giving the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. In order to speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process
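
    A compact sketch of the verification step described above: build a distance image from the template contour offline, then score a candidate contour by its mean distance using only look-ups and additions. The geometry is hypothetical, and the RST estimation from triples is omitted.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def distance_image(contour_mask):
            """Offline step: each pixel holds the distance to the nearest
            template-contour pixel."""
            return distance_transform_edt(~contour_mask)

        def mean_contour_distance(dist_img, points):
            """Online verification: project candidate contour points into
            the distance image and average."""
            cols, rows = points[:, 0].astype(int), points[:, 1].astype(int)
            return float(dist_img[rows, cols].mean())

        # Hypothetical 64x64 template whose contour is a square
        mask = np.zeros((64, 64), dtype=bool)
        mask[16, 16:48] = mask[47, 16:48] = True
        mask[16:48, 16] = mask[16:48, 47] = True

        d = distance_image(mask)
        candidate = np.array([(x, 18) for x in range(16, 48)])  # 2 px off the top edge
        print("mean distance:", mean_contour_distance(d, candidate))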

  19. Theoretical validation of potential habitability via analytical and boosted tree methods: An optimistic study on recently discovered exoplanets

    Science.gov (United States)

    Saha, S.; Basak, S.; Safonova, M.; Bora, K.; Agrawal, S.; Sarkar, P.; Murthy, J.

    2018-04-01

    Seven Earth-sized planets, known as the TRAPPIST-1 system, were discovered with great fanfare in the last week of February 2017. Three of these planets are in the habitable zone of their star, making them potentially habitable planets (PHPs) a mere 40 light years away. The discovery of the closest potentially habitable planet to us just a year before - Proxima b - and the realization that Earth-type planets in circumstellar habitable zones are a common occurrence provide the impetus for the existing pursuit of life outside the Solar System. The search for life has essentially two goals: looking for planets with Earth-like conditions (Earth similarity) and looking for the possibility of life in some form (habitability). An index was recently developed, the Cobb-Douglas Habitability Score (CDHS), based on the Cobb-Douglas habitability production function (CD-HPF), which computes the habitability score from measured and estimated planetary parameters. As an initial set, the radius, density, escape velocity and surface temperature of a planet were used. The proposed metric, with exponents accounting for metric elasticity, is endowed with analytical properties that ensure global optima and can be scaled to accommodate a finite number of input parameters. We show here that the model is elastic, and that the conditions on elasticity that ensure global maxima scale as the number of predictor parameters increases. A K-NN (K-Nearest Neighbor) classification algorithm, embellished with probabilistic herding and thresholding restriction, utilizes CDHS scores and labels exoplanets into appropriate classes via feature-learning methods, yielding granular clusters of habitability. The algorithm works on top of a decision-theoretical model using the power of convex optimization and machine learning. The goal is to characterize the recently discovered exoplanets into an "Earth League" and several other classes based on their CDHS values. A second approach, based on a novel feature-learning and
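
    A minimal sketch of the Cobb-Douglas form described above: a product of Earth-normalized planetary parameters raised to elasticity exponents. The exponent values and inputs below are placeholders; the paper estimates elasticities under constraints that guarantee a global optimum.

        import numpy as np

        def cdhs(radius, density, v_escape, t_surface, elasticities):
            """Cobb-Douglas habitability score: product of Earth-normalized
            planetary parameters raised to elasticity exponents."""
            x = np.array([radius, density, v_escape, t_surface], dtype=float)
            return float(np.prod(x ** np.asarray(elasticities, dtype=float)))

        # Placeholder Earth-normalized inputs; equal exponents summing to 1
        # (constant returns to scale) are used purely for illustration.
        score = cdhs(radius=1.1, density=0.9, v_escape=1.05, t_surface=0.95,
                     elasticities=[0.25, 0.25, 0.25, 0.25])
        print(f"CDHS = {score:.4f}")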

  20. A method for investigating relative timing information on phylogenetic trees.

    Science.gov (United States)

    Ford, Daniel; Matsen, Frederick A; Stadler, Tanja

    2009-04-01

    In this paper, we present a new way to describe the timing of branching events in phylogenetic trees. Our description is in terms of the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots which consider diversification in aggregate. The method can be applied to look for evidence of diversification happening in lineage-specific "bursts", or the opposite, where diversification between 2 clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we discuss 2 classes of neutral models on trees with relative timing information and develop a statistical framework for testing these models. These model classes include both the coalescent with ancestral population size variation and global rate speciation-extinction models. We end the paper with 2 example applications: first, we show that the evolution of the hepatitis C virus deviates from the coalescent with arbitrary population size. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.

  1. Information Design Theories

    Science.gov (United States)

    Pettersson, Rune

    2014-01-01

    Information design has practical and theoretical components. As an academic discipline we may view information design as a combined discipline, a practical theory, or as a theoretical practice. So far information design has incorporated facts, influences, methods, practices, principles, processes, strategies, and tools from a large number of…

  2. Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities

    KAUST Repository

    Nielsen, Frank; Sun, Ke

    2016-01-01

    The Kullback-Leibler divergence between two mixture models does not admit a closed-form formula; in practice it is either estimated using costly Monte Carlo stochastic integration, or approximated or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on such information-theoretic measures of univariate mixtures.
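
    The starting point for such bounds is the elementary log-sum-exp sandwich max_i x_i <= LSE(x) <= max_i x_i + log k, applied to the mixture log-density log f(x) = LSE_i(log w_i + log p_i(x)). The sketch below shows only this basic inequality on a two-component Gaussian mixture; the paper's piecewise construction tightens it considerably.

        # Elementary LSE bounds on a mixture log-density (the paper's
        # piecewise bounds refine this basic inequality).

        import math

        def log_norm_pdf(x, mu, sigma):
            return (-0.5 * math.log(2 * math.pi * sigma ** 2)
                    - (x - mu) ** 2 / (2 * sigma ** 2))

        def mixture_logpdf_bounds(x, weights, mus, sigmas):
            terms = [math.log(w) + log_norm_pdf(x, m, s)
                     for w, m, s in zip(weights, mus, sigmas)]
            lower = max(terms)                          # max_i x_i
            upper = lower + math.log(len(terms))        # max_i x_i + log k
            exact = lower + math.log(sum(math.exp(t - lower) for t in terms))
            return lower, exact, upper

        lo, exact, up = mixture_logpdf_bounds(0.3, [0.6, 0.4],
                                              [0.0, 2.0], [1.0, 0.5])
        assert lo <= exact <= up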

  3. [Lack of access to information on oral health problems among adults: an approach based on the theoretical model for literacy in health].

    Science.gov (United States)

    Roberto, Luana Leal; Noronha, Daniele Durães; Souza, Taiane Oliveira; Miranda, Ellen Janayne Primo; Martins, Andréa Maria Eleutério de Barros Lima; Paula, Alfredo Maurício Batista De; Ferreira, Efigênia Ferreira E; Haikal, Desirée Sant'ana

    2018-03-01

    This study sought to investigate factors associated with the lack of access to information on oral health among adults. It is a cross-sectional study, carried out among 831 adults (35-44 years of age). The dependent variable was access to information on how to avoid oral problems, and the independent variables were gathered into subgroups according to the theoretical model for literacy in health. Binary logistic regression was carried out, and results were corrected by the design effect. It was observed that 37.5% had no access to information about dental problems. The lack of access was higher among adults who had lower per capita income, were dissatisfied with the dental services provided, did not use dental floss, had unsatisfactory physical control of the quality of life, and self-perceived their oral health as fair/poor/very poor. The likelihood of not having access to information about dental problems among those dissatisfied with the dental services used was 3.28 times higher than for those satisfied with the dental services used. Thus, decreased access to information was related to unfavorable conditions among adults. Health services should ensure appropriate information to their users in order to increase health literacy levels and improve satisfaction and equity.
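
    For illustration only (synthetic data, not the study's), the sketch below shows how an odds ratio such as the reported 3.28 is read off a binary logistic regression; the variable coding and simulated prevalence are assumptions.

        # Illustrative logistic-regression odds ratio on simulated data.
        # y = 1 means "no access to information"; x = 1 means
        # "dissatisfied with dental services". Not the study's data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        x = rng.integers(0, 2, 831).astype(float)         # dissatisfaction
        logit_p = -1.0 + np.log(3.28) * x                 # true OR of 3.28
        y = (rng.random(831) < 1 / (1 + np.exp(-logit_p))).astype(float)

        X = sm.add_constant(x)
        fit = sm.Logit(y, X).fit(disp=0)
        print(np.exp(fit.params[1]))  # estimated odds ratio, near 3.28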

  4. "It's the Method, Stupid." Interrelations between Methodological and Theoretical Advances: The Example of Comparing Higher Education Systems Internationally

    Science.gov (United States)

    Hoelscher, Michael

    2017-01-01

    This article argues that strong interrelations between methodological and theoretical advances exist. Progress in methods, especially comparative methods, may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…

  5. About application during lectures on protection of the information and information security of the method of "the round table"

    Directory of Open Access Journals (Sweden)

    Simon Zh. Simavoryan

    2011-05-01

    Full Text Available This article analyzes the lecture, a passive method of knowledge transfer. Experience in teaching the subject of information protection and information security shows that students absorb the material better when an active method of knowledge transfer - the "round table" method - is applied during the lecture.

  6. Information systems project management: methods, tools, and techniques

    OpenAIRE

    Mcmanus, John; Wood-Harper, Trevor

    2004-01-01

    Information Systems Project Management offers a clear and logical exposition of how to plan, organise and monitor projects effectively in order to deliver quality information systems on time, to budget, and to quality standards. This new book by John McManus and Trevor Wood-Harper is suitable for upper-level undergraduates and postgraduates studying project management and information systems. Practising managers will also find it to be a valuable tool in their work. Managing information systems pro...

  7. Agent-based method for distributed clustering of textual information

    Science.gov (United States)

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term, identifies the documents found in the search, and displays them in a clustering display (80) so as to indicate the similarity of the documents to each other.
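
    A minimal sketch of the routing logic is given below; the bag-of-words vectors, cosine similarity, and the 0.2 new-cluster threshold are illustrative assumptions rather than the patent's specified measures.

        # Hedged sketch of agent-based document routing: score the new
        # document against each cluster agent and either assign it to the
        # most similar cluster or spawn a new one.

        from collections import Counter
        import math

        def doc_vector(text):
            return Counter(text.lower().split())

        def cosine(u, v):
            dot = sum(u[t] * v.get(t, 0) for t in u)
            nu = math.sqrt(sum(c * c for c in u.values()))
            nv = math.sqrt(sum(c * c for c in v.values()))
            return dot / (nu * nv) if nu and nv else 0.0

        class ClusterAgent:
            def __init__(self):
                self.docs, self.centroid = [], Counter()
            def score(self, vec):
                return cosine(vec, self.centroid) if self.docs else 0.0
            def add(self, doc, vec):
                self.docs.append(doc)
                self.centroid.update(vec)

        def route(document, agents, threshold=0.2):
            vec = doc_vector(document)
            best = max(agents, key=lambda a: a.score(vec), default=None)
            if best is None or best.score(vec) < threshold:
                best = ClusterAgent()   # master agent spawns a new cluster
                agents.append(best)
            best.add(document, vec)
            return agents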

  8. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    Science.gov (United States)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous and repetitive processes have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method, which enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of multiplication and the commutative and associative laws of addition, the information retrieval can be performed after tomographic reconstruction, thus simplifying the information retrieval procedure dramatically. The theoretical model of this method is established in both parallel beam geometry for the Talbot interferometer and fan beam geometry for the Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its applicability in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.
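
    Because both the reported retrieval (a linear combination of two mutually conjugate images) and tomographic reconstruction are linear operations, the two steps commute. The toy check below illustrates this with a random matrix standing in for the reconstruction operator and assumed combination coefficients.

        # Linear operations commute: retrieving then reconstructing
        # equals reconstructing then retrieving.

        import numpy as np

        rng = np.random.default_rng(0)
        R = rng.standard_normal((64, 64))  # any linear reconstruction operator
        M1 = rng.standard_normal(64)       # conjugate image 1 (projections)
        M2 = rng.standard_normal(64)       # conjugate image 2
        a, b = 0.5, -0.5                   # retrieval coefficients (assumed)

        retrieve_then_reconstruct = R @ (a * M1 + b * M2)
        reconstruct_then_retrieve = a * (R @ M1) + b * (R @ M2)
        assert np.allclose(retrieve_then_reconstruct,
                           reconstruct_then_retrieve)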

  9. Decision-Making under Ambiguity Is Modulated by Visual Framing, but Not by Motor vs. Non-Motor Context. Experiments and an Information-Theoretic Ambiguity Model.

    Science.gov (United States)

    Grau-Moya, Jordi; Ortega, Pedro A; Braun, Daniel A

    2016-01-01

    A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects' choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects' choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain.
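
    One common way to formalize such a model is a free-energy valuation that interpolates between Bayes-optimal expected utility and worst-case robustness over models; the sketch below follows that general recipe, though the paper's exact parameterization may differ.

        # Free-energy-style robust valuation: beta = 0 recovers expected
        # utility (Bayes-optimal); beta > 0 is ambiguity-averse (tilts
        # toward bad models); beta < 0 is ambiguity-seeking.

        import math

        def robust_value(utilities, model_probs, beta):
            """V = -(1/beta) * log E_m[ exp(-beta * U(a, m)) ]."""
            if beta == 0.0:
                return sum(p * u for p, u in zip(model_probs, utilities))
            lse = math.log(sum(p * math.exp(-beta * u)
                               for p, u in zip(model_probs, utilities)))
            return -lse / beta

        # Two candidate models of an ambiguous urn, equally probable:
        U = [1.0, 0.0]   # utility of one action under each model
        print(robust_value(U, [0.5, 0.5], 0.0))  # 0.5   (Bayes-optimal)
        print(robust_value(U, [0.5, 0.5], 5.0))  # ~0.14 (ambiguity-averse)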

  10. Theoretical aspects and modelling of cellular decision making, cell killing and information-processing in photodynamic therapy of cancer.

    Science.gov (United States)

    Gkigkitzis, Ioannis

    2013-01-01

    The aim of this report is to provide a mathematical model of the mechanism for making binary fate decisions about cell death or survival, during and after Photodynamic Therapy (PDT) treatment, and to supply the logical design for this decision mechanism as an application of rate distortion theory to the biochemical processing of information by the physical system of a cell. Based on previously established systems biology models of the molecular interactions involved in the PDT processes, and regarding a cellular decision-making system as a noisy communication channel, we use rate distortion theory to design a time-dependent Blahut-Arimoto algorithm in which the input is a stimulus vector composed of the time-dependent concentrations of three PDT-related cell death signaling molecules and the output is a cell fate decision. The molecular concentrations are determined by a group of rate equations. The basic steps are: initialize the probability of the cell fate decision; compute the conditional probability distribution that minimizes the mutual information between input and output; compute the marginal probability of the cell fate decision that minimizes the mutual information; and repeat the last two steps until the probabilities converge. Then advance to the next discrete time point and repeat the process. Based on the model from communication theory described in this work, and assuming that the death signal processing is activated when any of the molecular stimulants rises above a predefined threshold (50% of the maximum concentration), for 1800 s of treatment the cell undergoes necrosis within the first 30 minutes with probability in the range 90.0%-99.99%, and in the case of repair/survival it goes through apoptosis within 3-4 hours with probability in the range 90.00%-99.00%. Although there is no experimental validation of the model at this moment, it reproduces some patterns of survival ratios of predicted experimental data. Analytical modeling based on cell death…
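
    The alternating minimization described here is the standard Blahut-Arimoto scheme; a minimal sketch follows. The stimulus distribution, the distortion matrix over fates, and the trade-off parameter beta are illustrative assumptions, not values from the paper.

        # Minimal Blahut-Arimoto iteration for rate-distortion: alternate
        # between the optimal channel given the output marginal and the
        # optimal output marginal given the channel, until convergence.

        import numpy as np

        def blahut_arimoto(p_x, distortion, beta, tol=1e-9, max_iter=1000):
            """Returns q(y) and the channel p(y|x)."""
            n_x, n_y = distortion.shape
            q_y = np.full(n_y, 1.0 / n_y)          # init output marginal
            for _ in range(max_iter):
                # optimal channel given the current marginal
                p_y_given_x = q_y * np.exp(-beta * distortion)
                p_y_given_x /= p_y_given_x.sum(axis=1, keepdims=True)
                # optimal marginal given the channel
                q_new = p_x @ p_y_given_x
                if np.abs(q_new - q_y).max() < tol:
                    break
                q_y = q_new
            return q_y, p_y_given_x

        # 3 stimulus states x 2 fates (survive, die), assumed distortion:
        p_x = np.array([0.5, 0.3, 0.2])
        d = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
        q, channel = blahut_arimoto(p_x, d, beta=3.0)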

  11. Exploring Factors Influencing Self-Efficacy in Information Security an Empirical Analysis by Integrating Multiple Theoretical Perspectives in the Context of Using Protective Information Technologies

    Science.gov (United States)

    Reddy, Dinesh Sampangirama

    2017-01-01

    Cybersecurity threats confront the United States on a daily basis, making them one of the major national security challenges. One approach to meeting these challenges is to improve user cybersecurity behavior. End user security behavior hinges on end user acceptance and use of the protective information technologies such as anti-virus and…

  12. Graph-theoretic analysis of discrete-phase-space states for condition change detection and quantification of information

    Science.gov (United States)

    Hively, Lee M.

    2014-09-16

    Data collected from devices and from the human condition may be used to forewarn of critical events such as machine or structural failure, or medical events (e.g., stroke) detected from brain or heart wave data. By monitoring the data and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (unstructured data) into discrete-phase-space states, and hence into a graph (structured data), for extraction of condition change.
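
    A minimal sketch of this kind of pipeline appears below: time-delay embedding turns the numeric stream into phase-space vectors, coarse symbolization turns each vector into a discrete state, and consecutive states define the edges of a graph. The embedding dimension, delay, and number of symbols are illustrative choices, not the disclosure's specific parameters.

        # Time series -> discrete phase-space states -> transition graph.

        from collections import Counter

        def discrete_states(series, dim=3, delay=2, n_symbols=4):
            lo, hi = min(series), max(series)
            width = (hi - lo) / n_symbols or 1.0
            sym = [min(int((v - lo) / width), n_symbols - 1) for v in series]
            return [tuple(sym[i + k * delay] for k in range(dim))
                    for i in range(len(sym) - (dim - 1) * delay)]

        def transition_graph(states):
            """Directed edge multiset over consecutive discrete states."""
            return Counter(zip(states, states[1:]))

        x = [0.1, 0.5, 0.2, 0.9, 0.4, 0.8, 0.3, 0.7, 0.2, 0.6]
        g = transition_graph(discrete_states(x))
        # Condition change can then be flagged by comparing the edge
        # distribution of a baseline window against the current window.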

  13. Designing theoretically-informed implementation interventions: Fine in theory, but evidence of effectiveness in practice is needed

    Directory of Open Access Journals (Sweden)

    Reeves Scott

    2006-02-01

    Full Text Available Abstract The Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG) authors assert that a key weakness in implementation research is the unknown applicability of a given intervention outside its original site and problem, and suggest that use of explicit theory offers an effective solution. This assertion is problematic for three primary reasons. First, the presence of an underlying theory does not necessarily ease the task of judging the applicability of a piece of empirical evidence. Second, it is not clear how to translate theory reliably into intervention design, which undoubtedly involves the diluting effect of "common sense." Third, there are many theories, formal and informal, and it is not clear why any one should be given primacy. To determine whether explicitly theory-based interventions are, on average, more effective than those based on implicit theories, pragmatic trials are needed. Until empirical evidence is available showing the superiority of theory-based interventions, the use of theory should not be a basis on which research funders, ethics committees, editors, or policy decision makers assess the value of implementation studies.

  14. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method.

    Science.gov (United States)

    Badran, Hani; Pluye, Pierre; Grad, Roland

    2017-03-14

    The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n

  15. Using the Work System Method with Freshman Information Systems Students

    Science.gov (United States)

    Recker, Jan; Alter, Steven

    2012-01-01

    Recent surveys of information technology management professionals show that understanding business domains in terms of business productivity and cost reduction potential, knowledge of different vertical industry segments and their information requirements, understanding of business processes and client-facing skills are more critical for…

  16. Pattern recognition in complex activity travel patterns: comparison of Euclidean distance, signal-processing theoretical, and multidimensional sequence alignment methods

    NARCIS (Netherlands)

    Joh, C.H.; Arentze, T.A.; Timmermans, H.J.P.

    2001-01-01

    The application of a multidimensional sequence alignment method for classifying activity travel patterns is reported. The method was developed as an alternative to the existing classification methods suggested in the transportation literature. The relevance of the multidimensional sequence alignment…
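
    For context, the sketch below computes a classic one-dimensional alignment (edit) distance between two activity strings; the multidimensional method of the paper aligns several attribute sequences (activity, location, mode, timing) jointly, which this toy example does not attempt.

        # Single-dimensional sequence-alignment distance between two
        # activity patterns: minimum number of insertions, deletions,
        # and substitutions turning one string into the other.

        def alignment_distance(seq_a, seq_b, indel=1, subst=1):
            m, n = len(seq_a), len(seq_b)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                d[i][0] = i * indel
            for j in range(n + 1):
                d[0][j] = j * indel
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if seq_a[i - 1] == seq_b[j - 1] else subst
                    d[i][j] = min(d[i - 1][j] + indel,     # deletion
                                  d[i][j - 1] + indel,     # insertion
                                  d[i - 1][j - 1] + cost)  # match/subst
            return d[m][n]

        # H=home, W=work, S=shop: two daily activity patterns
        print(alignment_distance("HWSH", "HWH"))  # -> 1 (shop stop removed)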

  17. Summer School organized by the International Centre for Theoretical Physics, Trieste, and the Institute for Information Sciences, University of Tübingen

    CERN Document Server

    Güttinger, Werner; Cin, Mario

    1974-01-01

    This volume is the record and product of the Summer School on the Physics and Mathematics of the Nervous System, held at the International Centre for Theoretical Physics in Trieste from August 21-31, 1973, and jointly organized by the Institute for Information Sciences, University of Tübingen and by the Centre. The school served to bring biologists, physicists and mathematicians together to exchange ideas about the nervous system and brain, and also to introduce young scientists to the field. The program, attended by more than a hundred scientists, was interdisciplinary both in character and participation. The primary support for the school was provided by the Volkswagen Foundation of West Germany. We are particularly indebted to Drs. G. Gambke, M.-L. Zarnitz, and H. Penschuck of the Foundation for their interest in and help with the project. The school also received major support from the International Centre for Theoretical Physics in Trieste and its sponsoring agencies, including the use of its exce...

  18. Rethinking the Elementary Science Methods Course: A Case for Content, Pedagogy, and Informal Science Education.

    Science.gov (United States)

    Kelly, Janet

    2000-01-01

    Highlights the importance of preparing prospective elementary science teachers using different methods. Presents the theoretical and practical rationale for developing a constructivist-based elementary science methods course. Discusses the impact that student knowledge and understanding of science and student attitudes have on…