Theoretical Approaches to Coping
Sofia Zyga
2013-01-01
Introduction: Dealing with stress requires conscious effort; it cannot be equated with an individual's spontaneous reactions. The intentional management of stress must not be confused with defense mechanisms. Coping differs from adjustment in that the latter is more general, has a broader meaning and includes diverse ways of facing a difficulty. Aim: An exploration of the definition of the term "coping" and the function of the coping process, as well as its differentiation from other similar concepts, through a literature review. Methodology: Three theoretical approaches to coping are introduced: the psychoanalytic approach, the characteristics (trait) approach, and the Lazarus and Folkman interactive model. Results: The strategic methods of the coping approaches are described, and the article ends with a review of the approaches, including the functioning of the stress-coping process, the classification of types of coping strategies in stress-inducing situations, and a criticism of the coping approaches. Conclusions: The comparison of coping in different situations is difficult, if not impossible. The coping process is a slow process, so an individual may select one method of coping under one set of circumstances and a different strategy at some other time. Such selection of strategies takes place as the situation changes.
Theoretical approaches to elections defining
Natalya V. Lebedeva
2011-01-01
Theoretical approaches to defining elections develop the nature, essence and content of elections, and help to determine their place and role as one of the major national-law institutions in a democratic system.
THEORETICAL APPROACHES IN INTERNATIONAL RELATIONS ...
understanding of the social dynamics of the world we live in. Theoretical approaches are also instrumental in shaping perceptions of what matters in international politics ... This implies that, as a technique of last resort, the military instrument.
Theoretical Approaches to Political Communication.
Chesebro, James W.
Political communication appears to be emerging as a theoretical and methodological academic area of research within both speech-communication and political science. Five complementary approaches to political science (Machiavellian, iconic, ritualistic, confirmational, and dramatistic) may be viewed as a series of variations which emphasize the…
Theoretical approaches to superionic conductivity
C S Sunandana; P Senthil Kumar
2004-02-01
Recent theoretical approaches to the understanding of superionic conductivity in polycrystalline, glassy and polymeric materials are briefly reviewed. Phase transitions to the superionic conducting state in the AgI family are apparently triggered by cluster formation and strong mobile-ion interaction within the clusters. Anomalous conductivity and related physical properties are explained in the cluster-induced distortion model. Ionic composites such as AgX:Al2O3 (X = Cl, Br and I) involve conducting and non-conducting phases and the all-important interface between the two, whose space charge enhances the conductivity and also triggers phase transitions to exotic polymorphic phases, for which the mechanisms are yet to be explored. Ion hopping dynamics controls the conductivity of superionic glasses. Mode coupling and jump relaxation theories account for the non-Debye relaxation observed in the a.c. conductivity of these glasses. The theory of conductivity in polymer electrolytes, still in its infancy, involves their complex structure and glass transition behaviour. Preparative and thermal history, composition and crystallinity control the ionic conductivity. New approaches to the synthesis of optimal polymer electrolytes, such as rubbery electrolytes, crystalline polymers and nanocomposites, must be considered before a comprehensive theoretical understanding can be achieved.
Group theoretical approach to entanglement
Korbicz, J K
2006-01-01
We examine the potential relevance of methods of harmonic analysis for the study of quantum entanglement. By changing the mathematical object representing quantum states, we reformulate the separability problem in group-theoretical terms. We also translate the positivity of partial transpose (PPT) criterion and one of the necessary-and-sufficient criteria for pure states into the group-theoretical language. The formal relation of our formalism to local hidden variable models is briefly examined. We also remark on the connection between entanglement and certain non-commutativity.
Machine learning a theoretical approach
Natarajan, Balas K
2014-01-01
This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation
A Set Theoretical Approach to Maturity Models
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...
Managerial Leadership - A Theoretical Approach
Felicia Cornelia MACARIE
2007-06-01
The paper endeavors to offer an overview of the major theories on leadership and the way in which it influences the management of contemporary organizations. Numerous scholars highlight that there are significant overlaps between the concepts of management and leadership. This is the reason why the first section of the paper focuses on providing an extensive overview of the literature regarding the meaning of the two aforementioned concepts. The second section addresses in more depth the concepts of leadership and managerial leadership and focuses on the ideal profile of the leader. The last section of the paper critically discusses various types of leadership and, more specifically, modern approaches to the concept and practices of leadership.
Dramaturgical and Music-Theoretical Approaches to Improvisation Pedagogy
Huovinen, Erkki; Tenkanen, Atte; Kuusinen, Vesa-Pekka
2011-01-01
The aim of this article is to assess the relative merits of two approaches to teaching musical improvisation: a music-theoretical approach, focusing on chords and scales, and a "dramaturgical" one, emphasizing questions of balance, variation and tension. Adult students of music pedagogy, with limited previous experience in improvisation,…
Game theoretic approaches for spectrum redistribution
Wu, Fan
2014-01-01
This brief examines issues of spectrum allocation for the limited resources of radio spectrum. It uses a game-theoretic perspective, in which the nodes in the wireless network are rational and always pursue their own objectives. It provides a systematic study of the approaches that can guarantee the system's convergence at an equilibrium state, in which the system performance is optimal or sub-optimal. The author provides a short tutorial on game theory, explains game-theoretic channel allocation in clique and in multi-hop wireless networks and explores challenges in designing game-theoretic m
New Theoretical Approach Integrated Education and Technology
Ding, Gang
2010-01-01
The paper focuses on exploring new theoretical approach in education with development of online learning technology, from e-learning to u-learning and virtual reality technology, and points out possibilities such as constructing a new teaching ecological system, ubiquitous educational awareness with ubiquitous technology, and changing the…
Information theoretic approach for accounting classification
Ribeiro, E M S
2014-01-01
In this paper we consider an information theoretic approach to the accounting classification process. We propose a matrix formalism and an algorithm for the calculation of information theoretic measures associated with accounting classification. The formalism may be useful for further generalizations and computer-based implementation. Information theoretic measures, mutual information and symmetric uncertainty, were evaluated for daily transactions recorded in the chart of accounts of a small company during two years. Variation in the information measures due to the aggregation of data in the process of accounting classification is observed. In particular, the symmetric uncertainty seems to be a useful parameter for comparing companies over time or in different sectors, or different accounting choices and standards.
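The two measures named in this abstract are standard information-theoretic quantities and can be sketched in a few lines. The sketch below is illustrative only: the account and group labels are hypothetical, not data from the paper, and the symmetric uncertainty is computed with the usual definition SU(X,Y) = 2·I(X;Y)/(H(X)+H(Y)).

```python
# Sketch: mutual information and symmetric uncertainty between two
# categorical variables, as used for accounting classification above.
# The example transactions are hypothetical, not from the paper.
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (bits) of a list of categorical labels."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def symmetric_uncertainty(xs, ys):
    """SU(X,Y) = 2 * I(X;Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(xs), entropy(ys)
    return 2 * mutual_information(xs, ys) / (hx + hy) if hx + hy else 0.0

# Hypothetical daily transactions: account vs. aggregated account group
accounts = ["cash", "cash", "sales", "sales", "payroll", "payroll"]
groups   = ["assets", "assets", "income", "income", "expenses", "expenses"]
print(symmetric_uncertainty(accounts, groups))  # prints 1.0: perfect association
```

Because SU is normalized to [0, 1], it can compare classifications of different granularity, which is presumably why the abstract singles it out for comparing companies or accounting standards.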
Reasoning about Action: An Argumentation - Theoretic Approach
Foo, N Y; 10.1613/jair.1602
2011-01-01
We present a uniform non-monotonic solution to the problems of reasoning about action on the basis of an argumentation-theoretic approach. Our theory is provably correct relative to a sensible minimisation policy introduced on top of a temporal propositional logic. Sophisticated problem domains can be formalised in our framework. As much attention of researchers in the field has been paid to the traditional and basic problems in reasoning about actions such as the frame, the qualification and the ramification problems, approaches to these problems within our formalisation lie at heart of the expositions presented in this paper.
Theoretical and methodological approaches in discourse analysis.
Stevenson, Chris
2004-10-01
Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.
Courcelle's Theorem - A Game-Theoretic Approach
Kneis, Joachim; Rossmanith, Peter
2011-01-01
Courcelle's Theorem states that every problem definable in Monadic Second-Order logic can be solved in linear time on structures of bounded treewidth, for example, by constructing a tree automaton that recognizes or rejects a tree decomposition of the structure. Existing, optimized software like the MONA tool can be used to build the corresponding tree automata, which for bounded treewidth are of constant size. Unfortunately, the constants involved can become extremely large - every quantifier alternation requires a power set construction for the automaton. Here, the required space can become a problem in practical applications. In this paper, we present a novel, direct approach based on model checking games, which avoids the expensive power set construction. Experiments with an implementation are promising, and we can solve problems on graphs where the automata-theoretic approach fails in practice.
A decision theoretical approach for diffusion promotion
Ding, Fei; Liu, Yun
2009-09-01
In order to maximize cost efficiency from scarce marketing resources, marketers are facing the problem of which group of consumers to target for promotions. We propose to use a decision theoretical approach to model this strategic situation. According to one promotion model that we develop, marketers balance between probabilities of successful persuasion and the expected profits on a diffusion scale, before making their decisions. In the other promotion model, the cost for identifying influence information is considered, and marketers are allowed to ignore individual heterogeneity. We apply the proposed approach to two threshold influence models, evaluate the utility of each promotion action, and provide discussions about the best strategy. Our results show that efforts for targeting influentials or easily influenced people might be redundant under some conditions.
An information theoretic approach to pedigree reconstruction.
Almudevar, Anthony
2016-02-01
Network structure is a dominant feature of many biological systems, both at the cellular level and within natural populations. Advances in genotype and gene expression screening made over the last few decades have permitted the reconstruction of these networks. However, resolution to a single model estimate will generally not be possible, leaving open the question of the appropriate method of formal statistical inference. The nonstandard structure of the problem precludes most traditional statistical methodologies. Alternatively, a Bayesian approach provides a natural methodology for formal inference. Construction of a posterior density on the space of network structures allows formal inference regarding features of network structure using specific marginal posterior distributions. An information theoretic approach to this problem will be described, based on the Minimum Description Length principle. This leads to a Bayesian inference model based on the information content of data rather than on more commonly used probabilistic models. The approach is applied to the problem of pedigree reconstruction based on genotypic data. Using this application, it is shown how the MDL approach is able to provide a truly objective control for model complexity. A two-cohort model is used for a simulation study. The MDL approach is compared to COLONY-2, a well known pedigree reconstruction application. The study highlights the problem of genotyping error modeling. COLONY-2 requires prior error rate estimates, and its accuracy proves to be highly sensitive to these estimates. In contrast, the MDL approach does not require prior error rate estimates, and is able to accurately adjust for genotyping error across the range of models considered. Copyright © 2015 Elsevier Inc. All rights reserved.
Theoretical approaches to social innovation – A critical literature review
Butzin, A.; Davis, A.; Domanski, D.; Dhondt, S.; Howaldt, J.; Kaletka, C.; Kesselring, A.; Kopp, R.; Millard, J.; Oeij, P.; Rehfeld, D.; Schaper-Rinkel, P.; Schwartz, M.; Scoppetta, A.; Wagner-Luptacik, P.; Weber, M.
2014-01-01
The SI-DRIVE report “Theoretical approaches to Social Innovation – A Critical Literature Review” delivers a comprehensive overview on the state of the art of theoretically relevant building blocks for advancing a theoretical understanding of social innovation. It collects different theoretical
Recent Theoretical Approaches to Minimal Artificial Cells
Fabio Mavelli
2014-05-01
Minimal artificial cells (MACs) are self-assembled chemical systems able to mimic the behavior of living cells at a minimal level, i.e. to exhibit self-maintenance, self-reproduction and the capability of evolution. The bottom-up approach to the construction of MACs is mainly based on the encapsulation of chemical reacting systems inside lipid vesicles, i.e. chemical systems enclosed (compartmentalized) by a double-layered lipid membrane. Several researchers are currently interested in synthesizing such simple cellular models for biotechnological purposes or for investigating origin-of-life scenarios. Within this context, the properties of lipid vesicles (e.g., their stability, permeability, growth dynamics, potential to host reactions or undergo division processes…) play a central role, in combination with the dynamics of the encapsulated chemical or biochemical networks. Thus, from a theoretical standpoint, it is very important to develop kinetic equations in order to explore first—and specify later—the conditions that allow the robust implementation of these complex chemically reacting systems, as well as their controlled reproduction. Due to being compartmentalized in small volumes, the population of reacting molecules can be very low in terms of the number of molecules, and therefore their behavior becomes highly affected by stochastic effects, both in the time course of reactions and in the occupancy distribution among the vesicle population. In this short review we report our mathematical approaches to modelling artificial cell systems in this complex scenario, by giving a summary of three recent simulation studies on the topic of primitive cell (protocell) systems.
Titanocene / cyclodextrin supramolecular systems: a theoretical approach
Riviş Adrian
2012-11-01
Background: Recently, various metallocenes were synthesized and analyzed from the point of view of their biological activity (such as antiproliferative properties): ruthenocenes, cobaltoceniums, titanocenes, zirconocenes, vanadocenes, niobocenes, molybdocenes etc. Two main disadvantages of metallocenes are their poor hydrosolubility and hydrolytic instability. These problems could be resolved in two ways: synthetically modifying the structure or finding new formulations with enhanced properties. The aqueous solubility of metallocenes with cytostatic activities could be enhanced by molecular encapsulation in cyclodextrins, and the hydrolytic instability of these compounds could be reduced. Results: This study presents a theoretical approach to the nanoencapsulation of a series of titanocenes with cytotoxic activity in α-, β-, and γ-cyclodextrin. The HyperChem 5.11 package was used for building and molecular modelling of the titanocene and cyclodextrin structures, as well as for titanocene/cyclodextrin complex optimization. For the complex optimization experiments, the titanocene and cyclodextrin structures in minimal-energy conformations were set up at various distances and positions between molecules (molecular mechanics functionality, MM+). The best interaction between titanocene structures and cyclodextrins was obtained in the case of β- and γ-cyclodextrin, with the hydrophobic moieties oriented to the secondary face of the cyclodextrin. The hydrophobicity of the titanocenes (logP) correlates with the titanocene-cyclodextrin interaction parameters, especially with the titanocene-cyclodextrin interaction energy; the compatible geometry and the interaction energy denote that the titanocene/β- and γ-cyclodextrin complexes can be achieved. Valuable quantitative structure-activity relationships (QSARs) were also obtained in the titanocene class by using the same logP as the main parameter for the in vitro cytotoxic activity against He
A theoretical approach to measuring pilot workload
Kantowitz, B. H.
1984-01-01
Theoretical assumptions used by researchers in the area of attention were studied, with emphasis upon errors and inconsistent assumptions made by some researchers. Two GAT experiments, two laboratory studies and one field experiment were conducted.
Child Language Acquisition: Contrasting Theoretical Approaches
Ambridge, Ben; Lieven, Elena V. M.
2011-01-01
Is children's language acquisition based on innate linguistic structures or built from cognitive and communicative skills? This book summarises the major theoretical debates in all of the core domains of child language acquisition research (phonology, word-learning, inflectional morphology, syntax and binding) and includes a complete introduction…
UNCERTAINTY IN NEOCLASSICAL AND KEYNESIAN THEORETICAL APPROACHES: A BEHAVIOURAL PERSPECTIVE
Sinziana BALTATESCU
2015-11-01
The "mainstream" neoclassical assumptions about human economic behavior are currently challenged both by behavioural research on human behaviour and by other theoretical approaches which, in the context of the recent economic and financial crisis, find arguments to reinforce their theoretical statements. The neoclassical "perfect rationality" assumption is the most criticized, prompting the mainstream theoretical approach to revisit its framework in order to re-establish the validity of its economic models. Uncertainty seems, in this context, to be the concept that allows other theoretical approaches to take into consideration a more realistic individual from the psychological perspective. This paper presents a comparison between the neoclassical and the Keynesian approach to uncertainty, considering the behavioural arguments and challenges addressed to the mainstream theory.
Game Theoretic Approaches to Protect Cyberspace
2010-04-20
'Fictitious play (FP)' conservatively considers that the players cannot make perfect observations of each other's previous actions. This work studied the cases where (a) each player is aware of these error probabilities, and (b) neither player knows these error probabilities. Both classical and stochastic FP were considered. On overall game quality, Jansen [44] stated that qualitative assignments can be used to represent quantitative measures of security properties (e.g.
THEORETICAL APPROACHES DEFINE SENSE OF INVESTMENT PROJECT
Yu. A. Burduzha
2010-01-01
The essence of the terms "project" and "investment project" is defined in the article, and the difference between the definitions of the terms "project", "plan" and "program" is determined. The main approaches to the treatment of the definition of the term "investment project" are considered.
Partial discharge transients: The field theoretical approach
McAllister, Iain Wilson; Crichton, George C
1998-01-01
Up until the mid-1980s the theory of partial discharge transients was essentially static. This situation had arisen because of the fixation with the concept of void capacitance and the use of circuit theory to address what is in essence a field problem. Pedersen rejected this approach and instead...
Understanding biomolecular machines: Theoretical and experimental approaches
Goler, Adam Scott
This dissertation concerns the study of two classes of molecular machines from a physical perspective: enzymes and membrane proteins. Though the functions of these classes of proteins are different, they each represent important test-beds from which new understanding can be developed by the application of different techniques. HIV1 Reverse Transcriptase is an enzyme that performs multiple functions, including reverse transcription of RNA into an RNA/DNA duplex, RNA degradation by the RNaseH domain, and synthesis of dsDNA. These functions allow for the incorporation of the retroviral genes into the host genome. Its catalytic cycle requires repeated large-scale conformational changes fundamental to its mechanism. Motivated by experimental work, these motions were studied theoretically by the application of normal mode analysis. It was observed that the lowest-order modes correlate with the largest-amplitude (low-frequency) motions, which are most likely to be catalytically relevant. Comparisons of normal modes obtained via an elastic network model with those calculated from the essential dynamics of a series of all-atom molecular dynamics simulations show the self-consistency between these calculations. That similar conformational motions are seen between independent theoretical methods reinforces the importance of large-scale subdomain motion for the biochemical action of DNA polymerases in general. Moreover, it was observed that the major subunits of HIV1 Reverse Transcriptase interact quasi-harmonically. The 5HT3A Serotonin receptor and P2X1 receptor, by contrast, are trans-membrane proteins that function as ligand gated ion channels. Such proteins feature a central pore, which allows for the transit of ions necessary for cellular function across a membrane. The pore is opened by the ligation of binding sites on the extracellular portion of different protein subunits. In an attempt to resolve the individual subunits of these membrane proteins beyond the diffraction
Unification of theoretical approaches for epidemic spreading on complex networks
Wang, Wei; Tang, Ming; Stanley, H. Eugene; Braunstein, Lidia A.
2017-03-01
Models of epidemic spreading on complex networks have attracted great attention among researchers in physics, mathematics, and epidemiology due to their success in predicting and controlling epidemic spreading in real-world scenarios. To understand the interplay between epidemic spreading and the topology of a contact network, several outstanding theoretical approaches have been developed. An accurate theoretical approach describing the spreading dynamics must take both the network topology and dynamical correlations into consideration, at the expense of increasing the complexity of the equations. In this short survey we unify the most widely used theoretical approaches for epidemic spreading on complex networks in terms of increasing complexity, including the mean-field, heterogeneous mean-field, quenched mean-field, dynamical message-passing, link percolation, and pairwise approximation approaches. We build connections among these approaches to provide new insights into developing an accurate theoretical approach to spreading dynamics on complex networks.
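The simplest of the approaches listed in this abstract, the homogeneous mean-field approximation, can be sketched concretely. For SIS dynamics with recovery rate μ, infection rate β and mean degree k, it reads dρ/dt = -μρ + βkρ(1-ρ), with steady state ρ* = 1 - μ/(βk) above the epidemic threshold βk/μ > 1. The parameter values below are illustrative, not taken from the survey.

```python
# Minimal sketch of the homogeneous mean-field SIS equation:
#   d rho / dt = -mu * rho + beta * k * rho * (1 - rho)
# integrated with forward Euler. Parameters are illustrative.
def sis_mean_field(rho0, beta, mu, k, dt=0.01, steps=10_000):
    rho = rho0
    for _ in range(steps):
        rho += dt * (-mu * rho + beta * k * rho * (1.0 - rho))
    return rho

# Above the threshold beta*k/mu > 1 the prevalence settles at
# rho* = 1 - mu/(beta*k); below it the epidemic dies out.
rho_ss = sis_mean_field(rho0=0.01, beta=0.2, mu=0.5, k=4)
print(round(rho_ss, 3))  # prints 0.375, i.e. 1 - 0.5/(0.2*4)
```

The more refined approaches the survey unifies (heterogeneous mean-field, quenched mean-field, message passing, pairwise) replace this single scalar ODE with one equation per degree class, per node, or per edge, which is the "increasing complexity" the abstract refers to.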
A Field Theoretic Approach to Roughness Corrections
Wu, Hua Yao
2011-01-01
We develop a systematic field theoretic description of the roughness correction to the Casimir free energy of parallel plates. Roughness is modeled by specifying a generating functional for correlation functions of the height profile, the two-point correlation function being characterized by the variance, σ², and correlation length, ℓ, of the profile. We obtain the partition function of a massless scalar quantum field interacting with the height profile of the surface via a δ-function potential. The partition function of this model is also given by a holographic reduction to three coupled scalar fields on a two-dimensional plane. The original three-dimensional space with a parallel plate at separation 'a' is encoded in the non-local propagators of the surface fields on its boundary. Feynman rules for this equivalent 2+1-dimensional model are derived and its counter terms constructed. The two-loop contribution to the free energy of this model gives the leading roughness correction. The absolute ...
THE NETWORKS IN TOURISM: A THEORETICAL APPROACH
Maria TĂTĂRUȘANU
2016-12-01
The economic world in which tourism companies act today is in a continuous process of change. The most important factor in these changes is the globalization of their environment, in its economic, social, natural and cultural aspects. Tourism companies can benefit from the opportunities brought by globalization, but could also be threatened by the new context. How could companies react to these changes in order to create and maintain a long-term competitive advantage for their business? In the present paper we review the literature on a new business approach of tourism companies: networks - a result and/or a reason for exploiting the opportunities or, on the contrary, for keeping their actual position on the market. It is a qualitative approach, and the research methods used are analysis, synthesis and abstraction, which are considered the most appropriate to achieve the objective of the paper.
A graph theoretical approach to data fusion
Žurauskienė, Justina; Kirk, Paul D.W.; Stumpf, Michael P.H.
2016-01-01
The rapid development of high throughput experimental techniques has resulted in a growing diversity of genomic datasets being produced and requiring analysis. Therefore, it is increasingly being recognized that we can gain deeper understanding about underlying biology by combining the insights obtained from multiple, diverse datasets. Thus we propose a novel scalable computational approach to unsupervised data fusion. Our technique exploits network representations of the data to identify similarities among the datasets. We may work within the Bayesian formalism, using Bayesian nonparametric approaches to model each dataset; or (for fast, approximate, and massive scale data fusion) can naturally switch to more heuristic modeling techniques. An advantage of the proposed approach is that each dataset can initially be modeled independently (in parallel), before applying a fast post-processing step to perform data integration. This allows us to incorporate new experimental data in an online fashion, without having to rerun all of the analysis. We first demonstrate the applicability of our tool on artificial data, and then on examples from the literature, which include yeast cell cycle, breast cancer and sporadic inclusion body myositis datasets. PMID:26992203
New theoretical approaches to black holes
Gourgoulhon, Eric
2008-01-01
Quite recently, some new mathematical approaches to black holes have appeared in the literature. They do not rely on the classical concept of event horizon -- which is very global, but on the local concept of hypersurfaces foliated by trapped surfaces. After a brief introduction to these new horizons, we focus on a viscous fluid analogy that can be developed to describe their dynamics, in a fashion similar to the membrane paradigm introduced for event horizons in the seventies, but with a significant change of sign of the bulk viscosity.
The dynamics of alliances : a game theoretical approach
Ridder, A. de
2007-01-01
In this dissertation, Annelies de Ridder presents a game theoretical approach to strategic alliances. More specifically, the dynamics of and within alliances have been studied. To do so, four new models have been developed in the game theoretical tradition. Both coalition theory and strategic game t
MEDICAL BRAIN DRAIN - A THEORETICAL APPROACH
Boncea Irina
2013-07-01
Medical brain drain is defined as the migration of health personnel from developing countries to developed countries, and between industrialized nations, in search of better opportunities. This phenomenon has become a growing global concern due to its impact on both the donor and the destination countries. This article aims to present the main theoretical contributions from 1950 until today and the historical evolution, in an attempt to correlate the particular case of medical brain drain with the theory and evolution of brain drain in general. This article raises questions and offers answers, identifies the main issues and looks for possible solutions in order to reduce the emigration of medical doctors. Factors of influence include push factors (low income, poor working conditions, the absence of job openings and social recognition, an oppressive political climate) and pull factors (better remuneration and working conditions, prospects for career development, job satisfaction, security). Developing countries are confronted with the loss of their most valuable intellectuals, and of the investment in their education, to the benefit of developed nations. An ethical debate arises as the disparities between countries increase, with industrialized nations filling the gaps in their health systems with professionals from countries already facing shortages. However, recent literature emphasizes the possibility of a "beneficial brain drain" through the education incentives offered by emigration prospects. Other sources of "brain gain" for the donor country are remittances, scientific networks and return migration. Measures to stem the medical brain drain involve common effort and collaboration between developing and developed countries and international organizations. Measures adopted by donor countries include higher salaries, better working conditions, security and career opportunities, as well as incentives to stimulate return migration. Destination
Management and human values in Nigeria: A theoretical approach ...
... evolving ethical and human value-based practices as a form of competitive advantage ... has consistently shifted towards value-based models of growth in the workplace.
Social representations: a theoretical approach in health
Isaiane Santos Bittencourt
2011-03-01
Full Text Available Objective: To present the theory of social representations, situating its epistemology and introducing the basic concepts of its approach as a structural unit of knowledge for health studies. Justification: The use of this theory stems from the need to understand social events under the lens of the meanings constructed by the community. Data Synthesis: This was a descriptive literature review, which used as its source of data collection the classical authors of social representations, supported by articles from an electronic search of the Virtual Health Library (VHL). The definition and discussion of the collected data made it possible to introduce two themes, concerning the history and epistemology of representations and the structural approach of representations in health studies. Conclusion: This review highlighted the importance of locating the objects of study with regard to contextual issues of individual and collective histories, valuing the plurality of relations, in order to come closer to the reality that is represented by the subjects.
An information theoretic approach for privacy metrics
Michele Bezzi
2010-12-01
Full Text Available Organizations often need to release microdata without revealing sensitive information. To this end, data are anonymized and, to assess the quality of the process, various privacy metrics have been proposed, such as k-anonymity, l-diversity, and t-closeness. These metrics are able to capture different aspects of the disclosure risk, imposing minimal requirements on the association of an individual with the sensitive attributes. If we want to combine them in an optimization problem, we need a common framework able to express all these privacy conditions. Previous studies proposed the notion of mutual information to measure the different kinds of disclosure risk and the utility but, since mutual information is an average quantity, it cannot completely express these conditions on single records. We introduce here the notion of one-symbol information (i.e., the contribution to mutual information by a single record), which allows us to express and compare the disclosure risk metrics. In addition, we obtain a relation between the risk values t and l, which can be used for parameter setting. We also show, by numerical experiments, how l-diversity and t-closeness can be represented in terms of two different, but equally acceptable, conditions on the information gain.
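The one-symbol information described here is the per-record contribution to the mutual information between a quasi-identifier and a sensitive attribute. A minimal sketch of that quantity (the helper names and toy record format are our illustration, not the paper's notation):

```python
import math
from collections import Counter

def one_symbol_information(records, qid, sens):
    """Per-value contribution to I(QID; S): i(q) = sum_s p(s|q) * log2(p(s|q) / p(s))."""
    n = len(records)
    p_q = Counter(r[qid] for r in records)
    p_s = Counter(r[sens] for r in records)
    p_qs = Counter((r[qid], r[sens]) for r in records)
    contrib = {}
    for q, nq in p_q.items():
        total = 0.0
        for s, ns in p_s.items():
            n_qs = p_qs.get((q, s), 0)
            if n_qs == 0:
                continue
            p_s_given_q = n_qs / nq
            total += p_s_given_q * math.log2(p_s_given_q / (ns / n))
        contrib[q] = total
    return contrib

def mutual_information(records, qid, sens):
    """Average of the one-symbol contributions, weighted by p(q)."""
    n = len(records)
    contrib = one_symbol_information(records, qid, sens)
    p_q = Counter(r[qid] for r in records)
    return sum((nq / n) * contrib[q] for q, nq in p_q.items())
```

A group whose sensitive values are fully determined by the quasi-identifier gets a large one-symbol contribution (high disclosure risk), while an independent attribute contributes nothing, even though both tables can have the same average utility.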
Mobility versus terrain: a game theoretic approach
Bednarz, David; Muench, Paul
2016-05-01
Mobility and terrain are two sides of the same coin. You cannot describe mobility unless you describe the terrain. For example, if my world is trench warfare, the tank may be the ideal vehicle. If my world is urban warfare, clearing buildings and such, the tank may not be an ideal vehicle; perhaps an anthropomorphic robot would be better. We seek a general framework for mobility that captures the relative value of different mobility strategies. Game theory is positively the right way to analyze the interactions of rational players who behave strategically. In this paper, we will describe the interactions between a mobility player, who is trying to make it from point A to point B with one chance to refuel, and a terrain player, who is trying to minimize the probability of success by placing an obstacle somewhere along the path from A to B. In previous work [1], we used Monte Carlo methods to analyze this mobility game and found optimal strategies for a discretized version of the game. Here we show the relationship of this game to a classic game of timing [2], and use solution methods from that literature to solve for optimal strategies in a continuous version of this mobility game.
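The mobility-versus-terrain interaction can be illustrated with the smallest possible case: a 2x2 zero-sum game whose entries are the mobility player's success probabilities. The payoff numbers and the closed-form mixed-strategy solution below are a generic textbook sketch, not the discretized game analyzed in the paper:

```python
def solve_2x2_zero_sum(m):
    """Value and row player's optimal mix for a 2x2 zero-sum game.
    m[i][j] = payoff to the row (mobility) player when the row player
    picks strategy i and the column (terrain) player picks strategy j."""
    (a, b), (c, d) = m
    # Check for a pure-strategy saddle point first.
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:
        return maximin, None  # pure strategies are optimal
    # Otherwise the standard 2x2 mixed-strategy formulas apply.
    denom = a - b - c + d
    p = (d - c) / denom              # probability of row strategy 0
    value = (a * d - b * c) / denom  # game value
    return value, p

# E.g. rows = refuel early/late, columns = obstacle placed early/late,
# entries = probability the mobility player reaches B.
value, p = solve_2x2_zero_sum([[0.2, 0.8], [0.9, 0.3]])
```

With these illustrative payoffs neither side has a pure best choice, so both players randomize, and the value of the game is the mobility player's guaranteed success probability.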
A graph theoretic approach to graded identities
Haile, Darrell
2011-01-01
We consider the algebra M_k(C) of k-by-k matrices over the complex numbers and view it as a crossed product with a group G of order k, by embedding G in the symmetric group S_k via the regular representation and embedding S_k in M_k(C) in the usual way. This induces a natural G-grading on M_k(C) which we call a crossed product grading. This grading is the so-called elementary grading defined by any k-tuple (g_1, g_2, ..., g_k) of distinct elements g_i in G. We study the graded polynomial identities for M_k(C) equipped with a crossed product grading. To each multilinear monomial in the free graded algebra we associate a directed labeled graph. This approach allows us to give new proofs of known results of Bahturin and Drensky on the generators of the T-ideal of identities and of the Amitsur-Levitsky Theorem. Our most substantial new result is the determination of the asymptotic formula for the G-graded codimension of M_k(C).
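For a cyclic group, the elementary grading described here is easy to write down explicitly: with G = Z_k, the matrix unit E_ij sits in degree (j - i) mod k, and degrees add under matrix multiplication since E_ij E_jl = E_il. A small sanity check of that degree rule (our toy illustration for k = 3, not the paper's graph construction):

```python
# Elementary Z_3-grading on 3-by-3 matrix units:
# deg(E_ij) = (j - i) mod 3, additively written.
k = 3

def deg(i, j):
    """Degree of the matrix unit E_ij in the elementary Z_k-grading."""
    return (j - i) % k

# The grading respects multiplication: E_ij E_jl = E_il,
# and deg(E_ij) + deg(E_jl) = deg(E_il) in Z_k.
for i in range(k):
    for j in range(k):
        for l in range(k):
            assert (deg(i, j) + deg(j, l)) % k == deg(i, l)
```

A multilinear graded monomial then corresponds to a walk through these degrees, which is what motivates encoding monomials as directed labeled graphs.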
Theoretical approaches to lightness and perception.
Gilchrist, Alan
2015-01-01
Evidence for and against these approaches is reviewed.
Preservation of Newspapers: Theoretical Approaches and Practical Achievements
Hasenay, Damir; Krtalic, Maja
2010-01-01
The preservation of newspapers is the main topic of this paper. A theoretical overview of newspaper preservation is given, with an emphasis on the importance of a systematic and comprehensive approach. Efficient newspaper preservation implies understanding the meaning of preservation in general, as well as understanding specific approaches,…
Postmodern Implications for Theoretical Integration of Counseling Approaches.
Hansen, James T.
2002-01-01
Theoretical integration refers to the conceptual unification of diverse counseling approaches. Contends that the general failure of integrative attempts is a by-product of the modernistic epistemic context in which the systems were considered and proposes an examination of common narrative features of counseling approaches in a postmodern…
One-dimensional barcode reading: an information theoretic approach.
Houni, Karim; Sawaya, Wadih; Delignon, Yves
2008-03-10
In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and most pervasive solution for new uses, especially within mobile commerce, bringing new life to this long-lived technology. From a communication theory point of view, a barcode is a singular coding scheme based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
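The average mutual information criterion can be illustrated on the simplest channel model: if blur and noise are crudely summarized as a binary symmetric channel with crossover probability p per barcode module, the information surviving per symbol is I = 1 - H(p). This reduction is our simplification for illustration; the paper models the full camera acquisition chain:

```python
import math

def binary_entropy(p):
    """H(p) in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(p):
    """I(X;Y) for a binary symmetric channel with uniform input: 1 - H(p)."""
    return 1.0 - binary_entropy(p)
```

As defocus grows, p rises toward 0.5 and the mutual information falls to zero, which is the intuition behind defining a theoretical depth of field as the camera range over which the channel still carries enough information for reliable decoding.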
BEHAVIORAL INPUTS TO THE THEORETICAL APPROACH OF THE ECONOMIC CRISIS
Sinziana BALTATESCU
2015-09-01
Full Text Available The current economic and financial crisis has made room for theoretical debates to reemerge. Economic reality has challenged the mainstream neoclassical approach, giving the Austrian School, Post Keynesianism and Institutionalism the opportunity to bring forward theories that seem to better explain the economic crisis and thus leave space for more efficient economic policies. In this context, the main assumptions of the mainstream theoretical approach are challenged and reevaluated, and behavioral economics is one of the main challengers. Without yet having developed into an integrated school of thought, behavioral economics brings new elements into the framework of economic thinking. How the main theoretical approaches are integrating these new elements, and whether this process will narrow the theory or enrich it to be more comprehensive, are questions this paper tries to answer, or at least to leave room for an answer.
Campion, Thomas R; Blau, Vanessa L; Brown, Scott W; Izcovich, Daniel; Cole, Curtis L
2014-01-01
Clinical research management systems (CRMSs) can facilitate research billing compliance and clinician awareness of study activities when integrated with practice management and electronic health record systems. However, adoption of CRMSs remains low, and optimal approaches to implementation are unknown. This case report describes one institution's successful approach to organization, technology, and workflow for CRMS implementation following previous failures. Critical factors for CRMS success included organizational commitment to clinical research, a dedicated research information technology unit, integration of research data across disparate systems, and centralized system usage workflows. In contrast, previous failed approaches at the institution lacked a mandate and mechanism for change, received support as a business rather than research activity, maintained data in separate systems, and relied on inconsistent distributed system usage workflows. To our knowledge, this case report is the first to describe CRMS implementation success and failures, which can assist practitioners and academic evaluators.
A Field-Theoretic Approach to the Wiener Sausage
Nekovar, S.; Pruessner, G.
2016-05-01
The Wiener Sausage, the volume traced out by a sphere attached to a Brownian particle, is a classical problem in statistics and mathematical physics. Initially motivated by a range of field-theoretic, technical questions, we present a single-loop renormalised perturbation theory of a stochastic process closely related to the Wiener Sausage, which, however, proves to be exact for the exponents and some amplitudes. The field-theoretic approach is particularly elegant and very enjoyable to see at work on such a classic problem. While we recover a number of known, classical results, the field-theoretic techniques deployed provide a particularly versatile framework, which allows easy calculation with different boundary conditions, even of higher moments and more complicated correlation functions. At the same time, we provide a highly instructive, non-trivial example of some of the technical particularities of the field-theoretic description of stochastic processes, such as excluded volume, lack of translational invariance and immobile particles. The aim of the present work is not to improve upon the well-established results for the Wiener Sausage, but to provide a field-theoretic approach to it, in order to gain a better understanding of the field-theoretic obstacles to overcome.
A System Theoretical Inspired Approach to Knowledge Construction
Mathiasen, Helle
2008-01-01
…student's knowledge construction, in the light of operative constructivism, inspired by the German sociologist N. Luhmann's system-theoretical approach to epistemology. Taking observations as operations based on distinction and indication (selection), contingency becomes a fundamental condition in learning. In the light of operative constructivism this is a paradox and a challenge for the educational system.
Theoretical approaches to the glass transition in simple liquids
Chandan Dasgupta
2005-05-01
Theoretical approaches to the development of an understanding of the behaviour of simple supercooled liquids near the structural glass transition are reviewed, and our work on this problem, based on the density functional theory of freezing and replicated liquid state theory, is summarized in this context. A few directions for further work on this problem are suggested.
Tourism Competitiveness and Destination Branding - A Theoretical Approach
Morar Doriana; Cotîrlea Denisa Adriana
2012-01-01
The present article was written in order to provide an overview of the theoretical approaches to competitiveness and differentiation in the tourism industry. It also emphasizes the importance of competitive advantages in destination branding, their connection, and their influence on the size of tourist flows in different destinations.
Scientific-theoretical research approach to practical theology in ...
2017-08-18
In this article, I present a critical literature study of the theoretical approach of practical theologians in South ...
Theoretical Approach to Gender Differences in Conversational Style
Zhou Ying
2014-01-01
This paper points out the negative aspects of the male dominance hypothesis, which is the framework of prior research on sex differences. Proceeding from the theoretical bases of sociolinguistics, this paper attempts to propose a communicative approach by which gender differences in conversational style can be investigated completely.
Theoretical approaches to low energy $\\bar{K}N$ interactions
Cieply, Ales
2016-01-01
We provide a direct comparison of modern theoretical approaches based on the SU(3) chiral dynamics and describing the low energy $\\bar{K}N$ data. The model predictions for the $\\bar{K}N$ amplitudes and pole content of the models are discussed.
A queer-theoretical approach to community health psychology.
Easpaig, Bróna R Nic Giolla; Fryer, David M; Linn, Seònaid E; Humphrey, Rhianna H
2014-01-01
Queer-theoretical resources offer ways of productively rethinking how central concepts such as 'person-context', 'identity' and 'difference' may be understood for community health psychologists. This would require going beyond consideration of the problems with which queer theory is popularly associated to cautiously engage with the aspects of this work relevant to the promotion of collective practice and engaging with processes of marginalisation. In this article, we will draw upon and illustrate the queer-theoretical concepts of 'performativity' and 'cultural intelligibility' before moving towards a preliminary mapping of what a queer-informed approach to community health psychology might involve.
A theoretical and methodological approach of leadership in organizations
María Laura Lupano Perugini
2015-09-01
Full Text Available The present work aims to offer a theoretical and methodological approximation to the phenomenon of leadership. Because this phenomenon is understood as a complex and multidetermined construct, we approach the different theoretical schools which have tried to explain it. We also attempt to answer the question of whether the construct can be evaluated. To answer this question we present the different methodologies employed in scientific research, and the possible advantages and limitations of their application to the evaluation of leadership.
Magariños, Eduardo; Solioz, Germán; Cermesoni, Gabriel; Koretzky, Martín; Carnevalini, Mariana; González, Daniel
2013-01-01
The percutaneous puncture of the radial artery for catheterization procedures has gained acceptance lately. This was a consequence of achieving results similar to the femoral approach, with the benefits of a lower rate of complications and increased comfort for patients after the procedure. Recently it has gained additional impulse from the better prognosis obtained in acute coronary syndromes. In this trial we evaluated whether the feasibility, results and advantages of the percutaneous radial approach to catheterization procedures persist when it is used in patients who have had a previous brachial artery cutdown. Out of a total of 1356 percutaneous radial accesses, 53 were in patients with a previous brachial artery cutdown. Through this access 71 catheterization procedures were performed, achieving access success in 96.2% (51/53) of the punctures. Once access success was obtained, 93.6% (44/47) of the diagnostic procedures and 100% (24/24) of the therapeutic procedures were successful. During hospitalization, no major adverse cardiac events occurred in this group of patients, and there was a 1.4% (1/71) rate of minor events. At seven days of follow-up, no new complications were recorded. Although this is a small group, we believe that it is enough to show that percutaneous punctures of the radial artery to perform catheterization procedures in patients with previous brachial artery cutdown are feasible, allowing high access and procedure success rates with a low frequency of complications.
A theoretical approach to artificial intelligence systems in medicine.
Spyropoulos, B; Papagounos, G
1995-10-01
The various theoretical models of disease, the nosology accepted by the medical community and the prevalent logic of diagnosis determine both the medical approach and the development of the relevant technology, including the structure and function of the A.I. systems involved. A.I. systems in medicine, in addition to the specific parameters which enable them to reach a diagnostic and/or therapeutic proposal, implicitly entail theoretical assumptions and socio-cultural attitudes which prejudice the orientation and the final outcome of the procedure. The various models (causal, probabilistic, case-based, etc.) are critically examined and their ethical and methodological limitations are brought to light. The lack of a self-consistent theoretical framework in medicine, the multi-faceted character of the human organism and the non-explicit nature of the theoretical assumptions involved in A.I. systems restrict them to the role of decision supporting "instruments" rather than decision making "devices". This supporting role, and especially the important function which A.I. systems should have in the structure, the methods and the content of medical education, underscore the need for further research into the theoretical aspects and the actual development of such systems.
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
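The AIC recipe described here is short enough to sketch: for least-squares fits with Gaussian errors, AIC reduces (up to an additive constant) to n·ln(RSS/n) + 2k, and the candidate model with the smallest value is preferred. The data and the two candidate models below are our toy example, not from the book:

```python
import math

def aic(rss, n, k):
    """Gaussian least-squares AIC up to a constant: n*ln(RSS/n) + 2k.
    k counts all estimated parameters, including the error variance."""
    return n * math.log(rss / n) + 2 * k

def rss_constant(ys):
    """Residual sum of squares for the intercept-only model."""
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def rss_linear(xs, ys):
    """Residual sum of squares for an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Data with a clear linear trend plus small deterministic "noise".
xs = list(range(10))
ys = [2.0 * x + 0.1 * (-1) ** x for x in xs]
aic_const = aic(rss_constant(ys), len(ys), k=2)  # mean + variance
aic_lin = aic(rss_linear(xs, ys), len(ys), k=3)  # slope, intercept, variance
```

The linear model pays a 2-unit penalty for its extra parameter but reduces RSS enough to win decisively; with no real trend in the data, the penalty would instead favor the simpler model.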
A Theoretical Approach for Estimating Fracture Toughness of Ductile Metals
Y.T. He; F. Li; G.Q. Zhang; L.J. Ernst; X.J. FU
2004-01-01
Fracture toughness is very important when applying Damage Tolerance Design and Assessment Techniques. The traditional testing approach for obtaining fracture toughness values is costly and time consuming. In order to estimate the fracture toughness of ductile metals, fracture mechanics theory, materials plastic deformation theory and materials constitutive relationships are employed here. A series of formulae and a theoretical approach are presented to calculate fracture toughness values of different materials under plane stress and plane strain conditions. Compared with test results, the evaluated values show good agreement.
Interactions between DNA purinic bases and amodiaquine: A theoretical approach
Valdemar Lacerda Júnior
2010-06-01
Full Text Available We study theoretically the formation of amodiaquine-adenine and amodiaquine-guanine adducts using Density Functional Theory (B3LYP) with the 6-31G(d) basis set for the geometry optimizations and 6-31+G(d,p) for the analysis of the global indexes: electrophilicity (ω), electronic chemical potential (μ), hardness (η) and softness (S), based on the Frontier Molecular Orbital (FMO) theory. Local softness for nucleophilic attack (s_k+) at sites on guanine was evaluated using the Fukui function (f_k). We also evaluated the Electrostatic Potential (EP) values of guanine using the MSK charge scheme. The theoretical calculations demonstrated that amodiaquine has a greater electronic affinity for guanine, with irreversible formation of the amodiaquine-guanine adduct, as reported in a previous experimental work.
An attempt of classification of theoretical approaches to national identity
Milošević-Đorđević Jasna S.
2003-01-01
Full Text Available Complex social concepts are inevitably defined in different ways and approached from the perspective of different scientific disciplines; it is therefore difficult to define them precisely without overlap in meaning with other similar concepts. This paper attempts a theoretical classification of national identity and differentiates that concept from other related concepts (race, ethnic group, nation, national background, authoritarianism, patriarchy). Theoretical assessments are classified into two groups: those that deal with the nature of national identity, and those that state one or more dimensions of national identity crucial for its determination. In contrast to the primordialist concept of national identity, which describes it as a fundamental, deeply rooted human feature, numerous contemporary theoretical approaches (instrumentalist, constructivist, functionalist) emphasize the changeable, fluid, instrumental function of national identity. Fundamental determinants of national identity are: language, culture (music, traditional myths), state symbols (territory, citizenship), self-categorization, religion, and a set of personal characteristics and values.
Information Ergonomics A theoretical approach and practical experience in transportation
Sandl, Peter
2012-01-01
The variety and increasing availability of hypermedia information systems, which are used in stationary applications like operators' consoles as well as in mobile systems (e.g. driver information and navigation systems in automobiles), form a foundation for the mediatization of society. From the human engineering point of view, this development, and the ensuing increased importance of information systems for economic and private needs, require careful deliberation on the derivation and application of ergonomics methods, particularly in the field of information systems. This book consists of two closely intertwined parts. The first, theoretical part defines the concept of an information system, followed by an explanation of action regulation as well as cognitive theories to describe man information system interaction. A comprehensive description of information ergonomics concludes the theoretical approach. In the second, practically oriented part of this book authors from industry as well as from academic institu...
Theoretical Approach to Electroresistance in Ferroelectric Tunnel Junctions
Chang, Sou-Chi; Naeemi, Azad; Nikonov, Dmitri E.; Gruverman, Alexei
2017-02-01
In this paper, a theoretical approach comprising the nonequilibrium Green's function method for electronic transport and the Landau-Khalatnikov equation for electric polarization dynamics is presented to describe polarization-dependent tunneling electroresistance (TER) in ferroelectric tunnel junctions. Using appropriate contact, interface, and ferroelectric parameters, the measured current-voltage characteristic curves in both inorganic (Co/BaTiO3/La0.67Sr0.33MnO3) and organic (Au/PVDF/W) ferroelectric tunnel junctions can be well described by the proposed approach. Furthermore, under this theoretical framework, the controversy of opposite TER signs observed experimentally by different groups in Co/BaTiO3/La0.67Sr0.33MnO3 systems is addressed by considering the interface termination effects using the effective contact ratio defined through the effective screening length and dielectric response at the metal-ferroelectric interfaces. Finally, our approach is extended to investigate the role of a CoOx buffer layer at the Co/BaTiO3 interface in a ferroelectric tunnel memristor. It is shown that in order to have a significant memristor behavior not only the interface oxygen vacancies but also the CoOx layer thickness may vary with the applied bias.
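The Landau-Khalatnikov half of this framework is a simple relaxation equation, dP/dt = -Γ(αP + βP³ - E), which drives the polarization into one of the two wells of the Landau free energy. A dimensionless forward-Euler sketch (the coefficients α = -1, β = 1 and the field values are arbitrary illustration choices, not fitted junction parameters):

```python
def landau_khalatnikov(p0, e_field, alpha=-1.0, beta=1.0, gamma=1.0,
                       dt=0.01, steps=5000):
    """Relax polarization P under dP/dt = -gamma*(alpha*P + beta*P**3 - E)
    by forward Euler integration; returns the final polarization."""
    p = p0
    for _ in range(steps):
        p -= dt * gamma * (alpha * p + beta * p ** 3 - e_field)
    return p

# In these units the coercive field is 2/(3*sqrt(3)) ~ 0.385: a field above
# it switches the polarization from the negative to the positive well,
# while a weaker field leaves it in the negative well.
p_switched = landau_khalatnikov(p0=-0.9, e_field=0.5)
p_unswitched = landau_khalatnikov(p0=-0.9, e_field=0.0)
```

This bistability is what makes the tunneling electroresistance hysteretic: the NEGF transport calculation then evaluates the current for whichever polarization state the Landau-Khalatnikov dynamics has selected.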
SOCIOLOGICAL UNDERSTANDING OF INTERNET: THEORETICAL APPROACHES TO THE NETWORK ANALYSIS
D. E. Dobrinskaya
2016-01-01
Full Text Available Internet studies are carried out by various scientific disciplines and from different research perspectives. Sociological studies of the Internet deal with a new technology, a revolutionary means of mass communication and a social space. There is a set of research difficulties associated with the Internet: first, the high speed and wide spread of Internet technologies' development; second, the collection and filtering of materials for Internet studies; lastly, the development of new conceptual categories able to reflect the impact of the Internet's development on the contemporary world. In this regard, the question of how the category of "network" is used is essential. On the one hand, the network is the basis of the Internet's functioning; on the other, networks underpin almost all social interactions in modern society, which is why such a society is called the network society. Three theoretical network approaches are most relevant to the study of the Internet: network society theory, social network analysis and actor-network theory. Each of these theoretical approaches contributes to the study of the Internet; they shape various images of interactions between human beings in their entirety and dynamics, and provide information about the nature of these interactions.
Theoretical approaches to the diagnosis of altered states of consciousness.
Boly, Melanie; Massimini, Marcello; Tononi, Giulio
2009-01-01
Assessing the level of consciousness of noncommunicative brain-damaged patients is difficult, as one has to make inferences based on the patients' behavior. However, behavioral responses of brain-damaged patients are usually limited not only by their cognitive dysfunctions, but also by their frequent motor impairment. For these reasons, it is essential to resort to para-clinical markers of the level of consciousness. In recent years, a number of studies compared brain activity in comatose and vegetative state patients to that in healthy volunteers, and in other conditions of reduced consciousness such as sleep, anesthesia, or epileptic seizures. Despite the increasing amount of experimental results, no consensus on the brain mechanisms generating consciousness has yet been reached. Here, we discuss the need to combine a theoretical approach with current experimental procedures to obtain a coherent, parsimonious explanation for the loss of consciousness in several different conditions, such as coma, vegetative state, sleep, anesthesia, and epileptic seizures. In our view, without a theoretical account of how conscious experience is generated by the brain, it will remain difficult to understand the mechanisms underlying the generation of consciousness, and to predict reliably its presence or absence in noncommunicative brain-damaged patients. In this context, we review current theoretical approaches to consciousness, and how well they fit with current evidence on the neural correlates of experience. Specifically, we emphasize the principled approach provided by the Integrated Information Theory of Consciousness (IITC). We describe the different conditions where the theory predicts markedly reduced states of consciousness, and discuss several technical and conceptual issues limiting its applicability to measuring the level of consciousness of individual patients. Nevertheless, we argue that some of the predictions of the theory are potentially testable using available
Success Determination by Innovation: A Theoretical Approach in Marketing
Raj Kumar Gautam
2012-10-01
Full Text Available The paper aims to identify the main issues in marketing that need the immediate attention of marketers. The importance of innovation in marketing is highlighted, and the elements of the marketing mix are related to innovative and creative ideas. The study is based on secondary data; various research papers and articles have been studied to develop an innovative approach to marketing. Innovative marketing ideas relating to business lead generation, product, price, distribution, promotion and revenue generation are highlighted in the paper. All the suggestions are theoretical and may have relevance and implications for marketers.
Considering children and health literacy: a theoretical approach.
Borzekowski, Dina L G
2009-11-01
The theoretical approaches of Paulo Freire, Jean Piaget, and Lev Vygotsky frame the consideration of children and health literacy. This article includes a general discussion of literacy from the Freirian perspective. A definition of health literacy is then presented; first, the established meaning is introduced, but then a Freirian extension is proposed. Next, the theories of cognitive development by Piaget and Vygotsky are discussed, and examples related to children's health literacy are given. Finally, there is a discussion of why it is important to encourage and enable health literacy among children and adolescents.
The Knowability Argument and the Syntactic Type-Theoretic Approach
Lucas Rosenblatt
2014-06-01
Full Text Available Some attempts have been made to block the Knowability Paradox and other modal paradoxes by adopting a type-theoretic framework in which knowledge and necessity are regarded as typed predicates. The main problem with this approach is that when these notions are simultaneously treated as predicates, a new kind of paradox appears. I claim that avoiding this paradox either by weakening the Knowability Principle or by introducing types for both predicates is rather messy and unattractive. I also consider the prospect of using the truth predicate to emulate other modal notions. It turns out that this idea works quite well.
Anel A. Kireyeva
2016-01-01
The aim of this research is to develop new theoretical approaches to the formation of IT clusters in order to strengthen the trend of innovative industrialization and the competitiveness of the country. In keeping with the previous literature, this study is motivated by the novelty of the problem of the formation of IT clusters, which can become a driving force of transformation through interaction, improved efficiency and the introduction of advanced technology. In this research,...
Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium
M Weimar
1998-12-10
This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the cost for an M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal, indicating that the current approach is still better than the alternative.
Exploring a type-theoretic approach to accessibility constraint modelling
Pogodalla, Sylvain
2008-01-01
The type-theoretic modelling of DRT proposed by [degroote06] features continuations for managing the context in which a clause is to be interpreted. This approach, while keeping the standard definitions of quantifier scope, translates the accessibility constraints on discourse referents into the semantic recipes themselves. In this paper, we deal with additional rules for these accessibility constraints, in particular for discourse referents introduced by proper nouns, which negation does not block, and for rhetorical relations that structure discourse. We show how this continuation-based approach applies to those accessibility constraints and how the various principles can be managed in parallel.
System-theoretic approach to image interest point detection
Pimenov, Vitaly
2010-01-01
Interest point detection is a common task in various computer vision applications. Although a wide variety of detectors has been developed, the computational efficiency of interest-point-based image analysis remains a problem. This paper proposes a system-theoretic approach to interest point detection. Starting from an analysis of the interdependency between detector and descriptor, it is shown that, given a descriptor, it is possible to introduce a notion of detector redundancy. Furthermore, for each detector it is possible to construct an irredundant and equivalent modification. The modified detector possesses lower computational complexity and is therefore preferable. It is also shown that several known approaches to reducing the computational complexity of image registration can be generalized in terms of the proposed theory.
Ding, Xin; Aimainilezi, Adalaiti; Jin, Yan; Abudula, Wuriguli; Yin, Chenghong
2014-10-01
To explore the appropriate approach of delivery after cesarean section among Uyghur women in primary hospitals in the Xinjiang Uyghur Autonomous Region. A total of 5 154 women delivered in Luopu County People's Hospital, Hetian Prefecture, Xinjiang Uyghur Autonomous Region from January 2011 to December 2012. Among them, 178 Uyghur women had a history of cesarean section. The interval between the previous cesarean section and this delivery varied from 1 year to 17 years. The number of cases attempting vaginal labor and the indications for the previous cesarean section were recorded. The indications for the second cesarean section were analyzed. The gestational weeks at delivery, blood loss within 2 hours after delivery, neonatal birth weight, newborn asphyxia, the rate of postpartum fever (≥ 38 °C) and hospitalization days were compared between the two approaches of delivery. (1) Among the 178 cases, 119 attempted vaginal labor; the rate of attempting vaginal labor was 66.9% (119/178). A total of 113 cases succeeded in vaginal delivery (the vaginal delivery group), with a success rate among those attempting vaginal delivery of 95.0% (113/119) and an overall vaginal delivery rate of 63.5% (113/178). For the 119 women who attempted vaginal delivery, the indications for the previous cesarean sections were as follows: pregnancy complications (68.1%, 81/119), macrosomia (5.0%, 6/119), dystocia (14.3%, 17/119), pregnancies complicated with other diseases (5.0%, 6/119) and cesarean section on maternal request (7.6%, 9/119). (2) Fifteen cases in the vaginal delivery group had postpartum hemorrhage, with an incidence of 13.3% (15/113). The mean total labor time was (507 ± 182) minutes. Six cases attempting vaginal delivery failed and were converted to cesarean section. (3) Fifty-nine cases received a second cesarean section (the cesarean section group). The rate of second cesarean section was 33.1% (59/178). The indications for the second cesarean section were as follows: contracted pelvis (5%, 3
Graph theoretical similarity approach to compare molecular electrostatic potentials.
Marín, Ray M; Aguirre, Nestor F; Daza, Edgar E
2008-01-01
In this work we introduce a graph-theoretical method to compare MEPs which is independent of molecular alignment. It is based on the edit distance of weighted rooted trees, which encode the geometrical and topological information of negative molecular isopotential surfaces. A meaningful chemical classification of a set of 46 molecules with different functional groups was achieved. Structure-activity relationships for the corticosteroid binding globulin (CBG) affinity of 31 steroids obtained by means of hierarchical clustering resulted in a clear partitioning into high, intermediate, and low activity groups, whereas the quantitative structure-activity relationships obtained from a partial least-squares analysis showed cross-validated correlation coefficients comparable to or better than those reported for previous methods based solely on the MEP.
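The core machinery here, an edit distance between weighted rooted trees, can be illustrated with a toy sketch. This is not the authors' actual algorithm; the node weights (standing in, hypothetically, for properties of isopotential surface components) and the cost model are assumptions, and the child alignment is a plain ordered sequence edit distance:

```python
def tree_cost(t):
    """Total deletion/insertion cost of a tree t = (weight, children)."""
    w, children = t
    return abs(w) + sum(tree_cost(c) for c in children)

def tree_dist(a, b):
    """Toy edit distance between two weighted ordered rooted trees."""
    wa, ca = a
    wb, cb = b
    n, m = len(ca), len(cb)
    # Classic sequence-edit DP over the ordered child lists.
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + tree_cost(ca[i - 1])
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + tree_cost(cb[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + tree_cost(ca[i - 1]),        # delete subtree
                D[i][j - 1] + tree_cost(cb[j - 1]),        # insert subtree
                D[i - 1][j - 1] + tree_dist(ca[i - 1], cb[j - 1]),  # match
            )
    return abs(wa - wb) + D[n][m]
```

Identical trees get distance zero, and the distance matrix produced this way can feed any standard hierarchical clustering without requiring molecular alignment.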
On kaonic deuterium. Quantum field theoretic and relativistic covariant approach
Ivanov, A N; Faber, M; Fuhrmann, H; Ivanova, V A; Marton, J; Troitskaya, N I; Zmeskal, J
2004-01-01
We study kaonic deuterium, the bound K^-d state A_{K d}. Within a quantum field theoretic and relativistic covariant approach we derive the energy level displacement of the ground state of kaonic deuterium in terms of the amplitude of K^-d scattering for arbitrary relative momenta. Near threshold our formula reduces to the well-known DGBT formula. The S-wave amplitude of K^-d scattering near threshold is defined by the resonances Lambda(1405), Sigma(1750) and a smooth elastic background, and the inelastic channels K^- d -> NY and K^- d -> NY pion, with Y = Sigma^{+/-}, Sigma^0 and Lambda^0, where the final-state interactions play an important role. The Ericson-Weise formula for the S-wave scattering length of K^-d scattering is derived. The total width of the energy level of the ground state of kaonic deuterium is estimated using the theoretical predictions of the partial widths of the two-body decays A_{Kd} -> NY and experimental data on the rates of the NY-pair production in the reactions K^-d -> NY. We obt...
On kaonic hydrogen. Quantum field theoretic and relativistic covariant approach
Ivanov, A N; Faber, M; Marton, J; Troitskaya, N I; Zmeskal, J
2003-01-01
We study kaonic hydrogen, the bound K^-p state A_(Kp). Within a quantum field theoretic and relativistic covariant approach we derive the energy level displacement of the ground state of kaonic hydrogen in terms of the amplitude of K^-p scattering for arbitrary energies. The amplitude of low-energy K^-p scattering near threshold is defined by the contributions of three resonances Lambda(1405), Lambda(1800) and Sigma^0(1750) and a smooth elastic background. The amplitudes of inelastic channels of low-energy K^-p scattering fit experimental data on the near-threshold behaviour of the cross sections and the experimental data by the DEAR Collaboration. We use the soft-pion technique (leading order in Chiral Perturbation Theory) for the calculation of the partial width of the radiative decay of pionic hydrogen A_(pi p) -> n + gamma and the Panofsky ratio. The theoretical prediction for the Panofsky ratio agrees well with experimental data. We apply the soft-kaon technique (leading order in Chiral Perturbation Theory) to...
On kaonic hydrogen. Quantum field theoretic and relativistic covariant approach
Ivanov, A. N.; Cargnelli, M.; Faber, M.; Marton, J.; Troitskaya, N. I.; Zmeskal, J.
2004-07-01
We study kaonic hydrogen, the bound K^-p state A_{Kp}. Within a quantum field theoretic and relativistic covariant approach we derive the energy level displacement of the ground state of kaonic hydrogen in terms of the amplitude of K^-p scattering for arbitrary relative momenta. The amplitude of low-energy K^-p scattering near threshold is defined by the contributions of three resonances Λ(1405), Λ(1800) and Σ^0(1750) and a smooth elastic background. The amplitudes of inelastic channels of low-energy K^-p scattering fit experimental data on the near-threshold behaviour of the cross-sections and the experimental data by the DEAR Collaboration. We use the soft-pion technique (leading order in Chiral Perturbation Theory) for the calculation of the partial width of the radiative decay of pionic hydrogen A_{π p} to n + γ and the Panofsky ratio. The theoretical prediction for the Panofsky ratio agrees well with experimental data. We apply the soft-kaon technique (leading order in Chiral Perturbation Theory) to the calculation of the partial widths of radiative decays of kaonic hydrogen A_{Kp} to Λ^0 + γ and A_{Kp} to Σ^0 + γ. We show that the contribution of these decays to the width of the energy level of the ground state of kaonic hydrogen is less than 1%.
Investigations on Actuator Dynamics through Theoretical and Finite Element Approach
Somashekhar S. Hiremath
2010-01-01
Full Text Available This paper gives a new approach for modeling the fluid-structure interaction of a servovalve component and actuator. The analyzed valve is a precision flow control valve: a jet pipe electrohydraulic servovalve. The positioning of an actuator depends upon the flow rate from the control ports, which in turn depends on the spool position. Theoretical investigation is made for the no-load and load conditions of an actuator. These are used in the finite element modeling of the actuator. The fluid-structure interaction (FSI) is established between the piston and the fluid cavities at the piston end. The fluid cavities were modeled with special-purpose hydrostatic fluid elements, while the piston is modeled with brick elements. The finite element method is used to simulate the variation of cavity pressure, cavity volume, mass flow rate, and actuator velocity. The finite element analysis is extended to study the system's linearized response to harmonic excitation using direct-solution steady-state dynamics. It was observed from the analysis that the natural frequency of the actuator depends upon the position of the piston in the cylinder, with a close match between the theoretical and simulation results. The effect of bulk modulus is also presented in the paper.
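The position dependence of the natural frequency has a simple lumped-parameter explanation: the trapped oil columns on either side of the piston act as two springs whose stiffness grows as the columns shorten. A minimal sketch of that relation (all numerical values below are assumed for illustration, not taken from the paper):

```python
import math

BETA = 1.4e9   # bulk modulus of hydraulic oil, Pa (assumed)
A    = 1.0e-3  # piston area, m^2 (assumed)
L    = 0.3     # stroke length, m (assumed)
M    = 20.0    # moving mass, kg (assumed)

def natural_frequency(x):
    """Hydraulic natural frequency (Hz) with the piston at position x.

    Each oil column of length l acts as a spring k = beta*A^2/V = beta*A/l;
    the two columns act in parallel on the piston.
    """
    k = BETA * A * (1.0 / x + 1.0 / (L - x))  # combined oil-column stiffness, N/m
    return math.sqrt(k / M) / (2.0 * math.pi)
```

The stiffness, and hence the frequency, is minimal at mid-stroke and rises toward either end of the cylinder, while a larger bulk modulus shifts the whole curve upward, consistent with the trends the abstract reports.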
Maier-Saupe nematogenic fluid: field theoretical approach
M. Holovko
2011-09-01
Full Text Available We adopt a field theoretical approach to study the structure and thermodynamics of a homogeneous Maier-Saupe nematogenic fluid interacting through an anisotropic Yukawa potential. In the mean field approximation we retrieve the standard Maier-Saupe theory for liquid crystals. In this theory the density is expressed via the second-order Legendre polynomial of the molecular orientations. In the Gaussian approximation we obtain analytical expressions for the correlation functions, the elasticity constant, the free energy, the pressure, and the chemical potential. We also use Ward symmetry identities to set a simple condition for the correlation functions. Subsequently we find corrections due to fluctuations and show that the density now contains Legendre polynomials of higher orders.
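In the mean-field limit the theory reduces to the classic Maier-Saupe self-consistency condition S = ⟨P2(cos θ)⟩, which can be solved by fixed-point iteration. A minimal numerical sketch (the coupling value, grid size, and iteration count are arbitrary choices, not parameters from the paper):

```python
import math

def p2(x):
    """Second-order Legendre polynomial P2(x)."""
    return 0.5 * (3.0 * x * x - 1.0)

def order_parameter(beta_eps, n_grid=2000, n_iter=200):
    """Solve S = <P2(cos t)> self-consistently in the mean-field
    Maier-Saupe model, with coupling strength beta_eps = epsilon/(kT)."""
    xs = [(i + 0.5) / n_grid for i in range(n_grid)]  # cos(theta) on (0, 1)
    s = 0.5  # initial guess
    for _ in range(n_iter):
        weights = [math.exp(beta_eps * s * p2(x)) for x in xs]
        z = sum(weights)
        s = sum(w * p2(x) for w, x in zip(weights, xs)) / z
    return s
```

At weak coupling the iteration collapses to the isotropic solution S = 0, while at strong coupling a nematic solution with S approaching 1 appears, which is the behavior the mean-field theory predicts.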
Network Complexity Measures. An Information-Theoretic Approach.
Matthias Dehmer
2015-04-01
Full Text Available Quantitative graph analysis by means of structural indices has been intricate in the sense that it often remains unclear which structural graph measure is the most suitable one, see [1, 12, 13]. In general, quantitative graph analysis deals with quantifying the structural information of networks by using a measurement approach [5]. A special problem thereof is to characterize a graph quantitatively, that is, to determine a measure that captures the structural features of a network meaningfully. Various classical structural graph measures have been used to tackle this problem [13]. A fruitful approach using information-theoretic [21] and statistical methods is to quantify the structural information content of a graph [1, 8, 18]. In this note, we sketch some classical information measures. We also briefly address the question of what kinds of measures capture structural information uniquely. This relates to determining the discrimination power (also called uniqueness) of a graph measure, that is, the ability of the measure to discriminate between non-isomorphic graphs structurally. [1] D. Bonchev. Information Theoretic Indices for Characterization of Chemical Structures. Research Studies Press, Chichester, 1983. [5] M. Dehmer and F. Emmert-Streib. Quantitative Graph Theory. Theory and Applications. CRC Press, 2014. [8] M. Dehmer, M. Grabner, and K. Varmuza. Information indices with high discriminative power for graphs. PLoS ONE, 7:e31214, 2012. [12] F. Emmert-Streib and M. Dehmer. Exploring statistical and population aspects of network complexity. PLoS ONE, 7:e34523, 2012. [13] F. Harary. Graph Theory. Addison Wesley Publishing Company, Reading, MA, USA, 1969. [18] A. Mowshowitz. Entropy and the complexity of graphs I: An index of the relative complexity of a graph. Bull. Math. Biophys., 30:175–204, 1968. [21] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.
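One classical example of such a measure is the Shannon entropy of a graph's degree distribution; a small sketch (one of many possible information indices, chosen here for simplicity rather than taken from the note):

```python
import math
from collections import Counter

def degree_entropy(edges, n_nodes):
    """Shannon entropy (bits) of the degree distribution of an
    undirected graph: one classical structural information index."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Partition vertices by degree and take the entropy of that partition.
    counts = Counter(deg.get(i, 0) for i in range(n_nodes))
    h = 0.0
    for c in counts.values():
        p = c / n_nodes
        h -= p * math.log2(p)
    return h
```

The limited discrimination power discussed above is easy to see with this index: the 4-cycle and the complete graph K4 are non-isomorphic, but both are regular, so both receive an entropy of zero.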
THE DYNAMICS ON CITIZENSHIP – A THEORETICAL APPROACH
Diana Elena NEAGA
2010-12-01
Full Text Available In this paper I argue that the concept of citizenship is fundamentally a dynamic concept, a reflection of the society in which we live. I identify participation as the main element of this dynamics. Starting from a simple definition – citizenship as the connecting point between individuals and the state through rights and obligations – I note that citizenship is called upon, on the one hand, to legitimate a political community's authority and, on the other hand, to protect individuals by guaranteeing a set of civil, political and social rights. In order to fulfill these functions, the institution of citizenship must permit a continuous negotiation and renegotiation of the social contract within a well-defined framework (in terms of time and place coordinates). Thus a particular mechanism emerges, transcending the classical theoretical approaches meant to explain who, how, for whom and why we discuss the issue of citizenship. My paper follows a three-step argument: first, I deconstruct the concept of citizenship into its component elements, stressing those aspects I consider relevant in terms of dynamics. Secondly, I examine the main theoretical approaches to citizenship, considered as the results of a modeling process which establishes particular relations between the various elements composing a system. Finally, I underline the importance of participation (active and/or passive) in the process of (re)constructing the concept of citizenship. In this last part, I also try to synthesize the main elements that contribute to the dynamics of citizenship.
A perturbation-theoretic approach to Lagrangian flow networks
Fujiwara, Naoya; Kirchen, Kathrin; Donges, Jonathan F.; Donner, Reik V.
2017-03-01
Complex network approaches have been successfully applied for studying transport processes in complex systems ranging from road, railway, or airline infrastructures over industrial manufacturing to fluid dynamics. Here, we utilize a generic framework for describing the dynamics of geophysical flows such as ocean currents or atmospheric wind fields in terms of Lagrangian flow networks. In this approach, information on the passive advection of particles is transformed into a Markov chain based on transition probabilities of particles between the volume elements of a given partition of space for a fixed time step. We employ perturbation-theoretic methods to investigate the effects of modifications of transport processes in the underlying flow for three different problem classes: efficient absorption (corresponding to particle trapping or leaking), constant input of particles (with additional source terms modeling, e.g., localized contamination), and shifts of the steady state under probability mass conservation (as arising if the background flow is perturbed itself). Our results demonstrate that in all three cases, changes to the steady state solution can be analytically expressed in terms of the eigensystem of the unperturbed flow and the perturbation itself. These results are potentially relevant for developing more efficient strategies for coping with contaminations of fluid or gaseous media such as ocean and atmosphere by oil spills, radioactive substances, non-reactive chemicals, or volcanic aerosols.
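The backbone of this framework is a row-stochastic transition matrix whose stationary distribution describes the long-run particle density, and whose perturbation shifts that steady state. A toy two-state sketch (the paper derives the shifted steady state analytically from the eigensystem; here it is simply recomputed by power iteration, and all transition probabilities are invented):

```python
def steady_state(P, n_iter=1000):
    """Stationary distribution of a row-stochastic matrix P by power iteration."""
    n = len(P)
    p = [1.0 / n] * n
    for _ in range(n_iter):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return p

# Two volume elements of a partition; rows give per-step transition probabilities.
P = [[0.9, 0.1],
     [0.2, 0.8]]
# Background flow perturbed in element 0 (probability mass still conserved).
P_perturbed = [[0.8, 0.2],
               [0.2, 0.8]]
```

For this chain the steady state shifts from (2/3, 1/3) to the uniform (1/2, 1/2) under the perturbation, illustrating the third problem class (steady-state shift under probability mass conservation).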
An integrated theoretical and practical approach for teaching hydrogeology
Bonomi, Tullia; Fumagalli, Letizia; Cavallin, Angelo
2013-04-01
Hydrogeology as an earth science intersects the broader disciplines of geology, engineering, and environmental studies, but it does not overlap fully with any of them. It is focused on its own range of problems and over time has developed a rich variety of methods and approaches. The resolution of many hydrogeological problems requires knowledge of elements of geology, hydraulics, physics and chemistry; moreover, in recent years knowledge of modelling techniques has become a necessary ability. Successful transfer of all this knowledge to the students depends on the breadth of material taught in courses, the natural skills of the students and any practical experience the students can obtain. In the Department of Earth and Environmental Sciences of the University of Milano-Bicocca, the teaching of hydrogeology is developed in three inter-related courses: 1) general hydrogeology, 2) applied hydrogeology, 3) groundwater pollution and remediation. The sequence focuses on both groundwater flux and contaminant transport, supplemented by workshops involving case studies and computer labs, which provide the students with practical translation of the theoretical aspects of the science into the world of work. A second key aspect of the program utilizes the students' skill at learning through online approaches, and this is done through three approaches: A) by developing the courses on a University e-learning platform that allows the students to download lectures, articles, and teacher comments, and to participate in online forums; B) by carrying out exercises through computer labs where the students analyze and process hydrogeological data by means of different numerical codes, which in turn enable them to manage databases and to perform aquifer test analysis, geostatistical analysis, and flux and transport modelling in both the unsaturated and saturated zones. These exercises are of course preceded by theoretical lectures on the codes and software, highlighting their features and
A first-principles theoretical approach to heterogeneous nanocatalysis.
Negreiros, Fabio R; Aprà, Edoardo; Barcaro, Giovanni; Sementa, Luca; Vajda, Stefan; Fortunelli, Alessandro
2012-02-21
A theoretical approach to heterogeneous catalysis by sub-nanometre supported metal clusters and alloys is presented and discussed. Its goal is to perform a computational sampling of the reaction paths in nanocatalysis via a global search in the phase space of structures and stoichiometry combined with filtering which takes into account the given experimental conditions (catalytically relevant temperature and reactant pressure), and corresponds to an incremental exploration of the disconnectivity diagram of the system. The approach is implemented and applied to the study of propylene partial oxidation by Ag3 supported on MgO(100). First-principles density-functional theory calculations coupled with a Reactive Global Optimization algorithm are performed, finding that: (1) the presence of an oxide support drastically changes the potential energy landscape of the system with respect to the gas phase, favoring configurations which interact positively with the electrostatic field generated by the surface; (2) the reaction energy barriers for the various mechanisms are crucial in the competition between thermodynamically and kinetically favored reaction products; (3) a topological database of structures and saddle points is produced which has general validity and can serve for future studies or for deriving general trends; (4) the MgO(100) surface captures some major features of the effect of an oxide support and appears to be a good model of a simple oxide substrate; (5) strong cooperative effects are found in the co-adsorption of O2 and other ligands on small metal clusters. The proposed approach appears as a viable route to advance the role of predictive computational science in the field of heterogeneous nanocatalysis. This journal is © The Royal Society of Chemistry 2012
A first-principles theoretical approach to heterogeneous nanocatalysis
Negreiros, Fabio R.; Aprà, Edoardo; Barcaro, Giovanni; Sementa, Luca; Vajda, Stefan; Fortunelli, Alessandro
2012-02-01
A theoretical approach to heterogeneous catalysis by sub-nanometre supported metal clusters and alloys is presented and discussed. Its goal is to perform a computational sampling of the reaction paths in nanocatalysis via a global search in the phase space of structures and stoichiometry combined with filtering which takes into account the given experimental conditions (catalytically relevant temperature and reactant pressure), and corresponds to an incremental exploration of the disconnectivity diagram of the system. The approach is implemented and applied to the study of propylene partial oxidation by Ag3 supported on MgO(100). First-principles density-functional theory calculations coupled with a Reactive Global Optimization algorithm are performed, finding that: (1) the presence of an oxide support drastically changes the potential energy landscape of the system with respect to the gas phase, favoring configurations which interact positively with the electrostatic field generated by the surface; (2) the reaction energy barriers for the various mechanisms are crucial in the competition between thermodynamically and kinetically favored reaction products; (3) a topological database of structures and saddle points is produced which has general validity and can serve for future studies or for deriving general trends; (4) the MgO(100) surface captures some major features of the effect of an oxide support and appears to be a good model of a simple oxide substrate; (5) strong cooperative effects are found in the co-adsorption of O2 and other ligands on small metal clusters. The proposed approach appears as a viable route to advance the role of predictive computational science in the field of heterogeneous nanocatalysis.
A field theoretical approach to the quasi-continuum method
Iyer, Mrinal; Gavini, Vikram
2011-08-01
The quasi-continuum method has provided many insights into the behavior of lattice defects in the past decade. However, recent numerical analysis suggests that the approximations introduced in various formulations of the quasi-continuum method lead to inconsistencies—namely, appearance of ghost forces or residual forces, non-conservative nature of approximate forces, etc.—which affect the numerical accuracy and stability of the method. In this work, we identify the source of these errors to be the incompatibility of using quadrature rules, which is a local notion, on a non-local representation of energy. We eliminate these errors by first reformulating the extended interatomic interactions into a local variational problem that describes the energy of a system via potential fields. We subsequently introduce the quasi-continuum reduction of these potential fields using an adaptive finite-element discretization of the formulation. We demonstrate that the present formulation resolves the inconsistencies present in previous formulations of the quasi-continuum method, and show using numerical examples the remarkable improvement in the accuracy of solutions. Further, this field theoretic formulation of quasi-continuum method makes mathematical analysis of the method more amenable using functional analysis and homogenization theories.
[Economic cost of treating pressure ulcers: a theoretical approach].
Silva, Ana Júlia; Pereira, Sandra Martins; Rodrigues, Alexandre; Rocha, Ana Paula; Varela, Jesuína; Gomes, Luís Miguel; Messias, Norberto; Carvalhal, Rosa; Luís, Rui; Mendes, Luís Filipe Pereira
2013-08-01
The present study consisted of a theoretical approach to the problem posed by the economic costs associated with pressure ulcers (PUs). The initial aim was to assess the target problem from a conceptual perspective and then to report the results of prevalence studies that formed the basis for investigations of the disease's economic impact. The purpose of the present article is to discuss the economic costs associated with PUs from both the global point of view (appraising their financial repercussion) and the individual point of view (addressing the intangible costs). Regarding the economic impact of the costs associated with PUs, the total cost of treatment per healthcare setting was estimated relative to the Autonomous Community of Azores. The total cost of all the PU categories was EUR 7,086,415 in the homecare setting, EUR 1,723,509 in the hospital setting, and EUR 1,002,562 in older people's homes. Therefore, the estimated total treatment cost of all the PU categories was approximately EUR 9,812,486 in Azores. However, the emotional impact of this disease imposes high costs on patients and their relatives as a function of the resultant suffering. Indeed, PUs impose high costs not only related to the treatment but also related to the intangible costs of the suffering caused to patients and their caregivers.
Intelligent cognitive radio jamming - a game-theoretical approach
Dabcevic, Kresimir; Betancourt, Alejandro; Marcenaro, Lucio; Regazzoni, Carlo S.
2014-12-01
Cognitive radio (CR) promises to be a solution for spectrum underutilization problems. However, security issues pertaining to cognitive radio technology are still an understudied topic. One of the prevailing such issues is intelligent radio frequency (RF) jamming, where adversaries are able to exploit the on-the-fly reconfigurability and learning mechanisms of cognitive radios in order to devise and deploy advanced jamming tactics. In this paper, we use a game-theoretical approach to analyze jamming/anti-jamming behavior between cognitive radio systems. A non-zero-sum game with incomplete information on the opponent's strategy and payoff is modelled as an extension of a Markov decision process (MDP). Learning algorithms based on adaptive payoff play and fictitious play are considered. A combination of frequency hopping and power alteration is deployed as an anti-jamming scheme. A real-life software-defined radio (SDR) platform is used to perform measurements useful for quantifying the jamming impact, as well as to infer relevant hardware-related properties. The results of these measurements are then used as parameters for the modelled jamming/anti-jamming game and are compared to the Nash equilibrium of the game. Simulation results indicate, among other things, the benefit provided to the jammer when it employs the spectrum sensing algorithm in proactive frequency hopping and power alteration schemes.
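The fictitious-play learning considered here can be illustrated on the simplest possible jamming game: two channels, zero-sum payoffs, and each side best-responding to the opponent's empirical channel mixture. This is a deliberate simplification of the paper's non-zero-sum, incomplete-information game; the payoff structure is assumed:

```python
def fictitious_play(rounds=20000):
    """Fictitious play on a two-channel matching-pennies jamming game.

    The jammer scores +1 when it picks the transmitter's channel, -1
    otherwise; each side best-responds to the opponent's empirical mix.
    Returns the transmitter's empirical frequency of channel 0.
    """
    tx_counts = [1, 1]   # jammer's running counts of transmitter choices
    jam_counts = [1, 1]  # transmitter's running counts of jammer choices
    tx_on_0 = 0
    for _ in range(rounds):
        # Transmitter avoids the channel the jammer has used most often.
        tx = 0 if jam_counts[0] <= jam_counts[1] else 1
        # Jammer targets the channel the transmitter has used most often.
        jam = 0 if tx_counts[0] >= tx_counts[1] else 1
        tx_counts[tx] += 1
        jam_counts[jam] += 1
        tx_on_0 += (tx == 0)
    return tx_on_0 / rounds
```

By Robinson's theorem, empirical frequencies in zero-sum games under fictitious play converge to a Nash equilibrium; here both sides approach the 50/50 channel mixture, i.e. effectively random hopping.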
Game theoretic approach for cooperative feature extraction in camera networks
Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco
2016-07-01
Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
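The Nash bargaining solution at the heart of the proposed framework picks, among the feasible utility pairs, the one maximizing the product of each player's gain over its disagreement (work-alone) outcome. A discrete toy sketch, with hypothetical energy-saving values for two cameras splitting a shared feature-extraction task:

```python
def nash_bargaining(utilities, disagreement):
    """Return the feasible utility pair maximizing the Nash product
    (u1 - d1) * (u2 - d2) among points that improve on the
    disagreement outcome for both players."""
    d1, d2 = disagreement
    best, best_val = None, -1.0
    for u1, u2 in utilities:
        if u1 >= d1 and u2 >= d2:
            val = (u1 - d1) * (u2 - d2)
            if val > best_val:
                best_val, best = val, (u1, u2)
    return best

# Hypothetical energy savings (camera A, camera B) for candidate task splits;
# the disagreement point (0, 0) means each camera analyzes its view alone.
splits = [(0.0, 4.0), (1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (4.0, 0.0)]
```

With the symmetric disagreement point, the balanced split wins because it maximizes the product of the gains even though every candidate has the same total saving, which is exactly the fairness property that motivates using the bargaining solution here.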
A game theoretic approach for trading discharge permits in rivers.
Niksokhan, Mohammad Hossein; Kerachian, Reza; Karamouz, Mohammad
2009-01-01
In this paper, a new Cooperative Trading Discharge Permit (CTDP) methodology is designed for estimating equitable and efficient treatment cost allocation among dischargers in a river system considering their conflicting interests. The methodology consists of two main steps: (1) initial treatment cost allocation and (2) equitable treatment cost reallocation. In the first step, a Pareto front among objectives is developed using a powerful and recently developed multi-objective genetic algorithm known as Nondominated Sorting Genetic Algorithm-II (NSGA-II). The objectives of the optimization model are considered to be the average treatment level of dischargers and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using the Monte Carlo analysis. The best non-dominated solution on the Pareto front, which provides the initial cost allocation to dischargers, is selected using the Young Bargaining Theory (YBT). In the second step, some cooperative game theoretic approaches are utilized to investigate how the maximum saving cost of participating dischargers in a coalition can be fairly allocated to them. The final treatment cost allocation provides the optimal trading discharge permit policies. The practical utility of the proposed methodology for river water quality management is illustrated through a realistic case study of the Zarjub river in the northern part of Iran.
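The NSGA-II step above rests on non-dominated sorting; its core operation, extracting the Pareto front of a set of objective vectors, can be sketched as follows (an O(n²) toy version, not the paper's implementation; the two objectives, such as average treatment level and fuzzy risk, are both taken to be minimized):

```python
def pareto_front(points):
    """Return the non-dominated points among 2-objective vectors
    (both objectives minimized): the first front in NSGA-II's
    non-dominated sorting."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

In NSGA-II the remaining points are then recursively sorted into successive fronts, and a selection rule such as the Young Bargaining Theory step mentioned above picks a single compromise solution from the first front.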
A choice-semantical approach to theoretical truth.
Andreas, Holger; Schiemer, Georg
2016-08-01
A central topic in the logic of science concerns the proper semantic analysis of theoretical sentences, that is, sentences containing theoretical terms. In this paper, we present a novel choice-semantical account of theoretical truth based on the epsilon-term definition of theoretical terms. Specifically, we develop two ways of specifying the truth conditions of theoretical statements in a choice functional semantics, each giving rise to a corresponding logic of such statements. In order to investigate the inferential strength of these logical systems, we provide a translation of each truth definition into a modal definition of theoretical truth. Based on this, we show that the stronger notion of choice-semantical truth captures more adequately our informal semantic understanding of scientific statements. Copyright © 2016 Elsevier Ltd. All rights reserved.
An information theoretic approach to the functional classification of neurons
Schneidman, E; Berry, M J; Schneidman, Elad; Bialek, William; Berry, Michael J.
2002-01-01
A population of neurons typically exhibits a broad diversity of responses to sensory inputs. The intuitive notion of functional classification is that cells can be clustered so that most of the diversity is captured in the identity of the clusters rather than by individuals within clusters. We show how this intuition can be made precise using information theory, without any need to introduce a metric on the space of stimuli or responses. Applied to the retinal ganglion cells of the salamander, this approach recovers classical results, but also provides clear evidence for subclasses beyond those identified previously. Further, we find that each of the ganglion cells is functionally unique, and that even within the same subclass only a few spikes are needed to reliably distinguish between cells.
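The clustering criterion above rests on how much information cell (or cluster) identity carries about responses. As a minimal illustration, not the authors' code, empirical mutual information between two discrete variables can be estimated from paired samples:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical mutual information, in bits, between two discrete
    variables given as a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two hypothetical cells with identical response statistics: identity
# carries no information beyond a shared cluster label.
samples = [("cell1", "burst"), ("cell2", "burst"),
           ("cell1", "silent"), ("cell2", "silent")]
```

Perfectly distinguishable cells contribute log2 of the number of cells, identical cells contribute zero; a good functional clustering keeps most of the first quantity in the cluster labels alone.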
Sustainable development, tourism and territory. Previous elements towards a systemic approach
Pierre TORRENTE
2009-01-01
Today, tourism is one of the major challenges for many countries and territories. The balance of payments, an ever-increasing number of visitors and the significant development of the tourism offer clearly illustrate the booming trend in this sector. This macro-economic approach is often used by the organizations in charge of tourism, WTO for instance. Quantitative assessments which consider the satisfaction of customers' needs as an end in itself have prevailed both in tourism development schemes and in prospective approaches since the sixties.
Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W
2016-06-01
Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and the degree of presession anxiety. Using this approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.
MacDonald, Laura; Baldini, Giulia; Storrie, Brian
2015-01-01
Conventional microscopy techniques, namely, the confocal microscope or deconvolution processes, are resolution limited to approximately 200-250 nm by the diffraction properties of light as developed by Ernst Abbe in 1873. This diffraction limit is appreciably above the size of most multi-protein complexes, which are typically 20-50 nm in diameter. In the mid-2000s, biophysicists moved beyond the diffraction barrier by structuring the illumination pattern and then applying mathematical principles and algorithms to allow a resolution of approximately 100 nm, sufficient to address protein subcellular co-localization questions. This "breaking" of the diffraction barrier, affording resolution beyond 200 nm, is termed super-resolution microscopy. More recent approaches include single-molecule localization (such as photoactivated localization microscopy (PALM)/stochastic optical reconstruction microscopy (STORM)) and point spread function engineering (such as stimulated emission depletion (STED) microscopy). In this review, we explain basic principles behind currently commercialized super-resolution setups and address advantages and considerations in applying these techniques to protein co-localization in biological systems.
SUSTAINABLE TOURISM AND ITS FORMS - A THEORETICAL APPROACH
Bac Dorin
2013-07-01
From the second half of the twentieth century, the importance of the tourism industry to the world economy continued to grow, reaching impressive figures today: receipts of almost $1,000 billion and direct employment for over 70 million people (WTTC 2012), without taking into account the multiplier effect (according to the same WTTC statistics, if the multiplier effect is considered, the values are $5,990 billion in tourism receipts and 253.5 million jobs). We can say that tourism has a higher capacity to generate and distribute incomes compared to other sectors, has a high multiplier effect, and drives a high level of consumption of varied products and services. In this context, voices began to emerge which presented the problems and challenges generated by tourism activity. Many regions face real problems generated by tourism entrepreneurs and by the tourists who visit the community. Therefore, at the end of the last century, some authors sought to define a new form of tourism which would eliminate the negative impacts and increase the positive ones. As a generic term they used alternative tourism, but because of the ambiguity of that term, they tried to find a more precise one which would define the concept more easily. Thus emerged ecotourism, rural tourism, Pro Poor Tourism, etc. All these forms have been brought under the umbrella concept of sustainable tourism. In the present paper we take a theoretical approach in order to present some forms of sustainable tourism. During our research we covered the ideas and concepts promoted by several authors and academics, as well as by some international organizations with a focus on tourism. We considered these forms of tourism because they respect all the rules of sustainable tourism and some of them have great potential to grow in both developed and emerging countries. The forms of sustainable tourism we identified are ecotourism, pro-poor tourism, volunteer tourism and slow tourism.
Wang, Ching Y; Ai, Ni; Arora, Sonia; Erenrich, Eric; Nagarajan, Karthigeyan; Zauhar, Randy; Young, Douglas; Welsh, William J
2006-12-01
The physiological roles of estrogen in sexual differentiation and development, female and male reproductive processes, and bone health are complex and diverse. Numerous natural and synthetic chemical compounds, commonly known as endocrine disrupting chemicals (EDCs), have been shown to alter the physiological effects of estrogen in humans and wildlife. As such, these EDCs may cause unanticipated and even undesirable effects. Large-scale in vitro and in vivo screening of chemicals to assess their estrogenic activity would demand a prodigious investment of time, labor, and money and would require animal testing on an unprecedented scale. Approaches in silico are increasingly recognized as playing a vital role in screening and prioritizing chemicals to extend limited resources available for experimental testing. Here, we evaluated a multistep procedure that is suitable for in silico (virtual) screening of large chemical databases to identify compounds exhibiting estrogenic activity. This procedure incorporates Shape Signatures, a novel computational tool that rapidly compares molecules on the basis of similarity in shape, polarity, and other bio-relevant properties. Using 4-hydroxy tamoxifen (4-OH TAM) and diethylstilbestrol (DES) as input queries, we employed this scheme to search a sample database of approximately 200,000 commercially available organic chemicals for matches (hits). Of the eight compounds identified computationally as potentially (anti)estrogenic, biological evaluation confirmed two as heretofore unknown estrogen antagonists. Subsequent radioligand binding assays confirmed that two of these three compounds exhibit antiestrogenic activities comparable to 4-OH TAM. Molecular modeling studies of these ligands docked inside the binding pocket of estrogen receptor alpha (ERalpha) elucidated key ligand-receptor interactions that corroborate these experimental findings. The present study demonstrates the utility of our computational scheme for this and
Chan, Dara V; Gopal, Sucharita; Helfrich, Christine A
2014-11-01
Although a desired rehabilitation goal, research continues to document that community integration significantly lags behind housing stability success rates for formerly homeless people of a variety of ages. While accessibility to resources is an environmental factor that may promote or impede integration activity, there has been little empirical investigation into the impact of the proximity of community features on resource use and integration. Using a Geographic Information Systems (GIS) approach, the current study examines how accessibility or proximity to community features in Boston, United States related to the types of locations used and the size of an individual's "activity space," or spatial presence in the community. Significant findings include an inverse relationship between activity space size and proximity to the number and type of community features in one's immediate area. Specifically, larger activity spaces were associated with neighborhoods with fewer community features, and smaller activity spaces corresponded with greater availability of resources within one's immediate area. Activity space size also varied, however, based on proximity to different types of resources, namely transportation and health care. Greater community function, or the ability to navigate and use community resources, was associated with better accessibility and feeling part of the community. Finally, proximity to a greater number of individually identified preferred community features was associated with better social integration. The current study suggests that the ongoing challenges of successful integration may vary based not just on accessibility to specific community features, but also on their relative importance and on affinity with one's surroundings. Community integration researchers and housing providers may need to attend to the meaning attached to resources, not just their presence or use in the community.
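One common way to operationalize the "activity space" described above is as the area of the convex hull of a person's visited locations; this is a standard GIS convention assumed here for illustration, not necessarily the exact measure used in the study. A stdlib sketch using Andrew's monotone chain and the shoelace formula:

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; points are (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def activity_space_area(points):
    """Shoelace area of the convex hull of visited locations."""
    hull = convex_hull(points)
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                         - hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))
```

For example, visits at the corners and center of a unit square yield an activity space area of 1.0; interior points do not enlarge the hull.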
Resilience or Flexibility: A Theoretical Approach on Romanian Development Regions
Roxana Voicu-Dorobanțu
2015-09-01
The paper describes a theoretical contextualization of flexibility, sustainability, durability and resilience, in the context of the sustainable development goals. The main purpose is to identify the theoretical handles that may be used in the creation of a flexibility indicator. Thus, research questions related to the theoretical differentiation between durable and sustainable, flexible and resilient are answered. Further on, the paper describes the situation of the Romanian regions in terms of development indicators, based on Eurostat data, as a premise for further research on the possibility of their leapfrogging. This work was financially supported through the project "Routes of academic excellence in doctoral and post-doctoral research - REACH" co-financed through the European Social Fund, by Sectoral Operational Programme Human Resources Development 2007-2013, contract no. POSDRU/59/1.5/S/137926.
Game-theoretic interference coordination approaches for dynamic spectrum access
Xu, Yuhua
2016-01-01
Written by experts in the field, this book is based on recent research findings in dynamic spectrum access for cognitive radio networks. It establishes a game-theoretic framework and presents cutting-edge technologies for distributed interference coordination. With game-theoretic formulation and the designed distributed learning algorithms, it provides insights into the interactions between multiple decision-makers and the converging stable states. Researchers, scientists and engineers in the field of cognitive radio networks will benefit from the book, which provides valuable information, useful methods and practical algorithms for use in emerging 5G wireless communication.
Theoretical approaches to digital services and digital democracy
Hoff, Jens Villiam; Scheele, Christian Elling
2014-01-01
The purpose of this paper is to develop a theoretical framework, which can be used in the analysis of all types of (political-administrative) web applications. Through a discussion and criticism of social construction of technology (SCOT), an earlier version of this model based on new medium theory...
Semiotics and Knowledge Management (KM) : A theoretical and empirical approach
Sjarbaini, Larissa; Jorna, Rene J.
2013-01-01
Knowledge Management (KM) concerns the study of knowledge in organizations. Knowledge sharing, use, storage, support, and knowledge creation are components of KM. The (short) history of KM shows that the theoretical foundations of KM require completion. In this article, a perspective on KM is discussed.
STRONG NORMALIZATION IN TYPE SYSTEMS - A MODEL THEORETICAL APPROACH
TERLOUW, J
1995-01-01
Tait's proof of strong normalization for the simply typed lambda-calculus is interpreted in a general model theoretical framework by means of the specification of a certain theory T and a certain model U of T. The argumentation is partly reduced to formal predicate logic by the application of
Group theoretic approaches to nuclear and hadronic collective motion
Biedenharn, L.C.
1982-01-01
Three approaches to nuclear and hadronic collective motion are reviewed, compared and contrasted: the standard symmetry approach as typified by the Interacting Boson Model, the kinematic symmetry group approach of Gell-Mann and Tomonaga, and the recent direct construction by Buck. 50 references.
Representing electrons a biographical approach to theoretical entities
Arabatzis, Theodore
2006-01-01
Both a history and a metahistory, Representing Electrons focuses on the development of various theoretical representations of electrons from the late 1890s to 1925 and the methodological problems associated with writing about unobservable scientific entities. Using the electron, or rather its representation, as a historical actor, Theodore Arabatzis illustrates the emergence and gradual consolidation of its representation in physics, its career throughout old quantum theory, and its appropriation and reinterpretation by chemists. As Arabatzis develops this novel biographical
Recent theoretical progress on an information geometrodynamical approach to chaos
Cafaro, Carlo
2008-01-01
In this paper, we report our latest research on a novel theoretical information-geometric framework suitable to characterize chaotic dynamical behavior of arbitrary complex systems on curved statistical manifolds. Specifically, an information-geometric analogue of the Zurek-Paz quantum chaos criterion of linear entropy growth and an information-geometric characterization of chaotic (integrable) energy level statistics of a quantum antiferromagnetic Ising spin chain in a tilted (transverse) external magnetic field are presented.
A Comparison of Approaches for Solving Hard Graph-Theoretic Problems
2015-05-01
Horan, Victoria (Air Force Research Laboratory, Information Directorate)
We need theoretical physics approaches to study living systems
Blagoev, Krastan B.; Shukla, Kamal; Levine, Herbert
2013-08-01
Living systems, as created initially by the transition from assemblies of large molecules to self-reproducing information-rich cells, have for centuries been studied via the empirical toolkit of biology. This has been a highly successful enterprise, bringing us from the vague non-scientific notions of vitalism to the modern appreciation of the biophysical and biochemical bases of life. Yet, the truly mind-boggling complexity of even the simplest self-sufficient cells, let alone the emergence of multicellular organisms, of brain and consciousness, and of ecological communities and human civilizations, calls out for a complementary approach. In this editorial, we propose that theoretical physics can play an essential role in making sense of living matter. When faced with a highly complex system, a physicist builds simplified models. Quoting Philip W Anderson's Nobel prize address, 'the art of model-building is the exclusion of real but irrelevant parts of the problem and entails hazards for the builder and the reader. The builder may leave out something genuinely relevant and the reader, armed with too sophisticated an experimental probe, may take literally a schematized model. Very often such a simplified model throws more light on the real working of nature....' In his formulation, the job of a theorist is to get at the crux of the system by ignoring details and yet to find a testable consequence of the resulting simple picture. This is rather different from the predilection of the applied mathematician, who wants to include all the known details in the hope of a quantitative simulacrum of reality. These efforts may be practically useful, but do not usually lead to increased understanding. To illustrate how this works, we can look at a non-living example of complex behavior that was afforded by spatiotemporal patterning in the Belousov-Zhabotinsky reaction [1]. Physicists who worked on this system did not attempt to determine all the relevant chemical intermediates
Game Theoretic Approach to Post-Docked Satellite Control
Hiramatsu, Takashi; Fitz-Coy, Norman G.
2007-01-01
This paper studies the interaction between two satellites after docking. In order to maintain the docked state under uncertainty in the motion of the target vehicle, a game theoretic controller with a Stackelberg strategy to minimize the interaction between the satellites is considered. The small perturbation approximation leads to an LQ differential game scheme, which is validated to address the docking interactions between a service vehicle and a target vehicle. The open-loop solutions are compared with the Nash strategy, and it is shown that less control effort is required with the Stackelberg strategy.
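The Stackelberg-versus-Nash comparison in this abstract can be illustrated in a much simpler setting than an LQ differential game: a finite bimatrix game in which the leader commits first and the follower best-responds. The payoff matrices below are hypothetical and purely illustrative:

```python
def follower_best_response(B, i):
    """Follower's payoff-maximizing column against leader row i."""
    return max(range(len(B[i])), key=B[i].__getitem__)

def stackelberg(A, B):
    """Leader commits to the row whose induced best response
    maximizes the leader's own payoff in A."""
    lead = max(range(len(A)), key=lambda i: A[i][follower_best_response(B, i)])
    return lead, follower_best_response(B, lead)

def pure_nash(A, B):
    """All pure-strategy Nash equilibria of the bimatrix game (A, B)."""
    return [(i, j)
            for i in range(len(A)) for j in range(len(A[0]))
            if A[i][j] == max(A[k][j] for k in range(len(A)))
            and B[i][j] == max(B[i][l] for l in range(len(A[0])))]

# Hypothetical payoffs (not the satellite dynamics of the paper).
A = [[2, 4], [1, 3]]  # leader (service vehicle) payoffs
B = [[1, 0], [0, 2]]  # follower (target vehicle) payoffs
```

Here commitment earns the leader a payoff of 3 versus 2 at the unique pure Nash equilibrium, mirroring qualitatively how a Stackelberg strategy can outperform simultaneous play.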
The production of scientific videos: a theoretical approach
Carlos Ernesto Gavilondo Rodriguez
2016-12-01
The article presents the results of theoretical research on the production of scientific videos and its application to the teaching-learning process carried out in schools in the city of Guayaquil, Ecuador. The work is located within the audiovisual production and communication line (creation of scientific videos) of the Communication major with a concentration in audiovisual and multimedia production at the Salesian Polytechnic University. For the realization of the article it was necessary to define key terms that later helped with data collection. We used terms such as: audiovisual production, understood as the production of content for audiovisual media; audiovisual communication, recognized as the process in which there is an exchange of messages through an audible and/or visual system; and scientific video, which is a video that uses audiovisual resources to obtain relevant and reliable information. As part of the theoretical results, a methodological proposal for video production for educational purposes is presented. In conclusion, we argue, first, that given the communicative turn of recent times, current social relations constitute a rich context of possibilities for education to generate meeting points between the everyday world and knowledge. Another finding validated in the investigation is that the teachers surveyed use the potential of audiovisual media and, supported by it, deploy alternatives for its use.
Scientific-theoretical approaches to the definition of "managerial innovations"
Mushnikova, S.
2011-01-01
Based on an analysis of existing approaches to the definitions of "management" and "innovation", the author develops an interpretation of "managerial innovations" and substantiates the necessity of introducing managerial innovations under the operating conditions of industrial enterprises.
On Tradeoffs between Trust and Survivability using a Game Theoretic Approach
2016-04-13
Cho, Jin-Hee; Swami, Ananthram (U.S. Army Research Laboratory)
The paper introduces a game theoretic approach, namely Aoyagi's game theory based on positive collusion of players. This approach improves group trust by
THEORETICAL APPROACHES TO THE DESCRIPTION OF PERSONAL LEARNING ENVIRONMENT
Sergey Alexandrovich Zolotukhin
2015-01-01
Personal learning environment is a relatively new concept that emerged under the influence of the increasing popularity of Web 2.0 applications and a critical understanding of hierarchical systems of distance learning. At the center of this concept is the effect of personalization, according to which the activity of the actor is an important factor of personal development, the strengthening of the subjectivity of learning, and the development of learning through social and other types of interaction. The article highlights the general theoretical basics of building a personal learning environment as one of the directions of development of modern models of learning, and its basic characteristics. Special attention is paid to one of the variants of constructing a personal learning environment, the institutional personal learning environment, which combines the benefits of distance learning and the advantages of flexible, adaptive, open educational environments. In a broader sense, an institutional personal learning environment is a mechanism for the integration of formal and informal educational environments.
A Theoretical Approach to Engineering a New Enzyme
Anderson, Greg; Behera, Raghu N.; Gomatam, Ravi
2016-08-01
Density functional theory, a subfield of quantum mechanics (QM), in combination with molecular mechanics (MM) has opened the way to engineering new artificial enzymes. Herein, we report theoretical calculations done using QM/MM to examine whether the regioselectivity and rate of chlorination of the enzyme chloroperoxidase can be improved by replacing the vanadium of this enzyme with niobium through dialysis. Our calculations show that a niobium-substituted chloroperoxidase will be able to enter the initial steps of the catalytic cycle for chlorination. Although the protonation state of the niobium-substituted enzyme is calculated to be different from that of the natural vanadium enzyme, our calculations show that the catalytic cycle can still proceed forward. Using natural bond orbitals, we analyse the electronic differences between the niobium-substituted enzyme and the natural enzyme. We conclude by briefly examining how good a model QM/MM provides for understanding the mechanism of catalysis of chloroperoxidase.
A theoretical approach on controlling agricultural pest by biological controls.
Mondal, Prasanta Kumar; Jana, Soovoojeet; Kar, T K
2014-03-01
In this paper we propose and analyze a prey-predator type dynamical system for pest control, where the prey population is treated as the pest. We consider two classes for the pest, namely susceptible pests and infected pests, and the predator population is the natural enemy of the pest. We also consider average delays for both predation rates, i.e., predation on the susceptible pests and on the infected pests. Considering a subsystem of the original system in the absence of infection, we analyze the existence of all possible non-negative equilibria and their stability criteria for both the subsystem and the original system. We present the conditions for transcritical bifurcation and Hopf bifurcation in the disease-free system. The theoretical evaluations are demonstrated through numerical simulations.
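A minimal numerical sketch of a susceptible/infected pest model with a predator, integrated by forward Euler; the equations and every parameter value here are illustrative assumptions and omit the predation delays analyzed in the paper:

```python
def simulate(S, I, P, steps=10000, dt=0.01,
             r=1.0, K=50.0, beta=0.05, a1=0.02, a2=0.03,
             c=0.5, d=0.2, mu=0.3):
    """Forward-Euler sketch of a susceptible (S) / infected (I) pest
    population controlled by a predator (P). Parameters: logistic pest
    growth (r, K), infection rate beta, predation rates a1/a2,
    conversion efficiency c, predator death d, infected-pest death mu.
    All values are hypothetical, not taken from the paper."""
    for _ in range(steps):
        dS = r * S * (1 - (S + I) / K) - beta * S * I - a1 * S * P
        dI = beta * S * I - a2 * I * P - mu * I
        dP = c * (a1 * S + a2 * I) * P - d * P
        S = max(S + dS * dt, 0.0)
        I = max(I + dI * dt, 0.0)
        P = max(P + dP * dt, 0.0)
    return S, I, P

S, I, P = simulate(10.0, 5.0, 2.0)
```

With these illustrative parameters the infection dies out while susceptible pests and predators settle toward a coexistence equilibrium, the qualitative outcome a biological control program aims for.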
Strength of wood versus rate of testing - A theoretical approach
Nielsen, Lauge Fuglsang
2007-01-01
Strength of wood is normally measured in ramp load experiments. Experience shows that strength increases with increasing rate of testing. This feature is considered theoretically in this paper. It is shown that the influence of testing rate is a phenomenon which depends on the quality of the considered wood. Low quality wood shows lesser influence of testing rate. This observation agrees with the well-known statement made by Borg Madsen that weak wood subjected to a constant load has a longer lifetime than strong wood. In general, the influence of testing rate on strength increases with increasing moisture content. This phenomenon applies irrespective of the considered wood quality, such that the above-mentioned order of magnitude observations between low and high quality wood are kept.
A Game Theoretic Approach to Distributed Opportunistic Scheduling
Banchs, Albert; Serrano, Pablo; Widmer, Joerg
2011-01-01
Distributed Opportunistic Scheduling (DOS) is inherently harder than conventional opportunistic scheduling due to the absence of a central entity that has knowledge of all the channel states. With DOS, stations contend for the channel using random access; after a successful contention, they measure the channel conditions and only transmit in case of a good channel, while giving up the transmission opportunity when the channel conditions are poor. The distributed nature of DOS systems makes them vulnerable to selfish users: by deviating from the protocol and using more transmission opportunities, a selfish user can gain a greater share of the wireless resources at the expense of the well-behaved users. In this paper, we address the selfishness problem in DOS from a game theoretic standpoint. We propose an algorithm that satisfies the following properties: (i) when all stations implement the algorithm, the wireless network is driven to the optimal point of operation, and (ii) one or more selfish stations cannot...
Exploring the joint measurability using an information-theoretic approach
Hsu, Li-Yi
2016-10-01
We explore the legal purity parameters for joint measurements. Instead of directly unsharpening the measurements, we perform quantum cloning before the sharp measurements. The necessary fuzziness in the unsharp measurements is equivalently introduced in the imperfect cloning process. Based on information causality and the consequent noisy nonlocal computation, one can derive information-theoretic quadratic inequalities that must be satisfied by any physical theory. On the other hand, to guarantee classicality, the linear Bell-type inequalities deduced from these quadratic ones must be obeyed. As for joint measurability, the purity parameters must be chosen to obey both types of inequalities. Finally, the quadratic inequalities for the purity parameters in the joint measurability region are derived.
A game-theoretical approach to multimedia social networks security.
Liu, Enqiang; Liu, Zengliang; Shao, Fei; Zhang, Zhiyong
2014-01-01
Content access and sharing in multimedia social networks (MSNs) mainly rely on access control models and mechanisms. Simple adoption of security policies in the traditional access control model cannot effectively establish a trust relationship among parties. This paper proposes a novel two-party trust architecture (TPTA) applied in a generic MSN scenario. According to the architecture, security policies are adopted through game-theoretic analyses and decisions. Based on formalized utilities of security policies and security rules, the choice of security policies in content access is described as a game between the content provider and the content requester. By the game method for the combination of security policy utilities and their influence on each party's benefits, the Nash equilibrium is achieved, that is, an optimal and stable combination of security policies, to establish and enhance trust among stakeholders.
Mating strategies in primates: a game theoretical approach to infanticide.
Lyon, James E; Pandit, Sagar A; van Schaik, Carel P; Pradhan, Gauri R
2011-04-07
Infanticide by newly immigrated or newly dominant males is reported among a variety of taxa, such as birds, rodents, carnivores and primates. Here we present a game theoretical model to explain the presence and prevalence of infanticide in primate groups. We have formulated a three-player game involving two males and one female and show that the strategies of infanticide on the males' part and polyandrous mating on the females' part emerge as Nash equilibria that are stable under certain conditions. Moreover, we have identified all the Nash equilibria of the game and arranged them in a novel hierarchical scheme. Only in the subspace spanned by the males are the Nash equilibria found to be strict, and hence evolutionarily stable. We have therefore proposed a selection mechanism informed by adaptive dynamics to permit the females to transition to, and remain in, optimal equilibria after successive generations. Our model concludes that polyandrous mating by females is an optimal strategy for the females that minimizes infanticide and that infanticide confers advantage to the males only in certain regions of parameter space. We have shown that infanticide occurs during turbulent changes accompanying male immigration into the group. For changes in the dominance hierarchy within the group, we have shown that infanticide occurs only in primate groups where the chance for the killer to sire the next infant is high. These conclusions are confirmed by observations in the wild. This model thus has enabled us to pinpoint the fundamental processes behind the reproductive decisions of the players involved, which was not possible using earlier theoretical studies.
Anel A. Kireyeva
2016-04-01
The aim of this research is to develop new theoretical approaches to the formation of IT clusters in order to strengthen the trend of innovative industrialization and the competitiveness of the country. In keeping with the previous literature, this study is motivated by the novelty of the problem concerning the formation of IT clusters, which can become a driving force of transformation through interaction, improved efficiency and the introduction of advanced technology. In this research we used a conceptual approach, which employs the study of different conceptual views of scientists on a specific research object; a structured approach, which involves determining the properties of a whole object by identifying the different relationships within it; and a system approach, which aims to develop research methods and the design of complex objects, i.e. systems of different types. This study allows us to conclude that IT clusters will be most effective when they evolve naturally, originating under the action of internal forces of consolidation of innovative, information and communications infrastructure (industrial parks, technopolises, research laboratories and business incubators) and the formation of soft infrastructure that can help to find quick, innovative and creative ways to solve problems.
Understanding confounding effects in linguistic coordination: an information-theoretic approach
Gao, Shuyang; Galstyan, Aram
2014-01-01
We suggest an information-theoretic approach for measuring linguistic style coordination in dialogues. The proposed measure has a simple predictive interpretation and can account for various confounding factors through proper conditioning. We revisit some of the previous studies that reported strong signatures of stylistic accommodation, and find that a significant part of the observed coordination can be attributed to a simple confounding effect - length coordination. Specifically, longer utterances tend to be followed by longer responses, which gives rise to spurious correlations in the other stylistic features. We propose a test to distinguish correlations in length due to contextual factors (topic of conversation, user verbosity, etc.) from turn-by-turn coordination. We also suggest a test to identify whether stylistic coordination persists even after accounting for length coordination and contextual factors.
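The length-coordination confound described in this abstract is easy to reproduce in simulation. The sketch below (all numbers and rates invented, not the authors' measure) generates utterance pairs in which reply length tracks prompt length while style markers are emitted independently at a fixed per-word rate, and shows that marker presence nevertheless correlates across turns:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000                                   # simulated utterance pairs
prompt_len = rng.integers(1, 60, size=n)   # words per prompt
# Length coordination only: reply length tracks prompt length plus noise.
reply_len = np.maximum(1, (0.8 * prompt_len).astype(int) + rng.poisson(3, size=n))
# Each word is a style marker with a fixed probability, independently for
# both speakers -- there is no genuine stylistic accommodation here.
p = 0.05
prompt_marker = (rng.binomial(prompt_len, p) > 0).astype(float)
reply_marker = (rng.binomial(reply_len, p) > 0).astype(float)
# Longer utterances are more likely to contain the marker, so marker
# presence correlates across turns even without accommodation.
spurious = np.corrcoef(prompt_marker, reply_marker)[0, 1]
```

Conditioning on utterance length, as the abstract proposes, would remove this spurious signal.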
Vytelingum, Perukrishnen; Cliff, Dave; Jennings, Nicholas R.
We develop a new model to analyse the strategic behaviour of buyers and sellers in market mechanisms. In particular, we wish to understand how the different strategies they adopt affect their economic efficiency in the market and to understand the impact of these choices on the overall efficiency of the marketplace. To this end, we adopt a two-population evolutionary game theoretic approach, where we consider how the behaviours of both buyers and sellers evolve in marketplaces. In so doing, we address the shortcomings of the previous state-of-the-art analytical model that assumes that buyers and sellers have to adopt the same mixed strategy in the market. Finally, we apply our model in one of the most common market mechanisms, the Continuous Double Auction, and demonstrate how it allows us to provide new insights into the strategic interactions of such trading agents.
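The two-population dynamics this abstract refers to are typically modelled with replicator equations, one per population. The sketch below iterates them for an invented 2x2 bimatrix game; the payoff values, strategy labels and Euler discretization are all illustrative assumptions, not the authors' model of the Continuous Double Auction:

```python
import numpy as np

# Hypothetical payoff matrices: rows = buyer strategies, columns = seller
# strategies. The numbers are made up for illustration.
A = np.array([[3.0, 1.0],   # buyer payoffs
              [2.0, 2.5]])
B = np.array([[2.0, 3.0],   # seller payoffs, same indexing
              [1.5, 2.0]])

def replicator_step(x, y, dt=0.01):
    """One Euler step of the two-population replicator dynamics.
    x, y: strategy frequency vectors for buyers and sellers."""
    fx = A @ y                        # buyer strategy fitnesses
    fy = B.T @ x                      # seller strategy fitnesses
    x = x + dt * x * (fx - x @ fx)    # grow strategies above average
    y = y + dt * y * (fy - y @ fy)
    return x / x.sum(), y / y.sum()

x, y = np.array([0.5, 0.5]), np.array([0.5, 0.5])
for _ in range(10000):
    x, y = replicator_step(x, y)
```

Because the two populations evolve separately, buyers and sellers are free to settle on different strategies, unlike in the single-population model the authors criticize; with these payoffs both populations converge to a pure-strategy equilibrium.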
Adsorption of monovalent metal atoms on graphene: a theoretical approach
Medeiros, Paulo V C; De Brito Mota, F; De Castilho, Caio M C [Grupo de Fisica de Superficies e Materiais, Instituto de Fisica, Universidade Federal da Bahia, Campus Universitario da Federacao/Ondina, 40170-115, Salvador, Bahia (Brazil); Mascarenhas, Artur J S, E-mail: caio@ufba.br [Instituto Nacional de Ciencia e Tecnologia em Energia e Ambiente-INCT-E and A-CIENAM, Universidade Federal da Bahia, Salvador, Bahia (Brazil)
2010-03-19
This work investigates, using first-principles calculations, electronic and structural properties of hydrogen, lithium, sodium, potassium and rubidium that are adsorbed, in a regular pattern, on a graphene surface. The results for H-graphene (graphane) and Li-graphene were compared with previous calculations. The present results do not support previous claims that the Li-C bond in such a layer would result in an sp² to sp³ transition of the carbon orbitals, being more compatible with some ionic character for the covalent bond and with lithium acting as an electron acceptor in a bridging environment. Calculations were also performed for the Na, K, and Rb-graphene systems, resulting in a similar electronic behaviour but with a more pronounced ionic character than for Li-graphene. Energy calculations indicate the possible stability of such ad-graphene layers, with only Li-graphene likely to be obtained spontaneously.
Black hole state counting in LQG: A number theoretical approach
Agullo, Ivan; Diaz-Polo, Jacobo; Fernandez-Borja, Enrique; Villaseñor, Eduardo J S
2008-01-01
We give a practical method to exactly compute black hole entropy in the framework of Loop Quantum Gravity. Along the way we provide a complete characterization of the relevant sector of the spectrum of the area operator, including degeneracies, and determine the number of solutions to the projection constraint analytically. We use a computer implementation of the proposed algorithm to confirm and extend previous results on the detailed structure of the black hole degeneracy spectrum.
The Acquisition of Chinese Relative Clauses: Contrasting Two Theoretical Approaches
Hu, Shenai; Gavarró, Anna; Vernice, Mirta; Guasti, Maria Teresa
2016-01-01
This study examines the comprehension of relative clauses by Chinese-speaking children, and evaluates the validity of the predictions of the Dependency Locality Theory (Gibson, 1998, 2000) and the Relativized Minimality approach (Friedmann, Belletti & Rizzi, 2009). One hundred and twenty children from three to eight years of age were tested by…
THEORETICAL APPROACHES TO THE DEFINITION OF THE "INFORMATION RESOURCE"
Netreba, I.
2014-01-01
Existing approaches to determining the nature of the category "information resource" are detailed and systematized. The relationships between the categories "information resource", "information technology" and "information management system" are revealed. The importance of information resources for the production process at the enterprise is determined.
Conflicts, development and natural resources : An applied game theoretic approach
Wick, A.K.
2008-01-01
This thesis also provides a critical view of part of the preceding resource-curse results, namely the negative association between resources and economic performance. Arguing that the empirical literature on the topic has so far ignored serious econometric concerns, a different approach is offered.
Trust Dynamics in WSNs: An Evolutionary Game-Theoretic Approach
Shigen Shen
2016-01-01
Full Text Available A sensor node (SN) in Wireless Sensor Networks (WSNs) can decide whether to collaborate with others by making a trust decision based on a trust management system (TMS). In this paper, we study the trust decision and its dynamics, which play a key role in stabilizing the whole network, using evolutionary game theory. When SNs are making their decisions to select action Trust or Mistrust, a WSNs trust game is created to reflect their utilities. An incentive mechanism bound to an SN's trust degree is incorporated into this trust game and effectively promotes SNs to select action Trust. The replicator dynamics of SNs' trust evolution, illustrating the evolutionary process of SNs selecting their actions, are given. We then propose and prove theorems indicating that evolutionarily stable strategies can be attained under different parameter values, which supply theoretical foundations for devising a TMS for WSNs. Moreover, we can identify the conditions that will lead SNs to choose action Trust as their final behavior. In this manner, we can assure WSNs' security and stability by introducing a trust mechanism that satisfies these conditions. Experimental results have confirmed the proposed theorems and the effects of the incentive mechanism.
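For a symmetric two-action game like the trust game described here, a pure strategy that is a strict symmetric Nash equilibrium is automatically evolutionarily stable. A minimal check, with an invented payoff matrix and incentive bonus standing in for the paper's utilities:

```python
import numpy as np

# Illustrative payoffs for a symmetric trust game; rows and columns are
# (Trust, Mistrust). The bonus b rewarding mutual trust is a made-up
# stand-in for the paper's incentive mechanism.
b = 1.0
A = np.array([[2.0 + b, -1.0],
              [1.0,      0.0]])

def is_ess(A, i):
    """Pure strategy i is evolutionarily stable if it is a strict
    symmetric Nash equilibrium: A[i, i] > A[j, i] for all j != i."""
    return all(A[i, i] > A[j, i] for j in range(len(A)) if j != i)

both_stable = is_ess(A, 0) and is_ess(A, 1)   # bistable trust game
```

With these payoffs both Trust and Mistrust are evolutionarily stable, so which one the population reaches depends on initial conditions; raising the bonus b shifts the mixed equilibrium at 1/(2+b) and so enlarges the basin of attraction of Trust.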
Modelling Crowd Dynamics: a Multiscale, Measure-theoretical Approach
Evers, Joep
2011-01-01
We present a strategy capable of describing basic features of the dynamics of crowds. The behaviour of the crowd is considered from a twofold perspective. We examine both the large scale behaviour of the crowd, and phenomena happening at the individual pedestrian's level. We unify micro and macro in a single model, by working with general mass measures and their transport. We improve existing modelling by coupling a measure-theoretical framework with basic ideas of mixture theory formulated in terms of measures. This strategy allows us to define several constituents of the crowd, each having its own partial velocity. We can thus examine the interaction between subpopulations that have distinct characteristics. We give special features to those pedestrians that are represented by the microscopic (discrete) part. In real life they would play the role of leaders, predators etc. Since we are interested in the global behaviour of the rest of the crowd, we model this part as a continuum. By identifying a suitable c...
A Game Theoretic Approach to Cyber Attack Prediction
Peng Liu
2005-11-28
The area investigated by this project is cyber attack prediction. Current attack prediction methodologies focus on correlation-based prediction and overlook the strategic nature of cyber attack-defense scenarios. As a result, they are very limited in predicting the strategic behaviors of attackers enforcing nontrivial cyber attacks such as DDoS attacks, and may yield low accuracy in correlation-based predictions. This project develops a game theoretic framework for cyber attack prediction, in which an automatic game-theory-based attack prediction method is proposed. Being able to quantitatively predict the likelihood of (sequences of) attack actions, our attack prediction methodology can predict fine-grained strategic behaviors of attackers and may greatly improve the accuracy of correlation-based prediction. To the best of our knowledge, this project develops the first comprehensive framework for incentive-based modeling and inference of attack intent, objectives, and strategies, and the first method that can predict fine-grained strategic behaviors of attackers. The significance of this research and its benefit to the public are demonstrated to a certain extent by (a) the severe threat of cyber attacks to the critical infrastructures of the nation, including many infrastructures overseen by the Department of Energy, (b) the importance of cyber security to critical infrastructure protection, and (c) the importance of cyber attack prediction to achieving cyber security.
Oxidative dissolution of silver nanoparticles: A new theoretical approach.
Adamczyk, Zbigniew; Oćwieja, Magdalena; Mrowiec, Halina; Walas, Stanisław; Lupa, Dawid
2016-05-01
A general model of the oxidative dissolution of silver particle suspensions was developed that rigorously considers the bulk and surface solute transport. A two-step surface reaction scheme was proposed that comprises the formation of the silver oxide phase by direct oxidation and the acidic dissolution of this phase leading to silver ion release. On this basis, a complete set of equations is formulated describing oxygen and silver ion transport to and from particles' surfaces. These equations are solved in some limiting cases of nanoparticle dissolution in dilute suspensions. The obtained kinetic equations were used for the interpretation of experimental data pertinent to the dissolution kinetics of citrate-stabilized silver nanoparticles. In these kinetic measurements the role of pH and bulk suspension concentration was quantitatively evaluated using atomic absorption spectrometry (AAS). It was shown that the theoretical model adequately reflects the main features of the experimental results, especially the significant increase in the dissolution rate at lower pH. The presence of two kinetic regimes was also quantitatively explained in terms of the decrease in the coverage of the fast-dissolving oxide layer. The overall silver dissolution rate constants characterizing these two regimes were determined.
Is DNA a metal, semiconductor or insulator? A theoretical approach
Rey-Gonzalez, Rafael; Fonseca-Romero, Karen; Plazas, Carlos; Grupo de Óptica e Información Cuántica Team
Over the last years, scientific interest in designing and making low-dimensional electronic devices with traditional or novel materials has increased. These experimental and theoretical research efforts into electronic properties at the molecular scale seek to develop efficient devices able to carry out tasks that are currently done by silicon transistors and devices. Among the new materials, DNA strands are highlighted, but the experimental results have been contradictory, pointing to behaviour as a conductor, a semiconductor or an insulator. To contribute to the understanding of the origin of this disparity in the measurements, we perform a numerical calculation of the electrical conductance of DNA segments, modelled as 1D disordered finite chains. The system is described by a tight-binding model with nearest-neighbour interactions and a single s orbital per site. Hydration effects are included as random variations of the self-energies. The electronic current as a function of applied bias is calculated using the Landauer formalism, where the transmission probability is determined within the transfer-matrix formalism. We find a conductor-to-semiconductor-to-insulator transition as a function of the three effects taken into account: chain size, intrinsic disorder, and hydration. We thank Fundación para la Promoción de la Investigación y la Tecnología, Colombia, and Dirección de Investigación de Bogotá, Universidad Nacional de Colombia, for partial financial support.
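The Landauer transfer-matrix computation sketched in this abstract can be illustrated for a generic 1D tight-binding chain. The snippet below is a toy under stated assumptions (uniform on-site disorder standing in for sequence and hydration effects, hopping set to 1, ideal leads), not the authors' DNA parametrization:

```python
import numpy as np

def transmission(onsite, E, t=1.0):
    """Landauer transmission T(E) of a 1D tight-binding chain with
    on-site energies `onsite`, coupled to ideal leads (eps = 0,
    hopping t), via the transfer-matrix method."""
    k = np.arccos(E / (2.0 * t))          # lead wavevector (E inside band)
    P = np.eye(2, dtype=complex)
    for eps in onsite:                    # accumulate total transfer matrix
        M = np.array([[(E - eps) / t, -1.0], [1.0, 0.0]], dtype=complex)
        P = M @ P
    vp = np.array([np.exp(1j * k), 1.0])   # right-moving lead wave
    vm = np.array([np.exp(-1j * k), 1.0])  # left-moving (reflected) wave
    # Solve P (vp + r vm) = tau vp for the amplitudes (r, tau).
    lhs = np.column_stack([P @ vm, -vp])
    r, tau = np.linalg.solve(lhs, -(P @ vp))
    return abs(tau) ** 2

rng = np.random.default_rng(0)
clean = transmission(np.zeros(100), E=0.5)                # ordered chain: T = 1
disordered = np.mean([transmission(rng.uniform(-1, 1, 100), E=0.5)
                      for _ in range(20)])                # localized: T << 1
```

Increasing the chain length or the disorder strength drives the transmission toward zero, the localization-induced insulating behaviour the abstract alludes to.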
Exploring Job Satisfaction of Nursing Faculty: Theoretical Approaches.
Wang, Yingchen; Liesveld, Judy
2015-01-01
The Future of Nursing report identified the shortage of nursing faculty as one of the barriers to nursing education. In light of this, it is becoming increasingly important to understand the work-life of nursing faculty. The current research focused on job satisfaction of nursing faculty from 4 theoretical perspectives: human capital theory, which emphasizes the expected monetary and nonmonetary returns for any career choices; structural theory, which emphasizes the impact of institutional features on job satisfaction; positive extrinsic environment by self-determination theory, which asserts that a positive extrinsic environment promotes competency and effective outcomes at work; and psychological theory, which emphasizes the proposed relationship between job performance and satisfaction. In addition to the measures for human capital theory, institutional variables (from structural theory and self-determination theory), and productivity measures (from psychological theory), the authors also selected sets of variables for personal characteristics to investigate their effects on job satisfaction. The results indicated that variables related to human capital theory, especially salary, contributed the most to job satisfaction, followed by those related to institutional variables. Personal variables and productivity variables as a whole contributed as well. The only other variable with marginal significance was faculty's perception of institutional support for teaching.
Optical trimer: A theoretical physics approach to waveguide couplers
Stoffel, A; Rodríguez-Lara, B M
2016-01-01
We study electromagnetic field propagation through an ideal, passive, triangular three-waveguide coupler using a symmetry based approach to take advantage of the underlying $SU(3)$ symmetry. The planar version of this platform has proven valuable in photonic circuit design providing optical sampling, filtering, modulating, multiplexing, and switching. We show that a group-theory approach can readily provide a starting point for design optimization of the triangular version. Our analysis is presented as a practical tutorial on the use of group theory to study photonic lattices for those not familiar with abstract algebra methods. In particular, we study the equilateral trimer to show the relation of pearl-necklace arrays with the Discrete Fourier Transform due to their cyclic group symmetry, and the isosceles trimer to show its relation with the golden ratio and its ability to provide stable output at a single waveguide. We also study the propagation dependent case of an equilateral trimer that linearly increa...
Open-Ended Evolutionary Robotics: an Information Theoretic Approach
Delarboulas, Pierre; Sebag, Michèle
2010-01-01
This paper is concerned with designing self-driven fitness functions for Embedded Evolutionary Robotics. The proposed approach considers the entropy of the sensori-motor stream generated by the robot controller. This entropy is computed using unsupervised learning; its maximization, achieved by an on-board evolutionary algorithm, implements a "curiosity instinct", favouring controllers visiting many diverse sensori-motor states (sms). Further, the set of sms discovered by an individual can be transmitted to its offspring, making a cultural evolution mode possible. Cumulative entropy (computed from ancestors and current individual visits to the sms) defines another self-driven fitness; its optimization implements a "discovery instinct", as it favours controllers visiting new or rare sensori-motor states. Empirical results on the benchmark problems proposed by Lehman and Stanley (2008) comparatively demonstrate the merits of the approach.
Coordination-theoretic approach to modelling grid service composition process
Meng Qian; Zhong Liu; Jing Wang; Li Yao; Weiming Zhang
2010-01-01
A grid service composite process is made up of complex coordinative activities. Developing an appropriate model of grid service coordinative activities is an important foundation for grid service composition. According to coordination theory, this paper elaborates the process of grid service composition using UML 2.0, and proposes an approach to modelling the grid service composition process based on coordination theory. This approach helps not only to analyze accurately the task activities and the dependencies among them, but also to facilitate the adaptability of the grid service orchestration, further realizing the connectivity, timeliness, appropriateness and expansibility of the grid service composition.
Simple field theoretical approach of Coulomb systems. Entropic effects
Di Caprio, D; Badiali, J P [Laboratory of Electrochemistry and Analytical Chemistry, University Paris 6, CNRS, ENSCP, BP 39, 4, Place Jussieu, 75252 Paris, Cedex 05 (France); Holovko, M [Institute for Condensed Matter Physics, National Academy of Sciences, 1 Svientsitskii Str, 79011 Lviv (Ukraine)], E-mail: dung.di_caprio@upmc.fr
2009-05-29
We discuss a new simple field-theoretical approach to Coulomb systems. Using a description in terms of fields, we introduce in a new way the statistical degrees of freedom, in relation to quantum mechanics. We show by a series of examples that these fundamental entropic effects can help account for physical phenomena in Coulomb systems, whether symmetric or asymmetric in valence. Overall, this gives a new understanding of these systems.
The Case Study Approach: Some Theoretical, Methodological and Applied Considerations
2013-06-01
a large manufacturing organisation in Malaysia. An in-depth case study process (specifically a qualitative approach) was used to illustrate the … researcher closely examined four leaders from generally diverse organisations, who had embraced the learning-organisation concept in order to improve … The researchers focused on the context of learning in the workplace, and they investigated the nature of learning and development opportunities that …
Data normalization in biosurveillance: an information-theoretic approach.
Peter, William; Najmi, Amir H; Burkom, Howard
2007-10-11
An approach to identifying public health threats by characterizing syndromic surveillance data in terms of its surprisability is discussed. Surprisability in our model is measured by assigning a probability distribution to a time series, and then calculating its entropy, leading to a straightforward designation of an alert. Initial application of our method is to investigate the applicability of using suitably-normalized syndromic counts (i.e., proportions) to improve early event detection.
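A minimal version of the entropy-based surprisability measure described in this abstract can be sketched as follows; the counts, window length and alert threshold are invented for illustration:

```python
import numpy as np

def surprisability(window):
    """Shannon entropy (bits) of the syndromic proportions in a window
    of daily counts; a sketch of the abstract's idea, not the authors'
    implementation."""
    p = np.asarray(window, dtype=float)
    p = p / p.sum()                  # normalize counts to proportions
    p = p[p > 0]                     # convention: 0 log 0 = 0
    return -np.sum(p * np.log2(p))

baseline = [40, 42, 39, 41, 40, 38, 40]   # stable weekly counts
spike    = [40, 42, 39, 41, 40, 38, 160]  # one anomalous day

# A concentration of counts on one day lowers the entropy relative to
# the near-uniform baseline; alert when the drop exceeds a threshold.
alert = surprisability(baseline) - surprisability(spike) > 0.1
```

In this toy, the spike day pulls the entropy from about 2.81 bits down to about 2.52 bits, which crosses the (arbitrary) 0.1-bit threshold and triggers the alert.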
Success Determination by Innovation: A Theoretical Approach in Marketing
Raj Kumar Gautam
2012-01-01
The paper aims to identify the main issues in marketing that need the immediate attention of marketers. The importance of innovation in marketing has also been highlighted, and the marketing mix has been related to innovative and creative ideas. The study is based on secondary data; various research papers and articles have been studied to develop an innovative approach to marketing. Marketing innovation ideas relating to business lead generation, product, price, distribution, pro...
The Economic Security of Bank: Theoretical Basis and Systemic Approach
Gavlovska Nataliia I.
2017-07-01
Full Text Available The article analyzes the existing approaches to interpreting the category of «economic security of bank». The author's own definition of the concept of «economic security of bank» has been proposed: the condition of protection of the vital interests of the bank, achieved by harmonizing relationships with the entities of external influence and optimizing internal system processes, thus enabling efficient functioning as well as development by means of an adaptation mechanism. A number of approaches to understanding the substance of the above concept have been allocated and their main characteristics provided. The need to study the specifics of the interaction of banking institutions with the external environment, in the context of interaction between State agents and market actors, has been underlined. Features of the formation of the term «system» have been defined, and three main groups of approaches to interpretation of the term have been provided. The author's own definition of the concept of «economic security system of bank» has been proposed, and concrete principles for building an economic security system of bank have been provided.
An information theoretic approach for combining neural network process models.
Sridhar, D V.; Bartlett, E B.; Seagrave, R C.
1999-07-01
Typically neural network modelers in chemical engineering focus on identifying and using a single, hopefully optimal, neural network model. Using a single optimal model implicitly assumes that one neural network model can extract all the information available in a given data set and that the other candidate models are redundant. In general, there is no assurance that any individual model has extracted all relevant information from the data set. Recently, Wolpert (Neural Networks, 5(2), 241 (1992)) proposed the idea of stacked generalization to combine multiple models. Sridhar, Seagrave and Bartlett (AIChE J., 42, 2529 (1996)) implemented stacked generalization for neural network models by integrating multiple neural networks into an architecture known as stacked neural networks (SNNs). SNNs consist of a combination of the candidate neural networks and were shown to provide improved modeling of chemical processes. However, in Sridhar's work SNNs were limited to using a linear combination of artificial neural networks. While a linear combination is simple and easy to use, it can utilize only those model outputs that have a high linear correlation to the output. Models that are useful in a nonlinear sense are wasted if a linear combination is used. In this work we propose an information theoretic stacking (ITS) algorithm for combining neural network models. The ITS algorithm identifies and combines useful models regardless of the nature of their relationship to the actual output. The power of the ITS algorithm is demonstrated through three examples including application to a dynamic process modeling problem. The results obtained demonstrate that the SNNs developed using the ITS algorithm can achieve highly improved performance as compared to selecting and using a single hopefully optimal network or using SNNs based on a linear combination of neural networks.
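The core idea, screening component models by an information measure rather than by linear correlation, can be sketched as below. The histogram mutual-information estimator, the toy "model outputs" and the data are all assumptions for illustration, not the ITS algorithm itself:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram estimate of I(X; Y) in nats; a simple stand-in for an
    information-theoretic usefulness criterion."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    nz = pxy > 0                                # avoid log 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(1)
target = rng.normal(size=2000)
linear_model = 0.9 * target + 0.1 * rng.normal(size=2000)  # linearly useful
nonlinear_model = np.abs(target)                           # useful only nonlinearly
noise_model = rng.normal(size=2000)                        # useless

scores = {name: mutual_information(m, target) for name, m in
          [("linear", linear_model), ("nonlinear", nonlinear_model),
           ("noise", noise_model)]}
```

The `nonlinear` model has near-zero linear correlation with the target, so a linear stacking would discard it, yet its mutual information with the target is far above that of pure noise, which is the kind of model the abstract argues should be retained.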
A group theoretic approach to shear-free radiating stars
Abebe, G. Z.; Maharaj, S. D.; Govinder, K. S.
2015-10-01
A systematic analysis of the junction condition, relating the radial pressure with the heat flow in a shear-free relativistic radiating star, is undertaken. This is a highly nonlinear partial differential equation in general. We obtain the Lie point symmetries that leave the boundary condition invariant. Using a linear combination of the symmetries, we transform the junction condition into ordinary differential equations. We present several new exact solutions to the junction condition. In each case we can identify the exact solution with a Lie point generator. Some of the solutions obtained satisfy the linear barotropic equation of state. As a special case we regain the conformally flat models which were found previously. Our analysis highlights the interplay between Lie algebras, nonlinear differential equations and application to relativistic astrophysics.
A group theoretic approach to shear-free radiating stars
Abebe, G; Govinder, K S
2015-01-01
A systematic analysis of the junction condition, relating the radial pressure with the heat flow in a shear-free relativistic radiating star, is undertaken. This is a highly nonlinear partial differential equation in general. We obtain the Lie point symmetries that leave the boundary condition invariant. Using a linear combination of the symmetries, we transform the junction condition into ordinary differential equations. We present several new exact solutions to the junction condition. In each case we can identify the exact solution with a Lie point generator. Some of the solutions obtained satisfy the linear barotropic equation of state. As a special case we regain conformally flat models which were found previously. Our analysis highlights the interplay between Lie algebras, nonlinear differential equations and application to relativistic astrophysics.
Theoretical Triangulation as an Approach for Revealing the Complexity of a Classroom Discussion
van Drie, Jannet; Dekker, Rijkje
2013-01-01
In this paper we explore the value of theoretical triangulation as a methodological approach for the analysis of classroom interaction. We analyze an excerpt of a whole-class discussion in history from three theoretical perspectives: interactivity of the discourse, conceptual level raising and historical reasoning. We conclude that using…
From DNA radiation damage to cell death: theoretical approaches.
Ballarini, Francesca
2010-10-05
Some representative models of radiation-induced cell death, which is a crucial endpoint in radiobiology, were reviewed. The basic assumptions were identified, their consequences on predicted cell survival were analyzed, and the advantages and drawbacks of each approach were outlined. In addition to "historical" approaches such as the Target Theory, the Linear-Quadratic model, the Theory of Dual Radiation Action and Katz' model, the more recent Local Effect Model was discussed, focusing on its application in Carbon-ion hadrontherapy. Furthermore, a mechanistic model developed at the University of Pavia and based on the relationship between cell inactivation and chromosome aberrations was presented, together with recent results; the good agreement between model predictions and literature experimental data on different radiation types (photons, protons, alpha particles, and Carbon ions) supported the idea that asymmetric chromosome aberrations like dicentrics and rings play a fundamental role for cell death. Based on these results, a reinterpretation of the TDRA was also proposed, identifying the TDRA "sublesions" and "lesions" as clustered DNA double-strand breaks and (lethal) chromosome aberrations, respectively.
From DNA Radiation Damage to Cell Death: Theoretical Approaches
Francesca Ballarini
2010-01-01
Full Text Available Some representative models of radiation-induced cell death, which is a crucial endpoint in radiobiology, were reviewed. The basic assumptions were identified, their consequences on predicted cell survival were analyzed, and the advantages and drawbacks of each approach were outlined. In addition to “historical” approaches such as the Target Theory, the Linear-Quadratic model, the Theory of Dual Radiation Action and Katz' model, the more recent Local Effect Model was discussed, focusing on its application in Carbon-ion hadrontherapy. Furthermore, a mechanistic model developed at the University of Pavia and based on the relationship between cell inactivation and chromosome aberrations was presented, together with recent results; the good agreement between model predictions and literature experimental data on different radiation types (photons, protons, alpha particles, and Carbon ions) supported the idea that asymmetric chromosome aberrations like dicentrics and rings play a fundamental role for cell death. Based on these results, a reinterpretation of the TDRA was also proposed, identifying the TDRA “sublesions” and “lesions” as clustered DNA double-strand breaks and (lethal) chromosome aberrations, respectively.
Tsukamoto, T.; Sagawa, N. [The Institute of Energy Economics, Tokyo (Japan)
1996-02-01
In order to optimize the introduction of wide-area power transmission access under the amended Electric Power Business Law, discussions were given on a theoretical approach to calculating access fees for transmission lines. The amended law is intended not to limit newcomers in the power generation area to operations in the supply areas of general electric power business operators, but to form power wholesale markets over a wider range. Since a power wholesale market is structured via one transmission network, the access conditions for transmission lines largely govern the economic reasonability for new market entrants. Too high an access fee prevents high-efficiency power generation units from entering a market, thus failing to reduce energy fees. Conversely, if the fee is too low, harmful effects on system operation will result, such as the entry of low-efficiency generation units. What maximizes economic gains and gives incentives to the system participants would be an operation using a marginal-expense approach, but a number of problems also exist. The overall expense distribution method is simple and easy to operate, but contains economic problems related to technical problems, fairness, and efficiency in system operation. 5 refs., 5 figs., 1 tab.
Consumer Perception of Competitiveness – Theoretical-Instrumental Approach
Duralia Oana
2016-04-01
Full Text Available The behaviorist economic approach has made a quantum leap in a relatively short period of time; studying the relationship between consumer behavior and companies’ strategic decisions based on market competitiveness is no longer an unknown area. However, this issue remains topical, given that during the purchase decision process consumers do not always behave rationally, as they are the only ones who can judge whether the offer of the company, in terms of range, quality, price and auxiliary services, meets their needs or not. In this context, this paper aims to deepen the existing interconnection between the market decisions of the enterprise and consumer behavior, as a measure of the competitiveness of a firm on a certain market.
Group-theoretical approach to relativistic eikonal physics
Leon, J.; Quiros, M. (Instituto de Estructura de la Materia, C.S.I.C., Madrid (Spain); Departamento de Matematica, Universidad Complutense, Campus de Alcala (Spain)); Ramirez Mittelbrunn, J. (Instituto de Estructura de la Materia, C.S.I.C., Madrid (Spain))
1977-09-01
A contraction of the Poincare group is performed leading to the eikonal approximation. Invariants, one-particle states, spinning particles and some interaction problems are studied with the following results: momenta of ultrarelativistic particles behave as lightlike, the little group being E(2); spin behaves as that of zero-mass particles, helicity being conserved in the presence of interactions. The full eikonal results are rederived for Green's functions, wave functions, etc. The way of computing corrections due to transverse momenta and spin-dependent interactions is outlined. A parallel analysis is made for the infinite-momentum frame, the similarities and differences between this formalism and the eikonal approach being disclosed.
Theoretical Basis of the Psychoanalytic Approach to Psychotherapy of Autism
Aldo Spelic
2015-02-01
Full Text Available In the modern scientific and professional environment, psychoanalytic psychotherapy is relegated to the background as regards its possibilities in the treatment of autism. This position, expressed through doubts about its usefulness in the therapy of autism, is identified by the author as a result of the existing dichotomy ('splitting') between 'organic' and 'psychic' concepts of its etiology. To overcome this constraint the author, on the basis of his twenty years of psychotherapeutic experience with eight autistic children, suggests the possibility of developing a concept of autistic psychogenesis and, based on it, a therapeutic approach. In support of his therapeutic observations, the article draws on contributions from contemporary research by intersubjectivists, whose results speak in favour of this thesis.
Theoretical Approaches to the Spin Structure of the Proton
Gonzalez Hernandez, Jose Osvaldo
Many aspects of the structure of the proton are still unknown. One of the most noticeable unanswered questions is that of spin: how can the fundamental degrees of freedom, quarks and gluons, account for the spin of the parent proton? Quarks and gluons are known to carry not only intrinsic but also orbital angular momentum, and the two combined should in principle add up to the value 1/2 that characterizes the spin of the proton. The mechanism responsible for this is yet to be understood; it is not even clear how to define or "separate" the orbital angular momentum from the intrinsic angular momentum of the constituent particles. In recent years, one promising approach to this puzzle, known as the spin crisis, is the possibility of accessing the transverse structure of the proton by means of the so-called Generalized Parton Distributions (GPDs). These functions appear in the description of exclusive scattering processes. Since GPDs cannot be calculated from first principles, they must be extracted from models and experimental data. This dissertation presents the development of a new flexible parametrization, based on a "Reggeized" diquark approach, for chiral-even GPDs. The model is then used to analyze the significance of the different GPDs in Deeply Virtual Compton Scattering measurements from JLab, and the results of this analysis are extended to the kinematical region relevant to the HERMES experiment. Subsequently, the model is extended to chiral-odd GPDs. With this model in hand, a study of the flavor dependence of the Dirac and Pauli form factors is conducted. The connections between GPDs and other distribution functions are addressed in the last chapter, in the context of Wigner distributions and possible probabilistic interpretations.
A theoretical approach of Asperger’s syndrome in children
Chrysoula Valamoutopoulou
2010-07-01
Full Text Available Asperger's syndrome belongs to the pervasive developmental disorders and was categorized as a separate disorder first in the ICD-10 (World Health Organization, 1992) and afterwards in the DSM-IV (American Psychiatric Association, 1994). It is distinguished by a cluster of symptoms concerning poor performance in social interaction and communication skills, as well as increased stereotypical behavior in various activities and interests. Aim: The aim of the present study was a descriptive review of the literature concerning the epidemiology, the differential diagnosis from autism, the etiology, the diagnosis, the therapeutic approaches and the adaptation difficulties of the family environment, with regard to Asperger's syndrome. Material and method: The methodology followed in the present review included a specialized electronic search on Asperger's syndrome in children, using dedicated keywords. Results: The review of the literature showed that individuals with Asperger's syndrome experience major difficulties in elementary social behaviors, such as failure to develop and maintain friendships or to seek shared enjoyable activities with others. They also have difficulty understanding non-verbal communication (body language), others' expressions, body postures and eye contact. Conclusions: Early recognition of Asperger's syndrome is imperative, with the final objective of continuous briefing and sensitization of all health professionals as well as the wider public, within an interdisciplinary approach.
Green Logistic Practices: A Theoretical Approach of the Theme
Emanuele Engelage
2016-12-01
Full Text Available This study aims to identify the main practices of green logistics considered in the national and international academic literature. Using standard techniques for the selection of previous studies, the study first presents a definition of the term green logistics in order to differentiate it from concepts commonly treated as similar, such as the circular economy, green supply chain management (GSCM), reverse logistics and environmental certifications (ISO 14001), to obtain clarity about their delimitations, scopes and depth. The study also organizes a taxonomy that involves different functional areas of the company, giving direction to sustainable conduct and resulting in nine components of green logistics that serve as a basis for classifying the identified practices. Based on this conceptual definition and taxonomy, it lists 112 green logistics practices, of which 85 are at the enterprise level, 24 governmental and 3 directed towards consumers. Regarding the quantity of identified practices and the number of citations, both in the business sphere and in the governmental one, the most representative part is related to green. Among the most cited practices are the search for more efficient deliveries, the use of intermodal and multimodal transport, which are less polluting, and the programming and optimization of delivery flows. The survey also revealed that although the concept of green logistics is consolidated in the literature, the majority of studies, especially empirical ones, concentrate on some of its components, in particular transport and reverse logistics.
Hybrid empirical--theoretical approach to modeling uranium adsorption
Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W
2004-05-01
An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples, and the Freundlich K_f parameter is correlated to sediment surface area (r^2 = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
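The Freundlich fit described above reduces to linear regression on log-transformed data; a minimal sketch (the data below are synthetic, generated from assumed parameters, not the INEEL measurements):

```python
import math

def fit_freundlich(conc, sorbed):
    """Fit the Freundlich isotherm S = Kf * C**n by least squares on
    log-transformed data: ln S = ln Kf + n * ln C."""
    xs = [math.log(c) for c in conc]
    ys = [math.log(s) for s in sorbed]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    n = sxy / sxx                   # Freundlich exponent
    kf = math.exp(ybar - n * xbar)  # Freundlich coefficient
    return kf, n

# Noise-free synthetic data generated from Kf = 2.0, n = 0.8
conc = [0.1, 0.5, 1.0, 5.0, 10.0]
sorbed = [2.0 * c ** 0.8 for c in conc]
kf, n = fit_freundlich(conc, sorbed)
```

On real measurements the recovered n would be compared across samples, and kf regressed against sediment surface area as the abstract describes.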
A game-theoretic approach to valuating toxoplasmosis vaccination strategies.
Sykes, David; Rychtář, Jan
2015-11-01
The protozoan Toxoplasma gondii is a parasite often found in wild and domestic cats, and it is the cause of the disease toxoplasmosis. More than 60 million people in the United States carry the parasite, and the Centers for Disease Control have placed toxoplasmosis in their disease classification group Neglected Parasitic Infections as one of five parasitic diseases targeted as priorities for public health action. In recent years, there has been significant progress toward the development of a practical vaccine, so vaccination programs may soon be a viable approach to controlling the disease. Anticipating the availability of a toxoplasmosis vaccine, we are interested in determining when cat owners should vaccinate their own pets. We have created a mathematical model describing the conditions under which vaccination is advantageous. Our model can be used to predict the average vaccination level in the population. We find that there is a critical vaccine cost threshold above which no one will use the vaccine. A vaccine cost slightly below this threshold, however, results in high usage of the vaccine, and consequently in a significant reduction in population seroprevalence. Not surprisingly, we find that populations may achieve herd immunity only if the cost of vaccine is zero.
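The cost-threshold behavior the abstract describes can be illustrated with a deliberately simplified stand-in model; the linear risk decay, the r0 value, and all parameter names below are assumptions for illustration, not the paper's actual equations:

```python
def equilibrium_coverage(cost_vaccine, cost_infection, r0=1.5):
    """Nash-equilibrium vaccination coverage in a toy model where an
    owner's infection risk falls linearly with population coverage p:
        risk(p) = max(0, 1 - p / pc),  pc = 1 - 1/r0 (herd-immunity level).
    Owners vaccinate while cost_vaccine < cost_infection * risk(p);
    at equilibrium the two sides are equal. Illustrative only, not the
    model from the paper."""
    pc = 1.0 - 1.0 / r0
    rel_cost = cost_vaccine / cost_infection
    if rel_cost >= 1.0:   # above the critical cost threshold: nobody vaccinates
        return 0.0
    if rel_cost <= 0.0:   # free vaccine: coverage reaches herd immunity
        return pc
    return pc * (1.0 - rel_cost)  # indifference point: risk(p*) = rel_cost
```

Even this toy version reproduces the qualitative findings: coverage drops to zero above a cost threshold, and only a zero-cost vaccine attains herd immunity.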
Theoretical Aspects and Methodological Approaches to Sales Services Quality Assessment
Tarasova EE
2015-11-01
Full Text Available The article defines trade service quality and proposes an object-oriented approach to interpreting its essence, singling out such components as product offering and goods quality, service forms and goods selling methods, merchandising, services, and staff. It develops a model for managing trade service in retail outlets that covers the strategic, tactical and operational levels of management and aims at meeting customers' expectations, achieving sustainable competitive positions and increasing customer loyalty. A methodology for assessing trade service quality is developed and tested; it allows a comparative assessment of cooperative retailing both in terms of general indicators and their individual components, regulation of the factors affecting trade service quality, and positive administrative action. The article presents the results of an evaluation of customer service quality in consumer cooperative retailers and the dynamics of overall and comprehensive indicators of trade service quality for the selected components. Finally, it states the main directions and measures for improving trade service quality based on quantitative values of individual indicators for each of the five selected components (product offering and goods quality, service forms and sale methods, merchandising, services, staff).
Methods of quantifying operational risk in Banks : Theoretical approaches
Fatima Zahra El ARIF
2016-07-01
Full Text Available Defining operational risk is a challenge. This risk is atypical in that it concerns all the activities of the bank, and it is often difficult to estimate independently of the other risks that characterize banking activity. Indeed, it is very difficult to determine the amount, the frequency, and the key factors behind this risk, and banks are still putting in place data collection procedures and formalized approaches in this area. This is what we try to decipher. How, then, are banks supposed to assess, predict and effectively manage operational risk, given the remarkable diversity of dangers and threats now facing their business? How can they successfully respond to new constraints emanating from regulatory authorities while preserving their future profitability? These two questions are at the heart of the issues related to the measurement of operational risk, and are not without effect on the future ability of banks to manage this type of risk.
A Framework to Measure the Service Quality of Distributor with Fuzzy Graph Theoretic Approach
Tarun Kumar Gupta
2016-01-01
Full Text Available A combination of fuzzy logic and a graph theoretic approach has been used to find the service quality of a distributor in a manufacturing supply chain. This combination is termed the fuzzy graph theoretic (FGT) approach. Initially the identified factors were grouped with SPSS (Statistical Package for the Social Sciences) software, and then the digraph approach was applied. The interaction and inheritance values were calculated by the fuzzy graph theory approach in terms of the permanent function. A single numerical index, computed from the permanent function, then indicates the distributor's service quality. This method can be used to compare the service quality of different distributors.
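The permanent function at the heart of the FGT index can be computed directly from its definition; a brute-force sketch for small matrices (the 3-factor matrix values below are hypothetical, not the paper's data):

```python
from itertools import permutations

def _prod(values):
    result = 1.0
    for v in values:
        result *= v
    return result

def permanent(matrix):
    """Permanent of a square matrix: the determinant's expansion with
    every term taken with a plus sign. In the FGT approach the permanent
    of the fuzzy factor/interaction matrix yields the single quality index.
    Brute force, O(n!): fine for a handful of factor groups."""
    n = len(matrix)
    return sum(_prod(matrix[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Hypothetical 3-factor matrix: diagonal = factor scores (inheritance),
# off-diagonal = pairwise interaction strengths, on a fuzzy 0-1 scale.
m = [[0.7, 0.2, 0.1],
     [0.2, 0.8, 0.3],
     [0.1, 0.3, 0.6]]
index = permanent(m)
```

Because every term enters with a plus sign, the index grows monotonically with both factor scores and interactions, which is what makes it usable as a composite quality measure.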
A theoretical approach to the re-suspension factor
Magnoni M.
2012-04-01
Full Text Available The atmospheric re-suspension of radionuclides is a well-known phenomenon consisting in the re-injection into the atmosphere of previously deposited radioactivity. The process is driven by the action of wind on surfaces and can act as an additional source of radiation exposure by inhalation after deposition has finished. The re-suspension factor K, generally considered a time-dependent function, is defined as the ratio of Ca, the volumetric air activity concentration (Bq m−3), to I0 (Bq m−2), the radioactivity deposition at time zero. The re-suspension factor concept is very useful in radioprotection for estimating the inhalation of radionuclides re-suspended from contaminated surfaces when direct atmospheric measurements are lacking or difficult to perform. However, choosing proper values of K is usually not simple, as they are quite site-specific and related to the meteorological, geomorphological and environmental characteristics of the area under study. Moreover, several investigations have clearly shown that K decreases with time. For that reason, K values span several orders of magnitude: typical values in the range 10−5–10−10 m−1 are reported in the literature for different environmental conditions and times elapsed since the deposition event. The currently available models for the re-suspension factor are based on empirical formulas whose parameters are highly site-dependent and cannot easily be related to any physical quantity. In this paper a simple physical model for the re-suspension factor is proposed and tested against available environmental radioactivity data (137Cs), collected since 1986 (Chernobyl fallout). The new model not only describes the experimental data as satisfactorily as the current empirical models do, but is also able to connect the K values to quantities with a physical meaning (such as, for example, a diffusion
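The defining ratio K = Ca / I0 is straightforward to compute; a minimal sketch with illustrative numbers (the example values are assumptions, chosen only to land inside the literature range quoted above):

```python
def resuspension_factor(air_conc, deposition):
    """Re-suspension factor K = Ca / I0 in m^-1, with Ca the volumetric
    air activity concentration (Bq m^-3) and I0 the surface deposition
    at time zero (Bq m^-2)."""
    return air_conc / deposition

# Illustrative numbers: a 137Cs air concentration of 5e-6 Bq m^-3 above a
# surface deposited with 1e4 Bq m^-2 gives K = 5e-10 m^-1, at the low end
# of the 10^-5 to 10^-10 m^-1 range, i.e. long after the deposition event.
k = resuspension_factor(5e-6, 1e4)
```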
Theoretical Approach to the Radiance-to-Flux Conversion in the EarthCARE Mission Framework.
Doménech García, Carlos
2008-01-01
The research of the Thesis, Theoretical Approach to the Radiance-to-Flux Conversion in the EarthCARE Framework, is aimed at studying the instantaneous TOA radiance-to-flux conversion for the prospective Broad-Band Radiometer (BBR) on-board the EarthCARE (Earth Clouds Aerosols and Radiation Explorer) platform, through the development of theoretical angular distribution models based on the specific designing features of the instrument. The inversion procedure has been undertaken to obtain the a...
Pant, Sanjay; Lombardi, Damiano
2015-10-01
A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement noise into account and is not restricted to any particular noise structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems, ranging from systems governed by ordinary differential equations to partial differential equations, and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
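The expected information gain described above has a closed form in the simplest linear-Gaussian case, which makes a useful sanity check for non-parametric estimators of the same quantity; this toy case is an illustration of the criterion, not the paper's method:

```python
import math

def expected_information_gain(prior_var, noise_var):
    """Mutual information I(theta; y) in nats for the linear-Gaussian toy
    case theta ~ N(0, prior_var), observed as y = theta + N(0, noise_var):
        I = 0.5 * ln(1 + prior_var / noise_var).
    The paper estimates this kind of quantity non-parametrically
    (Monte Carlo + k-nearest neighbours); the closed form here serves
    only as a reference point for such estimators."""
    return 0.5 * math.log(1.0 + prior_var / noise_var)

# Less measurement noise means more expected information about the
# parameter, i.e. better practical identifiability.
gain_precise = expected_information_gain(1.0, 0.01)
gain_noisy = expected_information_gain(1.0, 1.0)
```

An information gain near zero flags a practically unidentifiable parameter: the measurement barely shrinks the prior uncertainty.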
On the electric dipole moments of small sodium clusters from different theoretical approaches
Aguado, Andres, E-mail: aguado@metodos.fam.cie.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain); Largo, Antonio, E-mail: alargo@qf.uva.es [Departamento de Quimica Fisica y Quimica Inorganica, Universidad de Valladolid (Spain); Vega, Andres, E-mail: vega@fta.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain); Balbas, Luis Carlos, E-mail: balbas@fta.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain)
2012-05-03
contribution of the core electrons to the electric dipole moments. Our new geometries possess significantly smaller electric dipole moments than previous density functional results, mostly when combined with the van der Waals exchange-correlation functional. However, although the agreement with experiment clearly improves upon previous calculations, the theoretical dipole moments are still about one order of magnitude larger than the experimental values, suggesting that the correct global minimum structures have not been located yet.
John Etherington
2012-09-01
Full Text Available This article discusses the theoretical lessons from the results of the research project into the reforms of CARUE (Conference for EU-Related Affairs) between the Spanish state and the autonomous communities. To that end, the article begins by explaining the theoretical and methodological premises on which our research is based, and then summarises the main findings, paying particular attention to the question of whether or not these reforms have empowered the autonomous communities as political actors within the framework of European governance. Finally, the results are used to reflect back on the dominant theoretical approaches to European governance and the role played by the regions within it; the conclusion is that any theoretical model that seeks to deal successfully with these questions must be sensitive to different national traditions and must understand the importance of historical inertias.
Farag, A. Z. A.; Sultan, M.; Elkadiri, R.; Abdelhalim, A.
2014-12-01
An integrated approach using remote sensing, landscape analysis and statistical methods was conducted to assess the role of groundwater sapping in shaping the Saharan landscape. A GIS-based logistic regression model was constructed to automatically delineate the spatial distribution of the sapping features over areas occupied by the Nubian Sandstone Aquifer System (NSAS): (1) an inventory was compiled of known locations of sapping features identified either in the field or from satellite datasets (e.g. Orbview-3 and Google Earth Digital Globe imagery); (2) spatial analyses were conducted in a GIS environment and seven geomorphological and geological predisposing factors (i.e. slope, stream density, cross-sectional and profile curvature, minimum and maximum curvature, and lithology) were identified; (3) a binary logistic regression model was constructed, optimized and validated to describe the relationship between the sapping locations and the set of controlling factors and (4) the generated model (prediction accuracy: 90.1%) was used to produce a regional sapping map over the NSAS. Model outputs indicate: (1) groundwater discharge and structural control played an important role in excavating the Saharan natural depressions as evidenced by the wide distribution of sapping features (areal extent: 1180 km²) along the fault-controlled escarpments of the Libyan Plateau; (2) proximity of mapped sapping features to reported paleolake and tufa deposits suggesting a causal effect. Our preliminary observations (from satellite imagery) and statistical analyses together with previous studies in the North Western Sahara Aquifer System (North Africa), Sinai Peninsula, Negev Desert, and the Plateau of Najd (Saudi Arabia) indicate extensive occurrence of sapping features along the escarpments bordering the northern margins of the Saharan-Arabian Desert; these areas share similar hydrologic settings with the NSAS domains and they too witnessed wet climatic periods in the Mid
GRAPH THEORETICAL AND NETWORKS APPROACH FOR THE DEVELOPMENT OF A LEARNING MODEL – A CASE STUDY
PROF. DR. P. K. SRIMANI
2012-08-01
Full Text Available This paper deals with the graph theoretical approach for developing a framework for a Learning model used to optimise the Mathematical Pathway in children at the elementary level, verified by using a Networks model. Data collected on the mathematical concepts a child needs to learn at the elementary level [Class I to VII] are represented using Concept Flow Graphs and optimized with graph theory techniques and algorithms: rearranging nodes as per the learning progression, partitioning the graphs into sub-graphs to represent levels of learning, optimizing the sub-graphs using a merging and elimination technique, and identifying/marking the optional nodes. The design of the framework by the graph theoretical approach is validated by the application of the Networks approach, and this is used to design the Mathematical Pathway driver, which is the core component of the Learning model. This approach is novel and the Learning model developed is highly accurate.
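The "rearranging nodes as per the learning progression" step is, at its core, a topological ordering of the concept-flow graph; a sketch using Kahn's algorithm on a hypothetical concept fragment (the concept names are illustrative, not taken from the study's data):

```python
from collections import deque

def learning_order(prereq):
    """Topological ordering of a concept-flow graph: each concept appears
    only after all of its prerequisites (Kahn's algorithm). `prereq` maps
    each concept to the list of concepts it depends on."""
    indeg = {c: len(deps) for c, deps in prereq.items()}
    dependants = {c: [] for c in prereq}   # prerequisite -> concepts it unlocks
    for c, deps in prereq.items():
        for d in deps:
            dependants[d].append(c)
    order = []
    ready = deque(sorted(c for c, d in indeg.items() if d == 0))
    while ready:
        c = ready.popleft()
        order.append(c)
        for nxt in sorted(dependants[c]):
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

# Hypothetical fragment of an elementary-maths concept graph
concepts = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "fractions": ["multiplication", "subtraction"],
}
path = learning_order(concepts)
```

Partitioning the resulting order into contiguous blocks then yields the "levels of learning" sub-graphs the abstract mentions.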
A Balanced Theoretical and Empirical Approach for the Development of a Design Support Tool
Jensen, Thomas Aakjær; Hansen, Claus Thorp
1996-01-01
The introduction of a new design support system may change the engineering designer's work situation. Therefore, it may not be possible to derive all the functionalities for a design support system solely from empirical studies of manual design work. Alternatively, the design support system could be based on an assertion with a theoretical foundation, which is then verified through empirical studies. However, to develop a design support system there is a need for a balanced theoretical and empirical approach. This paper presents the results from the first steps in the development of a design support system, indicating a proposal for how to balance a theoretical and an empirical approach. The result of this research will be utilized in the development of a Designer's Workbench to support the synthesis activity in mechanical design.
A Theoretical Approach to Financial Therapy: The Development of the Ford Financial Empowerment Model
Kristy L. Archuleta; Joyce A. Baptist; Megan R. Ford
2012-01-01
The purpose of this paper is to introduce an integrative approach to working with clients experiencing problems related to financial disempowerment. The multi-phase model integrates three theoretically-driven psychotherapy approaches, including cognitive behavioral, narrative, and Virginia Satir’s experiential therapies, and financial counseling techniques to increase one’s sense of financial empowerment. A case study is included to demonstrate the applicability and effectiveness of the model...
Dey, Ramendra Sundar; Hjuler, Hans Aage; Chi, Qijin
2015-01-01
We report a facile and low-cost approach for the preparation of all-in-one supercapacitor electrodes using copper foam (CuF) integrated three-dimensional (3D) reduced graphene oxide (rGO) networks. The binder-free 3DrGO@CuF electrodes are capable of delivering high specific capacitance approaching the theoretical capacitance of graphene and exhibiting high charge–discharge cycling stability.
The influence of Coalition Formation on Idea Selection in Dispersed Teams: A Game Theoretic Approach
Sie, Rory
2009-01-01
Sie, R. L. L. (2009). The influence of coalition formation on idea selection in dispersed teams: a game theoretic approach. Poster presentation at the Fourth European Conference on Technology Enhanced Learning (EC-TEL 2009), September 29–October 2, 2009, Nice, France.
Speaking Back to the Deficit Discourses: A Theoretical and Methodological Approach
Hogarth, Melitta
2017-01-01
The educational attainment of Aboriginal and Torres Strait Islander students is often presented within a deficit view. The need for Aboriginal and Torres Strait Islander researchers to challenge the societal norms is necessary to contribute to the struggle for self-determination. This paper presents a theoretical and methodological approach that…
Metaphor Analysis as an Approach for Exploring Theoretical Concepts: The Case of Social Capital
Andriessen, Daniel; Gubbins, Claire
2009-01-01
In many fields within management and organizational literature there is considerable debate and controversy about key theoretical concepts and their definitions and meanings. Systematic metaphor analysis can be a useful approach to study the underlying conceptualizations that give rise to these cont
The Influence of Coalition Formation on Idea Selection in Dispersed Teams: A Game Theoretic Approach
Sie, Rory; Bitter-Rijpkema, Marlies; Sloep, Peter
2009-01-01
Sie, R. L. L., Bitter-Rijpkema, M., & Sloep, P. B. (2009). The Influence of Coalition Formation on Idea Selection in Dispersed Teams: A Game Theoretic Approach. In U. Cress, V. Dimitrova & M. Specht (Eds.), Learning in the Synergy of Multiple Disciplines. Proceedings of the Fourth European Conferenc
Understanding Older Adults' Physical Activity Behavior: A Multi-Theoretical Approach
Grodesky, Janene M.; Kosma, Maria; Solmon, Melinda A.
2006-01-01
Physical inactivity is a health issue with serious consequences for older adults. Investigating physical activity promotion within a multi-theoretical approach may increase the predictive strength of physical activity determinants and facilitate the development and implementation of effective interventions for older adults. This article examines…
Michael T. Lee
2016-09-01
Full Text Available Empirical evidence demonstrates that motivated employees mean better organizational performance. The objective of this conceptual paper is to articulate the progress that has been made in understanding employee motivation and organizational performance, and to suggest how theory concerning employee motivation and organizational performance may be advanced. We acknowledge the existing limitations of theory development and suggest an alternative research approach. Current motivation theory development is based on conventional quantitative analysis (e.g., multiple regression analysis, structural equation modeling). Since researchers are interested in context and in understanding this social phenomenon holistically, they think in terms of combinations and configurations of a set of pertinent variables. We suggest that researchers take a set-theoretic approach to complement existing conventional quantitative analysis. To advance current thinking, we propose a set-theoretic approach to leverage employee motivation for organizational performance.
A novel approach for absolute radar calibration: formulation and theoretical validation
C. Merker
2015-06-01
Full Text Available The theoretical framework of a novel approach for absolute radar calibration is presented and its potential analysed by means of synthetic data, to lay out a solid basis for future practical application. The method has the advantage of an absolute calibration with respect to the directly measured reflectivity, without needing a previously calibrated reference device. It requires a setup comprising three radars: two devices oriented towards each other, measuring reflectivity along the same horizontal beam and operating within a strongly attenuated frequency range (e.g. K or X band), and one vertical reflectivity and drop size distribution (DSD) profiler below this connecting line, which is to be calibrated. The absolute determination of the calibration factor is based on attenuation estimates. Using synthetic, smooth and geometrically idealised data, calibration is found to perform best for homogeneous precipitation events with rain rates high enough to ensure a distinct attenuation signal (reflectivity above ca. 30 dBZ). Furthermore, the choice of the interval width (in measuring range gates) around the vertically pointing radar, needed for attenuation estimation, is found to affect the calibration results. Further analysis is done by means of synthetic data with realistic, inhomogeneous precipitation fields taken from measurements. A calibration factor is calculated for each considered case using the presented method. Based on the distribution of the calculated calibration factors, the most probable value is determined by estimating the mode of a fitted shifted log-normal distribution function. After filtering the data set with respect to rain rate and inhomogeneity and choosing an appropriate length of the considered attenuation path, the estimated uncertainty of the calibration factor is of the order of 1 to 11 %, depending on the chosen interval width. Considering stability and accuracy of the method, an interval of
An Information-Theoretic Approach to PMU Placement in Electric Power Systems
Li, Qiao; Weng, Yang; Negi, Rohit; Franchetti, Franz; Ilic, Marija D
2012-01-01
This paper presents an information-theoretic approach to address the phasor measurement unit (PMU) placement problem in electric power systems. Different from the conventional 'topological observability' based approaches, this paper advocates a much more refined, information-theoretic criterion, namely the mutual information (MI) between the PMU measurements and the power system states. The proposed MI criterion can not only include the full system observability as a special case, but also can rigorously model the remaining uncertainties in the power system states with PMU measurements, so as to generate highly informative PMU configurations. Further, the MI criterion can facilitate robust PMU placement by explicitly modeling probabilistic PMU outages. We propose a greedy PMU placement algorithm, and show that it achieves an approximation ratio of (1-1/e) for any PMU placement budget. We further show that the performance is the best that one can achieve in practice, in the sense that it is NP-hard to achieve ...
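The greedy algorithm with the (1 − 1/e) guarantee applies to any monotone submodular objective; a generic sketch using a toy coverage proxy in place of the paper's mutual-information criterion (the 4-bus network below is hypothetical):

```python
def greedy_placement(candidates, gain, budget):
    """Greedy selection for a monotone submodular objective: repeatedly
    add the candidate with the largest marginal gain. For such objectives
    (the paper argues its MI criterion is one) the greedy set achieves
    at least (1 - 1/e) of the optimal value."""
    chosen = []
    remaining = sorted(candidates)
    while remaining and len(chosen) < budget:
        best = max(remaining, key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Hypothetical 4-bus line network; a PMU at a bus "covers" the bus and its
# neighbours, and the objective is the number of covered buses (a coverage
# function, hence submodular). This is a stand-in for the MI objective.
adjacency = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3, 4}, 4: {3, 4}}

def covered(placement):
    buses = set()
    for p in placement:
        buses |= adjacency[p]
    return len(buses)

best_two = greedy_placement(adjacency, covered, budget=2)
```

Swapping `covered` for an MI evaluator (or any other monotone submodular score) leaves the selection loop and its guarantee unchanged.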
Sholtes, Joel; Werbylo, Kevin; Bledsoe, Brian
2014-10-01
Theoretical approaches to magnitude-frequency analysis (MFA) of sediment transport in channels couple continuous flow probability density functions (PDFs) with power law flow-sediment transport relations (rating curves) to produce closed-form equations relating MFA metrics such as the effective discharge, Qeff, and fraction of sediment transported by discharges greater than Qeff, f+, to statistical moments of the flow PDF and rating curve parameters. These approaches have proven useful in understanding the theoretical drivers behind the magnitude and frequency of sediment transport. However, some of their basic assumptions and findings may not apply to natural rivers and streams with more complex flow-sediment transport relationships or to management and design scenarios, which have finite time horizons. We use simple numerical experiments to test the validity of theoretical MFA approaches in predicting the magnitude and frequency of sediment transport. Median values of Qeff and f+ generated from repeated, synthetic, finite flow series diverge from those produced with theoretical approaches using the same underlying flow PDF. The closed-form relation for f+ is a monotonically increasing function of flow variance. However, using finite flow series, we find that f+ increases with flow variance to a threshold that increases with flow record length. By introducing a sediment entrainment threshold, we present a physical mechanism for the observed diverging relationship between Qeff and flow variance in fine and coarse-bed channels. Our work shows that, through complex and threshold-driven relationships, sediment transport mode, channel morphology, flow variance, and flow record length all interact to influence estimates of which flow frequencies are most responsible for transporting sediment in alluvial channels.
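The numerical experiment described above can be reproduced in miniature: draw a synthetic lognormal flow series, apply a power-law rating curve above an entrainment threshold, and take the effective discharge Qeff as the discharge bin transporting the most sediment. All parameter values below are illustrative assumptions.

```python
import math, random

def effective_discharge(flows, a=1e-4, b=2.0, q_crit=0.0, nbins=40):
    """Histogram-based Qeff: the discharge bin transporting the most sediment.

    Rating curve Qs = a * Q**b is applied above an entrainment threshold
    q_crit (all parameter values here are illustrative assumptions).
    """
    qmax = max(flows)
    width = qmax / nbins
    load = [0.0] * nbins
    for q in flows:
        if q > q_crit:
            i = min(int(q / width), nbins - 1)
            load[i] += a * q ** b  # sediment transported by this flow
    k = load.index(max(load))
    return (k + 0.5) * width  # bin midpoint

random.seed(1)
# Synthetic lognormal flow series (median 50, log-std 0.8; assumed values).
flows = [random.lognormvariate(math.log(50), 0.8) for _ in range(5000)]
print(round(effective_discharge(flows), 1))
```

Raising `q_crit` or shortening `flows` shifts the estimate, which is exactly the finite-record sensitivity the study probes.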
Theoretical approaches to the problem of expertise of pacifist convictions of conscripts
Stvolygin K.V.
2012-01-01
The aim of this contribution is the analysis of theoretical approaches to the problem of carrying out the psychological expertise of pacifist convictions of persons subject to call to military service. Legalization of alternative civil service in the Russian Federation and other CIS states makes relevant the problem of verification of pacifist convictions, because it inevitably leads to questionable cases related to making statements of existence or lack of sincere convictions which are incom...
Bok, Tae-Hoon; Hysi, Eno; Kolios, Michael C.
2017-03-01
In the present paper, the optical wavelength dependence of the photoacoustic (PA) assessment of pulsatile blood flow was investigated by means of experimental and theoretical approaches analyzing PA radiofrequency spectral parameters such as the spectral slope (SS) and mid-band fit (MBF). For the experimental approach, the pulsatile flow of human whole blood at 60 bpm was imaged using the VevoLAZR system (40-MHz linear-array probe, 700-900 nm illumination). For the theoretical approach, a Monte Carlo simulation of the light transport into a layered tissue phantom and a Green's function based method for the PA wave generation were implemented for illumination wavelengths of 700, 750, 800, 850 and 900 nm. The SS and MBF for the experimental results were compared to theoretical ones as a function of the illumination wavelength. The MBF increased with the optical wavelength in both theory and experiment. This was expected because the MBF is representative of the PA magnitude, and the PA signal from red blood cells (RBCs) is dependent on the molar extinction coefficient of oxyhemoglobin. On the other hand, the SS decreased with the wavelength, even though the RBC size (the absorber size, which is related to the SS) cannot depend on the illumination wavelength. This conflicting result can be interpreted by means of the changes in the fluence pattern for different illumination wavelengths. The SS decrease with increasing illumination wavelength should be further investigated.
A game-theoretic approach to real-time system testing
David, Alexandre; Larsen, Kim Guldstrand; Li, Shuhao
2008-01-01
This paper presents a game-theoretic approach to the testing of uncontrollable real-time systems. By modelling the systems with Timed I/O Game Automata and specifying the test purposes as Timed CTL formulas, we employ a recently developed timed game solver, UPPAAL-TIGA, to synthesize winning strategies, and then use these strategies to conduct black-box conformance testing of the systems. The testing process is proved to be sound and complete with respect to the given test purposes. Case study and preliminary experimental results indicate that this is a viable approach to uncontrollable timed system testing.
Examples of feedback, experimental and theoretical approaches for concrete durability assessment
Toutlemonde F.
2011-04-01
This paper presents some experimental data obtained from a UHPFRC (Ultra-High Performance Fibre-Reinforced Concrete) exposed for 10 years in a cooling tower and a high slag content concrete exposed for 30 years in a marine environment. Experimental data are then used for assessing concrete durability through a theoretical approach, namely performance-based analysis. The results from the application of this approach are consistent with the penetration depth of aggressive agents measured from core samples. Finally, a simulation method currently being developed by EDF is presented, which has great relevance to durability assessment.
Kevin M. Kostelnik; James H. Clarke; Jerry L. Harbour
2005-02-01
Environmental remediation efforts that are underway at hundreds of contaminated sites in the United States will not be able to remediate large portions of those sites to conditions that would permit unrestricted access. Rather, large volumes of waste materials, contaminated soils and cleanup residuals will have to be isolated either in place or in new, often on-site, disposal cells with long term monitoring, maintenance and institutional control needs. The challenge continues to be to provide engineering systems and controls that can ensure the protection of public health and the environment over very long time horizons (hundreds to perhaps thousands of years) with minimal intervention. Effective long term management of legacy hazardous and nuclear waste requires an integrated approach that addresses both the engineered containment and control system itself and the institutional controls and other responsibilities that are needed. Decisions concerning system design, monitoring and maintenance, and the institutional controls that will be employed are best done through a "risk-informed, performance-based" approach. Such an approach should incorporate an analysis of potential "failure" modes and consequences for all important system features, together with lessons learned from experience with systems already in place. The authors will present the preliminary results of a case study approach that included several sites where contamination isolation systems including institutional controls have been implemented. The results are being used together with failure trees and logic diagrams that have been developed for both the engineered barriers and the institutional controls. The use of these analytical tools to evaluate the potential for different levels of failure and associated consequences will be discussed. Of special interest is the robustness of different approaches to providing long-term protection through redundancy and defense in depth.
da Silva Fiorin, Fernando; de Oliveira Ferreira, Ana P; Ribeiro, Leandro R; Silva, Luiz F A; de Castro, Mauro R T; da Silva, Luís R H; da Silveira, Mauro E P; Zemolin, Ana P P; Dobrachinski, Fernando; Marchesan de Oliveira, Sara; Franco, Jeferson L; Soares, Félix A; Furian, Ana F; Oliveira, Mauro S; Fighera, Michele R; Freire Royes, Luiz F
2016-07-15
Throughout the world, traumatic brain injury (TBI) is one of the major causes of disability, which can include deficits in motor function and memory, as well as acquired epilepsy. Although some studies have shown the beneficial effects of physical exercise after TBI, the prophylactic effects are poorly understood. In the current study, we demonstrated that TBI induced by fluid percussion injury (FPI) in adult male Wistar rats caused early motor impairment (24 h), learning deficit (15 days), spontaneous epileptiform events (SEE), and hilar cell loss in the hippocampus (35 days) after TBI. The hippocampal alterations in the redox status, which were characterized by dichlorofluorescein diacetate oxidation and superoxide dismutase (SOD) activity inhibition, led to the impairment of protein function (Na+, K+-adenosine triphosphatase [ATPase] activity inhibition) and glutamate uptake inhibition 24 h after neuronal injury. The molecular adaptations elicited by previous swim training protected against the glutamate uptake inhibition, oxidative stress, and inhibition of selected targets for free radicals (e.g., Na+, K+-ATPase) 24 h after neuronal injury. Our data indicate that this protocol of exercise protected against FPI-induced motor impairment, learning deficits, and SEE. In addition, the enhancement of the hippocampal phosphorylated nuclear factor erythroid 2-related factor (P-Nrf2)/Nrf2, heat shock protein 70, and brain-derived neurotrophic factor immune content in the trained injured rats suggests that protein expression modulation associated with an antioxidant defense elicited by previous physical exercise can prevent toxicity induced by TBI, which is characterized by cell loss in the dentate gyrus hilus at 35 days after TBI. Therefore, this report suggests that previous physical exercise can decrease lesion progression in this model of brain damage.
Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz
2017-01-01
Influence maximization deals with identification of the most influential nodes in a social network given an influence model. In this paper, a game-theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved using Nash equilibrium on a real-world dataset. It is shown that none of the well-known strategies are stable and at least one player has an incentive to deviate from the proposed strategy. Moreover, violation of the Nash equilibrium strategy by a player leads to a reduced payoff for that player. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures, has an effect on the determination of the influential nodes in the network.
Wolters, Sander A M
2010-01-01
The aim of this paper is to compare the two topos-theoretic approaches to quantum mechanics that may be found in the literature to date. The first approach, which we will call the contravariant approach, was proposed by Isham and Butterfield, and was later extended by Doering and Isham. The second approach, which we will call the covariant approach, was developed by Heunen, Landsman and Spitters. Motivated by coarse-graining and the Kochen-Specker theorem, the contravariant approach uses the topos of presheaves on a specific context category, defined as the poset of commutative von Neumann subalgebras of some given von Neumann algebra. The intuitionistic logic of this approach is presented by the (complete) Heyting algebra of clopen subobjects of the so-called spectral presheaf. We demonstrate that in a natural way, this Heyting algebra defines a locale, internal to the given presheaf topos. This locale is not regular, which is connected to undesirable properties of the Heyting negation. In the covariant...
From moral theory to penal attitudes and back: a theoretically integrated modeling approach.
de Keijser, Jan W; van der Leeden, Rien; Jackson, Janet L
2002-01-01
From a moral standpoint, we would expect the practice of punishment to reflect a solid and commonly shared legitimizing framework. Several moral legal theories explicitly aim to provide such frameworks. Based on the theories of Retributivism, Utilitarianism, and Restorative Justice, this article first sets out to develop a theoretically integrated model of penal attitudes and then explores the extent to which Dutch judges' attitudes to punishment fit the model. Results indicate that penal attitudes can be measured in a meaningful way that is consistent with an integrated approach to moral theory. The general structure of penal attitudes among Dutch judges suggests a streamlined and pragmatic approach to legal punishment that is identifiably founded on the separate concepts central to moral theories of punishment. While Restorative Justice is frequently presented as an alternative paradigm, results show it to be smoothly incorporated within the streamlined approach.
Kangawa, Yoshihiro; Ito, Tomonori; Koukitu, Akinori; Kakimoto, Koichi
2014-10-01
The surface stability, growth process, and structural stability of InGaN and InN are reviewed from a theoretical viewpoint. In 2001, a new theoretical approach based on ab initio calculations was developed. This theoretical approach enables the investigation of the influence of growth conditions, such as partial pressure and temperature, on the surface stability. The theoretical approach is applied to research on the In incorporation efficiency in InGaN grown on nonpolar and semipolar surfaces. The calculation results suggest that the N-H layer formed on such surfaces has a crucial role in In incorporation. Moreover, the structural stability of InN grown by pressurized-reactor MOVPE is reviewed. It was found by the theoretical approach that $\{1\bar{1}\bar{1}\}$ facet formation causes the spontaneous formation of islands with the zinc-blende structure.
Ahmed, Saeema; Storga, M
2009-01-01
This paper presents a comparison of two previous and separate efforts to develop an ontology in the engineering design domain, together with an ontology proposal from which ontologies for a specific application may be derived. The research contrasts an empirical, user-centered approach to develop...
Hektner, Joel M; Brennan, Alison L; Brotherson, Sean E
2013-09-01
The Nurtured Heart Approach to parenting (NHA; Glasser & Easley, 2008) is summarized and evaluated in terms of its alignment with current theoretical perspectives and empirical evidence in family studies and developmental science. Originally conceived and promoted as a behavior management approach for parents of difficult children (i.e., with behavior disorders), NHA is increasingly offered as a valuable strategy for parents of any children, despite a lack of published empirical support. Parents using NHA are trained to minimize attention to undesired behaviors, provide positive attention and praise for compliance with rules, help children be successful by scaffolding and shaping desired behavior, and establish a set of clear rules and consequences. Many elements of the approach have strong support in the theoretical and empirical literature; however, some of the assumptions are more questionable, such as that negative child behavior can always be attributed to unintentional positive reinforcement by parents responding with negative attention. On balance, NHA appears to promote effective and validated parenting practices, but its effectiveness now needs to be tested empirically.
Pinto-Neto, N
2000-01-01
A new prescription to calculate the total energies and angular momenta of asymptotically $(d+1)$-dimensional anti-de Sitter spacetimes is proposed. The method is based on an extension of the field-theoretical approach to General Relativity to the case where there is an effective cosmological constant. A $(d-1)$-form $\Omega$ is exhibited which, when integrated on asymptotic $(d-1)$-dimensional boundary surfaces, yields the values of those conserved quantities. The calculations are gauge independent provided the asymptotic conditions are not violated. Total energies and angular momenta of some known solutions in four and five dimensions are calculated, agreeing with standard results.
Managing Knowledge by the Information System and Game-Theoretic Approach
Otilija Sedlak
2006-12-01
Knowledge is a source of competitive advantage. In small enterprises there is simultaneous cooperation and competition. Knowledge management research is focused on large firms, yet collaboration among small enterprises and with large firms is common. Information systems and information technology play a paramount role in coordinating and controlling joint ventures, and the information system is a key tool in the management of knowledge sharing. This paper offers a game-theoretic approach to answering the questions that arise under cooperation and competition: what to share, with whom, when, and under what conditions, as well as the role of the information system in managing knowledge in small enterprises.
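The tension between sharing and withholding knowledge under simultaneous cooperation and competition is often illustrated with a prisoner's-dilemma payoff structure. The sketch below finds the pure-strategy Nash equilibria of such a 2x2 game; the payoff numbers are illustrative assumptions, not taken from the paper.

```python
# Payoff matrices for two small firms deciding to Share or Withhold knowledge.
# The numbers are illustrative assumptions, not taken from the paper.
SHARE, WITHHOLD = 0, 1
payoff = {  # (row_action, col_action): (row_payoff, col_payoff)
    (SHARE, SHARE): (3, 3),
    (SHARE, WITHHOLD): (0, 4),
    (WITHHOLD, SHARE): (4, 0),
    (WITHHOLD, WITHHOLD): (1, 1),
}

def pure_nash(payoff):
    """Return all pure-strategy Nash equilibria of a 2x2 game."""
    eqs = []
    for r in (SHARE, WITHHOLD):
        for c in (SHARE, WITHHOLD):
            other_r = WITHHOLD if r == SHARE else SHARE
            other_c = WITHHOLD if c == SHARE else SHARE
            # Neither player may gain by unilaterally switching action.
            if (payoff[(r, c)][0] >= payoff[(other_r, c)][0]
                    and payoff[(r, c)][1] >= payoff[(r, other_c)][1]):
                eqs.append((r, c))
    return eqs

print(pure_nash(payoff))  # → [(1, 1)]: both withhold, although (3, 3) is better
```

The socially better outcome (share, share) is not an equilibrium here, which is why coordination mechanisms such as a shared information system matter.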
A set-theoretic approach to linguistic feature structures and unification algorithms (I)
N. Curteanu
2000-10-01
The paper proposes formal inductive definitions for linguistic feature structures (FSs) taking values within a class of value types or sorts: single, disjunctive, (ordered) lists, multisets (or bags), po-multisets (multisets embedded into a partially ordered set), and indexed (re-entrance) values. The linguistic realization (semantics) of the considered sorts is proposed. The FSs having these multi-sort values are organized as (rooted) directed acyclic graphs. The concrete model of the FSs we had in mind for our set-theoretic definitions are the FSs used within the well-known HPSG linguistic theory. Set-theoretic general definitions for the proposed multi-sort FSs are given. These constructive definitions start from atomic values and build multi-sorted values and structures recursively, naturally providing a fixed-point semantics of the obtained FSs as a counterpart to the large class of logical semantics models on FSs. The linguistic unification algorithm based on tableau-subsumption is outlined. The Prolog code of the unification algorithm is provided, and the results of running it on some of the main multi-sort FSs are enclosed in the appendices. We consider the proposed formal approach to FS definitions and unification as necessary steps towards set-theoretical implementations of natural language processing systems.
A set-theoretic approach to linguistic feature structures and unification algorithms (II)
N.Curteanu
2001-02-01
The paper proposes formal inductive definitions for linguistic feature structures (FSs) taking values within a class of value types or sorts: single, disjunctive, (ordered) lists, multisets (or bags), po-multisets (multisets embedded into a partially ordered set), and indexed (re-entrance) values. The linguistic realization (semantics) of the considered sorts is proposed. The FSs having these multi-sort values are organized as (rooted) directed acyclic graphs. The concrete model of the FSs we had in mind for our set-theoretic definitions are the FSs used within the well-known HPSG linguistic theory. Set-theoretic general definitions for the proposed multi-sort FSs are given. These constructive definitions start from atomic values and build multi-sorted values and structures recursively, naturally providing a fixed-point semantics of the obtained FSs as a counterpart to the large class of logical semantics models on FSs. The linguistic unification algorithm based on tableau-subsumption is outlined. The Prolog code of the unification algorithm is provided, and the results of running it on some of the main multi-sort FSs are enclosed in the appendices. We consider the proposed formal approach to FS definitions and unification as necessary steps towards set-theoretical implementations of natural language processing systems.
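The unification operation at the core of both papers can be illustrated on a radically simplified feature-structure model: nested dictionaries with atomic leaf values, ignoring the multi-sort values and re-entrancy that the papers formalize.

```python
def unify(fs1, fs2):
    """Unify two feature structures (nested dicts with atomic leaf values).

    Returns the merged structure, or None on clash. Re-entrancy and the
    papers' multi-sort values (lists, multisets, ...) are omitted for brevity.
    """
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        out = dict(fs1)
        for feat, val in fs2.items():
            if feat in out:
                sub = unify(out[feat], val)
                if sub is None:
                    return None  # feature clash propagates upward
                out[feat] = sub
            else:
                out[feat] = val
        return out
    return fs1 if fs1 == fs2 else None

a = {"agr": {"num": "sg"}, "cat": "np"}
b = {"agr": {"num": "sg", "per": "3"}}
print(unify(a, b))  # → {'agr': {'num': 'sg', 'per': '3'}, 'cat': 'np'}
print(unify(a, {"agr": {"num": "pl"}}))  # → None
```

The fixed-point flavour of the papers' semantics shows up even here: unification only ever adds information, and a clash yields failure rather than revision.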
The Theoretical Approaches to Defining the Essence and Functions of Leasing
Levchenko Alexander A.
2017-03-01
The article is aimed at researching the theoretical foundations, generalizing scientific approaches to interpretation of the concept of «leasing», and defining the main economic characteristics and functions of leasing. Scientific approaches of researchers to definition of the concept of «leasing» were systematized. New scientific approaches that interpret leasing as a form of public-private partnership and as a form of ensuring competitiveness have been distinguished. The author's own point of view on the economic essence of leasing, as a special type of entrepreneurial activity implemented simultaneously in the form of credit-investment and trade-property relations, has been substantiated. The main functions of leasing according to the object and subject approach in terms of outsourcing have been disclosed. Further research in this direction will concern systematization of scientific approaches and definition of new attributes for a specific classification of leasing.
Milad Elyasi
2014-04-01
In the recent decade, studying economic order quantity (EOQ) models with imperfect quality has appealed to many researchers, yet only a few published papers discuss EOQ models with imperfect items in a supply chain. In this paper, a two-echelon decentralized supply chain consisting of a manufacturer and a supplier, both of which face a just-in-time (JIT) inventory problem, is considered. It is sought to find the optimal number of shipments and the quantity of each shipment in a way that minimizes both the manufacturer's and the supplier's cost functions. To the authors' best knowledge, this is the first paper that deals with imperfect items in a decentralized supply chain. Thereby, three different game-theoretic solution approaches, consisting of two non-cooperative games and a cooperative game, are proposed. Comparing the results of the three scenarios with those of the centralized model, conclusions are drawn as to the best approach.
Towards Theoretical Approach to the Understanding of Language Ideologies in Post-Meiji Japan
Luka CULIBERG
2011-05-01
The paper examines the specific conditions that have generated the understanding of language in post-Meiji Japan and proposes a theoretical approach to the question of why a specific view on language, or, to use a more precise concept, a language ideology, was and still is inevitable within a specific ideological horizon: the horizon of nationalism. In order to do so, it first gives an overview of the linguistic situation in post-Meiji Japan with all its competing and opposing views, followed by an outline of the research to date, its breakthroughs, its problems and its dead ends. Finally, it proposes the orthodox method of dialectical materialism as possibly the only methodological approach capable of grasping all these connected social problems in their totality.
Alfred Gierer
2002-06-01
The topic of this article is the relation between bottom-up and top-down, reductionist and "holistic" approaches to the solution of basic biological problems. While there is no doubt that the laws of physics apply to all events in space and time, including the domains of life, understanding biology depends not only on elucidating the role of the molecules involved, but, to an increasing extent, on systems-theoretical approaches in diverse fields of the life sciences. Examples discussed in this article are the generation of spatial patterns in development by the interplay of autocatalysis and lateral inhibition; the evolution of integrating capabilities of the human brain, such as cognition-based empathy; and both neurobiological and epistemological aspects of scientific theories of consciousness and the mind.
Theoretical and empirical approaches to using films as a means to increase communication efficiency.
Kiselnikova, N.V.
2016-07-01
The theoretical framework of this analytic study is based on studies in the field of film perception. Films are considered a communicative system that is encrypted in an ordered series of shots, and decoding proceeds during perception. The shots are the elements of a cinematic message that must be "read" by the viewer. The objective of this work is to analyze the existing theoretical approaches to using films in psychotherapy and education. An original approach to film therapy, based on teaching clients to use new communicative sets and psychotherapeutic patterns through watching films, is presented. The article specifies the main points emphasized in theories of film therapy and education. It considers the specifics of film therapy in the process of increasing the effectiveness of communication, and discusses the advantages and limitations of the proposed method. The contemporary forms of film therapy and the formats of cinema clubs are criticized. The theoretical assumptions and empirical research that could be used as a basis for a method of developing effective communication by means of films are discussed. Our studies demonstrate that the use of film therapy must include an educational stage for more effective and stable results. This means teaching viewers how to recognize certain psychotherapeutic and communicative patterns in the material of films, to practice the skill of finding as many examples as possible for each pattern, and to transfer the acquired schemes of analyzing and recognizing patterns into one's own life circumstances. The four stages of the film-therapeutic process, as well as the effects that are achieved at each stage, are described in detail. In conclusion, the conditions under which the use of the film therapy method would be most effective are outlined. Various properties of client groups and psychotherapeutic scenarios for using the method of active film therapy are described.
A study of brain networks associated with swallowing using graph-theoretical approaches.
Bo Luan
Functional connectivity between brain regions during swallowing tasks is still not well understood. Understanding these complex interactions is of great interest from both a scientific and a clinical perspective. In this study, functional magnetic resonance imaging (fMRI) was utilized to study brain functional networks during voluntary saliva swallowing in twenty-two healthy adult subjects (all females, [Formula: see text] years of age). To construct these functional connections, we computed mean partial correlation matrices over ninety brain regions for each participant. Two regions were determined to be functionally connected if their correlation was above a certain threshold. These correlation matrices were then analyzed using graph-theoretical approaches. In particular, we considered several network measures for the whole brain and for swallowing-related brain regions. The results showed that significant pairwise functional connections were mostly either local and intra-hemispheric or symmetrically inter-hemispheric. Furthermore, we showed that all human brain functional networks, although varying to some degree, had typical small-world properties as compared to regular networks and random networks. These properties allow information transfer within the network at a relatively high efficiency. Swallowing-related brain regions also had higher values for some of the network measures in comparison to when these measures were calculated for the whole brain. The current results warrant further investigation of graph-theoretical approaches as a potential tool for understanding the neural basis of dysphagia.
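The graph construction described above can be sketched on a toy example: threshold a partial-correlation matrix into an adjacency structure, then compute a local clustering coefficient, one of the standard small-world measures. The matrix values and the 0.3 threshold below are illustrative assumptions, not the study's data.

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient: the fraction of a node's neighbour
    pairs that are themselves connected."""
    nbrs = [u for u in adj if u != v and adj[v][u]]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if adj[nbrs[i]][nbrs[j]])
    return 2.0 * links / (k * (k - 1))

# Toy 4-region partial-correlation matrix (values assumed), thresholded at 0.3.
corr = {
    "A": {"A": 1.0, "B": 0.6, "C": 0.5, "D": 0.1},
    "B": {"A": 0.6, "B": 1.0, "C": 0.4, "D": 0.2},
    "C": {"A": 0.5, "B": 0.4, "C": 1.0, "D": 0.7},
    "D": {"A": 0.1, "B": 0.2, "C": 0.7, "D": 1.0},
}
adj = {u: {v: (u != v and corr[u][v] > 0.3) for v in corr} for u in corr}
print(clustering_coefficient(adj, "A"))  # → 1.0 (neighbours B and C are linked)
```

High average clustering combined with short path lengths is what characterizes the small-world regime the study reports.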
Xiao, Ruiyang; Gao, Lingwei; Wei, Zongsu; Spinney, Richard; Luo, Shuang; Wang, Donghong; Dionysiou, Dionysios D; Tang, Chong-Jian; Yang, Weichun
2017-09-13
Advanced oxidation processes (AOPs), based on the formation of free radicals at ambient temperature and pressure, are effective for treating endocrine disrupting chemicals (EDCs) in waters. In this study, we systematically investigated the degradation kinetics of bisphenol A (BPA), a representative EDC, by hydroxyl radical (OH) with a combination of experimental and theoretical approaches. The second-order rate constant (k) of BPA with OH was experimentally determined to be 7.2 ± 0.34 × 10^9 M^-1 s^-1 at pH 7.55. We also calculated the thermodynamic and kinetic behaviors for the bimolecular reactions by density functional theory (DFT) using the M05-2X method with the 6-311++G** basis set and the solvation model based on density (SMD). The results revealed that H-abstraction on the phenol group is the most favorable pathway for OH. The theoretical k value corrected by the Collins-Kimball approach was determined to be 1.03 × 10^10 M^-1 s^-1, which is in reasonable agreement with the experimental observation. These results are of fundamental and practical importance in understanding the chemical interactions between OH and BPA, and aid further AOP design in treating EDCs during wastewater treatment processes.
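The Collins-Kimball correction mentioned above combines an activation-limited rate constant with a diffusion-limited (Smoluchowski) one. The sketch below shows the arithmetic; all numerical inputs are illustrative assumptions, not the paper's values.

```python
# Collins-Kimball correction: the observable rate constant mixes the
# activation-limited constant (k_act, e.g. from DFT/TST) with the
# diffusion-limited Smoluchowski constant (k_diff). Values are assumed.
import math

N_A = 6.022e23                  # Avogadro constant, 1/mol
D = 2.2e-9 + 0.9e-9             # mutual diffusion coefficient, m^2/s (assumed)
R = 0.45e-9                     # reaction encounter distance, m (assumed)

k_diff = 4 * math.pi * D * R * N_A * 1e3   # in M^-1 s^-1 (1e3: m^3 -> L)
k_act = 2.0e10                             # M^-1 s^-1 (assumed TST value)
k_obs = 1 / (1 / k_act + 1 / k_diff)       # harmonic combination
print(f"k_diff = {k_diff:.2e}, k_obs = {k_obs:.2e} M^-1 s^-1")
```

Because the two constants combine harmonically, the observed rate is always dominated by the slower step, which is why near-diffusion-limited reactions like OH with BPA need this correction.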
A set-theoretic approach for compensated signature embedding using projections onto convex sets
Ababneh, Sufyan; Ansari, Rashid; Khokhar, Ashfaq
2008-01-01
In this paper, we use a set-theoretic approach to provide an efficient and deterministic iterative solution for the compensated signature embedding (CSE) scheme introduced in an earlier work. In CSE, a fragile signature is derived and embedded into the media using a robust watermarking technique. Since the embedding process leads to altering the media, the media samples are iteratively adjusted to compensate for the embedding distortion. Projections Onto Convex Sets (POCS) is an iterative set-theoretic approach known to be deterministic and effective, and it has been used in many image processing applications. We propose to use POCS to provide a compensation mechanism that addresses the CSE problem. We identify two convex constraint sets defined according to image fidelity and signature-generation criteria, and use them in a POCS-based CSE image authentication system. The system utilizes the wavelet transform domain for embedding and compensation. Simulation results are presented to show that the proposed scheme is efficient and accurate in terms of both achieving high convergence speed and maintaining image fidelity.
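The POCS iteration itself is simple to demonstrate: alternately project onto two convex sets until the iterate settles in their intersection. Below, a box and a hyperplane stand in for the image-fidelity and signature-generation sets; the specific sets and numbers are illustrative assumptions, not the paper's constraints.

```python
def project_box(x, lo, hi):
    """Projection onto the box lo <= x_i <= hi (stand-in for a fidelity set)."""
    return [min(max(v, lo), hi) for v in x]

def project_hyperplane(x, a, b):
    """Projection onto {x : a.x = b} (stand-in for a signature constraint)."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    nrm = sum(ai * ai for ai in a)
    return [xi - (dot - b) * ai / nrm for ai, xi in zip(a, x)]

# POCS: alternate the two projections; for convex sets with a nonempty
# intersection the iterates converge to a point in that intersection.
x = [5.0, -3.0, 2.0]
a, b = [1.0, 1.0, 1.0], 3.0          # constraint: coordinates sum to 3
for _ in range(100):
    x = project_hyperplane(project_box(x, 0.0, 2.0), a, b)
print([round(v, 3) for v in x])
```

In the paper's setting the projections act on wavelet coefficients instead of raw coordinates, but the alternation is the same.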
A network-theoretic approach for decompositional translation across Open Biological Ontologies.
Patel, Chintan O; Cimino, James J
2010-08-01
Biological ontologies are now being widely used for annotation, sharing and retrieval of the biological data. Many of these ontologies are hosted under the umbrella of the Open Biological Ontologies Foundry. In order to support interterminology mapping, composite terms in these ontologies need to be translated into atomic or primitive terms in other, orthogonal ontologies, for example, gluconeogenesis (biological process term) to glucose (chemical ontology term). Identifying such decompositional ontology translations is a challenging problem. In this paper, we propose a network-theoretic approach based on the structure of the integrated OBO relationship graph. We use a network-theoretic measure, called the clustering coefficient, to find relevant atomic terms in the neighborhood of a composite term. By eliminating the existing GO to ChEBI Ontology mappings from OBO, we evaluate whether the proposed approach can re-identify the corresponding relationships. The results indicate that the network structure provides strong cues for decompositional ontology translation and the existing relationships can be used to identify new translations.
Herbin, H.; Pujol, O.; Hubert, P.; Petitprez, D.
2017-10-01
The knowledge of aerosol complex refractive indices over a wide spectral range with high spectral resolution is important for many research fields and applications. Various combinations of experimental, theoretical and numerical approaches have been employed to determine the optical indices of aerosol particles. However, each approach has its own advantages and limitations that restrict its generalization. This article is the first part of a work aimed at proposing a new technique for determining the optical constants of aerosols. Experimentally, the method is based on recording transmittance spectra of an aerosol flow from the thermal infrared to the UV-visible, combined with size distribution measurements. Herein, we present the theoretical and numerical bases of the algorithm developed to retrieve the imaginary and real parts of refractive indices. This model associates the Mie theory, the singly subtractive Kramers-Kronig relations, and the optimal estimation method with an iterative process. In order to quantify the capabilities of the algorithm to retrieve complex refractive indices, inverse calculations are performed on simulated extinction spectra of quartz particles, some of whose optical properties are available in the literature. We detail each step of the procedure and perform comparisons with the most commonly employed methods. The impacts of experimental accuracy and numerical simulation are investigated in terms of errors and uncertainties on the retrieved real and imaginary parts of the complex optical index.
Experiments and theoretical approaches on the burning behaviors of single n-heptane droplet
Suh, Hyun Kyu [Kongju National University, Cheonan (Korea, Republic of)
2015-05-15
This study was conducted to improve the theoretical prediction of the burning characteristics of an n-heptane droplet by comparing predictions with experimental results. To achieve this, numerical approaches were employed by assuming that the droplet combustion can be described by quasi-steady behavior in the region between the droplet surface and the flame interface, and by transient behavior in the region between the flame interface and the ambient surroundings. Comparisons were made for the droplet diameter (d_t), flame diameter (d_f), flame standoff ratio (FSR), and the viscous-drag-induced fluxes, namely the Stefan flux and the thermophoretic flux, for various initial droplet diameters (d_0) and oxygen (O2) concentration conditions. It was revealed that the flame diameter (d_f) and flame standoff ratio (FSR) initially increase dramatically and approach quasi-steady behavior within the observation period, and that the FSR increases slightly with the initial droplet diameter (d_0), both experimentally and theoretically. The flame diameter (d_f) decreases from its maximum value when the oxygen (O2) concentration is increased from 18% to 40%. The burning rate constant (K) becomes higher as the oxygen (O2) concentration increases, since a higher oxygen concentration produces a higher maximum flame temperature (T_f), which enhances the effective thermo-physical properties of the gas phase bounded by the droplet and the flame front.
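The quasi-steady description of droplet burning is commonly summarized by the classical d²-law, in which the squared diameter shrinks linearly in time with the burning rate constant K. A short sketch with illustrative values (not the paper's measurements):

```python
import math

def droplet_diameter(d0, K, t):
    """Quasi-steady d^2-law: the squared diameter decreases linearly
    in time with slope K (the burning rate constant)."""
    d_squared = d0**2 - K * t
    return math.sqrt(d_squared) if d_squared > 0 else 0.0

def burnout_time(d0, K):
    """Droplet lifetime predicted by the d^2-law."""
    return d0**2 / K

# Illustrative values (not from the paper): a 1 mm n-heptane droplet
# with K = 0.8 mm^2/s.
d0, K = 1.0, 0.8
print(burnout_time(d0, K))  # 1.25 s
```

A higher oxygen concentration raising K shortens the lifetime quadratically in d_0, which is why the burning rate constant is the natural comparison quantity.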
Maracci, Mirko; Cazes, Claire; Vandebrouck, Fabrice; Mariotti, Maria Alessandra
2013-01-01
Mathematics education as a research domain is characterized by a plurality of theoretical approaches. Acknowledging the existence of such diversity and the risks of excessive theoretical fragmentation does not mean searching for a unifying theory, but rather urging the community to develop strategies for coping with this diversity. This article is…
Claret, A.
2016-04-01
Aims: Recent observations of very fast rotating stars show systematic deviations from the von Zeipel theorem and pose a challenge to the theory of gravity-darkening exponents (β1). In this paper, we present a new insight into the problem of temperature distribution over distorted stellar surfaces to try to reduce these discrepancies. Methods: We use a variant of the numerical method based on the triangles strategy, which we previously introduced, to evaluate the gravity-darkening exponents. The novelty of the present method is that the theoretical β1 is now computed as a function of the optical depth, that is, β1 ≡ β1(τ). The stellar evolutionary models, which are necessary to obtain the physical conditions of the stellar envelopes/atmospheres inherent to the numerical method, are computed with the code GRANADA. Results: When the resulting theoretical β1(τ) are compared with the most accurate data on very fast rotators, good agreement is achieved simultaneously for all six systems. In addition, we derive an equation that relates the locus of constant convective efficiency in the Hertzsprung-Russell (HR) diagram to the gravity-darkening exponents.
Sergiu Ciprian Catinas
2015-07-01
Full Text Available A detailed theoretical and practical investigation of reinforced concrete elements is motivated by the recent techniques and methods implemented in the construction market. Moreover, a theoretical study is in demand as a better and faster approach, given the rapid development of computational techniques. This paper presents a study on implementing the direct stiffness matrix method in a static calculation, so as to address phenomena related to different stages of loading, rapid changes of cross-section area, and physical properties. The method is in demand because nowadays the finite element method (FEM) is the only alternative for such calculations, and FEM is considered expensive in terms of time and computational resources. The main goal of such a method is to create the moment-curvature diagram for the cross-section being analyzed. The paper also presents some of the most important techniques and new ideas for creating the moment-curvature graph in the cross-sections considered.
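The direct stiffness method the paper advocates can be sketched for the simplest case, two axial members in series; the stiffness and load values below are illustrative, not taken from the paper:

```python
def assemble_stiffness(elements, n_nodes):
    """Direct stiffness method: scatter each element's 2x2 axial
    stiffness into the global stiffness matrix."""
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for (i, j, k) in elements:  # element joins nodes i, j with stiffness k
        K[i][i] += k
        K[j][j] += k
        K[i][j] -= k
        K[j][i] -= k
    return K

# Two axial members in series: node 0 fixed, axial load applied at node 2.
K = assemble_stiffness([(0, 1, 2000.0), (1, 2, 1000.0)], 3)

# Apply the boundary condition (node 0 fixed) and solve the reduced
# 2x2 system K_red * u = F by Cramer's rule.
k11, k12 = K[1][1], K[1][2]
k21, k22 = K[2][1], K[2][2]
f1, f2 = 0.0, 100.0
det = k11 * k22 - k12 * k21
u1 = (k22 * f1 - k12 * f2) / det
u2 = (k11 * f2 - k21 * f1) / det
print(u1, u2)  # displacements at nodes 1 and 2: 0.05, 0.15
```

The result matches the hand calculation: both members carry the full 100-unit force, so node 1 moves 100/2000 = 0.05 and node 2 adds 100/1000 = 0.10 on top of that.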
Mahjouri, Najmeh; Ardestani, Mojtaba
2010-08-01
In this paper, a new game theoretic methodology is developed for interbasin water transfer management with regard to economic, equity, and environmental criteria. The main objective is to supply the competing users in a fair way, while the efficiency and environmental sustainability criteria are satisfied and the utilities of the water users are incorporated. First, an optimization model is developed to proportionally allocate water to the competing users in the donor and receiving basins based on their water demands. Second, for different coalitions of water users, the water shares of the coalitions are determined using an optimization model with economic objectives, subject to the physical and environmental constraints of the system. In order to satisfy water-quality requirements, the impacts of decreasing the instream flow in the donor basin are estimated using a water-quality simulation model, and the required treatment levels for effluents discharged into the river downstream of the water transfer point are determined. Finally, to achieve equity and to provide sufficient incentives for water users to participate in the cooperation, cooperative game theoretic approaches are utilized for reallocating net benefits to the water users. The model is applied to a large-scale interbasin water allocation problem involving two basins struggling with water scarcity in Iran. The results show that the model can be utilized as an effective tool for optimal interbasin water allocation management involving stakeholders with conflicting objectives, subject to physical and environmental constraints.
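Among the cooperative game theoretic approaches commonly used for benefit reallocation is the Shapley value, which averages each player's marginal contribution over all orders of joining the grand coalition. A toy sketch with hypothetical coalition benefits (arbitrary units, not the Iranian case study data):

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley value: each player's marginal contribution averaged
    over all join orders of the grand coalition."""
    shap = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            shap[p] += value(frozenset(coalition)) - before
    for p in players:
        shap[p] /= len(perms)
    return shap

# Hypothetical net benefits for coalitions of two water users (A, B)
# and the receiving basin (R).
benefits = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20,
    frozenset("R"): 0, frozenset("AB"): 40, frozenset("AR"): 30,
    frozenset("BR"): 50, frozenset("ABR"): 90,
}
shap = shapley_values(["A", "B", "R"], lambda c: benefits[c])
print(shap)  # reallocation summing to the grand-coalition benefit, 90
```

The reallocated shares always sum to the grand-coalition value, which is the equity property that motivates this family of solutions.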
Taghian, Toloo; Sheikh, Abdul; Narmoneva, Daria; Kogan, Andrei
2015-03-01
Application of an external electric field (EF) as a non-pharmacological, non-invasive tool to control cell function is of great therapeutic interest. We developed a theoretical-experimental approach to investigate the biophysical mechanisms of EF interaction with cells in an electrode-free, physiologically relevant configuration. Our numerical results demonstrated that EF frequency is the major parameter controlling cell response to EF. Non-oscillating or low-frequency EF leads to charge accumulation on the cell surface membrane that may mediate membrane-initiated cell responses. In contrast, high-frequency EF penetrates the cell membrane and reaches the cell cytoplasm, where it may directly activate intracellular responses. The theoretical predictions were confirmed in our experimental studies of the effects of applied EF on vascular cell function. Results show that non-oscillating EF increases vascular endothelial growth factor (VEGF) expression, while field polarity controls the cell adhesion rate. High-frequency, but not low-frequency, EF provides differential regulation of cytoplasmic focal adhesion kinase and VEGF expression depending on the substrate, with increased expression in cells cultured on RGD-rich synthetic hydrogels and decreased expression for Matrigel culture. The authors acknowledge the financial support from the NSF (DMR-1206784 & DMR-0804199 to AK); the NIH (1R21 DK078814-01A1 to DN) and the University of Cincinnati (Interdisciplinary Faculty Research Support Grant to DN and AK).
Lugovsky V. A.
2016-05-01
Full Text Available The article surveys the process of psychological separation from parents at the student age. The relevance of the research topic is connected with existing problems in the modern student environment, which include the general trend of late maturation in adolescents (infantilization): the extension of childhood, low achievement motivation, the lack of desire for self-development, and unwillingness to take responsibility for one's own life. The importance of the theme is emphasized by a number of age-related problems; without solving these problems, individual development is almost impossible. The authors analyze the theoretical concepts of separation problems in domestic and foreign psychology and examine the concept of separation in the context of different approaches to its study. Based on the research, a definition of separation is formulated. Separation-individuation processes are discussed within the psychoanalytic tradition, through family system therapy, the study of the level of intergenerational relations in the dichotomy of "proximity - gap", the establishment of the sovereignty of the individual, and the formation of psychological space. On the basis of the theoretical analysis, the authors categorize the types of separation as contradictory (ambivalent), successful, crisis, or conflict, and give the characteristics of each type and its impact on the resolution of the separation conflict.
A short course in quantum information theory. An approach from theoretical physics. 2. ed.
Diosi, Lajos [KFKI Research Institute for Particle and Nuclear Physics (RMKI), Budapest (Hungary). MTA Budapest
2011-07-01
This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the author has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapters to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error correction. From the reviews of the first edition: ''The best things about this book are its brevity and clarity. In around 100 pages it provides a tutorial introduction to quantum information theory, including problems and solutions… it's worth a look if you want to quickly get up to speed with the language and central concepts of quantum information theory, including the background classical information theory.'' (Craig Savage, Australian Physics, Vol. 44 (2), 2007). (orig.)
NON-TERRITORIAL AUTONOMY IN RUSSIA: PRACTICAL IMPLICATIONS OF THEORETICAL APPROACHES
Tatiana RUDNEVA
2012-06-01
Full Text Available Despite the theoretical possibility of using non-territorial autonomy as a mechanism through which ethnic groups can fulfil their right to self-determination along with other minority rights, not many states have been willing to put the theory into practice. The article offers an explanation of why wider applicability of NTA is problematic by arguing that the theory itself is not yet polished enough to be implemented. The study includes an examination of both theoretical approaches and empirical data from a case study of an attempt to establish NTAs in the Russian Federation. The findings suggest that inconsistencies and ambiguities in the theory do correlate with practical flaws of NTAs, which suggests that when the theory is tested empirically, the reality reveals all of its flaws. The results indicate that the concept of NTA needs further refinement and development to make it more practice-oriented and applicable. As the problem of minority rights still has to be dealt with, we also propose a model of a global union of NTAs in which each ethnic group is represented by a non-governmental organisation, which seems to be more applicable than the others, alongside a number of other mechanisms that are even more essential and universal and focus on defending basic human rights.
Model-free information-theoretic approach to infer leadership in pairs of zebrafish
Butail, Sachit; Mwaffo, Violet; Porfiri, Maurizio
2016-04-01
Collective behavior affords several advantages to fish in avoiding predators, foraging, mating, and swimming. Although fish schools have been traditionally considered egalitarian superorganisms, a number of empirical observations suggest the emergence of leadership in gregarious groups. Detecting and classifying leader-follower relationships is central to elucidate the behavioral and physiological causes of leadership and understand its consequences. Here, we demonstrate an information-theoretic approach to infer leadership from positional data of fish swimming. In this framework, we measure social interactions between fish pairs through the mathematical construct of transfer entropy, which quantifies the predictive power of a time series to anticipate another, possibly coupled, time series. We focus on the zebrafish model organism, which is rapidly emerging as a species of choice in preclinical research for its genetic similarity to humans and reduced neurobiological complexity with respect to mammals. To overcome experimental confounds and generate test data sets on which we can thoroughly assess our approach, we adapt and calibrate a data-driven stochastic model of zebrafish motion for the simulation of a coupled dynamical system of zebrafish pairs. In this synthetic data set, the extent and direction of the coupling between the fish are systematically varied across a wide parameter range to demonstrate the accuracy and reliability of transfer entropy in inferring leadership. Our approach is expected to aid in the analysis of collective behavior, providing a data-driven perspective to understand social interactions.
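Transfer entropy, the construct at the core of this approach, can be estimated directly from symbol counts. A minimal sketch for binary series with history length 1 (synthetic data, not zebrafish trajectories): a "follower" that copies a "leader" with a one-step lag yields a larger transfer entropy in the leader-to-follower direction:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) for symbolic series with
    history length 1: how much x_t improves prediction of y_{t+1}
    beyond what y_t already provides (in bits)."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * log2(p_y1_given_yx / p_y1_given_y)
    return te

# A "leader" series and a "follower" that copies it one step later.
leader = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0] * 20
follower = [0] + leader[:-1]
print(transfer_entropy(leader, follower) > transfer_entropy(follower, leader))  # True
```

The asymmetry of the estimate is what identifies the leader; in practice the positional data would first be discretized (e.g. by turn direction) before counting.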
A theoretical validation of the B-matrix spatial distribution approach to diffusion tensor imaging.
Borkowski, Karol; Kłodowski, Krzysztof; Figiel, Henryk; Krzyżak, Artur Tadeusz
2017-02-01
The recently presented B-matrix Spatial Distribution (BSD) approach is a calibration technique which derives the actual distribution of the B-matrix in space. It is claimed that taking into account the spatial variability of the B-matrix improves the accuracy of diffusion tensor imaging (DTI). The purpose of this study is to verify this approach theoretically through computer simulations. Assuming three different spatial distributions of the B-matrix, diffusion weighted signals were calculated for the six orientations of a model anisotropic phantom. Subsequently two variants of the BSD calibration were performed for each of the three cases; one with the assumption of high uniformity of the model phantom (uBSD-DTI) and the other taking into account imperfections in phantom structure (BSD-DTI). Several cases of varying degrees of phantom uniformity were analyzed and the distributions of the B-matrix obtained were used for the calculation of the diffusion tensor of a model isotropic phantom. The results were compared with standard diffusion tensor calculation. The simulations confirmed the improvement of accuracy in the determination of the diffusion tensor after the calibration. BSD-DTI improves accuracy independent of both the degree of uniformity of the phantom and the inhomogeneity of the B-matrix. In cases of a relatively good uniformity of the phantom and minor distortions in the spatial distribution of the B-matrix, the uBSD-DTI approach is sufficient.
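The effect that BSD calibration corrects can be seen already in the scalar (single-direction) case, where the diffusion-weighted signal is mono-exponential in the b-value; the sketch below (illustrative numbers, not the study's simulations) shows the bias incurred by inverting with the nominal rather than the actual, spatially varying b-value:

```python
import math

def estimated_adc(signal, s0, b_assumed):
    """Invert the mono-exponential diffusion signal S = S0 * exp(-b * D)
    for D, using whatever b-value one assumes at that voxel."""
    return -math.log(signal / s0) / b_assumed

# True diffusivity and a 5% spatial error in the actual b-value
# (illustrative values: D in mm^2/s, b in s/mm^2).
D_true, s0 = 1.0e-3, 100.0
b_nominal, b_actual = 1000.0, 1050.0
signal = s0 * math.exp(-b_actual * D_true)

d_uncal = estimated_adc(signal, s0, b_nominal)  # ignores the spatial error
d_cal = estimated_adc(signal, s0, b_actual)     # BSD-style calibrated value
print(d_uncal, d_cal)  # biased estimate vs exact recovery
```

The full method generalizes this to the six independent components of the B-matrix and the diffusion tensor, but the bias mechanism is the same.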
Pedagogical quality in e-learning - Designing e-learning from a learning theoretical approach
Christian Dalsgaard
2005-02-01
Full Text Available The article is concerned with the design and use of e-learning technology to develop education qualitatively. The purpose is to develop a framework for a pedagogical evaluation of e-learning technology. The approach is that evaluation and design must be grounded in a learning theoretical approach, and it is argued that it is necessary to reflect on technology in relation to activities, learning principles, and a learning theory in order to qualitatively develop education. The article presents three frameworks developed on the basis of cognitivism, radical constructivism and activity theory. Finally, on the basis of the frameworks, the article discusses e-learning technology and, more specifically, the design of virtual learning environments and learning objects. It is argued that e-learning technology is not pedagogically neutral, and that it is therefore necessary to focus on design of technology that explicitly supports a certain pedagogical approach. Further, it is argued that design should direct its focus away from organisation of content and towards design of activities.
Razvan-Alexandru GENTIMIR
2015-08-01
Full Text Available The purpose of this paper is to explore the theoretical approaches to the „strategic partnership” concept and then apply them to the relationship and cooperation between the European Union and the Russian Federation, thus emphasizing its evolution. This article is based on a literature review that gathers recent articles and several studies by researchers in the field, providing professional information on the subject. Findings reveal that a general definition of the concept that researchers, and not only they, would agree upon has not yet been reached, and, when applying it to the EU-Russia relationship, that there are some mutual benefits resulting from the cooperation. This article shows that, despite the fact that the start of the cooperation between the two major global powers was a good one, the partnership has now reached a kind of dead point, where policy contradictions block its evolution.
A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process
Wang, Yi; Tamai, Tetsuo
2009-01-01
Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.
Maila Dinia Husni Rahim
2016-03-01
Full Text Available As adults, we often believe that children are only interested in games and children's 'stuff'. However, research has shown that children do indeed show a great interest in the world around them, including politics, elections, and democracy. If we need to teach children about democracy, what are the best methods for teaching it to young children? Narrative is considered an effective medium for conveying messages to children and discussing hard subjects. This paper is a theoretical exploration that looks at the narrative approach to teaching and learning about democracy with young children. The researchers have used a literature review to look at why narratives should be used, which narratives should be used, and how to use them.
Information propagation and collective consensus in blogosphere: a game-theoretical approach
Liu, L; Wang, L; Fu, Feng; Liu, Lianghuan; Wang, Long
2007-01-01
In this paper, we study information propagation in an empirical blogging network using a game-theoretical approach. The blogging network has the small-world property and is scale-free. Individuals in the blogosphere coordinate their decisions according to their idiosyncratic preferences and the choices of their neighbors. We find that, for different initial conditions and weights, the equilibrium frequency of discussions shows a transition from high to low as a result of the common interest in the topics specified by the payoff matrices. Furthermore, under recommendation, namely, when individuals in blogging networks refer to additional bloggers' resources besides their nearest neighbors, preferentially according to the popularity of the blogs, the whole blogging network evolves ultrafast into a consensus (absorbing) state. Our results reflect the dynamic pattern of information propagation in blogging networks.
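The coordination of decisions with neighbors described above is typically modeled as a 2x2 coordination game played on the network, with agents best-responding to their neighbors' current choices. A toy sketch on a six-node ring (payoffs and topology are illustrative, not the empirical blogging network):

```python
def best_response(adj, state, payoff, node):
    """Choose the action maximizing total coordination payoff against
    the current actions of the node's neighbours (ties favour action 1)."""
    def score(a):
        return sum(payoff[(a, state[nb])] for nb in adj[node])
    return max((score(a), a) for a in (0, 1))[1]

def run_dynamics(adj, state, payoff, rounds=10):
    """Synchronous best-response dynamics for a fixed number of rounds."""
    for _ in range(rounds):
        state = {v: best_response(adj, state, payoff, v) for v in adj}
    return state

# 2x2 coordination payoffs: discussing the topic (action 1) pays off
# only when neighbours do too, with a common-interest bias toward it.
payoff = {(1, 1): 3, (1, 0): 0, (0, 1): 0, (0, 0): 2}
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
final = run_dynamics(ring, {0: 1, 1: 1, 2: 1, 3: 0, 4: 0, 5: 0}, payoff)
print(final)  # the risk-dominant-favoured action spreads to consensus
```

With these payoffs the initial half-and-half split converges to the all-1 consensus (absorbing) state; recommendation in the paper's model effectively enlarges each node's neighbourhood and speeds this convergence up.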
Dynamic Load on a Pipe Caused by Acetylene Detonations – Experiments and Theoretical Approaches
Axel Sperber
1999-01-01
Full Text Available The load exerted on the wall of a pipe by a detonation travelling through it is not yet well characterized. The main reasons are the limited amount of sufficiently accurate pressure time-history data and the need to consider the dynamics of the system. Laser vibrometry measurements were performed to determine the dynamic response of the pipe wall to a detonation. Different modelling approaches were used to quantify, theoretically, the radial displacements of the pipe wall. There is good agreement between measured and predicted values of the vibration frequencies and the propagation velocities of transverse waves. Discrepancies, mainly due to wave propagation effects, were found in the amplitudes of the radial velocities. They might be overcome by the use of a dynamic load factor or improved modelling methods.
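The dynamic load factor mentioned in the conclusion captures how a suddenly applied load overshoots the static response; for an undamped oscillator under a step load the classical factor is 2. A small numerical sketch (illustrative parameters, not the pipe model):

```python
import math

def max_dynamic_amplification(omega, t_end, dt=1e-4):
    """Response of an undamped oscillator x'' + omega^2 x = omega^2 x_static
    to a suddenly applied constant load, integrated with semi-implicit
    (symplectic) Euler; returns max(x) / x_static, the dynamic load factor."""
    x, v, x_static = 0.0, 0.0, 1.0
    x_max, t = 0.0, 0.0
    while t < t_end:
        a = omega**2 * (x_static - x)
        v += a * dt          # update velocity first (symplectic)
        x += v * dt          # then position with the new velocity
        x_max = max(x_max, x)
        t += dt
    return x_max

# Hypothetical 100 Hz hoop mode, integrated over five periods.
print(max_dynamic_amplification(omega=2 * math.pi * 100, t_end=0.05))
```

The analytic solution is x(t) = x_static(1 − cos ωt), so the peak is twice the static deflection; for a travelling detonation front the effective factor depends on the ratio of load duration to structural period, which is exactly what a dynamic load factor correction encodes.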
Theoretical approaches to control spin dynamics in solid-state nuclear magnetic resonance
Eugene Stephane Mananga
2015-12-01
This article reviews theoretical approaches for controlling spin dynamics in solid-state nuclear magnetic resonance. We present fundamental theories in the history of NMR, namely, the average Hamiltonian and Floquet theories. We also discuss emerging theories such as the Fer and Floquet-Magnus expansions. These theories allow one to solve the time-dependent Schrodinger equation, which is still the central problem in spin dynamics of solid-state NMR. Examples from the literature that highlight several applications of these theories are presented, and particular attention is paid to numerical integrators and propagator operators. The problem of time propagation calculated with Chebychev expansion and the future development of numerical directions with the Cayley transformation are considered. The bibliography includes 190 references.
Souza, Vânia de; Gazzinelli, Maria Flávia; Soares, Amanda Nathale; Fernandes, Marconi Moura; Oliveira, Rebeca Nunes Guedes de; Fonseca, Rosa Maria Godoy Serpa da
2017-04-01
To describe the Papo Reto [Straight Talk] game and reflect on its theoretical-methodological basis. This is an analytical study of the process of elaboration of the Papo Reto online game, aimed at adolescents aged 15-18 years, with access to the game between 2014 and 2015. The interactions of 60 adolescents from Belo Horizonte and São Paulo provided examples of the potential of the game to favor approaching sexuality with adolescents through simulation of reality, invention, and interaction. Based on these potentialities, four thinking categories were discussed: the game as a pedagogic device; the game as a simulation of realities; the game as a device for inventive learning; and the game as empowering interaction. By permitting adolescents to take risks in new ways, the game allows them to become creative and active in the production of meanings, in the creation of their discourses, and in their ways of thinking, feeling, and acting in the field of sexuality.
Region innovation and investment development: conceptual theoretical approach and business solutions
Zozulya D.M.
2017-01-01
Full Text Available The article describes essential problems of regional business innovation and investment development under current conditions, including issues of overcoming crisis restrictions and of forming an innovation-driven economy. The relevance of the research is defined by the need to create effective tools for business innovation and investment development and support, which can be applied, first, to increase the efficiency of the region's industrial activity, then to improve production competitiveness on an innovative basis, overcome existing problems, and provide sustainable innovation development in the region. The results of the research are presented in the article, including a concept model of regional innovation and investment development drawn up by the authors on the basis of a systems-theoretic approach. The tools of regional innovation development defined in the concept model are briefly reviewed in the article. The most important of them include engineering marketing (marketing of scientific and technical innovations), strategic planning, benchmarking, place marketing, and business process modeling.
A field-theoretic approach to linear scaling ab-initio molecular dynamics
Richters, Dorothee; Kühne, Thomas D
2012-01-01
We present a field-theoretic method suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is solved by means of a properly modified Langevin equation. The predictive power of this approach is illustrated using the example of liquid methane under extreme conditions.
Theoretical investigation of antiferroelectric (SmCA*) subphases by hydrodynamical approach
Lahiri, T.; Pal Majumder, T.
2011-12-01
We provide a hydrodynamical approach utilizing the time-dependent Landau-Ginzburg (L-G) model and the Cahn-Hilliard (C-H) model to investigate antiferroelectric liquid crystals (AFLCs) exhibiting different chiral phases between the paraelectric smectic A (SmA*) phase and the antiferroelectric smectic CA* (SmCA*) phase. Introducing conserved and non-conserved order parameters in the C-H and L-G models, we have predicted the appearance of a chiral smectic C (SmC*) phase and a ferrielectric SmCFI1* phase (three-layer SmCA*) in an antiferroelectric phase sequence. The three-layer periodicity of the SmCFI1* phase is studied in detail with non-uniform interactions among the smectic layers, with strong experimental support. Finally, we provide some theoretical basis for the non-uniformity of our proposed layer interactions.
A Theoretical Analysis of the Mission Statement Based on the Axiological Approach
Marius-Costel EŞI
2016-12-01
Full Text Available This work focuses on a theoretical analysis of the formulation of the mission statement of business organizations in relation to the idea of the organizational axiological core. On the one hand, we consider CSR (Corporate Social Responsibility), which, in our view, must be brought into direct connection both with moral entrepreneurship (which should support the philosophical perspective on the statement of a business organization's mission) and with purely economic entrepreneurship based on profit maximization (which should support the pragmatic perspective). On the other hand, an analysis of the moral concepts which should underpin business becomes fundamental, in our view, insofar as the specific social value of social entrepreneurship is evidenced. Therefore, our approach highlights a number of epistemic explanations in relation to the actual practical dimension.
A theoretical approach to room acoustic simulations based on a radiative transfer model
Ruiz-Navarro, Juan-Miguel; Jacobsen, Finn; Escolano, José
2010-01-01
A theoretical approach to room acoustic simulations based on a radiative transfer model is developed by adapting the classical radiative transfer theory from optics to acoustics. The proposed acoustic radiative transfer model expands classical geometrical room acoustic modeling algorithms by incorporating a propagation medium that absorbs and scatters radiation, handling both diffuse and non-diffuse reflections on boundaries and objects in the room. The main scope of this model is to provide a proper foundation for a wide number of room acoustic simulation models, in order to establish and unify their principles. It is shown that this room acoustic modeling technique establishes the basis of two recently proposed algorithms, the acoustic diffusion equation and the room acoustic rendering equation. Both methods are derived in detail using an analytical approximation and a simplified integral equation.
Combined action of ionizing radiation with another factor: common rules and theoretical approach
Kim, Jin Kyu; Roh, Changhyun, E-mail: jkkim@kaeri.re.kr [Korea Atomic Energy Research Institute, Jeongeup (Korea, Republic of); Komarova, Ludmila N.; Petin, Vladislav G., E-mail: vgpetin@yahoo.com [Medical Radiological Research Center, Obninsk (Russian Federation)
2013-07-01
Two or more factors can act simultaneously on biological objects, producing combined effects. This study focuses on a theoretical approach to the synergistic interaction arising from the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents is suggested for the quantitative prediction of irreversibly damaged cells after combined exposures. The model takes into account the synergistic interaction of agents and is based on the supposition that the additional effective damages responsible for the synergy are irreversible and originate from an interaction of ineffective sublesions. Experimental results regarding the irreversible component of radiation damage of diploid yeast cells simultaneously exposed to heat combined with ionizing radiation or UV light are presented. Good agreement of the experimental results with the model predictions is demonstrated. The importance of the results obtained for interpreting the mechanism of synergistic interaction of various environmental factors is discussed. (author)
Fryanov, V. N.; Pavlova, L. D.; Temlyantsev, M. V.
2017-09-01
Methodological approaches to theoretical substantiation of the structure and parameters of robotic coal mines are outlined. The results of mathematical and numerical modeling revealed the features of manifestation of geomechanical and gas dynamic processes in the conditions of robotic mines. Technological solutions for the design and manufacture of technical means for robotic mine are adopted using the method of economic and mathematical modeling and in accordance with the current regulatory documents. For a comparative performance evaluation of technological schemes of traditional and robotic mines, methods of cognitive modeling and matrix search for subsystem elements in the synthesis of a complex geotechnological system are applied. It is substantiated that the process of technical re-equipment of a traditional mine with a phased transition to a robotic mine will reduce unit costs by almost 1.5 times with a significant social effect due to a reduction in the number of personnel engaged in hazardous work.
Advancing the theoretical foundation of the partially-averaged Navier-Stokes approach
Reyes, Dasia Ann
The goal of this dissertation is to consolidate the theoretical foundation of variable-resolution (VR) methods in general and the partially-averaged Navier-Stokes (PANS) approach in particular. The accurate simulation of complex turbulent flows remains an outstanding challenge in modern computational fluid dynamics. High-fidelity approaches such as direct numerical simulations (DNS) and large-eddy simulation (LES) are not typically feasible for complex engineering simulations with current computational technologies. Low-fidelity approaches such as Reynolds-averaged Navier-Stokes (RANS), although widely used, are inherently inadequate for turbulent flows with complex flow features. VR bridging methods fill the gap between DNS and RANS by allowing a tunable degree of resolution ranging from RANS to DNS. While the utility of VR methods is well established, the mathematical foundations and physical characterization require further development. This dissertation focuses on the physical attributes of fluctuations in partially-resolved simulations of turbulence. The specific objectives are to: (i) establish a framework for assessing the physical fidelity of VR methods to examine PANS fluctuations; (ii) investigate PANS simulations subject to multiple resolution changes; (iii) examine turbulent transport closure modeling for partially-resolved fields; (iv) examine the effect of filter control parameters in the limit of spectral cut-off in the dissipative region; and (v) validate low-Reynolds number corrections with RANS for eventual implementation with PANS. While the validation methods are carried out in the context of PANS, they are considered appropriate for all VR bridging methods. The key findings of this dissertation are summarized as follows. The Kolmogorov hypotheses are suitably adapted to describe fluctuations of partially-resolved turbulence fields, and the PANS partially-resolved field is physically consistent with the adapted Kolmogorov hypotheses. PANS
Uszko, Wojciech; Diehl, Sebastian; Englund, Göran; Amarasekare, Priyanga
2017-04-01
We theoretically explore consequences of warming for predator-prey dynamics, broadening previous approaches in three ways: we include beyond-optimal temperatures, predators may have a type III functional response, and prey carrying capacity depends on explicitly modelled resources. Several robust patterns arise. The relationship between prey carrying capacity and temperature can range from near-independence to monotonically declining/increasing to hump-shaped. Predators persist in a U-shaped region in resource supply (=enrichment)-temperature space. Type II responses yield stable persistence in a U-shaped band inside this region, giving way to limit cycles with enrichment at all temperatures. In contrast, type III responses convey stability at intermediate temperatures and confine cycles to low and high temperatures. Warming-induced state shifts can be predicted from system trajectories crossing stability and persistence boundaries in enrichment-temperature space. Results of earlier studies with more restricted assumptions map onto this graph as special cases. Our approach thus provides a unifying framework for understanding warming effects on trophic dynamics.
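The stabilizing role of the type III response described above can be illustrated with a minimal, generic predator-prey sketch (not the authors' resource-explicit model; all parameter values below are illustrative assumptions):

```python
def simulate(h_exp, steps=2000, dt=0.01):
    """Euler-integrate a generic predator-prey model.
    h_exp = 1 gives a type II functional response, h_exp = 2 a type III."""
    r, K, a, h, e, m = 1.0, 10.0, 1.0, 0.5, 0.5, 0.2   # illustrative values
    N, P = 5.0, 1.0                                     # prey, predator densities
    for _ in range(steps):
        f = a * N**h_exp / (1.0 + a * h * N**h_exp)     # per-predator intake rate
        dN = r * N * (1.0 - N / K) - f * P
        dP = e * f * P - m * P
        N = max(N + dN * dt, 0.0)                       # clamp at zero density
        P = max(P + dP * dt, 0.0)
    return N, P

n_t2, p_t2 = simulate(h_exp=1)   # type II: prone to limit cycles with enrichment
n_t3, p_t3 = simulate(h_exp=2)   # type III: stabilizing at low prey density
```

Sweeping the carrying capacity K (enrichment) and comparing the two exponents reproduces the qualitative pattern the abstract describes: type II responses destabilize with enrichment, while type III responses stabilize dynamics over a wider range.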
Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach
David S Cobden
2010-08-01
David S Cobden1, Louis W Niessen2, Frans FH Rutten1, W Ken Redekop1. 1Department of Health Policy and Management, Section of Health Economics – Medical Technology Assessment (HE-MTA), Erasmus MC, Erasmus University Rotterdam, The Netherlands; 2Department of International Health, Johns Hopkins University School of Public Health, Johns Hopkins Medical Institutions, Baltimore, MD, USA. Aims: While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporate adherence in cost-effectiveness analysis. Methods: We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM) and oral antidiabetic (OAD) medications. Two analyses were performed, one which ignored adherence (analysis 1) and one which incorporated it (analysis 2). Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios. Results: In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness ratio of $12,097 per quality-adjusted life-year (QALY) gained versus OAD. Incorporating adherence resulted in a slightly higher ratio ($16,241/QALY). This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM. Conclusions: Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be considered in health care decision-making.
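The kind of Markov cohort comparison described can be sketched in a few lines. The toy model below tracks a cohort through three illustrative health states and computes an incremental cost-effectiveness ratio (ICER); all transition probabilities, costs and utilities are invented for illustration and are not the paper's calibrated values:

```python
def run_cohort(p_comp, p_die, drug_cost, years=20):
    """Track a cohort over three states: [no complication, complication, dead]."""
    dist = [1.0, 0.0, 0.0]          # whole cohort starts complication-free
    cost = qaly = 0.0
    for _ in range(years):
        cost += (dist[0] + dist[1]) * drug_cost + dist[1] * 2000.0  # complication care (assumed)
        qaly += dist[0] * 0.85 + dist[1] * 0.65                     # state utilities (assumed)
        dist = [dist[0] * (1.0 - p_comp - p_die),
                dist[0] * p_comp + dist[1] * (1.0 - p_die),
                dist[2] + (dist[0] + dist[1]) * p_die]              # dead is absorbing
    return cost, qaly

# Hypothetical cohorts: insulin (IDM) costs more but, with its assumed adherence
# effect folded in, lowers the yearly complication probability versus oral (OAD).
c_oad, q_oad = run_cohort(p_comp=0.08, p_die=0.02, drug_cost=500.0)
c_idm, q_idm = run_cohort(p_comp=0.05, p_die=0.02, drug_cost=1500.0)
icer = (c_idm - c_oad) / (q_idm - q_oad)   # incremental cost per QALY gained
```

Adherence enters such a model by modulating the effective transition probabilities; rerunning the comparison with adherence-adjusted probabilities is what shifts the ICER, as in the paper's analysis 2.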
Olofsson, Isabelle; Fredriksson, Anders [Golder Associates AB, Stockholm (Sweden)
2005-05-15
The Swedish Nuclear and Fuel Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. Within the framework of these Site Investigations, Site Descriptive Models are produced. These models are the result of an interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in terms of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints the test conditions were set up in order to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model.
R.Valli
2010-07-01
Power management is one of the vital issues in wireless sensor networks, where the lifetime of the network relies on battery-powered nodes. Transmitting at high power reduces the lifetime of both the nodes and the network. One efficient way of power management is to control the power at which the nodes transmit. In this paper, a virtual multiple-input multiple-output wireless sensor network (VMIMO-WSN) communication architecture is considered and the power control of sensor nodes based on the approach of game theory is formulated. The use of game theory has proliferated, with a broad range of applications in wireless sensor networking. Approaches from game theory can be used to optimize node-level as well as network-wide performance. The game here is categorized as an incomplete information game, in which the nodes do not have complete information about the strategies taken by other nodes. For the virtual multiple-input multiple-output wireless sensor network architecture considered, the Nash equilibrium is used to decide the optimal power level at which a node needs to transmit, to maximize its utility. Outcomes show that the game-theoretic approach considered for the VMIMO-WSN architecture achieves the best utility by consuming less power.
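A minimal best-response sketch of such a power-control game is shown below. This is a simplified symmetric-interference model, not the paper's VMIMO formulation; the utility function and all parameters are assumptions for illustration:

```python
def best_response_powers(n_nodes=4, gain=0.5, price=1.0, noise=0.1, iters=100):
    """Iterate best responses for utility u_i = ln(1 + SINR_i) - price * p_i,
    with SINR_i = gain * p_i / (noise + gain * sum of other nodes' powers)."""
    p = [0.5] * n_nodes                     # initial transmit powers
    for _ in range(iters):
        for i in range(n_nodes):
            interference = noise + gain * (sum(p) - p[i])
            # maximizing u_i in p_i gives p_i* = max(0, 1/price - interference/gain)
            p[i] = max(0.0, 1.0 / price - interference / gain)
    return p

powers = best_response_powers()   # converges to a pure-strategy Nash equilibrium
```

At the fixed point, every node's power is a best response to the others' powers, which is exactly the Nash equilibrium condition the abstract invokes; the energy price term discourages high-power transmission.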
Hindumathi, V; Kranthi, T; Rao, S B; Manimaran, P
2014-06-01
With rapidly changing technology, the prediction of candidate genes has become an indispensable task in recent years, mainly in the field of biological research. Empirical methods for candidate gene prioritization, which help explore the potential pathways between genetic determinants and complex diseases, are highly cumbersome and labor intensive. In such a scenario, predicting potential targets for a disease state through in silico approaches is of great interest to researchers. The prodigious availability of protein interaction data, coupled with gene annotation, eases the accurate determination of disease-specific candidate genes. In our work we have prioritized cervix-related cancer candidate genes by employing the approach of Csaba Ortutay and co-workers, which identifies candidate genes through graph-theoretical centrality measures and gene ontology. Using human protein interaction data, cervical cancer gene sets and ontological terms, we were able to predict 15 novel candidates for cervical carcinogenesis. The disease relevance of the anticipated candidate genes was corroborated through a literature survey. The presence of drugs for these candidates was also detected through the Therapeutic Target Database (TTD) and DrugMap Central (DMC), which affirms that they may serve as potential drug targets for cervical cancer.
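The flavor of network-based prioritization can be sketched with a simple seed-connectivity score over a protein interaction network. This is a simplified stand-in for the centrality-plus-ontology pipeline described above, and the gene names are hypothetical:

```python
from collections import defaultdict

def prioritize(edges, disease_genes, top_k=3):
    """Rank non-seed genes by how many known disease genes they interact with."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seeds = set(disease_genes)
    # score = number of direct interactions with known disease genes
    scores = {g: len(adj[g] & seeds) for g in adj if g not in seeds}
    return sorted(scores, key=lambda g: (-scores[g], g))[:top_k]

# Toy interaction network with hypothetical gene names; D1, D2 are known seeds
edges = [("A", "D1"), ("A", "D2"), ("B", "D1"), ("C", "X")]
top = prioritize(edges, ["D1", "D2"])   # -> ["A", "B", "C"]
```

Real pipelines replace this local score with global centrality measures (degree, betweenness, closeness over the full interactome) and then filter candidates by shared gene ontology terms, but the ranking principle is the same.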
P N Dileep; R Ramesh Kumar
2011-02-01
Analytical evaluation of fracture toughness of a multilayered composite laminate was well-established using modified crack closure integral (MCCI) approach based on test data on the failure load. For this purpose the crack initiation direction, which is treated as a branch crack direction for the theoretical prediction, is required. The crack initiation direction in a multilayered composite laminate depends on mode of failure. In the present work, a fracture parameter $n^∗$ is introduced to predict the mode of failure in multilayered composite having a crack and is validated. Analytical relationship for the prediction of fracture toughness of multilayered composite between a base laminate and its constituent sublaminates is also arrived at. With available test data on the toughness of a set of sub-laminates, toughness of base laminate is determined and validated. The present approach is useful in evaluating the load carrying capability of composite structures with defects in the form of cracks and this information is valuable for design.
A Theoretical Approach to Understanding Population Dynamics with Seasonal Developmental Durations
Lou, Yijun; Zhao, Xiao-Qiang
2017-04-01
There is a growing body of biological investigations to understand the impacts of seasonally changing environmental conditions on population dynamics in various research fields such as single population growth and disease transmission. On the other hand, understanding the population dynamics subject to seasonally changing weather conditions plays a fundamental role in predicting the trends of population patterns and disease transmission risks under the scenarios of climate change. With the host-macroparasite interaction as a motivating example, we propose a synthesized approach for investigating the population dynamics subject to seasonal environmental variations from a theoretical point of view, where the model development, basic reproduction ratio formulation and computation, and rigorous mathematical analysis are involved. The resultant model with periodic delay presents a novel term related to the rate of change of the developmental duration, bringing new challenges to dynamics analysis. By investigating a periodic semiflow on a suitably chosen phase space, the global dynamics of a threshold type is established: all solutions either go to zero when the basic reproduction ratio is less than one, or stabilize at a positive periodic state when the reproduction ratio is greater than one. The synthesized approach developed here is applicable to broader contexts of investigating biological systems with seasonal developmental durations.
THEORETICAL APPROACHES TO THE DEFINITION OF MOTIVATION OF PROFESSIONAL ACTIVITY OF PUBLIC SERVANTS
E. V. Vashalomidze
2016-01-01
The relevance of the chosen topic is due to the presence of problems in developing the performance motivation of civil servants, including their motivation for continuous professional development, which is one of the main directions of development of the civil service approved by the relevant Presidential Decree for 2016–2018. The first part of this article provides a brief analytical overview and an assessment of content and process theoretical and methodological approaches to solving the problems of motivating the personnel of socio-economic systems. The second part of the article, on the basis of this research, proposes motivating factors developed from the approaches set out in the first part. Purpose/goal: The aim of the article is to provide methodological assistance to academic institutions involved in solving scientific and practical problems of motivating civil servants toward continuous professional development, in accordance with the Presidential Decree of 11 August 2016 No. 408. Methodology: The methodological basis of this article comprises a comprehensive analysis of the normative legal provisions of the Russian Federation; a systematic approach and historical analysis of the theory and methodology of solving problems of staff motivation; the method of expert evaluations; and the scientific method of analogies. Conclusions/relevance: The practical significance of the article lies in the timely delivery of scientific and methodological assistance for implementing the Presidential Decree of 11 August No. 403, "On the main directions of the state civil service of the Russian Federation in the years 2016–2018", with regard to establishing mechanisms that motivate civil servants toward continuous professional development.
Grimm, Guido W.; Potts, Alastair J.
2016-03-01
The Coexistence Approach has been used to infer palaeoclimates for many Eurasian fossil plant assemblages. However, the theory that underpins the method has never been examined in detail. Here we discuss acknowledged and implicit assumptions and assess the statistical nature and pseudo-logic of the method. We also compare the Coexistence Approach theory with the active field of species distribution modelling. We argue that the assumptions will inevitably be violated to some degree and that the method lacks any substantive means to identify or quantify these violations. The absence of a statistical framework makes the method highly vulnerable to the vagaries of statistical outliers and exotic elements. In addition, we find numerous logical inconsistencies, such as how climate shifts are quantified (the use of a "centre value" of a coexistence interval) and the ability to reconstruct "extinct" climates from modern plant distributions. Given the problems that have surfaced in species distribution modelling, accurate and precise quantitative reconstructions of palaeoclimates (or even climate shifts) using the nearest-living-relative principle and rectilinear niches (the basis of the method) will not be possible. The Coexistence Approach can be summarised as an exercise that shoehorns a plant fossil assemblage into coexistence and then assumes that this must be the climate. Given the theoretical issues and methodological issues highlighted elsewhere, we suggest that the method be discontinued and that all past reconstructions be disregarded and revisited using less fallacious methods. We outline six steps for (further) validation of available and future taxon-based methods and advocate developing (semi-quantitative) methods that prioritise robustness over precision.
Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro
2005-01-01
The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both the theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized with the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, we entertained a total of 92 participants at the second conference and arranged various talks which ranged from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate the uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems. Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in
Information theoretical approach to discovering solar wind drivers of the outer radiation belt
Wing, Simon; Johnson, Jay R.; Camporeale, Enrico; Reeves, Geoffrey D.
2016-10-01
The solar wind-magnetosphere system is nonlinear. The solar wind drivers of geosynchronous electrons with energy range of 1.8-3.5 MeV are investigated using mutual information, conditional mutual information (CMI), and transfer entropy (TE). These information theoretical tools can establish linear and nonlinear relationships as well as information transfer. The information transfer from solar wind velocity (Vsw) to geosynchronous MeV electron flux (Je) peaks with a lag time of 2 days. As previously reported, Je is anticorrelated with solar wind density (nsw) with a lag of 1 day. However, this lag time and anticorrelation can be attributed at least partly to the Je(t + 2 days) correlation with Vsw(t) and nsw(t + 1 day) anticorrelation with Vsw(t). Analyses of solar wind driving of the magnetosphere need to consider the large lag times, up to 3 days, in the (Vsw, nsw) anticorrelation. Using CMI to remove the effects of Vsw, the response of Je to nsw is 30% smaller and has a lag time Je. Nonstationarity in the system dynamics is investigated using windowed TE. When the data are ordered according to transfer entropy value, it is possible to understand details of the triangle distribution that has been identified between Je(t + 2 days) versus Vsw(t).
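A minimal histogram-based mutual-information estimator of the kind underlying such analyses can be sketched as follows. The data are synthetic stand-ins for a solar wind driver and a flux response; this is a generic plug-in estimator, not the authors' exact CMI/TE pipeline:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in mutual information estimate in bits from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                 # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic stand-ins: a driver (e.g. Vsw) and two candidate responses (e.g. Je)
rng = np.random.default_rng(0)
driver = rng.normal(size=5000)
response_driven = driver + 0.1 * rng.normal(size=5000)   # strongly coupled
response_noise = rng.normal(size=5000)                    # independent

mi_driven = mutual_information(driver, response_driven)
mi_noise = mutual_information(driver, response_noise)
```

Applying such an estimator at a range of lags between driver and response is what yields peak lag times like the 2-day Vsw-to-Je delay reported above; conditional mutual information then subtracts the part of the dependence explained by a third variable.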
Annual Gross Primary Production from Vegetation Indices: A Theoretically Sound Approach
María Amparo Gilabert
2017-02-01
A linear relationship between the annual gross primary production (GPP) and a PAR-weighted vegetation index is theoretically derived from the Monteith equation. A semi-empirical model is then proposed to estimate the annual GPP from commonly available vegetation indices images and a representative PAR, which does not require actual meteorological data. A cross validation procedure is used to calibrate and validate the model predictions against reference data. As the calibration/validation process depends on the reference GPP product, the higher the quality of the reference GPP, the better the performance of the semi-empirical model. The annual GPP has been estimated at 1-km scale from MODIS NDVI and EVI images for eight years. Two reference data sets have been used: an optimized GPP product for the study area previously obtained and the MOD17A3 product. Different statistics show a good agreement between the estimates and the reference GPP data, with correlation coefficient around 0.9 and relative RMSE around 20%. The annual GPP is overestimated in semiarid areas and slightly underestimated in dense forest areas. With the above limitations, the model provides an excellent compromise between simplicity and accuracy for the calculation of long time series of annual GPP.
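The core of such a semi-empirical model is a single linear fit of annual GPP against a PAR-weighted vegetation index. A synthetic calibration sketch follows; all numbers are assumptions for illustration, not the paper's fitted coefficients:

```python
import numpy as np

# Synthetic calibration of annual GPP against a PAR-weighted vegetation index
rng = np.random.default_rng(1)
vi = np.linspace(0.2, 0.8, 25)                    # PAR-weighted index values (assumed)
gpp_obs = 2000.0 * vi + 100.0 + rng.normal(0.0, 50.0, vi.size)  # gC m-2 yr-1 (assumed)

slope, intercept = np.polyfit(vi, gpp_obs, 1)     # the semi-empirical linear model
gpp_pred = slope * vi + intercept
rrmse = np.sqrt(np.mean((gpp_pred - gpp_obs) ** 2)) / np.mean(gpp_obs)
```

In a cross-validation setting, the fit would be repeated on held-out subsets of the reference GPP product, and statistics such as the relative RMSE computed here would be reported on the validation folds.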
Understanding uncertainty in seagrass injury recovery: an information-theoretic approach.
Uhrin, Amy V; Kenworthy, W Judson; Fonseca, Mark S
2011-06-01
Vessel groundings cause severe, persistent gaps in seagrass beds. Varying degrees of natural recovery have been observed for grounding injuries, limiting recovery prediction capabilities, and therefore, management's ability to focus restoration efforts where natural recovery is unlikely. To improve our capacity for predicting seagrass injury recovery, we used an information-theoretic approach to evaluate the relative contribution of specific injury attributes to the natural recovery of 30 seagrass groundings in Florida Keys National Marine Sanctuary, Florida, USA. Injury recovery was defined by three response variables examined independently: (1) initiation of seagrass colonization, (2) areal contraction, and (3) sediment in-filling. We used a global model and all possible subsets for four predictor variables: (1) injury age, (2) original injury volume, (3) original injury perimeter-to-area ratio, and (4) wave energy. Successional processes were underway for many injuries with fast-growing, opportunistic seagrass species contributing most to colonization. The majority of groundings that exhibited natural seagrass colonization also exhibited areal contraction and sediment in-filling. Injuries demonstrating colonization, contraction, and in-filling were on average older and smaller, and they had larger initial perimeter-to-area ratios. Wave energy was highest for colonizing injuries. The information-theoretic approach was unable to select a single "best" model for any response variable. For colonization and contraction, injury age had the highest relative importance as a predictor variable; wave energy appeared to be associated with second-order effects, such as sediment in-filling, which in turn, facilitated seagrass colonization. For sediment in-filling, volume and perimeter-to-area ratio had similar relative importance as predictor variables with age playing a lesser role than seen for colonization and contraction.
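The model-ranking machinery of an information-theoretic (all-subsets) approach can be sketched with AIC over every predictor combination. The data and effect sizes below are synthetic assumptions; the study's actual predictors were injury age, volume, perimeter-to-area ratio and wave energy:

```python
import itertools
import math
import numpy as np

def aic_ols(X, y):
    """OLS fit; AIC = n*ln(RSS/n) + 2k under a Gaussian likelihood (constant dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, _, _, _ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    rss = float(resid @ resid)
    k = X1.shape[1] + 1          # coefficients plus error variance
    n = len(y)
    return n * math.log(rss / n) + 2 * k

# Synthetic data: response driven mainly by "age", weakly by "wave" (assumed)
rng = np.random.default_rng(2)
n = 200
preds = {"age": rng.normal(size=n),
         "wave": rng.normal(size=n),
         "volume": rng.normal(size=n)}
y = 2.0 * preds["age"] + 0.5 * preds["wave"] + rng.normal(size=n)

results = {}
for r in range(len(preds) + 1):
    for combo in itertools.combinations(sorted(preds), r):
        X = np.column_stack([preds[c] for c in combo]) if combo else np.empty((n, 0))
        results[combo] = aic_ols(X, y)
best = min(results, key=results.get)   # subset with lowest AIC
```

When, as in the study, no single model dominates, the AIC differences across the subset table are converted to Akaike weights and summed per predictor to obtain the relative-importance values quoted above.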
Staub, Isabelle; Fredriksson, Anders; Outters, Nils [Golder Associates AB, Uppsala (Sweden)
2002-05-01
For the purpose of studying the possibilities of a Deep Repository for spent fuel, the Swedish Nuclear and Fuel Management Company (SKB) is currently planning for Site Investigations. Data collected from these Site Investigations are interpreted and analysed to achieve the full Site Description, which is built up of models from all the disciplines that are considered of importance for the Site Description. One of these models is the Rock Mechanical Descriptive Model, which would be developed for any site in hard crystalline rock, and is a combination and evaluation of the characterisation of rock mass by means of empirical relationships and a theoretical approach based on numerical modelling. The present report describes the theoretical approach. The characterisation of the mechanical properties of the rock mass, viewed as a unit consisting of intact rock and fractures, is achieved by numerical simulations with the following input parameters: initial stresses, fracture geometry, and the distribution of rock mechanical properties, such as deformation and strength parameters, for the intact rock and for the fractures. The numerical modelling was performed with the two-dimensional code UDEC, and the rock block models were generated from 2D trace sections extracted from the 3D Discrete Fracture Network (DFN) model. Assumptions and uncertainties related to the set-up of the model are considered. The numerical model was set up to simulate a plane strain loading test. Different boundary conditions were applied on the model for simulating stress conditions (I) in the undisturbed rock mass, and (II) at the proximity of a tunnel. In order to assess the reliability of the model, sensitivity analyses have been conducted on some rock block models for defining the dependency of mechanical properties on in situ stresses, the influence of boundary conditions, the rock material and joint constitutive models used to simulate the behaviour of intact rock and fractures, domain size and anisotropy.
Stoisser, C. M.; Audebert, S.
2008-05-01
In order to describe the state-of-the-art on cracked rotor related problems, the current work presents the comprehensive theoretical, numerical and experimental approach adopted by EDF for crack detection in power plant rotating machinery. The work mainly focuses on the theoretical cracked beam model developed in the past years by S. Andrieux and C. Varé, and associates both numerical and experimental aspects related to the crack detection problem in either turboset or turbo pump units. The theoretical part consists of the derivation of a lumped cracked beam model from the three-dimensional formulation of the general problem of elasticity with unilateral contact conditions on the crack lips, valid for any shape and number of cracks in the beam section and extended to cracks not located in a cross-section. This leads to the assessment of the cracked beam rigidity as a function of the rotation angle, in the case of pure bending load or bending plus shear load. In this way the function can be implemented in a 1D rotordynamics code. An extension of the cracked beam model taking into account the torsion behaviour is also proposed. It is based on the assumption of full adherence between crack lips, when the crack closes, and on an incremental formulation of deformation energy. An experimental validation has been carried out using different cracked samples, both in static and dynamic configurations, considering one or three elliptic cracks in the same cross-section and helix-shaped cracks. Concerning the static configuration, a good agreement between numerical and experimental results is found, with a maximum discrepancy of 1% in the beam deflection. Concerning the dynamical analysis, the well-known main indicator, the 2× rev. bending vibration component at half critical speed, is reproduced with a maximum deviation of 18% near the crack position. Our experiments also allowed for the observation of the bending and torsion resonance frequency shifts.
A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty
Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.
2012-02-01
Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency.
Praveena, R.; Sadasivam, K.
2016-05-01
Synthetic antioxidants such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are found to be toxic; hence non-carcinogenic, naturally occurring radical scavengers, especially flavonoids, have gained considerable importance in the past two decades. In the present investigation, the radical scavenging activity of C-glycosyl flavonoids is evaluated using a theoretical approach, which could broaden its scope in therapeutic applications. Gas- and solvent-phase studies of the structural and molecular characteristics of the C-glycosyl flavonoid isovitexin are carried out through the hydrogen atom transfer (HAT), electron transfer-proton transfer (ET-PT) and sequential proton loss electron transfer (SPLET) mechanisms by density functional theory (DFT) using hybrid parameters. The computed values of the adiabatic ionization potential, electron affinity, hardness, softness, electronegativity and electrophilic index indicate that isovitexin possesses good radical scavenging activity. The behavior of the different -OH groups in polyphenolic compounds is assessed by considering the electronic effects of the neighbouring groups and the overall geometry of the molecule, which in turn helps in analyzing the antioxidant capacity of the polyphenolic molecule. The studies indicate that H-atom abstraction from the 4'-OH site is preferred during the radical scavenging process. From Mulliken spin density analysis and FMOs, the B-ring is found to be the more delocalized center and capable of electron donation. Comparison of the antioxidant activity of vitexin and isovitexin leads to the conclusion that isovitexin acts as the better radical scavenger. This evidences the importance of the position of the glucose unit in the flavonoid.
Bingtuan Gao
2015-12-01
In a deregulated environment of the power market, in order to lower their energy price and guarantee the stability of the power network, appropriate transmission lines have to be considered for electricity generators to sell their energy to the end users. This paper proposes a game-theoretic power transmission scheduling for multiple generators to lower their wheeling cost. Based on the embedded cost method, a wheeling cost model consisting of congestion cost, cost of losses and cost of transmission capacity is presented. By assuming each generator behaves in a selfish and rational way, the competition among the multiple generators is formulated as a non-cooperative game, where the players are the generators and the strategies are their daily schedules of power transmission. We will prove that there exists at least one pure-strategy Nash equilibrium of the formulated power transmission game. Moreover, a distributed algorithm will be provided to realize the optimization in terms of minimizing the wheeling cost. Finally, simulations were performed and discussed to verify the feasibility and effectiveness of the proposed non-cooperative game approach for the generators in a deregulated environment.
Vânia de Souza
ABSTRACT Objective: To describe the Papo Reto [Straight Talk] game and reflect on its theoretical-methodological basis. Method: Analytical study of the process of elaboration of the Papo Reto online game, aimed at adolescents aged 15-18 years, with access to the Game between 2014 and 2015. Results: The interactions of 60 adolescents from Belo Horizonte and São Paulo illustrated the potential of the Game to facilitate addressing sexuality with adolescents through simulation of reality, invention and interaction. Based on this potential, four analytical categories were discussed: the game as a pedagogic device; the game as a simulation of realities; the game as a device for inventive learning; and the game as an enabler of interaction. Conclusion: By permitting adolescents to take risks in new ways, the Game allows them to become creative and active in the production of meanings, in the creation of their discourses and in their ways of thinking, feeling and acting in the field of sexuality.
The term of the job satisfaction: theoretical approaches and consequences in the job performance
Tsounis A.
2016-04-01
Full Text Available Background: Job satisfaction is one of the most widely researched subjects in Industrial/Organizational Psychology, and it has been linked to many different aspects of individual and organizational function. A variety of theories have therefore been developed to explain it, with particular emphasis on its consequences for job performance. Objective: The aim of this study is to outline the main theories that explain job satisfaction, as well as its consequences for job performance. Methods: The material of this review is based on the Greek and international literature. Books, articles and studies were located through hand searches in libraries, and online databases and journals were searched with the help of keywords. Results: The most common and prominent theories in this area fall into two main categories: the ontological approaches, which are based on the content and type of human incentives, and the mechanistic theories, which emphasize the circumstances under which job performance and satisfaction are enhanced. Concerning job performance, there is little empirical support for a direct association between job satisfaction and productivity, absence and turnover intention. Conclusions: Although none of the above theoretical frameworks can fully explain a multi-factorial phenomenon such as job satisfaction, each one may contribute to its better understanding. In addition, regarding the association between job satisfaction and different aspects of job performance, the mediating role of many variables must be taken into account by researchers.
Estimation-theoretic approach to delayed decoding of predictively encoded video sequences.
Han, Jingning; Melkote, Vinay; Rose, Kenneth
2013-03-01
Current video coders employ predictive coding with motion compensation to exploit temporal redundancies in the signal. In particular, blocks along a motion trajectory are modeled as an auto-regressive (AR) process, and it is generally assumed that the prediction errors are temporally independent and approximate the innovations of this process. Thus, zero-delay encoding and decoding is considered efficient. This paper is premised on the largely ignored fact that these prediction errors are, in fact, temporally dependent due to quantization effects in the prediction loop. It presents an estimation-theoretic delayed decoding scheme, which exploits information from future frames to improve the reconstruction quality of the current frame. In contrast to the standard decoder, which reproduces every block instantaneously once the corresponding quantization indices of the residues are available, the proposed delayed decoder efficiently combines all accessible (including any future) information in an appropriately derived probability density function to obtain the optimal delayed reconstruction per transform coefficient. Experiments demonstrate significant gains over the standard decoder. The requisite information about the source AR model is estimated in a spatio-temporally adaptive manner from a bit-stream conforming to the H.264/AVC standard, i.e., no side information needs to be sent to the decoder in order to employ the proposed approach, thereby retaining compatibility with the standard syntax and existing encoders.
Coarse-grained model of glycosaminoglycans in aqueous salt solutions. A field-theoretical approach.
Kolesnikov, Andrei L; Budkov, Yurij A; Nogovitsyn, Evgenij A
2014-11-20
We present results of self-consistent field calculations of thermodynamic and structural properties of glycosaminoglycans (chondroitin sulfate, hyaluronic acid, and heparin) in aqueous solutions with added monovalent and divalent salts. A semiphenomenological coarse-grained model for semiflexible polyelectrolyte chains in solution is proposed. The coarse-grained model permits one to focus on the essential features of these systems and provides significant computational advantages with respect to more detailed models. Our approach relies on the method of Gaussian equivalent representation for the calculation of the partition functions in the form of functional integrals. This method provides reliable thermodynamic information for polyelectrolyte solutions over wide ranges of monomer concentrations. In the present work, we use the comparison and fitting of the experimental osmotic pressure with a theoretical equation of state within the Gaussian equivalent representation. The degrees of ionization, radii of gyration, persistence lengths, and structure factors of chondroitin sulfate, hyaluronic acid, and heparin in aqueous solutions with added monovalent and divalent salts are calculated and discussed.
Intraplate seismicity in Canada: a graph theoretic approach to data analysis and interpretation
K. Vasudevan
2010-10-01
Full Text Available Intraplate seismicity occurs in central and northern Canada, but the underlying origin and dynamics remain poorly understood. Here, we apply a graph theoretic approach to characterize the statistical structure of spatiotemporal clustering exhibited by intraplate seismicity, a direct consequence of the underlying nonlinear dynamics. Using a recently proposed definition of "recurrences" based on record breaking processes (Davidsen et al., 2006, 2008), we have constructed directed graphs using catalogue data for three selected regions (Region 1: 45°−48° N/74°−80° W; Region 2: 51°−55° N/77°−83° W; and Region 3: 56°−70° N/65°−95° W), with attributes drawn from the location, origin time and the magnitude of the events. Based on comparisons with a null model derived from a Poisson distribution or Monte Carlo shuffling of the catalogue data, our results provide strong evidence in support of spatiotemporal correlations of seismicity in all three regions considered. Similar evidence for spatiotemporal clustering has been documented using seismicity catalogues for southern California, suggesting possible similarities in the underlying earthquake dynamics of both regions despite huge differences in the variability of seismic activity.
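The record-breaking recurrence rule used to build the directed graphs can be sketched in a few lines. The distance here is plain Euclidean distance over 2-D coordinates, a simplification of the space-time-magnitude attributes used in the study:

```python
from math import dist

def recurrence_graph(coords):
    """Directed 'recurrence' graph of a time-ordered event catalogue:
    event j is a record-breaking recurrence of event i when j lies
    closer to i than every event between them in time."""
    edges = []
    for i in range(len(coords) - 1):
        best = float("inf")
        for j in range(i + 1, len(coords)):
            d = dist(coords[i], coords[j])
            if d < best:          # j breaks the distance record for i
                best = d
                edges.append((i, j))
    return edges
```

The statistics of the resulting edge set (in-degrees, recurrence distances, recurrence times) are then compared against the same quantities computed on a Poisson or shuffled null catalogue.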
A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange
Douik, Ahmed S.
2014-05-11
In this paper, we introduce a game-theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session as a non-cooperative potential game among self-interested players. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto-optimal solution. Through extensive simulations, our approach is compared to the best performance that can be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
Freeberg, Todd M; Lucas, Jeffrey R
2012-02-01
One aim of this study was to apply information theoretical analyses to understanding the structural complexity of chick-a-dee calls of Carolina chickadees, Poecile carolinensis. A second aim of this study was to compare this structural complexity to that of the calls of black-capped chickadees, P. atricapillus, described in an earlier published report (Hailman, Ficken, & Ficken, 1985). Chick-a-dee calls were recorded from Carolina chickadees in a naturalistic observation study in eastern Tennessee. Calls were analyzed using approaches from information theory, including transition probability matrices, Zipf's rules, entropies, and information coding capacities of calls and notes of calls. As described for black-capped chickadees, calls of Carolina chickadees exhibited considerable structural complexity. Most results suggested that the call of Carolina chickadees is more structurally complex than that of black-capped chickadees. These findings add support to the growing literature on the complexity of this call system in Paridae species. Furthermore, these results point to the feasibility of detailed cross-species comparative analyses that may allow strong testing of hypotheses regarding signal evolution.
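Two of the information-theoretic quantities used in the study, note-type entropy and note-to-note transition probabilities, reduce to short computations over a call's note sequence. The note alphabet and toy sequences below are hypothetical stand-ins for the A/B/C/D note types of chick-a-dee calls:

```python
import math
from collections import Counter

def first_order_entropy(seq):
    """Shannon entropy (bits) of the note-type distribution in a call."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def transition_matrix(seq, symbols):
    """Row-normalized note-to-note transition probability matrix."""
    idx = {s: k for k, s in enumerate(symbols)}
    m = [[0.0] * len(symbols) for _ in symbols]
    for a, b in zip(seq, seq[1:]):
        m[idx[a]][idx[b]] += 1.0
    for row in m:
        tot = sum(row)
        if tot:
            for k in range(len(row)):
                row[k] /= tot
    return m
```

Higher-order entropies and coding capacities follow the same pattern, computed over pairs or longer n-grams of notes rather than single notes.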
Mean field game theoretic approach for security in mobile ad-hoc networks
Wang, Yanwei; Tang, Helen; Yu, F. Richard; Huang, Minyi
2013-05-01
Game theory can provide a useful tool to study the security problem in mobile ad hoc networks (MANETs). Most existing work on applying game theories to security only considers two players in the security game model: an attacker and a defender. While this assumption is valid for a network with centralized administration, it may not be realistic in MANETs, where centralized administration is not available. Consequently, each individual node in a MANET should be treated separately in the security game model. In this paper, using recent advances in mean field game theory, we propose a novel game theoretic approach for security in MANETs. Mean field game theory provides a powerful mathematical tool for problems with a large number of players. Since security defence mechanisms consume precious system resources (e.g., energy), the proposed scheme considers not only the security requirement of MANETs but also the system resources. In addition, each node only needs to know its own state information and the aggregate effect of the other nodes in the MANET. Therefore, the proposed scheme is a fully distributed scheme. Simulation results are presented to illustrate the effectiveness of the proposed scheme.
Kannan, Srinivasa Ramanujam; Chandrasekar, V.
2016-05-01
Even though the two rain-measuring instruments onboard TRMM, the radar and the radiometer, observe the same rain scenes, they are fundamentally different instruments. Radar is an active instrument that measures the backscatter component from the vertical rain structure, whereas the radiometer is a passive instrument that obtains an integrated observation of the full depth of the cloud and rain structure. Further, their spatial resolutions on the ground are different. Nevertheless, both instruments observe the same rain scene and retrieve three-dimensional rainfall products. Hence it is only natural to ask what information about radiometric observations can be retrieved directly from radar observations. While there are several ways to answer this question, an information-theoretic approach using neural networks is described in the present work to determine whether radiometer observations can be predicted from radar observations. A database of TMI brightness temperatures and collocated TRMM vertical attenuation-corrected reflectivity factors from the year 2012 was considered. The entire database was further classified according to surface type. Separate neural networks were trained for land and ocean, and the results are presented.
A graph theoretical approach for assessing bio-macromolecular complex structural stability.
Del Carpio, Carlos Adriel; Iulian Florea, Mihai; Suzuki, Ai; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Ichiishi, Eiichiro; Miyamoto, Akira
2009-11-01
Fast and proper assessment of bio-macromolecular complex structural rigidity as a measure of structural stability can be useful in systematic studies to predict molecular function, and can also enable the design of rapid scoring functions to rank automatically generated bio-molecular complexes. Based on the graph theoretical approach of Jacobs et al. [Jacobs DJ, Rader AJ, Kuhn LA, Thorpe MF (2001) Protein flexibility predictions using graph theory. Proteins: Struct Funct Genet 44:150-165] for expressing molecular flexibility, we propose a new scheme to analyze the structural stability of bio-molecular complexes. This analysis is performed in terms of the identification, in interacting subunits, of clusters of flappy amino acids (those constituting regions of potential internal motion) that undergo an increase in rigidity upon complex formation. Gains in structural rigidity of the interacting subunits upon bio-molecular complex formation can be evaluated by expanding the network of intra-molecular inter-atomic interactions to include inter-molecular inter-atomic interaction terms. We propose two indices for quantifying this change: one local, which expresses localized (at the amino acid level) structural rigidity, and one global, which expresses overall structural stability of the complex. The new system is validated with a series of protein complex structures reported in the Protein Data Bank. Finally, the indices are used as scoring coefficients to rank automatically generated protein complex decoys.
Theoretical and Experimental Approaches to the Dark Energy and the Cosmological Constant Problem
Borzou, Ahmad
2016-01-01
Theoretical and Experimental Approaches to the Dark Energy and the Cosmological Constant Problem. Ahmad Borzou, Ph.D. Advisor: Kenichi Hatakeyama, Ph.D. The cosmological constant problem is one of the most pressing problems of physics at this time. In this dissertation the problem and a set of widely discussed theoretical solutions to it are reviewed. It is shown that a recently developed Lorentz gauge theory of gravity can provide a natural solution. In the theory presented here, the metric is not dynamical, and it is shown that the Schwarzschild metric is an exact solution. Also, it is proven that de Sitter space is an exact vacuum solution, and as a result the theory is able to explain the expansion of the universe with no need for dark energy. Renormalizability of the theory is studied as well. It is also shown that, under a certain condition, the theory is power-counting renormalizable. Supersymmetry provides an alternative solution to the cosmological problem as well. The idea behind supersymmetry is rev...
Energy-Efficient Resource Allocation in Wireless Networks: An Overview of Game-Theoretic Approaches
Meshkati, Farhad; Schwartz, Stuart C
2007-01-01
An overview of game-theoretic approaches to energy-efficient resource allocation in wireless networks is presented. Focusing on multiple-access networks, it is demonstrated that game theory can be used as an effective tool to study resource allocation in wireless networks with quality-of-service (QoS) constraints. A family of non-cooperative (distributed) games is presented in which each user seeks to choose a strategy that maximizes its own utility while satisfying its QoS requirements. The utility function considered here measures the number of reliable bits that are transmitted per joule of energy consumed and, hence, is particularly suitable for energy-constrained networks. The actions available to each user in trying to maximize its own utility are at least the choice of the transmit power and, depending on the situation, the user may also be able to choose its transmission rate, modulation, packet size, multiuser receiver, multi-antenna processing algorithm, or carrier allocation strategy. The best-respo...
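The bits-per-joule utility described here has a standard textbook form, u(p) = R·f(γ)/p with an efficiency function f(γ) = (1 - e^(-γ))^M for M-bit packets; maximizing u over the transmit power p amounts to solving e^γ = 1 + Mγ for the optimal SINR. A grid-search sketch, with all numbers purely illustrative:

```python
import math

def utility(p, h, noise, rate=1e4, m=80):
    """Bits-per-joule utility u(p) = rate * f(SINR) / p, using the common
    packet-success efficiency proxy f(g) = (1 - e^-g)^m for m-bit packets."""
    g = h * p / noise                      # SINR for channel gain h
    return rate * (1.0 - math.exp(-g)) ** m / p

def best_response_power(h, noise, p_grid):
    """A user's best response: the grid power that maximizes its own utility."""
    return max(p_grid, key=lambda p: utility(p, h, noise))
```

Because each user's optimum pins down a target SINR rather than a target rate, the resulting power control game has the best-response structure the article analyzes.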
Hao Guo
2015-01-01
Full Text Available Recent experimental progress allows for exploring some important physical quantities of ultracold Fermi gases, such as the compressibility, spin susceptibility, viscosity, optical conductivity, and spin diffusivity. Theoretically, these quantities can be evaluated from suitable linear response theories. For a BCS superfluid, it has been found that gauge-invariant linear response theories can be fully consistent with some stringent consistency constraints. When the theory is generalized beyond the BCS regime, serious difficulties arise in satisfying the gauge invariance conditions. In this paper, we construct density and spin linear response theories which are formally gauge invariant for a Fermi gas undergoing the BCS-Bose-Einstein condensation (BEC) crossover, especially below the superfluid transition temperature Tc. We adopt a particular t-matrix approach, which is close to the G0G formalism, to incorporate noncondensed pairing in the normal state. We explicitly show that the fundamental constraints imposed by the Ward identities and the Q-limit Ward identity are indeed satisfied.
He, Meilin; Devine, Laura; Zhuang, Jun
2017-08-11
The government, private sectors, and other users of the Internet are increasingly faced with the risk of cyber incidents. Damage to computer systems and theft of sensitive data caused by cyber attacks have the potential to result in lasting harm to entities under attack, or to society as a whole. The effects of cyber attacks are not always obvious, and detecting them is not a simple proposition. As the U.S. federal government believes that information sharing on cybersecurity issues among organizations is essential to safety, security, and resilience, the importance of trusted information exchange has been emphasized to support public and private decision making by encouraging the creation of the Information Sharing and Analysis Center (ISAC). Through a decision-theoretic approach, this article provides new perspectives on the ISAC and the advent of the new Information Sharing and Analysis Organizations (ISAOs), which are intended to provide similar benefits to organizations that cannot fit easily into the ISAC structure. To help understand the processes of information sharing against cyber threats, this article illustrates 15 representative information sharing structures between the ISAC, government, and other participating entities, and discusses the strategic interactions between the different stakeholders. This article also identifies the costs of information sharing and information security borne by the different parties in this public-private partnership both before and after cyber attacks, as well as the two main benefits. This article provides perspectives on the mechanism of information sharing and some detailed cost-benefit analysis. © 2017 Society for Risk Analysis.
Game Theoretical Approaches for Transport-Aware Channel Selection in Cognitive Radio Networks
Chen Shih-Ho
2010-01-01
Full Text Available Effectively sharing channels among secondary users (SUs) is one of the greatest challenges in cognitive radio networks (CRNs). In the past, many studies have proposed channel selection schemes at the physical or the MAC layer that allow SUs to respond swiftly to spectrum states. However, these schemes may not enhance performance, owing to the slow response of the transport-layer flow control mechanism. This paper presents a cross-layer design framework called Transport Aware Channel Selection (TACS) that optimizes transport throughput based on the state, such as RTT and congestion window size, of the TCP flow control mechanism. We formulate the TACS problem as two different game-theoretic approaches, a Selfish Spectrum Sharing Game (SSSG) and a Cooperative Spectrum Sharing Game (CSSG), and present novel distributed heuristic algorithms to optimize TCP throughput. Computer simulations show that SSSG and CSSG can double the SUs' throughput over the current MAC-based scheme when primary users (PUs) use their channel infrequently, with a 12% to 100% throughput increase when PUs are more active. The simulation results also show that CSSG performs up to 20% better than SSSG in terms of throughput.
Combination of real options and game-theoretic approach in investment analysis
Arasteh, Abdollah
2016-02-01
Investments in technology account for a large share of capital investment by major companies. Assessing such investment projects is critical to the efficient assignment of resources. Viewing investment projects as real options, this paper develops a method for assessing technology investment decisions in the presence of uncertainty and competition. It combines game-theoretic models of strategic market interactions with a real options approach. Several key characteristics underlie the model. First, our study shows how investment strategies depend on competitive interactions. Under the force of competition, firms hurry to exercise their options early. The resulting "hurry equilibrium" destroys the option value of waiting and leads to aggressive investment behavior. Second, we obtain optimal investment policies and critical investment thresholds. This suggests that integration will be unavoidable in some information product markets. The model yields new intuitions about the forces that shape market behavior as observed in the information technology industry. It can be used to specify optimal investment policies for technology innovations and adoptions, multistage R&D, and investment projects in information technology.
Arienzo Loredana
2010-01-01
Full Text Available The problem of collaborative tracking of mobile nodes in wireless sensor networks is addressed. By using a novel metric derived from the energy model in LEACH (W.B. Heinzelman, A.P. Chandrakasan and H. Balakrishnan, Energy-Efficient Communication Protocol for Wireless Microsensor Networks, in: Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS '00), 2000) and aiming at a resource-efficient solution, the approach adopts a strategy of combining target tracking with node selection procedures in order to select informative sensors and minimize the energy consumption of the tracking task. We lay out a cluster-based architecture to address the limitations in computational power, battery capacity and communication capacities of the sensor devices. The computation of the posterior Cramer-Rao bound (PCRB) based on received signal strength measurements has been considered. To track mobile nodes, two particle filters are used: the bootstrap particle filter and the unscented particle filter, both in the centralized and in the distributed manner. Their performances are compared with the theoretical lower bound, the PCRB. To save energy, a node selection procedure based on greedy algorithms is proposed. The node selection problem is formulated as a cross-layer optimization problem, and it is solved using greedy algorithms.
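The bootstrap particle filter named in the abstract can be sketched for a 1-D target. The random-walk motion model and the Gaussian likelihood below are illustrative stand-ins for the received-signal-strength measurement model used in the paper:

```python
import math
import random

random.seed(7)

def bootstrap_pf(observations, n=500, q=0.1, r=0.5):
    """Bootstrap particle filter: propagate particles through a random-walk
    motion model, weight by a Gaussian measurement likelihood, resample."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        parts = [p + random.gauss(0.0, q) for p in parts]          # propagate
        w = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in parts]   # weight
        parts = random.choices(parts, weights=w, k=n)              # resample
        estimates.append(sum(parts) / n)                           # estimate
    return estimates
```

The PCRB against which the paper benchmarks gives the best mean-square error any such estimator could achieve under the assumed measurement model; the filter's empirical error is compared to that bound.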
Praveena, R. [Department of Chemistry, Bannari Amman Institute of Technology, Sathyamangalam, Erode, Tamil Nadu (India); Sadasivam, K. [Department of Physics, Bannari Amman Institute of Technology, Sathyamangalam, Erode, Tamil Nadu (India)
2016-05-06
Synthetic antioxidants such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are found to be toxic; hence non-carcinogenic, naturally occurring radical scavengers, especially flavonoids, have gained considerable importance in the past two decades. In the present investigation, the radical scavenging activity of C-glycosyl flavonoids is evaluated using a theoretical approach, which could broaden their scope in therapeutic applications. Gas- and solvent-phase studies of the structural and molecular characteristics of the C-glycosyl flavonoid isovitexin are carried out for the hydrogen atom transfer (HAT), electron transfer-proton transfer (ET-PT) and sequential proton loss electron transfer (SPLET) mechanisms by density functional theory (DFT) using hybrid functionals. The computed values of the adiabatic ionization potential, electron affinity, hardness, softness, electronegativity and electrophilicity index indicate that isovitexin possesses good radical scavenging activity. The behavior of the different -OH groups in polyphenolic compounds is assessed by considering the electronic effects of the neighbouring groups and the overall geometry of the molecule, which in turn helps in analyzing the antioxidant capacity of the polyphenolic molecule. The studies indicate that H-atom abstraction from the 4'-OH site is preferred during the radical scavenging process. From Mulliken spin density analysis and the FMOs, the B-ring is found to be the more delocalized center and capable of electron donation. Comparison of the antioxidant activity of vitexin and isovitexin leads to the conclusion that isovitexin acts as the better radical scavenger. This is evidence for the importance of the position of the glucose unit in the flavonoid.
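For the HAT mechanism, the key DFT-derived descriptor is the O-H bond dissociation enthalpy, BDE = H(ArO·) + H(H·) - H(ArOH). The unit conversion below is exact; the enthalpy values in the example are hypothetical, and the default H-atom enthalpy is a commonly used approximate DFT value, not a number from this study:

```python
HARTREE_TO_KCAL = 627.509  # 1 hartree in kcal/mol

def bde(h_parent, h_radical, h_h_atom=-0.49792):
    """O-H bond dissociation enthalpy (kcal/mol) for the HAT step
    ArOH -> ArO* + H*; inputs are total enthalpies in hartree.
    The default H-atom enthalpy is an assumed typical DFT value."""
    return (h_radical + h_h_atom - h_parent) * HARTREE_TO_KCAL
```

The ET-PT and SPLET descriptors (ionization potential, proton dissociation enthalpy, proton affinity, electron transfer enthalpy) follow the same pattern: differences of computed total enthalpies of the relevant charged or deprotonated species.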
M. Renée Umstattd Meyer
2016-09-01
Full Text Available Time spent sitting has been associated with an increased risk of diabetes, cancer, obesity, and mental health impairments. However, 75% of Americans spend most of their days sitting, with work-sitting accounting for 63% of total daily sitting time. Little research examining theory-based antecedents of standing or sitting has been conducted. This lack of solid groundwork makes it difficult to design effective intervention strategies to decrease sitting behaviors. Using the Theory of Planned Behavior (TPB) as our theoretical lens to better understand factors related to beneficial standing behaviors already being practiced, we examined relationships between TPB constructs and time spent standing at work among "positive deviants" (those successful in behavior change). Experience sampling methodology (ESM), four times a day (midmorning, before lunch, afternoon, and before leaving work) for 5 consecutive workdays (Monday to Friday), was used to assess employees' standing time. TPB scales assessing attitude (α = 0.81–0.84), norms (α = 0.83), perceived behavioral control (α = 0.77), and intention (α = 0.78) were developed using recommended methods and administered once on the Friday before the ESM surveys started. ESM data are hierarchically nested; therefore we tested our hypotheses using multilevel structural equation modeling with Mplus. Hourly full-time university employees (n = 50; 70.6% female, 84.3% white, mean age = 44 (SD = 11), 88.2% in full-time staff positions with sedentary occupation types (time at desk while working ≥6 hours/day)) participated. A total of 871 daily surveys were completed. Only perceived behavioral control (β = 0.45, p < 0.05) was related to work-standing at the event level (model fit: just fit); mediation through intention was not supported. This is the first study to examine theoretical antecedents of real-time work-standing in a naturalistic field setting among positive deviants. These relationships should be further
Subjective evaluation and electroacoustic theoretical validation of a new approach to audio upmixing
Usher, John S.
Audio signal processing systems for converting two-channel (stereo) recordings to four or five channels are increasingly relevant. These audio upmixers can be used with conventional stereo sound recordings and reproduced with multichannel home theatre or automotive loudspeaker audio systems to create a more engaging and natural-sounding listening experience. This dissertation discusses existing approaches to audio upmixing for recordings of musical performances and presents specific design criteria for a system to enhance spatial sound quality. A new upmixing system is proposed and evaluated according to these criteria, and a theoretical model of its behavior is validated using empirical measurements. The new system removes short-term correlated components from two electronic audio signals using a pair of adaptive filters, updated according to a frequency-domain implementation of the normalized least-mean-square algorithm. The major difference between the new system and all extant audio upmixers is that unsupervised time-alignment of the input signals (typically by up to +/-10 ms) as a function of frequency (typically using a 1024-band equalizer) is accomplished by the non-minimum-phase adaptive filter. Two new signals are created from the weighted difference of the inputs, and are then radiated from two loudspeakers behind the listener. According to the consensus in the literature on the effect of interaural correlation on auditory image formation, the self-orthogonalizing properties of the algorithm ensure minimal distortion of the frontal source imagery and natural-sounding, enveloping reverberance (ambiance) imagery. Performance evaluation of the new upmix system was accomplished in two ways: firstly, using empirical electroacoustic measurements, which validate a theoretical model of the system; and secondly, with formal listening tests, which investigated auditory spatial imagery with a graphical mapping tool and a preference experiment. Both electroacoustic
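The core operation, an adaptive filter that removes the short-term correlated component between the two input channels so that the residual can feed the rear loudspeakers, can be sketched with a time-domain NLMS update. The dissertation's system is frequency-domain; this time-domain simplification is for illustration only:

```python
import random

def nlms_cancel(x, d, taps=16, mu=0.5, eps=1e-8):
    """NLMS adaptive canceller: subtract from d the component correlated
    with x; the residual e is the decorrelated (ambience) signal."""
    w = [0.0] * taps
    e = [0.0] * len(d)
    for n in range(taps - 1, len(d)):
        u = x[n - taps + 1:n + 1][::-1]      # u[k] = x[n - k]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e[n] = d[n] - y                       # prediction error = residual
        norm = sum(ui * ui for ui in u) + eps
        w = [wi + mu * e[n] * ui / norm for wi, ui in zip(w, u)]
    return e, w
```

When `d` contains a delayed, scaled copy of `x` (as a panned source would be between stereo channels), the filter weights converge to that delay-and-gain pattern and the residual retains only the uncorrelated, reverberant content.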
Fuchs, G. W.; Cuppen, H. M.; Ioppolo, S.; Romanzin, C.; Bisschop, S. E.; Andersson, S.; van Dishoeck, E. F.; Linnartz, H.
2009-10-01
Context: Hydrogenation reactions of CO in inter- and circumstellar ices are regarded as an important starting point in the formation of more complex species. Previous laboratory measurements by two groups of the hydrogenation of CO ices provided controversial results about the formation rate of methanol. Aims: Our aim is to resolve this controversy by an independent investigation of the reaction scheme for a range of H-atom fluxes and different ice temperatures and thicknesses. To fully understand the laboratory data, the results are interpreted theoretically by means of continuous-time, random-walk Monte Carlo simulations. Methods: Reaction rates are determined by using a state-of-the-art ultra high vacuum experimental setup to bombard an interstellar CO ice analog with H atoms at room temperature. The reaction of CO + H into H2CO and subsequently CH3OH is monitored by a Fourier transform infrared spectrometer in a reflection absorption mode. In addition, after each completed measurement, a temperature programmed desorption experiment is performed to identify the produced species according to their mass spectra and to determine their abundance. Different H-atom fluxes, morphologies, and ice thicknesses are tested. The experimental results are interpreted using Monte Carlo simulations. This technique takes into account the layered structure of CO ice. Results: The formation of both formaldehyde and methanol via CO hydrogenation is confirmed at low temperature (T = 12{-}20 K). We confirm that the discrepancy between the two Japanese studies is caused mainly by a difference in the applied hydrogen atom flux, as proposed by Hidaka and coworkers. The production rate of formaldehyde is found to decrease and the penetration column to increase with temperature. Temperature-dependent reaction barriers and diffusion rates are inferred using a Monte Carlo physical chemical model. The model is extended to interstellar conditions to compare with observational H2CO/CH3OH data.
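The continuous-time Monte Carlo modeling can be illustrated with a minimal Gillespie-style simulation of the sequential hydrogenation chain CO → HCO → H2CO → CH3O → CH3OH. The per-step rates are arbitrary placeholders, not the temperature-dependent barriers and diffusion rates inferred in the paper, and the layered CO ice structure is ignored:

```python
import random

random.seed(3)

RATES = [1.0, 1.0, 0.5, 0.5]  # assumed effective rates per molecule, per step

def gillespie_hydrogenation(n_co=1000, t_end=40.0):
    """Gillespie-style kinetic Monte Carlo of the sequential surface chain
    CO -> HCO -> H2CO -> CH3O -> CH3OH (first-order effective kinetics)."""
    counts = [n_co, 0, 0, 0, 0]
    t = 0.0
    while True:
        props = [RATES[k] * counts[k] for k in range(4)]  # propensities
        total = sum(props)
        if total == 0.0:
            break
        t += random.expovariate(total)                    # time to next event
        if t > t_end:
            break
        r = random.random() * total                       # pick which reaction
        k = 0
        while r > props[k]:
            r -= props[k]
            k += 1
        counts[k] -= 1
        counts[k + 1] += 1
    return counts
```

Tracking `counts` over time yields the H2CO/CH3OH abundance curves that, in the paper's far richer model, are fitted to the infrared and desorption data.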
Sexuality Education for Young People: A Theoretically Integrated Approach from Australia
Goldman, Juliette D. G.
2010-01-01
Background: Teachers of sexuality education can often be uncertain about what theoretical basis and pedagogical strategies to use in their teaching. Sexuality education programmes designed by teachers can often show few evident theoretical principles applied in their construction. Thus, there seems to be a dearth of evidence of ways…
A Graph Theoretic Approach for Hydraulic Fracturing and Wellbore Leakage Risk Modeling
Glosser, D.; Rose, K.; Bauer, J. R.; Warner, T.
2016-12-01
Recent large-scale development of unconventional formations for fossil energy has raised concerns over the potential for fluid leakage between subsurface systems and wellbores. This is particularly true in regions with an extensive drilling history, where spatial densities of wellbores are higher, and where significant uncertainties in the location and mechanical integrity of such wellbores exist. The generation of induced fracture networks during hydraulic fracturing may increase subsurface connectivity and create the potential for unwanted fluid migration between operational and legacy wellbores and subsurface fracture networks. We present a graph theoretic approach for identifying geospatial regions and wellbores at increased risk for subsurface connectivity based on wellbore proximity and local geologic characteristics. The algorithm transforms user-supplied geospatial data (geologic and wellbore x, y, z) into a graph structure, where wellbores are represented as nodes and potential overlapping fracture network zones are represented as edges. The algorithm can be used to complement existing fracture models to better account for the reach of induced fractures, and to identify spatial extents at increased risk of unwanted subsurface connectivity. Additionally, the model can be used to identify regions in need of geophysical detection methods for locating undocumented wells. As a result, the method can be part of a cumulative strategy to reduce the uncertainty inherent to combined geologic and engineered systems. The algorithm has been successfully tested against a known leakage scenario in Pennsylvania. In addition to identifying wells associated with the leakage event, the algorithm identified two other higher-risk networks in the region. The algorithm output provides valuable information for industry to develop environmentally safe drilling and injection plans, and for regulators to identify specific wellbores at greater risk for leakage and to develop targeted
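The graph construction itself, wellbores as nodes with edges wherever induced-fracture zones could overlap, reduces to a proximity test plus connected components. A union-find sketch under the simplifying assumption of a single uniform fracture reach (the actual algorithm conditions edges on local geologic characteristics as well):

```python
def risk_networks(wells, reach):
    """Union-find over wellbores: link two wells whose lateral separation
    is within the assumed fracture reach; return multi-well risk groups."""
    n = len(wells)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            dx = wells[i][0] - wells[j][0]
            dy = wells[i][1] - wells[j][1]
            if (dx * dx + dy * dy) ** 0.5 <= reach:
                parent[find(i)] = find(j)   # union the two networks
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) > 1]
```

Each returned group is a candidate "higher-risk network" in the abstract's sense: a set of wells that could become hydraulically connected if fracturing occurs near any one of them.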
Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach...
An algorithmic and information-theoretic approach to multimetric index construction
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.
2013-01-01
The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been steadily growing in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically has developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We apply the algorithm to simulated data to illustrate the predictive capacity of the final MMIs and to real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified
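The general idea of information-theoretically selecting a composite index can be sketched as follows. This is not the paper's algorithm: the choice of AIC as the information criterion, the equal-weight mean as the composite, and the exhaustive subset search are all simplifying assumptions for illustration.

```python
import itertools
import math

def aic_linear_fit(x, y):
    """AIC of a least-squares fit y ~ a + b*x with Gaussian errors
    (3 estimated parameters: intercept, slope, error variance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return n * math.log(rss / n) + 2 * 3

def best_mmi(metrics, disturbance, max_size=3):
    """Exhaustively score metric subsets: each subset defines a
    candidate MMI (here simply the mean of its metrics), and the
    subset whose MMI has the lowest AIC against the disturbance
    gradient wins.  Returns (best_aic, best_subset)."""
    best = (float("inf"), None)
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(metrics, k):
            idx = [sum(metrics[m][i] for m in subset) / k
                   for i in range(len(disturbance))]
            best = min(best, (aic_linear_fit(disturbance, idx), subset))
    return best
```

A value-weighted variant, as the abstract mentions, would replace the plain mean with a weighted sum reflecting expert opinion.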
RESEARCH ON SOME THEORETICAL PROBLEMS OF MAP DATA HANDLING USING FRACTAL APPROACH
Anonymous
2000-01-01
Some theoretical problems of fractal geographical map data handling are discussed, and some new methods for introducing, developing, comparing and estimating fractal dimensions are proposed in this paper.
Explaining teacher-student interactions in early childhood: an interpersonal theoretical approach
Thijs, J.; Koomen, H.; Roorda, D.; ten Hagen, J.
2011-01-01
The present study used an interpersonal theoretical perspective to examine the interactions between Dutch teachers and kindergartners. Interpersonal theory provides explanations for dyadic interaction behaviors by stating that complementary behaviors (dissimilar in terms of control, and similar in t
Theoretical-game estimate of radiosystem's efficiency based on entropy approach
Marigodov, V. K.
2011-01-01
A game-theoretic synthesis of a radio communications system is presented for a conflict situation of interaction between the operators of a radio communications system and a radio masking system, taking into consideration information limitations imposed on the differential entropies of the players' mixed strategies.
Anderson, Rachel M; Yancey, David F; Zhang, Liang; Chill, Samuel T; Henkelman, Graeme; Crooks, Richard M
2015-05-19
The objective of the research described in this Account is the development of high-throughput computational-based screening methods for discovery of catalyst candidates and subsequent experimental validation using appropriate catalytic nanoparticles. Dendrimer-encapsulated nanoparticles (DENs), which are well-defined 1-2 nm diameter metal nanoparticles, fulfill the role of model electrocatalysts. Effective comparison of theory and experiment requires that the theoretical and experimental models map onto one another perfectly. We use novel synthetic methods, advanced characterization techniques, and density functional theory (DFT) calculations to approach this ideal. For example, well-defined core@shell DENs can be synthesized by electrochemical underpotential deposition (UPD), and the observed deposition potentials can be compared to those calculated by DFT. Theory is also used to learn more about structure than can be determined by analytical characterization alone. For example, density functional theory molecular dynamics (DFT-MD) was used to show that the core@shell configuration of Au@Pt DENs undergoes a surface reconstruction that dramatically affects its electrocatalytic properties. A separate Pd@Pt DENs study also revealed reorganization, in this case a core-shell inversion to a Pt@Pd structure. Understanding these types of structural changes is critical to building correlations between structure and catalytic function. Indeed, the second principal focus of the work described here is correlating structure and catalytic function through the combined use of theory and experiment. For example, the Au@Pt DENs system described earlier is used for the oxygen reduction reaction (ORR) as well as for the electro-oxidation of formic acid. The surface reorganization predicted by theory enhances our understanding of the catalytic measurements. In the case of formic acid oxidation, the deformed nanoparticle structure leads to reduced CO binding energy and therefore
A short course in quantum information theory. An approach from theoretical physics
Diosi, L. [KFKI Research Institute for Partical and Nuclear Physics, Budapest (Hungary)
2007-07-01
This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. (orig.)
Theoretical analysis of two ACO approaches for the traveling salesman problem
Kötzing, Timo; Neumann, Frank; Röglin, Heiko
2012-01-01
Bioinspired algorithms, such as evolutionary algorithms and ant colony optimization, are widely used for different combinatorial optimization problems. These algorithms rely heavily on the use of randomness and are hard to understand from a theoretical point of view. This paper contributes to the theoretical analysis of ant colony optimization and studies this type of algorithm on one of the most prominent combinatorial optimization problems, namely the traveling salesperson problem (TSP). We present a new construction graph and show that it has a stronger local property than one commonly used...
Morini, Filippo; Deleuze, Michael S., E-mail: michael.deleuze@uhasselt.be [Center of Molecular and Materials Modelling, Hasselt University, Agoralaan Gebouw D, B-3590 Diepenbeek (Belgium); Watanabe, Noboru; Takahashi, Masahiko [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai 980-8577 (Japan)
2015-03-07
The influence of thermally induced nuclear dynamics (molecular vibrations) in the initial electronic ground state on the valence orbital momentum profiles of furan has been theoretically investigated using two different approaches. The first of these approaches employs the principles of Born-Oppenheimer molecular dynamics, whereas the so-called harmonic analytical quantum mechanical approach resorts to an analytical decomposition of contributions arising from quantized harmonic vibrational eigenstates. In spite of their intrinsic differences, the two approaches enable consistent insights into the electron momentum distributions inferred from new measurements employing electron momentum spectroscopy and an electron impact energy of 1.2 keV. Both approaches point out in particular an appreciable influence of a few specific molecular vibrations of A{sub 1} symmetry on the 9a{sub 1} momentum profile, which can be unravelled from considerations on the symmetry characteristics of orbitals and their energy spacing.
Game-Theoretic Approach for Solving Multiobjective Flow Problems on Networks
Maria A. Fonoberova
2005-10-01
Full Text Available The game-theoretic formulation of the multiobjective multicommodity flow problem is considered. The dynamic version of this problem is studied, and an algorithm for solving it, based on the concept of multiobjective games, is proposed. Mathematics Subject Classification 2000: 90B10, 90C35, 90C27, 90C47.
Hijmans, E.J.S.; Selm, M. van
2002-01-01
This article aims to examine existential meaning constructions from an action theoretical perspective in a specific Internet environment: the personal homepage. Personal homepages are on-line multi-media documents addressing the question ‘Who am I?’ Authors of personal homepages provide information
A system-theoretical approach to selective grid coarsening of reservoir models
Vakili-Ghahani, S.A.; Jansen, J.D.
2011-01-01
From a system-theoretical point of view and for a given configuration of wells, there are only a limited number of degrees of freedom in the input–output dynamics of a reservoir system. This means that a large number of combinations of the state variables (pressure and saturation values) are not act
Yes-no question marking in Italian dialects: a typological, theoretical and experimental approach.
Lusini, Sara
This dissertation provides an account of polar questions in Italian dialects from a typological, theoretical and empirical perspective. Both data from the existing literature and new data from the author’s fieldwork are included in this study. It is shown that Italian dialects display a relatively
Distinguishing spectral and temporal properties of speech using an information-theoretic approach
Christiansen, Thomas Ulrich; Greenberg, Steven
2007-01-01
The spectro-temporal coding of Danish consonants was investigated using an information-theoretic analysis. Listeners identified eleven consonants spoken in CV[l] context. In each condition, only a portion of the original spectrum was played. Center frequencies of 750, 1500 and 3000 Hz, were prese...
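The information-theoretic analysis mentioned above typically rests on the transmitted information (mutual information) computed from a consonant confusion matrix. A minimal sketch, assuming only a matrix of identification counts (the function name and the toy matrix in the test are illustrative, not from the study):

```python
import math

def transmitted_information(confusions):
    """Mutual information I(stimulus; response) in bits, estimated
    from a confusion matrix of counts where confusions[i][j] is the
    number of times stimulus i was identified as response j."""
    total = sum(sum(row) for row in confusions)
    p_s = [sum(row) / total for row in confusions]
    p_r = [sum(confusions[i][j] for i in range(len(confusions))) / total
           for j in range(len(confusions[0]))]
    info = 0.0
    for i, row in enumerate(confusions):
        for j, n in enumerate(row):
            if n:  # 0 * log(0) contributes nothing
                p = n / total
                info += p * math.log2(p / (p_s[i] * p_r[j]))
    return info
```

Perfect identification of two equiprobable consonants yields 1 bit; chance-level confusion yields 0 bits, so the measure directly quantifies how much consonant identity survives a given spectral condition.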
Fronczak, Piotr
2015-01-01
Using the formalism of the biased random walk in random uncorrelated networks with arbitrary degree distributions, we develop a theoretical approach to the critical packet generation rate in traffic dynamics based on a routing strategy with local information. We explain the microscopic origins of the transition from the flow phase to the jammed phase and discuss how the node neighbourhood topology affects the transport capacity in uncorrelated and correlated networks.
Alla A. Mityureva
2015-12-01
Full Text Available In the present paper, an approach to representing aggregate information on the cross sections of elementary processes is described, and its justification within mathematical statistics is given. It is motivated by the need for an integrated account of results obtained in different works at different times, by different groups, based on experimental and theoretical studies in various energy ranges. The main attention is paid to the process of electron-atom scattering. As an example of the proposed approach, the aggregate result for the integral cross sections of electron-impact excitation of transitions in the hydrogen atom is presented.
Gaoning He
2010-01-01
Full Text Available A Bayesian game-theoretic model is developed to design and analyze the resource allocation problem in K-user fading multiple access channels (MACs), where the users are assumed to selfishly maximize their average achievable rates with incomplete information about the fading channel gains. In such a game-theoretic study, the central question is whether a Bayesian equilibrium exists, and if so, whether the network operates efficiently at the equilibrium point. We prove that there exists exactly one Bayesian equilibrium in our game. Furthermore, we study the network sum-rate maximization problem by assuming that the users coordinate according to a symmetric strategy profile. This result also serves as an upper bound for the Bayesian equilibrium. Finally, simulation results are provided to show the network efficiency at the unique Bayesian equilibrium and to compare it with other strategies.
Exploring SiSn as a performance enhancing semiconductor: A theoretical and experimental approach
Hussain, Aftab M.
2014-12-14
We present a novel semiconducting alloy, silicon-tin (SiSn), as a channel material for complementary metal oxide semiconductor (CMOS) circuit applications. The material has been studied theoretically using first principles analysis as well as experimentally by fabricating MOSFETs. Our study suggests that the alloy offers interesting possibilities in the realm of silicon band gap tuning. We have explored diffusion of tin (Sn) into the industry's most widely used substrate, silicon (100), as it is the most cost-effective, scalable and CMOS-compatible way of obtaining SiSn. Our theoretical model predicts a higher mobility for p-channel SiSn MOSFETs, due to a lower effective mass of the holes, which has been experimentally validated using the fabricated MOSFETs. We report an increase of 13.6% in the average field-effect hole mobility for SiSn devices compared to silicon control devices.
M. R. Monazzam, A. Nezafat
2007-04-01
Full Text Available Noise is one of the most serious challenges in modern society. In some specific industries, owing to the nature of the process, this challenge is more threatening. This paper describes a means of noise control for a spinning machine based on experimental measurements, along with the advantages and disadvantages of the control procedure. Different factors that may affect the performance of the barrier in this situation are also mentioned. To provide a good estimation of the control measure, a theoretical formula is also described and compared with the field data. Good agreement between the results of the field measurements and the presented theoretical model was achieved. No obvious noise reduction was seen with partial indoor barriers in low-absorbent enclosed spaces, since reflection from multiple hard surfaces is the dominant factor in the tested environment. Finally, the environmental conditions and standards necessary to attain the ideal results are explained.
Control-theoretic Approach to Communication with Feedback: Fundamental Limits and Code Design
Ardestanizadeh, Ehsan
2010-01-01
Feedback communication is studied from a control-theoretic perspective, mapping the communication problem to a control problem in which the control signal is received through the same noisy channel as in the communication problem, and the (nonlinear and time-varying) dynamics of the system determine a subclass of encoders available at the transmitter. The MMSE capacity is defined to be the supremum exponential decay rate of the mean square decoding error. This is upper bounded by the information-theoretic feedback capacity, which is the supremum of the achievable rates. A sufficient condition is provided under which the upper bound holds with equality. For the special class of stationary Gaussian channels, a simple application of Bode's integral formula shows that the feedback capacity, recently characterized by Kim, is equal to the maximum instability that can be tolerated by the controller under a given power constraint. Finally, the control mapping is generalized to the N-sender AWGN multiple access channe...
Kanda, Takashi; Fujita, Masashi; Iida, Osamu; Masuda, Masaharu; Okamoto, Shin; Ishihara, Takayuki; Nanto, Kiyonori; Shiraki, Tatsuya; Takahara, Mitsuyoshi; Sakata, Yasushi; Uematsu, Masaaki
2015-01-01
Several non-invasive methods for measuring pulmonary vascular resistance (PVR) have been proposed to date, but they remain empirical, lacking sufficient accuracy to be used in clinical practice. The aims of this study were to propose a novel echocardiographic measurement of PVR based on a theoretical formula and investigate the feasibility and accuracy of this method in patients with heart failure. Echocardiography was performed in 27 patients before right heart catheterization. Peak tricuspid regurgitation pressure gradient (TRPG), pulmonary regurgitation pressure gradient in end-diastole (PRPGed), and cardiac output derived from the time-velocity integral and the diameter in the left ventricular outflow tract (COLVOT) were measured. PVR based on a theoretical formula (PVRtheo) was calculated as (TRPG − PRPGed)/(3 × COLVOT) in Wood units (WU). The results were compared with PVR obtained by right heart catheterization (PVRcath) using linear regression and Bland-Altman analysis. Mean PVRcath was 2.4±1.4 WU. PVRtheo correlated well with PVRcath (r=0.83, P<0.001). On Bland-Altman analysis the mean difference was 0.1±0.7 WU. The limits of agreement were smaller than for other non-invasive estimations previously reported. The new echocardiographic approach based on a theoretical formula provides a non-invasive and accurate assessment of PVR in patients with heart failure.
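The formula in the abstract is a one-liner; a minimal sketch (units as stated in the abstract: pressure gradients in mmHg, cardiac output in l/min, result in Wood units):

```python
def pvr_theoretical(trpg, prpg_ed, co_lvot):
    """PVRtheo = (TRPG - PRPGed) / (3 * COLVOT), as given in the
    abstract.

    trpg:    peak tricuspid regurgitation pressure gradient (mmHg)
    prpg_ed: end-diastolic pulmonary regurgitation gradient (mmHg)
    co_lvot: cardiac output from the LVOT (l/min)
    Returns PVR in Wood units (WU).
    """
    if co_lvot <= 0:
        raise ValueError("cardiac output must be positive")
    return (trpg - prpg_ed) / (3 * co_lvot)
```

For example, TRPG of 45 mmHg, PRPGed of 9 mmHg and a cardiac output of 4 l/min give (45 − 9)/(3 × 4) = 3 WU, in the range of the catheterization values reported.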
A field theoretic approach to the energy momentum tensor for theories coupled with gravity
Mukherjee, Pradip; Saha, Anirban; Roy, Amit Singha
2016-01-01
We provide a field-theoretic algorithm for obtaining the energy momentum tensor (EMT) for gravitationally coupled theories. The method is based on an auxiliary field theory and is equally applicable to both minimal and non-minimal coupling. The algorithm illuminates the connection between the EMT, obtained by functional variation of the metric, and local balance of energy and momentum. Our method is of cardinal value for the proper identification of the EMT in the context of non-minimally coupled gravity...
Security-aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach
2015-01-13
The paper in [1] presented the externality problem within the context of game theory, and this paper aims to solve it for security-aware virtual machine allocation, where a user co-located with a high-value target has significantly more to lose. Section IV sets up the problem as a game model, and Section VI extends the problem to n players and m hypervisors, along with the commonly applied game-theoretic assumptions of rationality.
An extended berry's field theoretic approach to intra-urban system in the toyota planning region
Yano, Keiji
1987-01-01
The interdependence between spatial structure and spatial behavior in the Toyota Planning Region is examined, based on the framework of Berry's (1966) general field theory of spatial behavior. Canonical analysis reveals that journey to work is determined by areal differences in industrial activities, and journey to shop by areal differences in commercial activities and urban land use. While conventional field-theoretic studies in geography have laid emphasis on mere description of the interdep...
The yoga of schemic Grothendieck rings, a topos-theoretical approach
Schoutens, Hans
2010-01-01
We propose a suitable substitute for the classical Grothendieck ring of an algebraically closed field, in which any quasi-projective scheme is represented, while maintaining its non-reduced structure. This yields a more subtle invariant, called the schemic Grothendieck ring, in which we can formulate a form of integration resembling Kontsevich's motivic integration via arc schemes. Whereas the original construction was via definability, we have translated in this paper everything into a topos-theoretic framework.
Schulte, Kjersti Øverbø
2009-01-01
The thesis discusses design activities in the Norwegian seafood processing industry, with focus on cooperation between packaging suppliers and designers. There is a twofold objective: to increase understanding of how industrial design methodology can be utilised in the seafood industry, and to introduce a theoretical foundation for cooperation and communication in industrial design methodology. Sales of fish and seafood represent Norway's third largest export. Seafood is an industrially proces...
Evans, R; Ferguson, E
2014-02-01
While blood donation is traditionally described as a behaviour motivated by pure altruism, the assessment of altruism in the blood donation literature has not been theoretically informed. Drawing on theories of altruism from psychology, economics and evolutionary biology, it is argued that a theoretically derived psychometric assessment of altruism is needed. Such a measure is developed in this study that can be used to help inform both our understanding of the altruistic motives of blood donors and recruitment intervention strategies. A cross-sectional survey (N = 414), with a 1-month behavioural follow-up (time 2, N = 77), was designed to assess theoretically derived constructs from psychological, economic and evolutionary biological theories of altruism. Theory of planned behaviour (TPB) variables and co-operation were also assessed at time 1 and a measure of behavioural co-operation at time 2. Five theoretical dimensions (impure altruism, kinship, self-regarding motives, reluctant altruism and egalitarian warm glow) of altruism were identified through factor analyses. These five altruistic motives differentiated blood donors from non-donors (donors scored higher on impure altruism and reluctant altruism), showed incremental validity over TPB constructs in predicting donor intention, and predicted future co-operative behaviour. These findings show that altruism in the context of blood donation is multifaceted and complex and does not reflect pure altruism. This has implications for recruitment campaigns that focus solely on pure altruism. © 2013 The Authors. Vox Sanguinis published by John Wiley & Sons Ltd. on behalf of International Society of Blood Transfusion.
Yun Chen; Hui Yang
2016-01-01
In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic pe...
δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions
Hengrong Ju
2014-01-01
Full Text Available Decision-theoretic rough set is a quite useful rough set obtained by introducing the decision cost into probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation; such a relation may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, which is based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decreasing, two different algorithms are designed to compute reducts, respectively. The comparisons between these two algorithms show us the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion can generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, and it follows that the uncertainty which comes from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
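The core of any decision-theoretic rough set model is the three-way split of objects into positive, boundary and negative regions by comparing a conditional probability against two cost-derived thresholds. A minimal sketch of that split (the function name and the dict-based probability lookup in the test are illustrative; the δ-cut refinement of the indiscernibility relation is not modelled here):

```python
def three_way_regions(objects, prob, alpha, beta):
    """Partition objects into (positive, boundary, negative) regions.

    prob(x) stands for the conditional probability Pr(X | [x]) of x's
    equivalence class belonging to the target concept X; alpha > beta
    are the acceptance/rejection thresholds that, in Yao's model, are
    derived from the decision costs.
    """
    if not 0 <= beta < alpha <= 1:
        raise ValueError("thresholds must satisfy 0 <= beta < alpha <= 1")
    pos = [x for x in objects if prob(x) >= alpha]          # accept
    neg = [x for x in objects if prob(x) <= beta]           # reject
    bnd = [x for x in objects if beta < prob(x) < alpha]    # defer
    return pos, bnd, neg
```

The reduct comparison in the abstract then amounts to asking how attribute removal moves objects between these regions, and at what total decision cost.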
Conceptual and empirical problems with game theoretic approaches to language evolution
Jeffrey eWatumull
2014-03-01
Full Text Available The importance of game theoretic models to evolutionary theory has been in formulating elegant equations that specify the strategies to be played and the conditions to be satisfied for particular traits to evolve. These models, in conjunction with experimental tests of their predictions, have successfully described and explained the costs and benefits of varying strategies and the dynamics for establishing equilibria in a number of evolutionary scenarios, including especially cooperation, mating, and aggression. Over the past decade or so, game theory has been applied to model the evolution of language. In contrast to the aforementioned scenarios, however, we argue that these models are problematic due to conceptual confusions and empirical deficiencies. In particular, these models conflate the computations and representations of our language faculty (mechanism) with its utility in communication (function); model languages as having different fitness functions for which there is no evidence; depend on assumptions about the starting state of the system, thereby begging the question of how these systems evolved; and, to date, have generated no empirical studies at all. Game theoretic models of language evolution have therefore failed to advance how or why language evolved, or why it has the particular representations and computations that it does. We conclude with some brief suggestions for how this situation might be ameliorated, enabling this important theoretical tool to make substantive empirical contributions.
Neutrosophic Game Theoretic Approach to Indo-Pak Conflict over Jammu-Kashmir
Surapati Pramanik
2014-03-01
Full Text Available The study deals with the enduring conflict between India and Pakistan over Jammu and Kashmir since 1947. The ongoing conflict is analyzed as an enduring rivalry characterized by three major wars (1947-48, 1965, 1971), low-intensity military conflict (Siachen), a mini war at Kargil (1999), internal insurgency, and cross-border terrorism. We examine the progress and the status of the dispute, as well as the dynamics of the India-Pakistan relationship, by considering the influence of the USA and China in crisis dynamics. We discuss the possible solutions offered by the various study groups and persons. Most of the studies were done in a crisp environment. Pramanik and Roy (S. Pramanik and T.K. Roy, Game theoretic model to the Jammu-Kashmir conflict between India and Pakistan, International Journal of Mathematical Archive (IJMA), 4(8) (2013), 162-170) studied a game-theoretic model of the Jammu and Kashmir conflict in a crisp environment. In the present study we extend that game-theoretic model of the Jammu and Kashmir conflict to a neutrosophic environment. We explore the possibilities and develop arguments for an application of the principles of neutrosophic game theory to properly understand the Jammu and Kashmir conflict in terms of the goals and strategy of either side. A standard 2×2 zero-sum game-theoretic model is used to identify an optimal solution.
Lindgren, Kristian; Olbrich, Eckehard
2017-08-01
We study the approach towards equilibrium in a dynamic Ising model, the Q2R cellular automaton, with microscopic reversibility and conserved energy for an infinite one-dimensional system. Starting from a low-entropy state with positive magnetisation, we investigate how the system approaches equilibrium characteristics given by statistical mechanics. We show that the magnetisation converges to zero exponentially. The reversibility of the dynamics implies that the entropy density of the microstates is conserved in the time evolution. Still, it appears as if equilibrium, with a higher entropy density is approached. In order to understand this process, we solve the dynamics by formally proving how the information-theoretic characteristics of the microstates develop over time. With this approach we can show that an estimate of the entropy density based on finite length statistics within microstates converges to the equilibrium entropy density. The process behind this apparent entropy increase is a dissipation of correlation information over increasing distances. It is shown that the average information-theoretic correlation length increases linearly in time, being equivalent to a corresponding increase in excess entropy.
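The Q2R dynamics studied above can be sketched concretely. A minimal one-dimensional version, under the usual assumptions (±1 spins on a ring, even system size, the two sublattices updated alternately): a spin flips exactly when its two neighbours are antiparallel, which costs no energy, and applying the same half-step twice undoes it, exhibiting the microscopic reversibility the abstract relies on.

```python
def q2r_step(spins, parity):
    """One half-step of a 1D Q2R update on a ring of +/-1 spins.

    Sites with index of the given parity flip iff their two neighbours
    are antiparallel (their sum is zero), so the nearest-neighbour
    Ising energy is unchanged.  Neighbours have the opposite parity,
    so the simultaneous update is consistent, and the half-step is
    its own inverse: the rule is reversible.
    """
    n = len(spins)
    out = list(spins)
    for i in range(parity, n, 2):
        if spins[(i - 1) % n] != spins[(i + 1) % n]:  # flip is free
            out[i] = -spins[i]
    return out

def energy(spins):
    """Nearest-neighbour Ising energy on the ring (J = 1)."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
```

Iterating the two half-steps from a low-entropy (magnetised) initial state and measuring block statistics is then enough to observe the apparent relaxation the abstract analyses, even though the exact microstate entropy is conserved.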
Information-theoretic approach to quantum error correction and reversible measurement
Nielsen, M A; Schumacher, B; Barnum, H N; Caves, Carlton M.; Schumacher, Benjamin; Barnum, Howard
1997-01-01
Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of ``Maxwell demon,'' for which there is an entropy cost associated with information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.
A K-theoretic approach to the classification of symmetric spaces
Bohle, Dennis
2011-01-01
We show that the classification of the symmetric spaces can be achieved by K-theoretical methods. We focus on Hermitian symmetric spaces of non-compact type, and define K-theory for JB*-triples along the lines of C*-theory. K-groups have to be provided with further invariants in order to yield a classification. Among these are the cycles obtained from so-called grids, intimately connected to the root systems of an underlying Lie algebra and thus reminiscent of the classical classification scheme.
PSYCHOLOGICAL FACTORS AFFECTING THE CONSUMER’S INTENTION TO USE E- COMMERCE: A THEORETICAL APPROACH
Mahmoud Al-dalahmeh; Ali Salman Saleh
2007-01-01
There is a need in the literature for an application of the well known social cognitive theory in the area of e-commerce. Hence, this paper develops and models a theoretical framework to study the impact of psychological factors based on the social cognitive theory on the intention to use e-commerce. More specifically, the paper examines the role of individuals’ beliefs about their abilities towards the intention to use e-commerce technology (e-commerce self-efficacy). A conceptual model, bas...
Wavelength shift in a whispering gallery microdisk due to bacterial sensing: A theoretical approach
Hala Ghali
2017-04-01
Full Text Available Whispering gallery mode microcavities have recently been studied as a means to achieve real-time label-free detection of biological targets such as virus particles, specific DNA sequences, or proteins. Binding of a biomolecule to the surface of a microresonator will increase its path length, leading to a shift in the resonance frequency according to the reactive sensing principle. In this paper, we develop a theoretical expression that will link the reactive shift to the bacteria and microdisk parameters and help quantify the number of bacteria that bind to the surface of a 200μm-diameter silica microdisk.
Fristrup, Peter; Lassen, Peter Rygaard; Johannessen, Christian;
2006-01-01
obtained from quantum mechanical calculations (density functional theory with the B3LYP hybrid exchange correlation functional and the 6-31++G**, aug-cc-pVDZ, or aug-cc-pVTZ basis set) and related to the physical structure of the compounds. The absolute configuration could be established directly in each case … by comparing experimental and theoretical spectra. In addition, we have been able to document the changes that occur both in structures and in the VA and VCD spectra due to substituent effects on the oxirane ring.
Lee, Jung-Gil; Kim, Woo-Seung; Choi, June-Seok; Ghaffour, Noreddine; Kim, Young-Deuk
2016-12-15
An economic desalination system with a small scale and footprint is in strong demand for remote areas that have a limited and inadequate water supply, insufficient water treatment and low infrastructure. The direct contact membrane distillation (DCMD) process has the simplest configuration and potentially the highest permeate flux among all of the possible MD processes. It can also easily be arranged in a multi-stage manner for enhanced compactness, productivity, versatility and cost-effectiveness. In this study, an innovative multi-stage DCMD module under countercurrent-flow configuration is first designed and then investigated both theoretically and experimentally to identify its feasibility and operability for desalination applications. Model predictions and measured data for mean permeate flux are compared and shown to be in good agreement. The effect of the number of module stages on the mean permeate flux, performance ratio and daily water production of the multi-stage DCMD (MDCMD) system has been theoretically identified at inlet feed and permeate flow rates of 1.5 l/min and inlet feed and permeate temperatures of 70 °C and 25 °C, respectively. The daily water production of a three-stage DCMD module with a membrane area of 0.01 m² at each stage is found to be 21.5 kg.
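As a quick consistency sketch (not the paper's model), the reported daily production can be converted into an implied mean permeate flux, assuming the stated membrane area of 0.01 m² per stage over three stages:

```python
# Back-of-envelope check of the reported figures: 21.5 kg/day from a
# three-stage module with 0.01 m^2 of membrane per stage implies a mean
# permeate flux of roughly 21.5 / (3 * 0.01) / 24 ~ 30 kg/(m^2 h).
stages = 3
area_per_stage_m2 = 0.01
daily_production_kg = 21.5

total_area_m2 = stages * area_per_stage_m2
mean_flux_kg_m2_h = daily_production_kg / total_area_m2 / 24.0
print(round(mean_flux_kg_m2_h, 1))  # -> 29.9
```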
Mathammal, R.; Sudha, N.; Shankar, R.; Rajaboopathi, M.; Janagi, S.; Prabavathi, B.
2017-03-01
This report discusses the crystal structure, molecular arrangement, vibrational analysis, UV-Vis-NIR spectrum, fluorescence emission and second harmonic generation (SHG) efficiency of piperazinium L-tartrate (PPZ²⁺·Tart²⁻) crystals with the support of theoretical analysis. Good optical quality PPZ²⁺·Tart²⁻ crystals were grown by slow evaporation of an aqueous solution. The PPZ²⁺·Tart²⁻ crystal belongs to the monoclinic system with the non-centrosymmetric space group P2₁. The charge transfer from donor to acceptor moieties and the corresponding changes in bond lengths and bond angles have been observed. The functional group vibrations observed in the experimental FTIR and Raman spectra were assigned and compared with the theoretical wavenumbers of PPZ²⁺·Tart²⁻. The electron distribution on the donor and acceptor in PPZ²⁺·Tart²⁻ has been clearly visualised using a molecular electrostatic potential map. Compared with L-tartaric acid, a red shift was observed in the absorption and fluorescence spectra. The low values of the dielectric constant and dielectric loss at higher frequency and its high second harmonic efficiency suggest that the PPZ²⁺·Tart²⁻ crystal is relatively defect-free and suitable for NLO applications.
Shustorovich, Evgeny [American Scientific Materials Technologies, Inc., New York, NY (United States); Sellers, Harrell [Department of Chemistry and Biochemistry, South Dakota State University Brooking, SD (United States)
1998-04-01
In this review we examine the presently available theoretical techniques for determining metal surface reaction energetics. The unity bond index-quadratic exponential potential (UBI-QEP) method, which provides heats of adsorption and reaction activation barriers with a typical accuracy of 1-3 kcal/mol, emerges as the method with the widest applicability for complex and practically important reaction systems. We discuss in detail the theoretical foundations of the analytic UBI-QEP method, which employs the most general two-body interaction potentials. The potential variable, named a bond index, is a general exponential function of the two-center bond distance. The bond indices of interacting bonds are assumed to be conserved at unity (up to the dissociation point), and we cite state-of-the-art ab initio calculations to support this assumption. The UBI-QEP method allows one to calculate the reaction energetics in a straightforward variational way. We summarize the analytic formulas for adsorbate binding energies in various coordination modes and for intrinsic and diffusion activation barriers. We also describe a computer program which makes UBI-QEP calculations fully automated. The normalized bond index-molecular dynamics (NBI-MD) simulation technique, which is an adaptation of the UBI-QEP reactive potential functions to molecular dynamics, is described. Detailed summaries of applications are given, which include the Fischer-Tropsch synthesis, oxygen-assisted X-H bond cleavage, the hydrogen peroxide, methanol and ammonia syntheses, decomposition and reduction of NO, and SOₓ chemistry.
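The two-body quadratic exponential potential named in the review can be sketched as follows. The functional form below is the standard Morse-like QEP shape built on the bond index; the parameter values are hypothetical, chosen only to show the unity bond index at equilibrium and dissociation toward zero energy.

```python
# Sketch of a two-body quadratic exponential potential of the kind
# underlying UBI-QEP (Morse-like form; parameter values are hypothetical):
#   bond index x(r) = exp(-(r - r0)/b),  E(x) = D*(x**2 - 2*x),
# so E(r0) = -D (equilibrium well depth) and E -> 0 as r -> infinity.
import math

def bond_index(r, r0, b):
    """Exponential bond index of the two-center bond distance r."""
    return math.exp(-(r - r0) / b)

def qep_energy(r, D, r0, b):
    """Quadratic exponential potential evaluated at distance r."""
    x = bond_index(r, r0, b)
    return D * (x * x - 2.0 * x)

D, r0, b = 100.0, 1.5, 0.5   # hypothetical well depth, eq. distance, decay length
assert bond_index(r0, r0, b) == 1.0   # unity bond index at equilibrium
print(qep_energy(r0, D, r0, b))       # -> -100.0 (the minimum, -D)
```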
Arguel, Amaël; Perez-Concha, Oscar; Li, Simon Y W; Lau, Annie Y S
2016-10-06
The aim of this review was to identify general theoretical frameworks used in online social network interventions for behavioral change. To address this research question, a PRISMA-compliant systematic review (PROSPERO registration number CRD42014007555) was conducted using 3 electronic databases (PsycINFO, PubMed, and Embase). Four reviewers screened 1788 abstracts, and 15 studies were selected according to the eligibility criteria. Randomized controlled trials and controlled studies were assessed using the Cochrane Collaboration's "risk-of-bias" tool and narrative synthesis. Five eligible articles used social cognitive theory as a framework to develop interventions targeting behavioral change. Other theoretical frameworks were related to the dynamics of social networks, intention models, and community engagement theories. Only one of the studies selected in the review mentioned a well-known theory from the field of health psychology. We conclude that guidelines are lacking in the design of online social network interventions for behavioral change. Existing theories and models from health psychology that are traditionally used for in situ behavioral change should be considered when designing online social network interventions in a health care setting.
Towards a child-robot symbiotic co-development: A theoretical approach
Charisi, Vicky; Davison, Daniel; Wijnen, Frances; Van Der Meij, Jan; Reidsma, Dennis; Prescott, Tony; Van Joolingen, Wouter; Evers, Vanessa
2015-01-01
One of the main characteristics of effective learning is the possibility for learners to choose their own ways and pace of learning, according to their personal previous experiences and needs. Social interaction during the learning process plays a crucial role in the skills that learners may develop.
A game theoretical approach for QoS provisioning in heterogeneous networks
A.S.M. Zadid Shifat
2015-09-01
With the proliferation of mobile phone users, interference management has become a major concern in recent years. To cope with this problem while ensuring better Quality of Service (QoS), the femtocell plays an important role in heterogeneous networks (HetNets) thanks to some of its noteworthy characteristics. In this paper, we propose a game theoretic algorithm along with dynamic channel allocation and a hybrid access mechanism with a self-organizing power control scheme. With a view to resolving the prioritized access issue, the concept of primary and secondary users is applied. The existence of a pure strategy Nash equilibrium (NE) has been investigated, and we conclude that our proposed scheme can be adopted to both increase capacity and increase the revenue of operators, considering an optimal price for consumers.
Predictive coding and the slowness principle: an information-theoretic approach.
Creutzig, Felix; Sprekeler, Henning
2008-04-01
Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
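A minimal linear-SFA sketch illustrates the slowness principle described above: after whitening the input, the slowest feature is the direction along which the time derivative has the smallest variance. The data below are synthetic, and this is an illustrative implementation rather than the authors' code.

```python
# Minimal linear slow feature analysis (SFA) sketch: find the unit-variance
# linear projection of the input whose time derivative has the smallest
# variance, i.e. the slowest feature. Illustrative only; data are synthetic.
import numpy as np

def linear_sfa(X):
    """X: (T, d) signal. Returns the weight vector of the slowest feature."""
    X = X - X.mean(axis=0)
    # Whiten the signal so every candidate feature has unit variance.
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals)       # columns whiten X
    Z = X @ W_white
    # Covariance of the time derivative in the whitened space.
    dZ = np.diff(Z, axis=0)
    dcov = np.cov(dZ, rowvar=False)
    devals, devecs = np.linalg.eigh(dcov)
    # Slowest feature: eigenvector with the smallest derivative variance.
    return W_white @ devecs[:, 0]

# Mix a slow sine with fast noise; SFA should recover the slow direction.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 2000)
slow, fast = np.sin(t), rng.standard_normal(t.size)
X = np.stack([slow + 0.1 * fast, fast], axis=1)
w = linear_sfa(X)  # projection X @ w tracks the slow sine
```

The connection drawn in the paper is that, in this linear case, the extracted feature can also be read as the projection most predictive of the input at the next time step.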
Anoufa, M.; Kiat, J. M.; Bogicevic, C.
2015-10-01
Most of the theoretical and experimental studies on the electrocaloric effect (ECE) are devoted to thin films, but thin films can hardly be envisaged for cooling macroscopic systems; moreover, the results obtained cannot be easily transposed to larger systems like multilayered ceramics. Therefore, efforts should also be focused on predicting, synthesizing, and characterizing interesting bulk single crystals or ceramics. In ferroelectric nanoparticles and ceramics, the core-shell structure of grains is of utmost importance to explain the experimental results at small sizes. Moreover, it can be used to tailor physical properties, such as energy storage, by experimenting with the composition, thickness, and permittivity of the shell. Here, we report the effect of such structures on the electrocaloric effect in a variety of ferroelectric materials. The magnitude of the ECE as well as its field and temperature dependence are obtained for different types of core-shells. The optimal configuration for a maximal ECE is deduced.
Tractors and twistors from conformal Cartan geometry: a gauge theoretic approach II. Twistors
Attard, J.; François, J.
2017-04-01
Tractor and twistor bundles provide natural conformally covariant calculi on 4D-Riemannian manifolds. They have different origins but are closely related, and are usually constructed bottom-up through prolongation of defining differential equations. We propose alternative top-down gauge theoretic constructions, starting from the conformal Cartan bundle P and its vectorial E and spinorial E associated bundles. Our key ingredient is the dressing field method of gauge symmetry reduction, which allows tractors and twistors and their associated connections to appear as gauge fields of a non-standard kind as far as the Weyl rescaling transformation is concerned. By non-standard we mean that they implement the gauge principle of physics, but are of a different geometric nature than the well-known differential geometric objects usually underlying gauge theories. We provide the corresponding BRST treatment. In a companion paper we dealt with tractors; in the present one we address the case of twistors.
Transmedial Worlds: Conceptual Review and Theoretical Approaches on the Art of Worldmaking
Nieves Rosendo Sánchez
2016-01-01
The concept of transmedia storytelling introduced by Henry Jenkins (2003, 2006) has been widely developed in the academic field, becoming popular and being redefined over time. In the definition of transmedia storytelling we can observe a focus on the concept of world, which from a theoretical and critical perspective has not had the development that the relationship between narrative and media has had. This work gathers and analyzes a selection of references from the main authors that have defined the nature, limits and transferability of the so-called transmedial worlds, proposing conclusions related to the need to observe the phenomenon of transmedia narratives, its founding elements and the processes related to them, with the aim of elaborating a catalogue of criteria for the analysis and consideration of this key element of transmedia storytelling.
Pandey, Akash; Arockiarajan, A.
2017-03-01
Macro fiber composites (MFCs) are extensively used in vibration control and actuation applications due to their high flexibility and enhanced coupling coefficients. During these applications, MFCs are subjected to continuous cyclic electrical loading, which may lead to degradation in their actuation performance. In order to predict the life cycle of MFCs, an experimental setup has been devised and experiments are performed under cyclic loading conditions. The effort involved in the experiments is huge in terms of time and cost. Hence, an attempt has been made to develop a theoretical model to predict the fatigue behavior of MFCs. A nonlinear finite element method has been formulated based on Kirchhoff plate theory, wherein a fatigue failure criterion based on strain energy is embedded. Simulated results based on the proposed model are compared with experimental observations and are in good agreement with each other. Variations in the life cycle of MFCs are also studied for different operating temperatures as well as structural/geometric configurations.
Role of word-of-mouth for programs of voluntary vaccination: A game-theoretic approach.
Bhattacharyya, Samit; Bauch, Chris T; Breban, Romulus
2015-11-01
We propose a model describing the synergetic feedback between word-of-mouth (WoM) and epidemic dynamics controlled by voluntary vaccination. The key feature consists in combining a game-theoretic model for the spread of WoM and a compartmental model describing VSIR disease dynamics in the presence of a program of voluntary vaccination. We evaluate and compare two scenarios for determinants of behavior, depending on what WoM disseminates: (1) vaccine advertising, which may occur whether or not an epidemic is ongoing and (2) epidemic status, notably disease prevalence. Understanding the synergy between the two strategies could be particularly important for designing voluntary vaccination campaigns. We find that, in the initial phase of an epidemic, vaccination uptake is determined more by vaccine advertising than the epidemic status. As the epidemic progresses, epidemic status becomes increasingly important for vaccination uptake, considerably accelerating vaccination uptake toward a stable vaccination coverage.
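A generic VSIR compartment sketch conveys the disease-dynamics side of such a model. This is NOT the paper's exact specification: the uptake rate phi stands in for the behavioral/word-of-mouth component, and all rates below are hypothetical.

```python
# Illustrative VSIR sketch (generic form, not the paper's model):
# susceptibles vaccinate at an uptake rate phi, get infected at rate
# beta*S*I, and infecteds recover at rate gamma. Forward-Euler steps.
def vsir_step(V, S, I, R, beta, gamma, phi, dt):
    dV = phi * S
    dS = -beta * S * I - phi * S
    dI = beta * S * I - gamma * I
    dR = gamma * I
    return (V + dV * dt, S + dS * dt, I + dI * dt, R + dR * dt)

V, S, I, R = 0.0, 0.99, 0.01, 0.0     # hypothetical initial fractions
beta, gamma, phi = 0.5, 0.2, 0.05     # hypothetical rates
for _ in range(1000):                 # integrate to t = 100
    V, S, I, R = vsir_step(V, S, I, R, beta, gamma, phi, dt=0.1)
# Population fractions are conserved by construction (dV+dS+dI+dR = 0).
print(round(V + S + I + R, 6))  # -> 1.0
```

In the paper's setting, phi would not be a constant but would itself evolve with the word-of-mouth game; this sketch only shows the compartmental substrate that such a behavioral model couples to.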
Buzzoni, A
2004-01-01
We wish to assess the relationship between the population of planetary nebulae (PNe) and a given parent stellar population from a theoretical point of view. Our results rely on original population synthesis models used to estimate the expected luminosity-specific PN density, accounting for different evolutionary scenarios and star formation histories, as observed in galaxies in the nearby Universe. For a complete PN sample, we find that 1 PN per 1.5×10⁶ L_sun is a safe (IMF-independent) lower limit to the traced global bolometric luminosity of the parent stellar population. A tentative application to Virgo cluster data allows us to place a lower limit at ~7% for the global B luminosity of the cluster provided by "loose" intergalactic stars.
The Theoretical Defects in DRIS and the Restructuring of a New Approach
LI Jian; LI Mei-gui
2004-01-01
There are two kinds of theoretical defects present in the diagnosis and recommendation integrated system (DRIS): the distribution of nutrient element ratios with two normal distributions displays an abnormal distribution of positive skewness, and there is a blind diagnostic area in DRIS when the nutrient elements have equal ratios but not equal quantities. In the light of the quadratic form theory of the multidimensional normal distribution and the view of balance of equal probability, the balance diagnosis and recommendation integrated system (BDRIS) was developed in this paper, which is superior to DRIS and whose diagnosis method is unified with critical value diagnosis. When the correlation matrix of nutrient elements R = I (identity matrix), i.e., when the effect of element antagonism is disregarded, BDRIS simplifies to critical value diagnosis. In addition, a diagnosis program written in the SAS language is also provided in this paper.
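The reduction claimed above (BDRIS collapsing to critical value diagnosis when R = I) can be sketched with a quadratic form over standardized nutrient indices. The formulation below is illustrative, not the paper's exact equations:

```python
# Sketch of the balance-diagnosis idea (illustrative only): treat
# standardized nutrient indices z as jointly normal with correlation
# matrix Rm and use the quadratic form z^T Rm^{-1} z as a balance index.
# With Rm = I (element antagonism disregarded) it reduces to sum(z_i^2),
# i.e. independent critical-value diagnosis of each element.
def balance_index(z, Rm_inv):
    n = len(z)
    return sum(z[i] * Rm_inv[i][j] * z[j]
               for i in range(n) for j in range(n))

z = [0.5, -1.2, 0.3]   # hypothetical standardized nutrient indices
identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
q = balance_index(z, identity)
print(round(q, 2))  # -> 1.78  (= 0.25 + 1.44 + 0.09, the critical-value case)
```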
On Innovation: A Theoretical Approach on the Challenges of Utilities Marketing
Florina PÎNZARU
2012-06-01
One of the markets that not long ago was closed and completely regulated is now in a growing process of liberalization and deregulation: the utilities market (water, sewage, gas, electricity, waste collection). The deregulation of a market is usually followed by the appearance of conditions for the expression of competition and, inevitably, the occurrence of specific marketing strategies. This paper investigates the specifics of utilities marketing as it develops now, a burgeoning domain, although one with a rather discreet presence in the field’s theoretical analysis studies. Exploratory research on the products, promotional offers and communication of this market’s players shows an effervescent practice, but also the continuous innovation necessary in a market where consumers are unfamiliar with being persuaded by commercial means.
Levels of integration of ICT in the curriculum: a theoretical approach
Sergio URUEÑA LÓPEZ
2016-06-01
The principal aim of this paper is to analyze from a theoretical point of view the triple dimension implied by the full integration of Information and Communications Technologies (ICT) in the curriculum. The first level refers to the incorporation of ICT as didactic tools to support teaching practice. The second alludes to the incorporation of ICT as a means of promoting their efficient use. The third suggests the incorporation of ICT as objects of critical analysis and knowledge. The last two dimensions, the most unnoticed but certainly not the least important, are essential for promoting critical citizens with the capacity to take reasonable and informed decisions regarding the assessment, design, monitoring and implementation of the new technologies.
Gelo, Omar Carlo Gioacchino; Salvatore, Sergio
2016-07-01
Notwithstanding the many methodological advances made in the field of psychotherapy research, a metatheoretical, school-independent framework to explain psychotherapy change processes, taking into account their dynamic and complex nature, is still lacking. Over the last years, several authors have suggested that a dynamic systems (DS) approach might provide such a framework. In the present paper, we review the main characteristics of a DS approach to psychotherapy. After an overview of the general principles of the DS approach, we describe the extent to which psychotherapy can be considered a self-organizing open complex system, whose developmental change processes are described in terms of a dialectic dynamics between stability and change over time. Empirical evidence in support of this conceptualization is provided and discussed. Finally, we propose a research design strategy for the empirical investigation of psychotherapy from a DS approach, together with a research case example. We conclude that a DS approach may provide a metatheoretical, school-independent framework allowing us to constructively rethink and enhance the way we conceptualize and empirically investigate psychotherapy.
A Standard Model-Theoretic Approach to Operational Semantics of Recursive Programs
邵志清
1993-01-01
In this paper we introduce a new approach to the operational semantics of recursive programs by using ideas from the "priority method", which is a fundamental tool in recursion theory. Instead of modelling partial functions by introducing undefined values, as in the traditional approach, we define a priority derivation tree for every term, and by respecting the rule "attack the subterm of the highest priority first" we define transition relations, computation sequences, etc. directly, based on a standard interpretation which includes no undefined value in its domain. Finally, we prove that our new approach generates the same operational semantics as the traditional one. It is also pointed out that we can use our strategy to refute a claim of Loeckx and Sieber that the operational semantics of recursive programs cannot be built based on predicate logic.
Comparison of theoretical approaches for computing the bond length alternation of polymethineimine
Jacquemin, Denis [Laboratoire de Chimie Theorique Appliquee, Facultes Universitaires Notre-Dame de la Paix, rue de Bruxelles, 61, B-5000 Namur (Belgium)], E-mail: denis.jacquemin@fundp.ac.be; Perpete, Eric A. [Laboratoire de Chimie Theorique Appliquee, Facultes Universitaires Notre-Dame de la Paix, rue de Bruxelles, 61, B-5000 Namur (Belgium); Chermette, Henry [Laboratoire de Chimie Physique Theorique, Universite Claude Bernard, Lyon I Bat. 210, and CNRS UMR 5180 43, Boulevard du 11 Novembre 1918, F-69622 Villeurbanne Cedex (France); Ciofini, Ilaria; Adamo, Carlo [Ecole Nationale Superieure de Chimie de Paris, Laboratoire Electrochimie et Chimie Analytique, UMR CNRS ENSCP no. 7575, rue Pierre et Marie Curie, 11 F-75321 Paris Cedex 05 (France)
2007-01-25
Using electron-correlated wavefunction approaches and several pure and hybrid density functionals combined with three atomic basis sets, we have optimized the ground-state geometry of increasingly long polymethineimine oligomers presenting all-trans and gliding-plane symmetries. It turns out that MP2 bond length alternations (BLA) are in good agreement with higher-order electron-correlated wavefunction approaches, whereas, for both conformers, large qualitative and quantitative discrepancies between MP2 and DFT geometries have been found. Indeed, all the selected GGA, meta-GGA and hybrid functionals tend to overestimate bond length equalization in extended polymethineimine structures. On the other hand, self-interaction corrections included in the ADSIC framework provide, in this particular case, a more efficient approach to predict the BLA for medium-size oligomers.
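As a small illustration of the quantity under comparison, bond length alternation can be computed as the mean difference between consecutive alternating bond lengths along the backbone. The values below are hypothetical:

```python
# Simple sketch of bond length alternation (BLA): the mean difference
# between alternating (nominally single vs double) bond lengths along
# the chain. Bond lengths below are hypothetical, in angstroms.
def bla(bond_lengths):
    """Mean difference between odd-position and even-position bonds."""
    singles = bond_lengths[0::2]
    doubles = bond_lengths[1::2]
    n = min(len(singles), len(doubles))
    return sum(s - d for s, d in zip(singles[:n], doubles[:n])) / n

lengths = [1.38, 1.30, 1.38, 1.30, 1.38, 1.30]  # alternating backbone
print(round(bla(lengths), 2))  # -> 0.08
```

A functional that over-equalizes bond lengths, as the abstract reports for the GGA and hybrid functionals on long oligomers, would push this number toward zero relative to the MP2 reference.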
Organizations in context: proposal for a new theoretical approach in prescriptive accident research
Dyhrberg, Mette Bang; Jensen, Per Langå
2004-01-01
between understanding the processes in the enterprises and understanding the contextual relations. Decision-making theories are used to explain the internal processes, and regulatory approaches are used to describe the role of the state in regard to accident prevention in enterprises. Eventually, contextual theories are presented as theories to perceive the relation between enterprise and context. The conclusion is that there is a basis for using contextual theories in a new approach, but an investigation of the potential for making the theories action-orientated is also needed.
Cristian Gustavo Pinto-Cortez
2014-11-01
Child sexual abuse is a serious public health problem and a violation of the human rights of children and adolescents. Prolific research has been developed to determine the magnitude of the problem, its psychological effects, risk factors and protective factors. In this context, the resilience approach becomes important by explaining the mechanisms that promote positive adaptation to adversity. The first part of this paper discusses the concept of resilience and its various stages of investigation over time. Finally, this model is integrated into the understanding of and approach to child and adolescent victimization.
A Graph Theoretical and Topological Approach to Chemical Structure, Reactivity, and Dynamics
1988-10-01
chirality polynomial of the regular icosahedron, a problem previously assumed to be intractable. Professor King has also expanded, extended, and … algebra to determine polynomials for the description of chirality observations. This review presents a critical analysis of the limitations of … surfaces: such maps are generalizations of the convex polyhedra so common in chemical applications. During the last twenty years, Tutte and others have
Andrej B. Ilin
2014-01-01
The article reveals the essence of the intellectual product, its relationship with the research of higher education institutions in the region, and the role of intellectual property in the development of the innovative capacity of the region. It also contains the author’s definition of an intellectual product of higher education and a classification of intellectual products. Based on the analysis of existing theoretical approaches and regional practices, the relations between the concepts «intellectual product», «intellectual work», «intellectual capital» and «intellectual property» are especially highlighted by the author.
Nevo, Ofra; Wiseman, Hadas
2002-01-01
The Developmental Career Counseling model incorporates the following principles of Short-Term Dynamic Psychotherapy: life-span approach, limited time, working alliance, rapid and early assessment, central focus, active and directive counselor participation, therapeutic flexibility, and termination issues. The model enables career and personal…
Gerashenkova Tatyana
2016-01-01
The article states the necessity of forming an innovation-investment strategy of enterprise development, offers an approach to its classification, determines the place of this strategy in a corporate-wide strategy, and gives the methodology of formation and the realization form of the innovation-investment development strategy.
Field Theory in Organizational Psychology: An Analysis of Theoretical Approaches in Leadership.
Garcia, Joseph E.
This literature review examines Kurt Lewin's influence in leadership psychology. Characteristics of field theory are described in detail and utilized in analyzing leadership research, including the trait approach, leader behavior studies, contingency theory, path-goal theory, and leader decision theory. Important trends in leadership research are…
A Theoretical Framework for Media Law Courses (Approaches to Teaching Freedom of Expression).
Helle, Steven
1991-01-01
Suggests that most students prefer teachers have a theme that provides coherence and cohesiveness to media law courses. Explains how libertarian and neoliberal themes can guide learning and enumerates some of the principles of the two theories. Identifies drawbacks of the case analysis approach to such courses. (SG)
Gardner, Frank L.; Moore, Zella E.
2004-01-01
While traditional cognitive-behavioral skills-training-based approaches to athletic performance enhancement posit that negative thoughts and emotions must be controlled, eliminated, or replaced for athlete-clients to perform optimally, recent evidence suggests that efforts to control, eliminate, or suppress these internal states may actually have…
COMPARATIVE STUDY BETWEEN TRADITIONAL AND ENTERPRISE RISK MANAGEMENT - A THEORETICAL APPROACH
Cican Simona-Iulia
2014-07-01
The complexity, volatility and unpredictability of the current economic environment are a daily reminder that organizations face many risks. The traditional approach, according to which risk is a necessary evil that must be removed, is no longer sufficient, and that is why companies nowadays are forced to spend significant resources to manage risks. Risk transparency is what one looks for; therefore, the identification and management of risks within an organization become increasingly necessary for success and longevity. The risk approach has a major role in a company’s ability to avoid, reduce and turn risks into opportunities. Enterprise risk management is a new concept that revolutionizes the traditional approach and summarizes risk management in an integrated, comprehensive and strategic system. Studies use several synonyms for enterprise risk management, such as integrated risk management, holistic risk management, global risk management and strategic risk management. At the end of the last century, enterprise risk management introduced a new way to deal with risks: the holistic approach. This approach to risks, i.e. the interaction of several types of risks which become increasingly threatening and varied and may cause more damage than individual risks, brings forward the need for risk management and raises issues at the highest level of company management. For a proper view of company risks, each individual risk and the possibility of risk interaction must be understood. This is essential to establish a risk classification according to their impact on the company. The traditional approach to risk management, as a management function, is limited to threats and losses only, so relatively few organizations see risks as potential earning-generating opportunities. However, the risk management process is not radically changed. Enterprise risk management is an improved version of traditional risk management, created by expanding its scope. The new risk
Laser cooling of MgCl and MgBr in theoretical approach
Wan, Mingjie; Shao, Juxiang; Huang, Duohui; Yang, Junsheng; Cao, Qilong; Jin, Chengguo; Wang, Fanhou, E-mail: fanhouwangyibin@163.com [Computational Physics Key Laboratory of Sichuan Province, Yibin University, Yibin 644007 (China); Gao, Yufeng [Institute of Atomic and Molecular Physics, Sichuan University, Chengdu 610065 (China)
2015-07-14
Ab initio calculations for three low-lying electronic states (X²Σ⁺, A²Π, and 2²Π) of the MgCl and MgBr molecules, including spin-orbit coupling, are performed using the multi-reference configuration interaction method plus the Davidson correction. The calculations employ all-electron basis sets and the Douglas-Kroll scalar relativistic correction. The spectroscopic parameters agree well with available theoretical and experimental data. Highly diagonally distributed Franck-Condon factors f₀₀ for the A²Π₃/₂,₁/₂ (υ′ = 0) → X²Σ⁺₁/₂ (υ″ = 0) transition are determined for both MgCl and MgBr molecules. Radiative lifetimes τ of the A²Π₃/₂,₁/₂ (υ′ = 0) states suitable for rapid laser cooling are also obtained. The proposed scheme drives the A²Π₃/₂ (υ′ = 0) → X²Σ⁺₁/₂ (υ″ = 0) transition using three wavelengths (main pump laser λ₀₀; two repumping lasers λ₁₀ and λ₂₁). These results indicate the feasibility of laser cooling MgCl and MgBr molecules.
Taormina, R.; Galelli, S.; Karakaya, G.; Ahipasaoglu, S. D.
2016-11-01
This work investigates the uncertainty associated with the presence of multiple subsets of predictors yielding data-driven models with the same, or similar, predictive accuracy. To handle this uncertainty effectively, we introduce a novel input variable selection algorithm, called Wrapper for Quasi Equally Informative Subset Selection (W-QEISS), specifically conceived to identify all alternate subsets of predictors in a given dataset. The search process is based on a four-objective optimization problem that minimizes the number of selected predictors, maximizes the predictive accuracy of a data-driven model and optimizes two information theoretic metrics of relevance and redundancy, which guarantee that the selected subsets are highly informative and have little intra-subset similarity. The algorithm is first tested on two synthetic test problems and then demonstrated on a real-world streamflow prediction problem in the Yampa River catchment (US). Results show that complex hydro-meteorological datasets are characterized by a large number of alternate subsets of predictors, which provides useful insight into the underlying physical processes. Furthermore, the presence of multiple subsets of predictors (and associated models) helps find a better trade-off between different measures of predictive accuracy commonly adopted for hydrological modelling problems.
A game theoretical approach to the evolution of structured communication codes.
Fontanari, José F; Perlovsky, Leonid I
2008-08-01
Structured meaning-signal mappings, i.e., mappings that preserve neighborhood relationships by associating similar signals with similar meanings, are advantageous in an environment where signals are corrupted by noise and sub-optimal meaning inferences are rewarded as well. The evolution of these mappings, however, cannot be explained within a traditional language evolutionary game scenario in which individuals meet randomly, because the evolutionary dynamics is trapped in local maxima that do not reflect the structure of the meaning and signal spaces. Here we use a simple game theoretical model to show analytically that when individuals adopting the same communication code meet more frequently than individuals using different codes (a result of the spatial organization of the population), then advantageous linguistic innovations can spread and take over the population. In addition, we report results of simulations in which an individual can communicate only with its K nearest neighbors and show that the probability that the lineage of a mutant that uses a more efficient communication code becomes fixed decreases exponentially with increasing K. These findings support the mother tongue hypothesis that human language evolved as a communication system used among kin, especially between mothers and offspring.
Solubility of caffeine from green tea in supercritical CO2: a theoretical and empirical approach.
Gadkari, Pravin Vasantrao; Balaraman, Manohar
2015-12-01
Decaffeination of fresh green tea was carried out with supercritical CO2 in the presence of ethanol as co-solvent. The solubility of caffeine in supercritical CO2 varied from 44.19 × 10⁻⁶ to 149.55 × 10⁻⁶ (mole fraction) over a pressure and temperature range of 15 to 35 MPa and 313 to 333 K, respectively. The maximum solubility of caffeine was obtained at 25 MPa and 323 K. Experimental solubility data were correlated with the theoretical equation-of-state models Peng-Robinson (PR), Soave-Redlich-Kwong (SRK), and Redlich-Kwong (RK). The RK model regressed the experimental data with 15.52 % average absolute relative deviation (AARD). In contrast, the Gordillo empirical model fit the experimental data best, with only 0.96 % AARD. Under supercritical conditions, the solubility of caffeine in the tea matrix was lower than the solubility of pure caffeine. Further, the solubility of caffeine in supercritical CO2 was compared with the solubility of pure caffeine in conventional solvents, and a maximum solubility of 90 × 10⁻³ (mole fraction) was obtained with chloroform.
Hagemann, Ian S; O'Neill, Patrick K; Erill, Ivan; Pfeifer, John D
2015-09-01
The information-theoretic concept of Shannon entropy can be used to quantify the information provided by a diagnostic test. We hypothesized that in tumor types with stereotyped mutational profiles, the results of NGS testing would yield lower average information than in tumors with more diverse mutations. To test this hypothesis, we estimated the entropy of NGS testing in various cancer types, using results obtained from clinical sequencing. A set of 238 tumors were subjected to clinical targeted NGS across all exons of 27 genes. There were 120 actionable variants in 109 cases, occurring in the genes KRAS, EGFR, PTEN, PIK3CA, KIT, BRAF, NRAS, IDH1, and JAK2. Sequencing results for each tumor were modeled as a dichotomized genotype (actionable mutation detected or not detected) for each of the 27 genes. Based upon the entropy of these genotypes, sequencing was most informative for colorectal cancer (3.235 bits of information/case) followed by high grade glioma (2.938 bits), lung cancer (2.197 bits), pancreatic cancer (1.339 bits), and sarcoma/STTs (1.289 bits). In the most informative cancer types, the information content of NGS was similar to surgical pathology examination (modeled at approximately 2-3 bits). Entropy provides a novel measure of utility for laboratory testing in general and for NGS in particular. This metric is, however, purely analytical and does not capture the relative clinical significance of the identified variants, which may also differ across tumor types.
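As a rough illustration of the entropy measure described in this abstract (a minimal sketch, not the authors' implementation; the toy cohort and gene panel are hypothetical):

```python
from collections import Counter
from math import log2

def profile_entropy(profiles):
    """Shannon entropy (in bits) of a set of per-case genotype profiles.

    Each profile is a tuple of 0/1 flags, one per gene tested
    (1 = actionable mutation detected). Higher entropy means the
    test yields more information per case on average.
    """
    counts = Counter(profiles)
    n = len(profiles)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical toy cohort: 4 cases, 3 genes (say KRAS, EGFR, BRAF)
cases = [(1, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 0)]
print(round(profile_entropy(cases), 3))  # 1.5
```

A cohort in which every case shows the same stereotyped profile would score 0 bits, matching the paper's intuition that sequencing is least informative for tumor types with uniform mutational profiles.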
Relativistic many-body theory a new field-theoretical approach
Lindgren, Ingvar
2016-01-01
This revised second edition of the author’s classic text offers readers a comprehensively updated review of relativistic atomic many-body theory, covering the many developments in the field since the publication of the original title. In particular, a new final section extends the scope to cover the evaluation of QED effects for dynamical processes. The treatment of the book is based upon quantum-field theory, and demonstrates that when the procedure is carried to all orders of perturbation theory, two-particle systems are fully compatible with the relativistically covariant Bethe-Salpeter equation. This procedure can be applied to arbitrary open-shell systems, in analogy with the standard many-body theory, and it is also applicable to systems with more than two particles. Presently existing theoretical procedures for treating atomic systems are, in several cases, insufficient to explain the accurate experimental data recently obtained, particularly for highly charged ions. The main text is divided into...
An Estimation Theoretic Approach for Sparsity Pattern Recovery in the Noisy Setting
Hormati, Ali; Mohajer, Soheil; Vetterli, Martin
2009-01-01
Compressed sensing deals with the reconstruction of sparse signals using a small number of linear measurements. One of the main challenges in compressed sensing is to find the support of a sparse signal. In the literature, several bounds on the scaling law of the number of measurements for successful support recovery have been derived where the main focus is on random Gaussian measurement matrices. In this paper, we investigate the noisy support recovery problem from an estimation theoretic point of view, where no specific assumption is made on the underlying measurement matrix. The linear measurements are perturbed by additive white Gaussian noise. We define the output of a support estimator to be a set of position values in increasing order. We set the error between the true and estimated supports as the $\ell_2$-norm of their difference. On the one hand, this choice allows us to use the machinery behind the $\ell_2$-norm error metric and on the other hand, converts the support recovery into a more intuitiv...
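The error metric the abstract defines can be sketched in a few lines (an illustrative reading of the definition, not the authors' code): each support is a list of index positions in increasing order, and the error is the Euclidean norm of their elementwise difference.

```python
from math import sqrt

def support_error(true_support, est_support):
    """l2-norm distance between two supports of equal size k,
    each represented as positions sorted in increasing order."""
    assert len(true_support) == len(est_support)
    t = sorted(true_support)
    e = sorted(est_support)
    return sqrt(sum((a - b) ** 2 for a, b in zip(t, e)))

# Example: true support {2, 5, 9}, estimated support {2, 6, 9}
print(support_error([2, 5, 9], [2, 6, 9]))  # 1.0
```

Unlike a set-difference count, this metric penalizes an estimated position by how far it lands from the true one, which is what makes the standard ℓ2 machinery applicable.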
Rafael Pérez Solano
2012-03-01
Full Text Available The distinctive spectral absorption characteristics of cancer cells make photoacoustic techniques useful for detection in vitro and in vivo. Here we report on our evaluation of the photoacoustic signal produced by a series of monolayers of different cell lines in vitro. Only the melanoma cell line HS936 produced a detectable photoacoustic signal, whose amplitude depended on the number of cells. This finding appears to be related to the amount of melanin available in these cells. Other cell lines (i.e. HL60, SK-Mel-1, T47D, HeLa, HT29 and PC12) exhibited values similar to a precursor of melanin (tyrosinase), but failed to produce sufficient melanin to generate a photoacoustic signal that could be distinguished from background noise. To better understand this phenomenon, we derived a formula for the time-domain photoacoustic wave equation for a monolayer of cells in a non-viscous fluid in the thermoelastic regime. The theoretical results showed that the amplitude and profile of the photoacoustic signal generated by a cell monolayer depended upon the number and distribution of the cells and the location of the point of detection. These findings help to provide a better understanding of the factors involved in the generation of a photoacoustic signal produced by different cells in vitro and in vivo.
Information Theoretic Approaches to Rapid Discovery of Relationships in Large Climate Data Sets
Knuth, Kevin H.; Rossow, William B.; Clancy, Daniel (Technical Monitor)
2002-01-01
Mutual information, as the asymptotic Bayesian measure of independence, is an excellent starting point for investigating the existence of possible relationships among climate-relevant variables in large data sets. As mutual information is a nonlinear function of its arguments, it is not beholden to the assumption of a linear relationship between the variables in question and can reveal features missed in linear correlation analyses. However, as mutual information is symmetric in its arguments, it only has the ability to reveal the probability that two variables are related; it provides no information as to how they are related; specifically, causal interactions or a relation based on a common cause cannot be detected. For this reason we also investigate the utility of a related quantity called the transfer entropy. The transfer entropy can be written as a difference between mutual informations and has the capability to reveal whether and how the variables are causally related. The application of these information theoretic measures is tested on some familiar examples using data from the International Satellite Cloud Climatology Project (ISCCP) to identify relations between global cloud cover and other variables, including equatorial Pacific sea surface temperature (SST), over seasonal and El Niño Southern Oscillation (ENSO) cycles.
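The asymmetry the abstract exploits can be demonstrated with a toy plug-in estimator (a minimal sketch on synthetic binary series, not the ISCCP analysis): the transfer entropy from Y to X is a difference of joint entropies, and it is large when Y's past improves prediction of X's future but not vice versa.

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Plug-in Shannon entropy (bits) of a discrete sequence."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

def transfer_entropy(x, y):
    """Plug-in estimate of T_{Y->X} for discrete series:
    T = H(X_t, X_{t+1}) - H(X_t) + H(X_t, Y_t) - H(X_t, X_{t+1}, Y_t),
    equivalently H(X_{t+1} | X_t) - H(X_{t+1} | X_t, Y_t)."""
    x_t, x_next, y_t = x[:-1], x[1:], y[:-1]
    return (entropy(list(zip(x_t, x_next))) - entropy(x_t)
            + entropy(list(zip(x_t, y_t)))
            - entropy(list(zip(x_t, x_next, y_t))))

# Toy example: x copies y with a one-step lag, so y drives x
y = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
x = [0] + y[:-1]
# T_{Y->X} is large, T_{X->Y} is small: the measure is directional
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True
```

Mutual information between x and y alone would be symmetric and could not distinguish which series drives the other.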
A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay Constraints
Meshkati, Farhad; Poor, H Vincent; Schwartz, Stuart C
2007-01-01
A game-theoretic framework is used to study the effect of constellation size on the energy efficiency of wireless networks for M-QAM modulation. A non-cooperative game is proposed in which each user seeks to choose its transmit power (and possibly transmit symbol rate) as well as the constellation size in order to maximize its own utility while satisfying its delay quality-of-service (QoS) constraint. The utility function used here measures the number of reliable bits transmitted per joule of energy consumed, and is particularly suitable for energy-constrained networks. The best-response strategies and Nash equilibrium solution for the proposed game are derived. It is shown that in order to maximize its utility (in bits per joule), a user must choose the lowest constellation size that can accommodate the user's delay constraint. Using this framework, the tradeoffs among energy efficiency, delay, throughput and constellation size are also studied and quantified. The effect of trellis-coded modulation on energy...
Castro-Rebolledo, R; Giraldo-Prieto, M; Hincapié-Henao, L; Lopera, F; Pineda, D A
This article presents an updated review of the definition, diagnostic criteria, classifications, etiology and evolution of specific language impairment (SLI). SLI is characterized by a developmental language delay and impaired language that persist over time and are not explained by sensorial, motor or mental disabilities, psychopathological disorders, socio-emotional deprivation, or brain injury. The diagnosis is based on exclusionary criteria. Some researchers propose different classifications based on children's performance in language comprehension and language production. Genetic linkage to the FOXP2 gene in the SPCH1 region of chromosome 7, and to chromosomes 13, 16 and 19, has been reported. Neuroimaging studies have shown alterations in the volume and perfusion of some brain structures related to language. The manifestations of SLI may change during a child's development and may disturb self-esteem, academic performance and social abilities. The variability in linguistic and cognitive performance, and the variety of etiological findings in children with SLI, do not allow the affected population to be treated as a homogeneous group. Different theoretical positions have emerged as a consequence of this condition.
Vignola, Emanuele; Steinmann, Stephan N.; Vandegehuchte, Bart D.; Curulla, Daniel; Stamatakis, Michail; Sautet, Philippe
2017-08-01
The accurate description of the energy of adsorbate layers is crucial for the understanding of chemistry at interfaces. For heterogeneous catalysis, not only the interaction of the adsorbate with the surface but also the adsorbate-adsorbate lateral interactions significantly affect the activation energies of reactions. Modeling the interactions of the adsorbates with the catalyst surface and with each other can be efficiently achieved in the cluster expansion Hamiltonian formalism, which has recently been implemented in a graph-theoretical kinetic Monte Carlo (kMC) scheme to describe multi-dentate species. Automating the development of the cluster expansion Hamiltonians for catalytic systems is challenging and requires the mapping of adsorbate configurations for extended adsorbates onto a graphical lattice. The current work adopts machine learning methods to reach this goal. Clusters are automatically detected based on formalized, but intuitive chemical concepts. The corresponding energy coefficients for the cluster expansion are calculated by an inversion scheme. The potential of this method is demonstrated for the example of ethylene adsorption on Pd(111), for which we propose several expansions, depending on the graphical lattice. It turns out that for this system, the best description is obtained as a combination of single molecule patterns and a few coupling terms accounting for lateral interactions.
Theoretical approaches of semiconductor interfaces and of their defects : recent developments
Priester, C.
1991-04-01
We describe recent developments in theoretical studies of semiconductor interfaces from different points of view: the widely used effective mass approximation and its limitations are considered; different ways to calculate band offsets are described and compared; the interesting problem of the effect of strains (in particular biaxial strain due to lattice mismatch) is discussed; and several interface defects that have recently been studied are also considered.
A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing
Gustavo Miranda da Silva
2015-09-01
Full Text Available This work addresses an important issue regarding the performance of simultaneous test procedures: the construction of multiple tests that at the same time are optimal from a statistical perspective and that also yield logically-consistent results that are easy to communicate to practitioners of statistical methods. For instance, if hypothesis A implies hypothesis B, is it possible to create optimal testing procedures that reject A whenever they reject B? Unfortunately, several standard testing procedures fail in having such logical consistency. Although this has been deeply investigated under a frequentist perspective, the literature lacks analyses under a Bayesian paradigm. In this work, we contribute to the discussion by investigating three rational relationships under a Bayesian decision-theoretic standpoint: coherence, invertibility and union consonance. We characterize and illustrate through simple examples optimal Bayes tests that fulfill each of these requisites separately. We also explore how far one can go by putting these requirements together. We show that although fairly intuitive tests satisfy both coherence and invertibility, no Bayesian testing scheme meets the desiderata as a whole, strengthening the understanding that logical consistency cannot be combined with statistical optimality in general. Finally, we associate Bayesian hypothesis testing with Bayes point estimation procedures. We prove the performance of logically-consistent hypothesis testing by means of a Bayes point estimator to be optimal only under very restrictive conditions.
Amić, Ana; Marković, Zoran; Marković, Jasmina M Dimitrić; Jeremić, Svetlana; Lučić, Bono; Amić, Dragan
2016-12-01
Free radical scavenging and inhibitory potency against cyclooxygenase-2 (COX-2) by two abundant colon metabolites of polyphenols, i.e., 3-hydroxyphenylacetic acid (3-HPAA) and 4-hydroxyphenylpropionic acid (4-HPPA), were theoretically studied. Different free radical scavenging mechanisms are investigated in water and pentyl ethanoate as solvents. By considering the electronic properties of the scavenged free radicals, the hydrogen atom transfer (HAT) and sequential proton loss electron transfer (SPLET) mechanisms are found to be thermodynamically probable and competitive processes in both media. The Gibbs free energy change for the reaction of inactivation of free radicals indicates 3-HPAA and 4-HPPA as potent scavengers. Their reactivity toward free radicals was predicted to decrease as follows: hydroxyl > alkoxyls > phenoxyl ≈ peroxyls > superoxide. The demonstrated free radical scavenging potency of 3-HPAA and 4-HPPA, along with the high μM concentrations produced by microbial colon degradation of polyphenols, could enable at least in situ inactivation of free radicals. Docking analysis with structural forms of 3-HPAA and 4-HPPA indicates dianionic ligands as potent inhibitors of COX-2, an inducible enzyme involved in colon carcinogenesis. The obtained results suggest that suppressed levels of free radicals and COX-2 could be achieved by 3-HPAA and 4-HPPA, indicating that these compounds may contribute to a reduced risk of colon cancer development.
Fawzy, Manal; Nasr, Mahmoud; Helmi, Shacker; Nagy, Heba
2016-11-01
Biomass of Oryza sativa (OS) was tested for the removal of Cd(II) ions from synthetic and real wastewater samples. Batch experiments were conducted to investigate the effects of operating parameters on Cd(II) biosorption. Fourier transform infrared spectroscopy, scanning electron microscopy, and energy-dispersive X-ray spectroscopy were used to examine the surface characteristics of the Cd(II)-loaded biomass. The maximum removal efficiency of Cd(II) was 89.4% at the optimum pH of 6.0, biosorbent dose of 10.0 g L⁻¹, initial Cd(II) of 50 mg L⁻¹, and biosorbent particle size of 0.5 mm. The applicability of the Langmuir and Freundlich isotherms to the sorbent system implied the existence of both monolayer and heterogeneous surface conditions. Kinetic studies revealed that the adsorption of Cd(II) followed the pseudo-second-order model (r² = 0.99). On the theoretical side, an adaptive neuro-fuzzy inference system (ANFIS) was applied to select the operating parameter that most influences the Cd(II) biosorption process. Results from ANFIS indicated that pH was the most influential parameter affecting Cd(II) removal efficiency, indicating that the biomass of OS was strongly pH sensitive. Finally, the biomass was confirmed to adsorb Cd(II) from real wastewater samples with removal efficiency close to 100%. However, feasibility studies of such systems in large-scale applications remain to be investigated.
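The pseudo-second-order model the abstract refers to is usually fitted through its linearized form, t/qₜ = 1/(k₂qₑ²) + t/qₑ. A minimal sketch (the data below are synthetic, generated from assumed parameters, not the paper's measurements):

```python
def fit_pseudo_second_order(t, q):
    """Fit the pseudo-second-order sorption model via its linearized form
        t/q_t = 1/(k2*qe**2) + t/qe,
    a straight line of t/q_t against t with slope 1/qe and intercept
    1/(k2*qe**2). Returns the fitted (qe, k2)."""
    y = [ti / qi for ti, qi in zip(t, q)]
    n = len(t)
    mt = sum(t) / n
    my = sum(y) / n
    slope = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
             / sum((ti - mt) ** 2 for ti in t))
    intercept = my - slope * mt
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept  # since intercept = 1/(k2*qe**2)
    return qe, k2

# Hypothetical kinetics generated from qe = 4.5 mg/g, k2 = 0.02 g/(mg*min)
qe_true, k2_true = 4.5, 0.02
ts = [5, 10, 20, 40, 60, 90, 120]            # contact time, min
qs = [k2_true * qe_true**2 * ti / (1 + k2_true * qe_true * ti) for ti in ts]
qe_fit, k2_fit = fit_pseudo_second_order(ts, qs)
print(round(qe_fit, 3), round(k2_fit, 4))  # 4.5 0.02
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with real uptake data, the r² of the t/qₜ-vs-t line (0.99 in the abstract) measures how well the model holds.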
Chen, Yun; Yang, Hui
2016-12-01
In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
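The abstract's central measure, mutual information as a model-free indicator of nonlinear interdependence, can be sketched with a plug-in estimator on discrete toy variables (an illustration of the concept, not the authors' DP clustering pipeline):

```python
from collections import Counter
from math import log2

def mutual_information(a, b):
    """Plug-in estimate of I(A;B) in bits for two equally long
    discrete sequences, via I = H(A) + H(B) - H(A,B)."""
    def h(seq):
        n = len(seq)
        return -sum((c / n) * log2(c / n) for c in Counter(seq).values())
    return h(a) + h(b) - h(list(zip(a, b)))

# Toy variables: b is a deterministic but nonlinear function of a
# (their linear correlation is exactly zero), c is independent noise
a = [-2, -1, 0, 1, 2, -2, -1, 0, 1, 2]
b = [v * v for v in a]
c = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(a, b) > mutual_information(a, c))  # True
```

A Pearson-correlation-based clustering would treat a and b as unrelated, whereas the mutual-information measure used here detects their dependence, which is why it needs no assumption about the data structure.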
In situ solid-state NMR for heterogeneous catalysis: a joint experimental and theoretical approach.
Zhang, Weiping; Xu, Shutao; Han, Xiuwen; Bao, Xinhe
2012-01-07
In situ solid-state NMR is a well-established tool for investigating the structures of adsorbed reactants, intermediates and products on the surface of solid catalysts. The techniques allow identification both of active sites, such as acidic sites, and of reaction processes after introduction of adsorbates and reactants inside an NMR rotor under magic angle spinning (MAS). In situ solid-state NMR studies of reactions can be carried out in two ways, i.e. under batch-like or continuous-flow conditions. The former is low-cost and compatible with commercial instruments, while the latter more closely reproduces real catalytic reactions on solids. This critical review describes recent progress on in situ solid-state NMR techniques and their applications in heterogeneous catalysis under batch-like and continuous-flow conditions. Some typical probe molecules for detecting Brønsted and Lewis acidic sites by MAS NMR are summarized here. The catalytic reactions discussed in this review include methane aromatization, olefin selective oxidation and olefin metathesis on metal oxide-containing zeolites. By combining in situ MAS NMR spectroscopy with density functional theory (DFT) calculations, intermediates on the catalyst can be identified and the reaction mechanism revealed. Reaction kinetic analysis in the nanospace, instead of in the bulk state, can also be performed by employing laser-enhanced MAS NMR techniques in the in situ flow mode (163 references).
Georgopoulos, Stéfanos L.; Diniz, Renata; Yoshida, Maria I.; Speziali, Nivaldo L.; Santos, Hélio F. Dos; Junqueira, Geórgia Maria A.; de Oliveira, Luiz F. C.
2006-08-01
Experimental and theoretical investigations of the squarate salts [M₂(C₄O₄)] (M = Li, Na, K and Rb) were performed with the aim of correlating their structures, vibrational analysis and aromaticity. Powder X-ray diffraction data show that these compounds are not isostructural, indicating that metal-squarate interactions and hydrogen bonds to water molecules play a significant role in the crystal packing. The infrared and Raman assignments suggest an equalization of the C-C bond lengths with increasing counter-ion size. This result is interpreted as an enhancement of the electronic delocalization, and consequently of the degree of aromaticity, for salts with larger ions. Quantum mechanical calculations of structures, vibrational spectra and aromaticity indices are in agreement with the experimental findings, giving insights at the molecular level into the role played by distinct complexation modes in the observed properties. Comparison between our results and the literature, regarding molecular dynamics in different chemical environments, shows that aromaticity and hydrogen bonds are the most important forces driving the interactions in the solid structures of the squarate ion.
A Game-Theoretic approach to Fault Diagnosis of Hybrid Systems
Davide Bresolin
2011-06-01
Full Text Available Physical systems can fail. For this reason the problem of identifying and reacting to faults has received considerable attention in the control and computer science communities. In this paper we study the fault diagnosis problem for hybrid systems from a game-theoretical point of view. A hybrid system is a system mixing continuous and discrete behaviours that cannot be faithfully modeled either by a formalism with only continuous dynamics or by a formalism with only discrete dynamics. We use the well-known framework of hybrid automata for modeling hybrid systems, and we define a Fault Diagnosis Game on them, with two players: the environment and the diagnoser. The environment controls the evolution of the system and chooses whether and when a fault occurs. The diagnoser observes the external behaviour of the system and announces whether a fault has occurred or not. Existence of a winning strategy for the diagnoser implies that faults can be detected correctly, while computing such a winning strategy corresponds to implementing a diagnoser for the system. We show how to determine the existence of a winning strategy, and how to compute it, for some decidable classes of hybrid automata such as o-minimal hybrid automata.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)
2016-11-01
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
Nicolae Istudor
2016-08-01
Full Text Available The companies involved in the energy sector must reinvent themselves to be innovative and adaptable to contemporary environmental changes. The promotion of renewable energy in rural communities is a great challenge for these companies. They should focus on improving the environment scanning actions and the knowledge management (KM system and enhancing the collective intelligence to avoid the loss of information, to foster innovation, and to maintain a competitive advantage. To achieve these goals, energy companies require appropriate management tools and practices. The purpose of this study is to propose a theoretical framework of organizational intelligence (OI supported by a cross-perspective analysis of various aspects: economic intelligence (EI and KM practices, entropy processes, and organizational enablers. A pilot investigation for testing the framework in the case of Transelectrica S.A. has been elaborated. The findings reveal that the elements of the OI framework are embedded in Transelectrica’s system and they need to be further developed. As an intelligent company acting in the Romanian energy market, Transelectrica has a higher potential to promote projects in the renewable energy sector. The main conclusion highlights that OI is a multidimensional construct that provides the organization the ability to deal with environmental challenges in a “new economy”.
Laser cooling of MgCl and MgBr in theoretical approach.
Wan, Mingjie; Shao, Juxiang; Gao, Yufeng; Huang, Duohui; Yang, Junsheng; Cao, Qilong; Jin, Chengguo; Wang, Fanhou
2015-07-14
Ab initio calculations for three low-lying electronic states (X(2)Σ(+), A(2)Π, and 2(2)Π) of MgCl and MgBr molecules, including spin-orbit coupling, are performed using the multi-reference configuration interaction plus Davidson correction method. The calculations involve all-electron basis sets and the Douglas-Kroll scalar relativistic correction. The spectroscopic parameters agree well with available theoretical and experimental data. Highly diagonally distributed Franck-Condon factors f00 for the A(2)Π3/2,1/2 (υ' = 0) → X(2)Σ(+)1/2 (υ″ = 0) transitions are determined for both MgCl and MgBr molecules. Radiative lifetimes τ of the A(2)Π3/2,1/2 (υ' = 0) states suitable for rapid laser cooling are also obtained. The proposed scheme drives the A(2)Π3/2 (υ' = 0) → X(2)Σ(+)1/2 (υ″ = 0) transition using three wavelengths (a main pump laser λ00 and two repumping lasers λ10 and λ21). These results indicate the feasibility of laser cooling MgCl and MgBr molecules.
Margins of freedom: a field-theoretic approach to class-based health dispositions and practices.
Burnett, Patrick John; Veenstra, Gerry
2017-03-23
Pierre Bourdieu's theory of practice situates social practices in the relational interplay between experiential mental phenomena (habitus), resources (capitals) and objective social structures (fields). When applied to class-based practices in particular, the overarching field of power within which social classes are potentially made manifest is the primary field of interest. Applying relational statistical techniques to original survey data from Toronto and Vancouver, Canada, we investigated whether smoking, engaging in physical activity and consuming fruit and vegetables are dispersed in a three-dimensional field of power shaped by economic and cultural capitals and cultural dispositions and practices. We find that aesthetic dispositions and flexibility of developing and established dispositions are associated with positioning in the Canadian field of power and embedded in the logics of the health practices dispersed in the field. From this field-theoretic perspective, behavioural change requires the disruption of existing relations of harmony between the habitus of agents, the fields within which the practices are enacted and the capitals that inform and enforce the mores and regularities of the fields. The three-dimensional model can be explored at: http://relational-health.ca/margins-freedom.
Finite element methods for engineering sciences. Theoretical approach and problem solving techniques
Chaskalovic, J. [Ariel University Center of Samaria (Israel); Pierre and Marie Curie (Paris VI) Univ., 75 (France). Inst. Jean le Rond d'Alembert
2008-07-01
This self-tutorial offers a concise yet thorough grounding in the mathematics necessary for successfully applying FEMs to practical problems in science and engineering. The unique approach first summarizes and outlines the finite-element mathematics in general and then, in the second and major part, formulates problem examples that clearly demonstrate the techniques of functional analysis via numerous and diverse exercises. The solutions of the problems are given directly afterwards. Using this approach, the author motivates and encourages the reader to actively acquire the knowledge of finite-element methods instead of passively absorbing the material, as in most standard textbooks. The enlarged English-language edition, based on the original French, also contains a chapter on the approximation steps derived from the description of nature with differential equations and then applied to the specific model to be used. Furthermore, an introduction to tensor calculus using distribution theory offers further insight for readers with different mathematical backgrounds. (orig.)
Theoretical aspects of self-assembly of proteins: A Kirkwood-Buff-theory approach
Ben-Naim, Arieh
2013-06-01
A new approach to the problem of self-assembly of proteins induced by temperature, pressure, or changes in solute concentration is presented. The problem is formulated in terms of Le Chatelier principle, and a solution is sought in terms of the Kirkwood-Buff theory of solutions. In this article we focus on the pressure and solute effects on the association-dissociation equilibrium. We examine the role of both hydrophobic and hydrophilic effects. We argue that the latter are more important than the former. The solute effect, on the other hand, depends on the preferential solvation of the monomer and the aggregate with respect to solvent and co-solvent molecules. An experimental approach based on model compounds to study these effects is suggested.
Social representations: a theoretical approach in health - doi:10.5020/18061230.2011.p80
Isaiane Santos Bittencourt
2012-01-01
Objective: To present the theory of social representations, situating its epistemology and introducing the basic concepts of its approach as a structural unit of knowledge for health studies. Justification: The use of this theory comes from the need to understand social events under the lens of the meanings constructed by the community. Data Synthesis: This was a descriptive study of literature review, which used as a source of data collection the classical authors of social representations, supported by articles from an electronic search at the Virtual Health Library (VHL). The definition and discussion of the collected data enabled the introduction of two themes, concerning the history and epistemology of representations and the structural approach of representations in health studies. Conclusion: This review allowed highlighting the importance of locating the objects of study with regard to contextual issues of individual and collective histories, valuing the plurality of relations, to come closer to the reality that is represented by the subjects.
Fujii, Kanji; Shimomura, Takashi
2004-01-01
As a possible approach to the neutrino oscillation on the basis of quantum field theory, the expectation values of the flavor-neutrino currents are investigated by employing the finite-time transition matrix in the interaction representation. Such expectation values give us in the simplest form a possible way of treating the neutrino oscillation without recourse to any one flavor-neutrino states. The present paper is devoted to presenting the formulation and the main structures of the relevant expectation values.
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. In the medium of multiple simulations, the dispersion variances of blocks can be thought to capture technical uncertainties. However, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations, with minimization of interpolation variance, and drill hole simulations, with maximization of interpolation variance. The two spaces interact to find a minmax solution.
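The minmax structure described above can be illustrated on a toy stand-in problem. The sketch below replaces the paper's two-space GA with plain random search and uses invented block and candidate-site coordinates, with squared distance to the nearest drill site standing in for interpolation variance; it demonstrates only the inner-maximization / outer-minimization pattern.

```python
import random

random.seed(7)

# Hypothetical stand-in data: block centroids and candidate drill sites
# scattered in a unit square (coordinates are arbitrary).
blocks = [(random.random(), random.random()) for _ in range(40)]
candidates = [(random.random(), random.random()) for _ in range(15)]

def worst_case(sites):
    """Inner maximization: squared distance of the worst-served block."""
    return max(min((bx - sx) ** 2 + (by - sy) ** 2 for sx, sy in sites)
               for bx, by in blocks)

def random_search(k=3, iters=300):
    """Outer minimization over k-site drill configurations (GA stand-in)."""
    return min((random.sample(candidates, k) for _ in range(iters)),
               key=worst_case)

best = random_search()
print(len(best), round(worst_case(best), 3))
```

The real method would replace the proxy objective with interpolation variances computed from simulations, and the random search with the evolving GA populations in the two spaces.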
Lemwang Chuhwanglim; Yahya Wijaya; Mark Woodward
2016-01-01
Religion in society has been a complex subject of study for both academic and non-academic disciplines. Defining religion has been an issue since the beginning of the world religions, and it will continue to remain so in society unless world religions avoid imposing a definition of religion from the world religions’ perspective. This research aims to study how religion has been defined by many scholars theologically, politically, culturally, and contextually, and how such different approaches...
Cristian Gustavo Pinto-Cortez
2014-01-01
Child sexual abuse is a serious public health problem and a violation of the human rights of children and adolescents. Prolific research has been developed to determine the magnitude of the problem, its psychological effects, risk factors, and protective factors. In this context, the resilience approach becomes important by explaining the mechanisms that promote positive adaptation to adversity. The first part of this paper discusses the analysis of the concept of resilience and its various st...
Prediction of the dollar to the ruble rate. A system-theoretic approach
Borodachev, Sergey M.
2017-07-01
A simple state-space model of dollar-rate formation is proposed, based on changes in oil prices and some mechanisms of money transfer between the monetary and stock markets. A comparison of predictions by the input-output model and the state-space model is made. It concludes that, with proper use of statistical data (a Kalman filter), the second approach provides more adequate predictions of the dollar rate.
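A state-space model of this kind is typically estimated recursively. The scalar Kalman filter below is a generic sketch of that mechanism only, not the paper's rate model; the noise levels `q`, `r` and the toy observation series are illustrative placeholders.

```python
def kalman_1d(observations, q=0.01, r=0.25):
    """Track a latent level x_t (random-walk state) from noisy observations."""
    x, p = observations[0], 1.0      # initial state estimate and its variance
    estimates = []
    for z in observations:
        p += q                       # predict: process noise inflates variance
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # correct with the measurement residual
        p *= 1.0 - k                 # posterior variance shrinks
        estimates.append(x)
    return estimates

noisy = [60.0, 61.0, 59.5, 60.5, 60.2, 59.8, 60.1]  # made-up rate quotes
smoothed = kalman_1d(noisy)
```

A one-step-ahead forecast under the random-walk model is simply the last filtered estimate, `smoothed[-1]`.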
Optimal and Sustainable Exchange Rate Regimes: A Simple Game-Theoretic Approach
Masahiro Kawai
1992-01-01
This paper examines the question of how to design an optimal and sustainable exchange rate regime in a world economy of two interdependent countries. It develops a Barro-Gordon type two-country model and compares noncooperative equilibria under different assumptions of monetary policy credibility and different exchange rate regimes. Using a two-stage game approach to the strategic choice of policy instruments, it identifies optimal (in a Pareto sense) and sustainable (self-enforcing) exchang...
An information theoretic approach for generating an aircraft avoidance Markov Decision Process
Weinert, Andrew J.
Developing a collision avoidance system that can meet the safety standards required of commercial aviation is challenging. A dynamic programming approach to collision avoidance has been developed to optimize and generate logics that are robust to the complex dynamics of the national airspace. The current approach represents the aircraft avoidance problem as a Markov Decision Process and independently optimizes horizontal and vertical maneuver avoidance logics. This is a result of the current memory requirements for each logic: simply combining the logics would result in a significantly larger representation, and the "curse of dimensionality" makes it computationally inefficient and infeasible to optimize this larger representation. However, existing and future collision avoidance systems have mostly defined the decision process by hand. In response, a simulation-based framework was built to better understand how each potential state quantifies the aircraft avoidance problem with regard to safety and operational components. The framework leverages recent advances in signal processing and databases, while enabling the highest fidelity analysis of Monte Carlo aircraft encounter simulations to date. This framework enabled the calculation of how well each state of the decision process quantifies the collision risk, along with the associated memory requirements. Using this analysis, a collision avoidance logic that leverages both horizontal and vertical actions was built and optimized using this simulation-based approach.
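The dynamic-programming core of such an approach can be sketched with a tiny Markov Decision Process. Everything below (the separation states, drift probabilities, and costs) is invented for illustration; it shows only the value-iteration mechanism, not an actual avoidance logic.

```python
def value_iteration(n_states=4, gamma=0.95, iters=300):
    """Toy avoidance MDP: state = vertical separation in {0..3}, 0 = conflict.
    Actions climb (+1), hold (0), descend (-1); an intruder closes the gap
    with probability 0.3 each step. All numbers are illustrative."""
    conflict_cost = 100.0                      # penalty for reaching separation 0
    action_cost = {-1: 1.0, 0: 0.0, 1: 1.0}   # maneuvering is mildly costly
    drift = [(0, 0.7), (-1, 0.3)]              # (separation change, probability)
    clamp = lambda s: max(0, min(n_states - 1, s))
    v = [0.0] * n_states
    for _ in range(iters):
        # Bellman backup: pick the action minimizing expected discounted cost.
        v = [min(action_cost[a]
                 + sum(p * ((conflict_cost if clamp(s + a + d) == 0 else 0.0)
                            + gamma * v[clamp(s + a + d)])
                       for d, p in drift)
                 for a in (-1, 0, 1))
             for s in range(n_states)]
    return v

costs = value_iteration()
print(costs[0] > costs[-1])  # -> True: closer separations carry higher cost
```

In the real system the optimized cost-to-go table (and the induced policy) is what the avoidance logic stores, which is why state-space size drives the memory requirements discussed above.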
CHEN Chun-bao; WANG Li-ya
2008-01-01
Many existing product family design methods assume a given platform; however, it is not an intuitive task to select the platform and unique variables within a product family. Meanwhile, most approaches are single-platform methods, in which design variables are either shared across all product variants or not at all. In multiple-platform design, platform variables can have special values with regard to a subset of product variants within the product family, and offer opportunities for superior overall design. An information theoretical approach incorporating fuzzy clustering and Shannon's entropy was proposed for platform variable selection in multiple-platform product families. A 2-level chromosome genetic algorithm (2LCGA) was proposed and developed for optimizing the corresponding product family in a single stage, simultaneously determining the optimal settings for the product platform and unique variables. The single-stage approach can yield improvements in the overall performance of the product family compared with two-stage approaches, in which the first stage involves determining the best settings for the platform and values of unique variables are found for each product in the second stage. An example of the design of a family of universal motors was used to verify the proposed method.
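The entropy side of such an approach can be sketched simply: a design variable whose values barely differ across product variants has low Shannon entropy and is a natural platform candidate. The variable names and values below are hypothetical, and this omits the paper's fuzzy clustering and 2LCGA stages entirely.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of values."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

variants = {                                    # hypothetical motor family
    "stack_length": [20, 25, 20, 30, 25, 35],   # varies widely -> keep unique
    "outer_radius": [26, 26, 26, 26, 26, 28],   # nearly common -> platform
}
for name, vals in sorted(variants.items()):
    print(name, round(shannon_entropy(vals), 3))
```

Ranking variables by this entropy (low to high) gives a first-cut split between platform and unique variables before any optimization.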
Grande, Maribel; Stergiotou, Iosifina; Borobio, Virginia; Sabrià, Joan; Soler, Anna; Borrell, Antoni
2017-07-01
A new maternal age-dependent method to estimate absolute excess risks of trisomy 21, either after a previous trisomy 21 (homotrisomy) or after another trisomy (heterotrisomy), is proposed, to be added to the risk estimated by conventional screening methods. The excess risk at term for a subsequent trisomy 21 was calculated from the midtrimester risks reported by Morris et al., decreasing from 0.49% at 20 years to 0.01% at 46 years at the index pregnancy. The excess risk after a previous uncommon trisomy was derived from data reported by Warburton et al., decreasing from 0.37% at 20 years to 0.01% at 50 years.
Wang, Jian-Xun; Xiao, Heng
2016-01-01
Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with the maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. In this work, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in ...
Peter Kastberg
2015-03-01
The market for organic foods is growing; however, the proportion of consumers buying organic foods is still considered low. Research shows that a significant barrier to consumers purchasing more organic foods is lack of information. This leads the relevant body of research to call for better communication around organic foods. The same body of research, however, neither questions what good communication surrounding organic foods is, nor what would make it better. Applying the communication theoretical formats of transmission, interaction, and coaction, respectively, to instances of organic communication activities, I discuss to what extent each format encourages consumer participation and learning. Transmission, typically in the form of monologuous mass communication, is cost effective. It is also a format that bars a sender, e.g., a producer or farmer, from gauging deposits in the consumer, e.g., understanding the message, trusting the sender, etc. Interaction, typically in the form of dialoguous encounters, integrates feedback into communication, allowing the sender to appreciate the level of understanding, trust, etc., which the communicative effort has given rise to, albeit at a higher price in terms of money, time, and manpower. In the format of coaction, typically in the form of co-operative endeavors, the deposit is a matter of what is co-constructed by the participants, e.g., understanding, trust, etc. Coaction thus satisfies the organic communicator's craving for involving the consumer, and because food is a low-involvement commodity, this is critical. But emancipating the consumer comes at a price. First of all, coactional communication is dependent on highly motivated participants, and second, coactional communication is difficult if not impossible to control. Informed by these insights, I present an in-depth, critical discussion of the promises and pitfalls of how multicriteria assessments may be communicated and co-constructed on a
Foreign Scholars' Theoretical Approaches to Using Social Networks as Educational Strategies
Pazyura Natalia
2017-06-01
Modern trends in the development of information and communication technologies change many aspects of the process of education: from the role of participants to the forms and methods of knowledge delivery. ICTs make it possible to develop students' creative potential. The emergence of online social groups was an important event in the sphere of communication, but with time they began to be used by both teachers and students not only for communication but also to achieve learning goals. Without any doubt, skillful use of social networks allows teachers to communicate with students at a modern technological level and make classes more attractive and effective. An efficient teacher can prove that social networks are not only a means of entertainment and communication with friends but also a working tool. The main aim of foreign language teaching is students' communicative activity, or practical use of a target language. The teacher is to activate every student's activity in the process of learning and create situations for their creativity. The main objective of foreign language teaching is to educate an individual who is able to communicate and continue education, including self-education. Different theories lay the basis for the study of social networks' influence on different aspects of human activity and, particularly, education. The main ones are sociocultural theory and social constructivist theory. According to sociocultural theory, man is an integral part of the world they live in, so students are not independent in their activities. Social constructivist theory recognizes that students act in a certain environment, which under certain conditions enlarges their practical knowledge. These theories are focused on the effect of social interaction, language, and culture in the learning process. Thus, the theoretical basis proves the positive effect of social networks; namely, they enhance substantial interaction in the educational environment of social groups as
Perez-Davis, M.E.; Gaier, J.R. [Lewis Research Center, Cleveland, OH (United States)
1994-09-01
In the foreseeable future, an expedition may be undertaken to explore the planet Mars. Some of the power source options being considered for such a mission are photovoltaics, regenerative fuel cells and nuclear reactors. In addition to electrical power requirements, environmental conditions en route to Mars, in the planetary orbit and on the Martian surface must be simulated and studied in order to anticipate and solve potential problems. Space power systems components such as photovoltaic arrays, radiators, and solar concentrators may be vulnerable to degradation in the Martian environment. Natural characteristics of Mars which may pose a threat to surface power systems include high velocity winds, dust, ultraviolet radiation, large daily variations in temperature, reaction to components of the soil, atmosphere and atmospheric condensates as well as synergistic combinations. Most of the current knowledge of the characteristics of the Martian atmosphere and soil composition was obtained from the Viking 1 and 2 missions in 1976. This paper presents a theoretical study used to assess the effects of the Martian atmospheric conditions on the power systems components. A computer program written at NASA Lewis Research Center in 1961 to 1962 for combustion research that uses a free-energy minimization technique was used to calculate chemical equilibrium for assigned thermodynamic states of temperature and pressure. The power system component materials selected for this study include: Silicon dioxide, silicon, carbon, copper, and titanium. Combinations of environments and materials considered in this study include: (1) Mars atmosphere with power surface material, (2) Mars atmosphere and dust component with power surface material, (3) Mars atmosphere and hydrogen peroxide or superoxide with power system material.
Perez-Davis, Marla E.; Gaier, James R.
1990-01-01
In the foreseeable future, an expedition may be undertaken to explore the planet Mars. Some of the power source options being considered for such a mission are photovoltaics, regenerative fuel cells and nuclear reactors. In addition to electrical power requirements, environmental conditions en route to Mars, in the planetary orbit and on the Martian surface must be simulated and studied in order to anticipate and solve potential problems. Space power systems components such as photovoltaic arrays, radiators, and solar concentrators may be vulnerable to degradation in the Martian environment. Natural characteristics of Mars which may pose a threat to surface power systems include high velocity winds, dust, ultraviolet radiation, large daily variation in temperature, reaction to components of the soil, atmosphere and atmospheric condensates as well as synergistic combinations. Most of the current knowledge of the characteristics of the Martian atmosphere and soil composition was obtained from the Viking 1 and 2 missions in 1976. A theoretical study is presented which was used to assess the effects of the Martian atmospheric conditions on the power systems components. A computer program written at NASA-Lewis for combustion research that uses a free energy minimization technique was used to calculate chemical equilibrium for assigned thermodynamic states of temperature and pressure. The power system component materials selected for this study include: silicon dioxide, silicon, carbon, copper, and titanium. Combinations of environments and materials considered include: (1) Mars atmosphere with power surface material, (2) Mars atmosphere and dust component with power surface material, and (3) Mars atmosphere and hydrogen peroxide or superoxide with power system material. The chemical equilibrium calculations were performed at a composition ratio (oxidant to reactant) of 100. The temperature for the silicon dioxide material and silicon, which simulate
Horga, Guillermo; Maia, Tiago V
2012-01-01
Controlled processing is often referred to as "voluntary" or "willful" and therefore assumed to depend entirely on conscious processes. Recent studies using subliminal-priming paradigms, however, have started to question this assumption. Specifically, these studies have shown that subliminally presented stimuli can induce adjustments in control. Such findings are not immediately reconcilable with the view that conscious and unconscious processes are separate, with each having its own neural substrates and modus operandi. We propose a different theoretical perspective that suggests that conscious and unconscious processes might be implemented by the same neural substrates and largely perform the same neural computations, with the distinction between the two arising mostly from the quality of representations (although not all brain regions may be capable of supporting conscious representations). Thus, stronger and more durable neuronal firing would give rise to conscious processes; weaker or less durable neuronal firing would remain below the threshold of consciousness but still be causally efficacious in affecting behavior. We show that this perspective naturally explains the findings that subliminally presented primes induce adjustments in cognitive control. We also highlight an important gap in this literature: whereas subliminal-priming paradigms demonstrate that an unconsciously presented prime is sufficient to induce adjustments in cognitive control, they are uninformative about what occurs under standard task conditions. In standard tasks, the stimuli themselves are consciously perceived; however, the extent to which the processes that lead to adjustments in control are conscious or unconscious remains unexplored. We propose a new paradigm suitable to investigate these issues and to test important predictions of our hypothesis that conscious and unconscious processes both engage the same control machinery, differing mostly in the quality of the representations.
An information theoretic approach for non-rigid image registration using voxel class probabilities.
D'Agostino, Emiliano; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul
2006-06-01
We propose two information theoretic similarity measures that allow us to incorporate tissue class information in non-rigid image registration. The first measure assumes that tissue class probabilities have been assigned to each of the images to be registered by prior segmentation of both of them. One image is then non-rigidly deformed to match the other such that the fuzzy overlap of corresponding voxel object labels becomes similar to the ideal case whereby the tissue probability maps of both images are identical. Image similarity is assessed during registration by the divergence between the ideal and actual joint class probability distributions of both images. A second registration measure is proposed that applies in the case where a segmentation is available for only one of the images, for instance an atlas image that is to be matched to a study image to guide the segmentation thereof. Intensities in one image are matched to the fuzzy class labels in the other image by minimizing the conditional entropy of the intensities in the first image given the class labels in the second image. We derive analytic expressions for the gradient of each measure with respect to individual voxel displacements to derive a force field that drives the registration process, which is regularized by a viscous fluid model. The performance of the class-based measures is evaluated in the context of non-rigid inter-subject registration and atlas-based segmentation of MR brain images and compared with maximization of mutual information using only intensity information. Our results demonstrate that incorporation of class information in the registration measure significantly improves the overlap between corresponding tissue classes after non-rigid matching. The methods proposed here open new perspectives for integrating segmentation and registration in a single process, whereby the output of one is used to guide the other.
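The second measure can be illustrated on a toy 1-D example: the conditional entropy of intensities given class labels drops when the segmentation lines up with the intensity bands. The data below are invented; a real implementation would operate on fuzzy 3-D probability maps rather than hard labels.

```python
import math
from collections import Counter

def conditional_entropy(intensities, labels):
    """H(intensity | label) in bits, from the joint histogram of hard labels
    and intensities. Lower values mean labels predict intensities well."""
    n = len(intensities)
    joint = Counter(zip(labels, intensities))
    marg = Counter(labels)
    return -sum(c / n * math.log2(c / marg[l]) for (l, i), c in joint.items())

img     = [10, 11, 10, 50, 51, 50, 90, 91]  # three tissue-like intensity bands
aligned = [0, 0, 0, 1, 1, 1, 2, 2]          # labels matching the bands
shifted = [0, 0, 1, 1, 1, 2, 2, 0]          # misregistered labels
print(conditional_entropy(img, aligned) < conditional_entropy(img, shifted))  # -> True
```

Minimizing this quantity over candidate deformations is what drives the atlas-to-study registration in the second measure.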
Arturo Araujo
Many cancers are aneuploid. However, the precise role that chromosomal instability plays in the development of cancer and in the response of tumours to treatment is still hotly debated. Here, to explore this question from a theoretical standpoint, we have developed an agent-based model of tissue homeostasis in which to test the likely effects of whole chromosome mis-segregation during cancer development. In stochastic simulations, chromosome mis-segregation events at cell division lead to the generation of a diverse population of aneuploid clones that over time exhibit hyperplastic growth. Significantly, the course of cancer evolution depends on genetic linkage, as the structure of chromosomes lost or gained through mis-segregation events and the level of genetic instability function in tandem to determine the trajectory of cancer evolution. As a result, simulated cancers differ in their level of genetic stability and in their growth rates. We used this system to investigate the consequences of these differences in tumour heterogeneity for anti-cancer therapies based on surgery and anti-mitotic drugs that selectively target proliferating cells. As expected, simulated treatments induce a transient delay in tumour growth, and reveal a significant difference in the efficacy of different therapy regimes in treating genetically stable and unstable tumours. These data support clinical observations in which a poor prognosis is correlated with a high level of chromosome mis-segregation. However, stochastic simulations run in parallel also exhibit a wide range of behaviours, and the response of individual simulations (equivalent to single tumours) to anti-cancer therapy proves extremely variable. The model therefore highlights the difficulties of predicting the outcome of a given anti-cancer treatment, even in cases in which it is possible to determine the genotype of the entire set of cells within the developing tumour.
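The core stochastic ingredient, whole-chromosome mis-segregation at division, can be sketched in a few lines. This is not the authors' agent-based tissue model: the karyotype size, mis-segregation probability, and generation count below are arbitrary choices for illustration.

```python
import random

random.seed(1)

def divide(karyotype, p):
    """One cell division: with probability p, one chromosome mis-segregates,
    so one daughter gains a copy and the other loses one."""
    a, b = list(karyotype), list(karyotype)
    if random.random() < p:
        i = random.randrange(len(karyotype))
        a[i] += 1
        b[i] = max(0, b[i] - 1)
    return a, b

def simulate(p, generations=8):
    """Grow a clone from one euploid cell (two copies of 4 chromosomes)."""
    cells = [[2, 2, 2, 2]]
    for _ in range(generations):
        cells = [d for c in cells for d in divide(c, p)]
    return cells

aneuploid = sum(c != [2, 2, 2, 2] for c in simulate(p=0.2))
stable = sum(c != [2, 2, 2, 2] for c in simulate(p=0.0))
print(stable, aneuploid > 0)
```

Even this stripped-down version reproduces the qualitative point above: repeated runs with the same mis-segregation rate yield widely varying aneuploid populations.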
Linear game non-contextuality and Bell inequalities—a graph-theoretic approach
Rosicka, M.; Ramanathan, R.; Gnaciński, P.; Horodecki, K.; Horodecki, M.; Horodecki, P.; Severini, S.
2016-04-01
We study the classical and quantum values of a class of one- and two-party unique games, that generalizes the well-known XOR games to the case of non-binary outcomes. In the bipartite case the generalized XOR (XOR-d) games we study are a subclass of the well-known linear games. We introduce a ‘constraint graph’ associated to such a game, with the constraints defining the game represented by an edge-coloring of the graph. We use the graph-theoretic characterization to relate the task of finding equivalent games to the notion of signed graphs and switching equivalence from graph theory. We relate the problem of computing the classical value of single-party anti-correlation XOR games to finding the edge bipartization number of a graph, which is known to be MaxSNP hard, and connect the computation of the classical value of XOR-d games to the identification of specific cycles in the graph. We construct an orthogonality graph of the game from the constraint graph and study its Lovász theta number as a general upper bound on the quantum value even in the case of single-party contextual XOR-d games. XOR-d games possess appealing properties for use in device-independent applications such as randomness of the local correlated outcomes in the optimal quantum strategy. We study the possibility of obtaining quantum algebraic violation of these games, and show that no finite XOR-d game possesses the property of pseudo-telepathy leaving the frequently used chained Bell inequalities as the natural candidates for such applications. We also show this lack of pseudo-telepathy for multi-party XOR-type inequalities involving two-body correlation functions.
A theoretical approach to predict the performance of chevron-type plate heat exchangers
Martin, H. [Karlsruhe Univ. (T.H.) (Germany). Thermische Verfahrenstechnik]
1996-08-01
Manufacturers of plate and frame heat exchangers nowadays mainly offer plates with chevron (or herringbone) corrugation patterns. The inclination angle {phi} of the crests and furrows of that sinusoidal pattern relative to the main flow direction has been shown to be the most important design parameter with respect to fluid friction and heat transfer. Two kinds of flow may exist in the gap between two plates (pressed together with the chevron pattern of the second plate turned into the opposite direction): The crossing flow of small substreams following the furrows of the first and the second plate, respectively, over the whole width of the corrugation pattern, dominating at lower inclination angles (lower pressure drop); and the wavy longitudinal flow between two vertical rows of contact points, prevailing at high {phi} angles (high pressure drop). The combined effects of the longer flow paths along the furrows, the crossing of the substreams, flow reversal at the edges of the chevron pattern, and the competition between crossing and longitudinal flow are taken into account to derive a relatively simple but physically reasonable equation for the friction factor {xi} as a function of the angle {phi} and the Reynolds number Re. Heat-transfer coefficients are then obtained from a theoretical equation for developing thermal boundary layers in fully developed laminar or turbulent channel flow - the generalized Leveque equation - predicting heat-transfer coefficients as being proportional to ({xi}.Re{sup 2}){sup 1/3}. It is shown, by comparison, that this prediction is in good agreement with experimental observations quoted in the literature. (orig.)
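The scaling at the heart of this abstract, heat-transfer coefficients proportional to (ξ·Re²)^(1/3) via the generalized Lévêque equation, can be sketched numerically. This is a hedged illustration only: the prefactor `c` and any friction-factor values fed in are placeholders, not Martin's fitted correlation constants.

```python
def nusselt_leveque(xi, re, pr, c=0.4):
    """Generalized Leveque scaling: Nu ~ c * (xi * Re^2 * Pr)**(1/3).
    xi: friction factor, re: Reynolds number, pr: Prandtl number.
    c is an illustrative prefactor, not Martin's fitted constant."""
    return c * (xi * re**2 * pr) ** (1.0 / 3.0)

# consequence of the 1/3 exponent: doubling the friction factor at
# fixed Re and Pr raises the predicted Nusselt number by 2**(1/3)
ratio = nusselt_leveque(0.2, 1000.0, 5.0) / nusselt_leveque(0.1, 1000.0, 5.0)
```

The cube-root dependence explains why the higher-pressure-drop chevron angles buy comparatively modest heat-transfer gains.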
Costa, Dominique; Garrain, Pierre-Alain; Baaden, Marc
2013-04-01
Interactions between biomolecules and inorganic surfaces play an important role in natural environments and in industry, including a wide variety of conditions: marine environment, ship hulls (fouling), water treatment, heat exchange, membrane separation, soils, mineral particles at the earth's surface, hospitals (hygiene), art and buildings (degradation and biocorrosion), paper industry (fouling) and more. To better control the first steps leading to adsorption of a biomolecule on an inorganic surface, it is mandatory to understand the adsorption mechanisms of biomolecules of several sizes at the atomic scale, that is, the nature of the chemical interaction between the biomolecule and the surface and the resulting biomolecule conformations once adsorbed at the surface. This remains a challenging and unsolved problem. Here, we review the state of the art in experimental and theoretical approaches. We focus on metallic biomaterial surfaces such as TiO2 and stainless steel, mentioning some remarkable results on hydroxyapatite. Experimental techniques include atomic force microscopy, surface plasmon resonance, quartz crystal microbalance, X-ray photoelectron spectroscopy, fluorescence microscopy, polarization modulation infrared reflection absorption spectroscopy, sum frequency generation and time-of-flight secondary ion mass spectrometry. Theoretical models range from detailed quantum mechanical representations to classical force-field-based approaches.
An improved theoretical approach to the empirical corrections of density functional theory
Lii, Jenn-Huei; Hu, Ching-Han
2012-02-01
An empirical correction to density functional theory (DFT) has been developed in this study. The approach, called correlation corrected atomization-dispersion (CCAZD), involves short- and long-range terms. Short-range correction consists of bond (1,2-) and angle (1,3-) interactions, which remedies the deficiency of DFT in describing the proto-branching stabilization effects. Long-range correction includes a Buckingham potential function aiming to account for the dispersion interactions. The empirical corrections of DFT were parameterized to reproduce reported ΔHf values of the training set containing alkane, alcohol and ether molecules. The ΔHf of the training set molecules predicted by the CCAZD method combined with two different DFT methods, B3LYP and MPWB1K, with a 6-31G* basis set agreed well with the experimental data. For 106 alkane, alcohol and ether compounds, the average absolute deviations (AADs) in ΔHf were 0.45 and 0.51 kcal/mol for B3LYP- and MPWB1K-CCAZD, respectively. Calculations of isomerization energies, rotational barriers and conformational energies further validated the CCAZD approach. The isomerization energies improved significantly with the CCAZD treatment. The AADs for 22 energies of isomerization reactions were decreased from 3.55 and 2.44 to 0.55 and 0.82 kcal/mol for B3LYP and MPWB1K, respectively. This study also provided predictions of MM4, G3, CBS-QB3 and B2PLYP-D for comparison. The final test of the CCAZD approach on the calculation of the cellobiose analog potential surface also showed promising results. This study demonstrated that DFT calculations with CCAZD empirical corrections achieved very good agreement with reported values for various chemical reactions with a basis set as small as 6-31G*.
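The long-range term mentioned above is a Buckingham-type potential: exponential repulsion plus an attractive 1/r^6 dispersion tail. As a generic illustration (the A, B, C parameters below are arbitrary, not the paper's fitted CCAZD values):

```python
import math

def buckingham(r, a=1000.0, b=3.0, c=10.0):
    """Buckingham pair potential E(r) = A*exp(-B*r) - C/r**6:
    exponential repulsion plus an attractive 1/r^6 dispersion term.
    Parameters are illustrative placeholders, not fitted values."""
    return a * math.exp(-b * r) - c / r**6
```

At short range the exponential wall dominates (positive energy), while at long range the dispersion term gives the weak attraction that corrections of this kind add to DFT.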
Theoretical approaches to the steady-state statistical physics of interacting dissipative units
Bertin, Eric
2017-02-01
The aim of this review is to provide a concise overview of some of the generic approaches that have been developed to deal with the statistical description of large systems of interacting dissipative ‘units’. The latter notion includes, e.g. inelastic grains, active or self-propelled particles, bubbles in a foam, low-dimensional dynamical systems like driven oscillators, or even spatially extended modes like Fourier modes of the velocity field in a fluid. We first review methods based on the statistical properties of a single unit, starting with elementary mean-field approximations, either static or dynamic, that describe a unit embedded in a ‘self-consistent’ environment. We then discuss how this basic mean-field approach can be extended to account for spatial dependences, in the form of space-dependent mean-field Fokker-Planck equations, for example. We also briefly review the use of kinetic theory in the framework of the Boltzmann equation, which is an appropriate description for dilute systems. We then turn to descriptions in terms of the full N-body distribution, starting from exact solutions of one-dimensional models, using a matrix-product ansatz method when correlations are present. Since exactly solvable models are scarce, we also present some approximation methods which can be used to determine the N-body distribution in a large system of dissipative units. These methods include the Edwards approach for dense granular matter and the approximate treatment of multiparticle Langevin equations with colored noise, which models systems of self-propelled particles. Throughout this review, emphasis is put on methodological aspects of the statistical modeling and on formal similarities between different physical problems, rather than on the specific behavior of a given system.
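One concrete object mentioned in this review, a Langevin equation with colored noise as a model of self-propelled particles, can be sketched for a single one-dimensional active Ornstein-Uhlenbeck particle. This is a hedged toy integration with invented parameters, not taken from the review; in this parameterisation the stationary velocity variance is D/tau.

```python
import random

def simulate_aoup(steps=10000, dt=0.01, tau=1.0, d=1.0, seed=0):
    """Active Ornstein-Uhlenbeck particle in 1D, Euler-Maruyama:
    x' = v;  tau * v' = -v + sqrt(2*d) * xi(t), xi = white noise.
    Returns the time-averaged v^2, which should approach d/tau."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    v2 = 0.0
    for _ in range(steps):
        v += (-v * dt + (2 * d * dt) ** 0.5 * rng.gauss(0, 1)) / tau
        x += v * dt
        v2 += v * v
    return v2 / steps
```

The colored (exponentially correlated) velocity is what distinguishes this driven, dissipative unit from an equilibrium Brownian particle.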
Karapeychik Igor M.
2013-03-01
Full Text Available Within the framework of the author's concept of enterprise potential as the ability to conduct the activity immanently appropriate to the enterprise, and of the idea of representing the size of that potential as a potential function of the parameters describing the state of the enterprise and its external economic environment, the article develops a scientific and methodical approach to constructing and analysing the potential function of an enterprise. The proposed approach involves building an optimisation-type economic and mathematical model of the enterprise that takes environmental factors into account; determining the size of economic potential as the maximum possible (optimal) net income for a given state of the enterprise and external environment; statistically testing the model over possible values of the external parameters (forming a statistical sample of the graph of the enterprise's potential function); and applying statistical methods, including correlation, factor and regression analysis, to study its properties. The workability of this approach is demonstrated by studying the properties of the potential function of a model enterprise. In the course of this approbation, the article demonstrates the approach's ability to reveal specific features of the impact of external factors on the economic potential of an enterprise, and establishes, as a common regularity, the differential influence of various environmental factors, caused not only by the nature of these factors but also by the production and economic features and specific state of the enterprise. The article shows that the quantitative values of the strength of influence of these factors on economic potential, obtained from a statistical analysis of the enterprise's potential function, could serve as an instrument for ranking the factors by priority in goal-setting tasks at the stage of forming an enterprise development strategy.
Aleksandr Anatolyevich Kuklin
2014-09-01
Full Text Available This article presents an evolution of theoretical and methodological approaches to the study of welfare. Existing theories of wellbeing are grouped according to the method of distributing goods and resources among society members that they take into account. Within welfare as a category we highlight objective (measured) and subjective (estimated) components. Based on the analysis of the scientific literature we determine the ratio of individual and social welfare. The main differences between the categories of “welfare” and “wealth” are given; the main difference consists in multidirectional changes of welfare and wealth for an increase (decrease) in the income of the individual (country). In this article we present an analysis of modern approaches to the definition of welfare: the state, institutional and expenditure approaches. Estimating the level of welfare is complicated by the need to consider the subjective component. The article provides an analysis of existing approaches to quantitative welfare evaluation, ranging from the most common techniques (HDI, GDP) to alternative techniques (Happy Planet Index). Methodological devices are structured by levels of welfare assessment objects (world, country, region, people). Based on the analysis of the advantages and disadvantages of these methods, we conclude that the most reliable method is a comprehensive approach, which includes economic, environmental, social, vital and infrastructure indicators. The author's approach to the formation of a complex methodological tool for individual and territory welfare estimation is presented in this article.
A quantum-theoretical approach to the phenomenon of directed mutations in bacteria
Ogryzko, Vasily
2007-01-01
The Darwinian paradigm of biological evolution is based on the separability of the variation and selection processes. As a result, population thinking has always been an integral part of the Darwinian approach. I propose an alternative scheme of biological adaptation, based on an appreciation of the limits of what we can observe when considering an individual biological object. This leads to the possibility for the adaptation process to occur at the level of a single object, as a 'selection among virtual states of the organism'. I discuss the application of this idea to the phenomenon of adaptive mutations in bacteria.
Cost Allocation of Multiagency Water Resource Projects: Game Theoretic Approaches and Case Study
Lejano, Raul P.; Davos, Climis A.
1995-05-01
Water resource projects are often jointly carried out by a number of communities and agencies. Participation in a joint project depends on how costs are allocated among the participants and how cost shares compare with the cost of independent projects. Cooperative N-person game theory offers approaches which yield cost allocations that satisfy rationality conditions favoring participation. A new solution concept, the normalized nucleolus, is discussed and applied to a water reuse project in southern California. Results obtained with the normalized nucleolus are compared with those derived with more traditional solution concepts, namely, the nucleolus and the Shapley value.
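As a hedged illustration of the cooperative-game allocations compared in this paper, the Shapley value for a small hypothetical three-community cost game can be computed directly from its definition (average marginal cost over all join orders). The cost figures below are invented, not the southern California case-study data.

```python
from itertools import permutations

# hypothetical stand-alone / joint costs for communities A, B, C
cost = {
    frozenset(): 0,
    frozenset("A"): 100, frozenset("B"): 140, frozenset("C"): 120,
    frozenset("AB"): 200, frozenset("AC"): 190, frozenset("BC"): 220,
    frozenset("ABC"): 280,
}

def shapley(players, cost):
    """Average each player's marginal cost over all join orders."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shares[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    return {p: s / len(orders) for p, s in shares.items()}

alloc = shapley("ABC", cost)
# the allocation is "efficient": shares sum to the grand-coalition cost
assert abs(sum(alloc.values()) - cost[frozenset("ABC")]) < 1e-9
```

Each share here is also below the community's stand-alone cost, which is exactly the rationality condition favoring participation that the abstract refers to. The nucleolus and normalized nucleolus discussed in the paper satisfy the same conditions but distribute the savings differently.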
A Theoretic Approach to SU(4) Kondo Effect in Carbon Nanotube Quantum Dots
ZHU Rui
2006-01-01
We propose a mean field approach to the transport properties of carbon nanotube quantum dots. Quantum interaction between spin and orbital pseudo-spin degrees of freedom results in an SU(4) Kondo effect at low temperatures. By calculating the chemical potentials and the tunnelling strengths, and hence the spectral functions for different coupling constants and applied magnetic fields, we find that this exotic Kondo effect manifests as a four-peak splitting in the non-linear conductance when an axial magnetic field is applied.
Governance approach to China's environmental challenges: Towards a theoretical synthesis
QI Ye; XUE Lan; ZHANG Lingyun
2007-01-01
This paper reviewed recent research in environmental governance as a response to environmental challenges at various spatial, temporal and administrative scales. It documented the shift of approach from regulation to governance, and attempted to provide a comprehensive understanding of why and how the transformation occurred. It also described major factors and forces of environmental governance, and discussed research advances in environmental governance theory. Finally, this paper summarized recent research findings on environmental governance in China, and listed policy recommendations for enhancing governance.
Hilberg, Sylke
2016-08-01
Extensive in-depth research is required for the implementation of natural tracer approaches to hydrogeological investigation to be feasible in mountainous regions. This review considers the application of hydrochemical and biotic parameters in mountain regions over the past few decades with particular reference to the Austrian Alps, as an example for alpine-type mountain belts. A brief introduction to Austria's hydrogeological arrangement is given to show the significance of fractured hard-rock aquifers for hydrogeological science as well as for water supply purposes. A literature search showed that research concerning fractured hard-rock aquifers in Austria is clearly underrepresented to date, especially when taking the abundance of this aquifer type and the significance of this topic into consideration. The application of abiotic natural tracers (hydrochemical and isotope parameters) is discussed generally and by means of examples from the Austrian Alps. The potential of biotic tracers (microbiota and meiofauna) is elucidated. It is shown that the meiofauna approach to investigating fractured aquifers has not yet been applied in the reviewed region, nor worldwide. Two examples of new approaches in mountainous fractured aquifers are introduced: (1) use of CO2 partial pressure and calcite saturation of spring water to reconstruct catchments and flow dynamics (abiotic approach), and, (2) consideration of hard-rock aquifers as habitats to reconstruct aquifer conditions (biotic approach).
A resource-based game theoretical approach for the paradox of the plankton.
Huang, Weini; de Araujo Campos, Paulo Roberto; Moraes de Oliveira, Viviane; Fagundes Ferrreira, Fernando
2016-01-01
The maintenance of species diversity is a central focus in ecology. It is not rare to observe more species than the number of limiting resources, especially in plankton communities. However, such high species diversity is hard to achieve in theory under the competitive exclusion principle, a discrepancy known as the plankton paradox. Previous studies often focus on the coexistence of predefined species and ignore the fact that species can evolve. We model multi-resource competition using evolutionary games, where the number of species fluctuates under extinction and the appearance of new species. The interspecific and intraspecific competitions are captured by a dynamical payoff matrix, whose size is the number of species. The competition strength (payoff entries) is obtained by comparing the capability of species in consuming resources, which can change over time. This allows for the robust coexistence of a large number of species, providing a possible solution to the plankton paradox.
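The payoff-matrix dynamics described can be caricatured with a replicator step: each species' frequency grows in proportion to its payoff advantage over the population average. This is a hedged sketch with a fixed toy matrix; in the paper the matrix itself changes size and entries as species appear, go extinct, and evolve.

```python
def replicator_step(freqs, payoff, dt=0.01):
    """One Euler step of replicator dynamics:
    x_i' = x_i * (f_i - mean fitness), with f_i = sum_j A[i][j]*x_j."""
    fitness = [sum(a * x for a, x in zip(row, freqs)) for row in payoff]
    mean_fit = sum(f * x for f, x in zip(fitness, freqs))
    new = [x + dt * x * (f - mean_fit) for x, f in zip(freqs, fitness)]
    total = sum(new)
    return [x / total for x in new]  # renormalise against numerical drift

# toy 3-species payoff matrix (cyclic, rock-paper-scissors-like competition)
A = [[0.0,  1.0, -1.0],
     [-1.0, 0.0,  1.0],
     [1.0, -1.0,  0.0]]
x = [0.5, 0.3, 0.2]
for _ in range(1000):
    x = replicator_step(x, A)
```

With this cyclic matrix no species excludes the others: all three frequencies stay positive, a minimal cartoon of coexistence beyond pairwise competitive exclusion.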
Marte Sørebø Gulliksen
2009-12-01
Full Text Available This article presents and discusses approaches for exploring higher education in Art and Crafts. The concepts of exploring versus research and the different foci of an insider perspective versus an outsider perspective introduce the theme. An insider perspective is said to be a useful starting point for inquiry, referring to Frayling's 1993 trichotomy of research into, research on and research through. The field of higher education in Art and Crafts is briefly presented as comprising two main areas of knowledge: knowledge of education, and knowledge about the different subject areas within Art and Crafts. Both theory and practice are a part of these areas of knowledge. As higher education in Art and Crafts is a making profession, the most prominent challenge when exploring it today is to develop research-based knowledge on education in Art and Crafts as a making discipline. Two keywords are deemed useful in approaching this theme: Mode 2 knowledge production and transdisciplinarity. The article concludes by describing specific ways of doing this today from within the context of application. Two large research projects in Scandinavia are presented as examples of such projects.
Evolving Role and Nature of Workplace Leaders and Diversity: A Theoretical and Empirical Approach
Jan C. Visagie
2010-12-01
Full Text Available Blumer (1962) regarded the ‘many possibilities of uncertainty as inherent to the process of joint action.’ Joint action reflects the efforts of participants to work out the line of action in light of what they observe each other doing. Leadership appears to be approached from two fundamental perspectives: an organisational perspective (the influence that is exercised to change the direction of the organisation), and an individual task perspective (the influence that is directed at changing the work behaviour of an individual). In this article, it is suggested that the symbolic interactionist perspective integrates the two fundamental perspectives in that both require meaningful, reflexive integration and meaning, group membership, organisational role and experience. The evolving role of leaders to attract, retain and connect with a diverse workforce in a changing environment gives rise to interactive leadership competency requirements. This article suggests that managing diversity requires business leaders to adopt an approach to diversity management that is sensitive not only to race and ethnic differences, but also to the background and values of all individuals at work. An empirical study was done in which four hundred and forty (440) leadership styles were measured in eleven (11) organisations. The study used the Hall and Hawker (1988) inventory of leadership styles and a diversity questionnaire to measure diversity management experience.
Ogle, K.; Fell, M.; Barber, J. J.
2016-12-01
Empirical, field studies of plant functional traits have revealed important trade-offs among pairs or triplets of traits, such as the leaf (LES) and wood (WES) economics spectra. Trade-offs include correlations between leaf longevity (LL) vs specific leaf area (SLA), LL vs mass-specific leaf respiration rate (RmL), SLA vs RmL, and resistance to breakage vs wood density. Ordination analyses (e.g., PCA) show groupings of traits that tend to align with different life-history strategies or taxonomic groups. It is unclear, however, what underlies such trade-offs and emergent spectra. Do they arise from inherent physiological constraints on growth, or are they more reflective of environmental filtering? The relative importance of these mechanisms has implications for predicting biogeochemical cycling, which is influenced by trait distributions of the plant community. We address this question using an individual-based model of tree growth (ACGCA) to quantify the theoretical trait space of trees that emerges from physiological constraints. ACGCA's inputs include 32 physiological, anatomical, and allometric traits, many of which are related to the LES and WES. We fit ACGCA to 1.6 million USFS FIA observations of tree diameters and heights to obtain vectors of trait values that produce realistic growth, and we explored the structure of this trait space. No notable correlations emerged among the 496 trait pairs, but stepwise regressions revealed complicated multi-variate structure: e.g., relationships between pairs of traits (e.g., RmL and SLA) are governed by other traits (e.g., LL, radiation-use efficiency [RUE]). We also simulated growth under various canopy gap scenarios that impose varying degrees of environmental filtering to explore the multi-dimensional trait space (hypervolume) of trees that died vs survived. The centroid and volume of the hypervolumes differed among dead and live trees, especially under gap conditions leading to low mortality. Traits most predictive
Theoretical Prediction of Melting Relations in the Deep Mantle: the Phase Diagram Approach
Belmonte, D.; Ottonello, G. A.; Vetuschi Zuccolini, M.; Attene, M.
2016-12-01
Despite the outstanding progress in computer technology and experimental facilities, understanding melting phase relations in the deep mantle is still an open challenge. In this work a novel computational scheme to predict melting relations at HP-HT by a combination of first-principles DFT calculations, polymer chemistry and equilibrium thermodynamics is presented and discussed. The adopted theoretical framework is physically consistent and makes it possible to compute multi-component phase diagrams relevant to Earth's deep interior in a broad range of P-T conditions by a convex-hull algorithm for Gibbs free energy minimisation purposely developed for high-rank simplexes. The calculated phase diagrams are in turn used as a source of information to gain new insights into the P-T-X evolution of magmas in the deep mantle, providing thermodynamic constraints on both present-day and early Earth melting processes. High-pressure melting curves of mantle silicates are also obtained as a by-product of the phase diagram calculation. Application of the above method to the MgO-Al2O3-SiO2 (MAS) ternary system highlights that pressure effects are not only able to change the nature of melting of some minerals (like olivine and pyroxene) from eutectic to peritectic (and vice versa), but also simplify melting relations by drastically reducing the number of phases with a primary phase field at HP-HT conditions. It turns out that mineral phases like Majorite-Pyrope garnet and Anhydrous Phase B (Mg14Si5O24), which are often disregarded in modelling melting processes of mantle assemblages, are stable phases at solidus or liquidus conditions in a P-T range compatible with the mantle transition zone (i.e. P = 16 - 23 GPa and T = 2200 - 2700 °C) when their thermodynamic and thermophysical properties are properly assessed. Financial support to the Senior Author (D.B.) during his stay as Invited Scientist at the Institut de Physique du Globe de Paris (IPGP, Paris) is warmly acknowledged.
Bandyopadhyay, J
2014-04-01
Full Text Available In this article, a combined experimental and theoretical approach has been proposed to establish a relationship between the required shear force and the degree of delamination of clay tactoids during the melt-processing of polymer nanocomposites...
Yoon, Gil Ho; Kim, Y.Y.; Langelaar, M.
2008-01-01
The internal element connectivity parameterization (I-ECP) method is an alternative approach to overcome numerical instabilities associated with low-stiffness element states in non-linear problems. In I-ECP, elements are connected by zero-length links while their link stiffness values are varied. Therefore, it is important to interpolate link stiffness properly to obtain stably converging results. The main objective of this work is two-fold: (1) the investigation of the relationship between the link stiffness and the stiffness of a domain-discretizing patch by using a discrete model and a homogenized model, and (2) the suggestion of link stiffness interpolation functions. The effects of link stiffness penalization on solution convergence are then tested with several numerical examples. The developed homogenized I-ECP model can also be used to physically interpret an intermediate design variable state.
THE ROLE OF SUNK COSTS IN ENTRY PROCESS INTO A FOREIGN MARKET: THEORETICAL APPROACH
K. Nadtochii
2014-04-01
Full Text Available The category of sunk costs is studied along with specific features of entry barriers. Different scientific approaches to defining the category are compared in order to formulate the author's own definition. The study elucidates the role of sunk costs, including their influence on researching a market, evaluating the efficiency of entering a market and evaluating its risk. The paper highlights three main components of sunk costs: investments to reduce the production costs of incumbents relative to newcomers, investments to change the structure of rivals' costs, and investments to change demand for a product positively. The author proposes to consider sunk costs as a strategic barrier, given the strong influence of incumbents' activities. The need to incur these costs also determines the competitiveness required of a new firm.
Families with a Disabled Child, between Stress and Acceptance. A Theoretical Approach
Camelia SOPONARU
2015-06-01
Full Text Available Family is a hierarchically organized system, which mobilizes its capacity of linear and circular adjustment, of homeostasis, of modification, increase and change with each new challenge that it encounters. This paper focuses on the issue of stress in families with a disabled child, starting from the premise that the normal development and integration of the disabled child in society depend significantly on ensuring, within the family, a supportive, reassuring, adaptive climate. The family can create this climate by ensuring a positive relationship between the two spouses and by approaching the child's disability correctly. Mention must also be made of the positive influence of parenting training on parental stress and on enhancing the parental sense of competence.
Theoretical aspects of pressure and solute denaturation of proteins: A Kirkwood-Buff-theory approach
Ben-Naim, Arieh
2012-12-01
A new approach to the problem of pressure denaturation (PD) and solute denaturation (SD) of proteins is presented. The problem is formulated in terms of the Le Chatelier principle, and a solution is sought in terms of the Kirkwood-Buff theory of solutions. It is found that both problems have one factor in common: the excluded volumes of the folded and the unfolded forms with respect to the solvent molecules. It is shown that solvent-induced effects operating on hydrophilic groups along the protein are probably the main reason for PD. On the other hand, SD depends on the preferential solvation of the folded and the unfolded forms with respect to solvent and co-solvent molecules.
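For orientation, the central quantity in any Kirkwood-Buff treatment is the KB integral over the pair correlation function; in standard textbook notation (not reproduced from the paper):

```latex
% Kirkwood-Buff integral between species i and j:
G_{ij} = 4\pi \int_0^{\infty} \left[ g_{ij}(r) - 1 \right] r^{2}\, dr
```

Preferential solvation of a protein form then enters through differences of such integrals between co-solvent and water around the folded and unfolded states, which is the kind of quantity the abstract's argument about the two forms turns on.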
Leontina Pavaloaia
2012-10-01
Full Text Available Mineral resources represent an important natural resource whose exploitation, unless it is rational, can lead to their exhaustion and the collapse of sustainable development. Given the importance of mineral resources and the uncertainty concerning the estimation of extant reserves, they have been analyzed by several national and international institutions. In this article we shall present a few aspects concerning the ways to approach the reserves of mineral resources at national and international level, by considering both economic aspects and those aspects concerned with the definition, classification and aggregation of the reserves of mineral resources by various specialized institutions. At present there are attempts to homogenize practices concerning these aspects for the purpose of presenting correct and comparable information.
Theoretical Approaches for Understanding the Interplay Between Stress and Chemical Reactivity.
Kochhar, Gurpaul S; Heverly-Coulson, Gavin S; Mosey, Nicholas J
2015-01-01
The use of mechanical stresses to induce chemical reactions has attracted significant interest in recent years. Computational modeling can play a significant role in developing a comprehensive understanding of the interplay between stresses and chemical reactivity. In this review, we discuss techniques for simulating chemical reactions occurring under mechanochemical conditions. The methods described are broadly divided into techniques that are appropriate for studying molecular mechanochemistry and those suited to modeling bulk mechanochemistry. In both cases, several different approaches are described and compared. Methods for examining molecular mechanochemistry are based on exploring the force-modified potential energy surface on which a molecule subjected to an external force moves. Meanwhile, it is suggested that condensed phase simulation methods typically used to study tribochemical reactions, i.e., those occurring in sliding contacts, can be adapted to study bulk mechanochemistry.
Groups, matrices, and vector spaces: a group theoretic approach to linear algebra
Carrell, James B
2017-01-01
This unique text provides a geometric approach to group theory and linear algebra, bringing to light the interesting ways in which these subjects interact. Requiring few prerequisites beyond understanding the notion of a proof, the text aims to give students a strong foundation in both geometry and algebra. Starting with preliminaries (relations, elementary combinatorics, and induction), the book then proceeds to the core topics: the elements of the theory of groups and fields (Lagrange's Theorem, cosets, the complex numbers and the prime fields), matrix theory and matrix groups, determinants, vector spaces, linear mappings, eigentheory and diagonalization, Jordan decomposition and normal form, normal matrices, and quadratic forms. The final two chapters consist of a more intensive look at group theory, emphasizing orbit stabilizer methods, and an introduction to linear algebraic groups, which enriches the notion of a matrix group. Applications involving symmetry groups, determinants, linear coding theory ...
Maria Luísa Bissoto
2011-06-01
Full Text Available Neurodevelopmental disorders, mainly genetic ones, are discussed with the aim of analyzing the conceptions of human development that underlie them, and the impact of those conceptions on understanding the individual who carries such a disorder. Methodologically, epistemological presuppositions from "classical" neuropsychology and from "neuroconstructivist" neuropsychology were compared. As results of this parallel, the following were considered relevant: a. the role of the individual's surroundings; b. the question concerning the plasticity and dynamical character of development; and c. the formal developmental process, from the prenatal to the postnatal period. The concluding comments claim that neuroconstructivist approaches allow conceiving of the developmental process within genetic neurodevelopmental disorders not as a "fault" but as a differentiated and particular one, which should be understood in educational and rehabilitation settings not as a nosological category but as a specific way of an individual acting while looking for a mode of being-in-the-world.
A Billiard-Theoretic Approach to Elementary 1d Elastic Collisions
Redner, S
2004-01-01
A simple relation is developed between elastic collisions of freely moving point particles in one dimension and a corresponding billiard system. For two particles with masses m_1 and m_2 on the half-line x>0 that approach an elastic barrier at x=0, the corresponding billiard system is an infinite wedge. The collision history of the two particles can be easily inferred from the corresponding billiard trajectory. This connection nicely explains the classic demonstrations of the "dime on the superball" and the "baseball on the basketball" that are a staple in elementary physics courses. It is also shown that three elastic particles on an infinite line and three particles on a finite ring correspond, respectively, to the motion of a billiard ball in an infinite wedge and on a triangular billiard table. It is shown how to determine the angles of these two billiard geometries in terms of the particle masses.
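The two-particle half-line setup lends itself to a direct sketch. The following hedged illustration (not the paper's code) counts every ball-ball and ball-wall collision using only the 1D elastic-collision formulas; since only the ordering wall < particle 1 < particle 2 matters, velocities alone determine which event comes next. It reproduces the classic results that equal masses give 3 collisions and a mass ratio of 100 gives 31.

```python
def count_collisions(m1, m2):
    """Particle 1 (mass m1) starts at rest between an elastic wall and
    particle 2 (mass m2), which moves toward it. Count every ball-ball
    and ball-wall collision until no further collision can occur."""
    v1, v2 = 0.0, -1.0       # velocities; negative = toward the wall
    n = 0
    while True:
        if v1 < 0:           # particle 1 reaches the wall and bounces
            v1 = -v1
            n += 1
        elif v1 > v2:        # particle 1 catches particle 2: elastic collision
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
            n += 1
        else:                # both receding: no more collisions
            return n
```

The wedge picture explains the counts: the total number of collisions equals the number of wedge reflections, which grows as the wedge angle arctan(sqrt(m1/m2)) shrinks.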
Philippe eDe Timary
2011-04-01
Full Text Available This paper compares the cognitive-behavioural and psychoanalytical approaches with respect to the way each of them conceives of representation and deals with the issues this involves. In both, conscious and latent (unconscious) representations play a crucial role. Highlighting similarities and differences facilitates communication on a theoretical level and also proves helpful to the clinical practitioners involved. We put forward an attempt at comparison, with the idea of going beyond the obviously important differences in vocabulary, successively comparing the definitions of representation and the respective therapeutic interventions proposed by each approach. There are no doubt many overlapping elements in the way the workings of the mind are conceived of in these approaches, particularly as regards their links with affects. We next develop the implications of representation deficits in pathology, suggesting the important role played by elements that are avoided, suppressed from memory or repressed, and the need to treat such material in a specific manner so as to ensure some progress with respect to the symptoms presented. We finally summarize common and distinct aspects of the two perspectives. The very fact that two approaches following very distinct methodologies reach the same conclusion concerning the importance of distortions and failures of representation in generating mental distress strengthens, in our view, the epistemological reliability of the role of representation in psychopathology.
Mou, Yi; Berteletti, Ilaria; Hyde, Daniel C
2017-09-06
Preschool children vary tremendously in their numerical knowledge, and these individual differences strongly predict later mathematics achievement. To better understand the sources of these individual differences, we measured a variety of cognitive and linguistic abilities motivated by previous literature to be important and then analyzed which combination of these variables best explained individual differences in actual number knowledge. Through various data-driven Bayesian model comparison and selection strategies on competing multiple regression models, our analyses identified five variables of unique importance to explaining individual differences in preschool children's symbolic number knowledge: knowledge of the count list, nonverbal approximate numerical ability, working memory, executive conflict processing, and knowledge of letters and words. Furthermore, our analyses revealed that knowledge of the count list, likely a proxy for explicit practice or experience with numbers, and nonverbal approximate numerical ability were much more important to explaining individual differences in number knowledge than general cognitive and language abilities. These findings suggest that children use a diverse set of number-specific, general cognitive, and language abilities to learn about symbolic numbers, but the contribution of number-specific abilities may overshadow that of more general cognitive abilities in the learning process. Copyright © 2017 Elsevier Inc. All rights reserved.
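The data-driven comparison of competing multiple-regression models described above can be loosely illustrated with the Bayesian information criterion (BIC), a common large-sample proxy for Bayesian model comparison. The data and variable names below are purely hypothetical stand-ins, not the study's data or method:

```python
import numpy as np

def bic_linear(X, y):
    """BIC of an OLS fit of y on X (intercept added); lower indicates a better model."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    k = X1.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical data: one predictor truly related to the outcome, one unrelated.
rng = np.random.default_rng(42)
count_list = rng.normal(size=300)                          # hypothetical predictor
unrelated = rng.normal(size=300)                           # hypothetical noise variable
number_knowledge = 2.0 * count_list + rng.normal(scale=0.5, size=300)
```

Comparing `bic_linear` across candidate predictor sets then ranks models by how well they trade fit against complexity, the same logic the study applies at larger scale.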
Labunskaya V.A.
2016-12-01
Full Text Available The article argues that society's prohibition of overt displays of ethnolookism, as an "ordinary" discriminatory practice aimed at representatives of ethno-cultural groups with a certain type of appearance, leads to the actualization of masked, hidden forms of this type of discrimination. This motivates a shift toward studying the indirect ways in which ethnolookism is expressed. The author's previously developed "ethno-socio-psychological model of an empirical study of the relationship to ethnolookism" remains the base but is insufficient to explain the variable, selective "attitude to ethnolookism". The wide variation in levels of acceptance of discriminatory behavior toward others, and the significant variation in the relationships among the model's factors both within and between the studied groups, led to the development of a second model, called the "subjective empirical model for studying the relationship to ethnolookism". The creation of this empirical model draws on the explanatory and predictive capabilities of the principle of subjectivity, as well as on the level of a person's subjective relationships and the variety of ways and forms of human interaction with other people.
Dust growth in protoplanetary disks - a comprehensive experimental/theoretical approach
Blum, Jürgen
2010-12-01
More than a decade of dedicated experimental work on the collisional physics of protoplanetary dust has brought us to a point at which the growth of dust aggregates can — for the first time — be self-consistently and reliably modeled. In this article, the emergent collision model for protoplanetery dust aggregates, as well as the numerical model for the evolution of dust aggregates in protoplanetary disks, is reviewed. It turns out that, after a brief period of rapid collisional growth of fluffy dust aggregates to sizes of a few centimeters, the protoplanetary dust particles are subject to bouncing collisions, in which their porosity is considerably decreased. The model results also show that low-velocity fragmentation can reduce the final mass of the dust aggregates but that it does not trigger a new growth mode as discussed previously. According to the current stage of our model, the direct formation of kilometer-sized planetesimals by collisional sticking seems unlikely, implying that collective effects, such as the streaming instability and the gravitational instability in dust-enhanced regions of the protoplanetary disk, are the best candidates for the processes leading to planetesimals.
THE QUALITY OF HUMAN RESOURCES – A REQUEST FOR HOTEL INDUSTRY DEVELOPMENT. A THEORETICAL APPROACH
Aruștei Carmen Claudia
2013-04-01
Full Text Available In this article we focus on the importance of human resource quality in the hotel industry for obtaining quality services and, furthermore, for hotel industry development. We address this issue because, when talking about tourism or hotel industry development, the literature in the field usually offers macro solutions such as infrastructure development, service/product development and/or improving service quality. We consider that a micro approach is also important, and from this perspective we emphasize the role of human resource quality in industry development. The quality of human resources as a dimension of service quality has not been detailed extensively in the literature, but we find it relevant for hotel industry development, as this is a service industry and, as the Ritz-Carlton Company's motto says, "service comes only from people".
Performance of a solar liquid desiccant air conditioner - An experimental and theoretical approach
Alizadeh, Shahab [Queensland University of Technology, Faculty of Built Environment and Engineering, GPO Box 2434, Brisbane 4001 (Australia)
2008-06-15
In this paper the results of testing a solar liquid desiccant air conditioner (LDAC) in the tropical climate of Queensland, Australia are presented. The system uses a polymer plate heat exchanger (PPHE) for dehumidification/indirect evaporative cooling, and a cooling pad as the direct evaporative cooler for the dry air leaving the PPHE. Lithium chloride, an effective desiccant for air dehumidification, was used in the experiments, and a scavenger-air regenerator concentrates the dilute solution from the dehumidifier using hot water from flat plate solar collectors. The data obtained from performance monitoring of the solar LDAC operating on a commercial site in Brisbane were compared with a previously developed model for the PPHE. The comparison reveals good agreement between the experiments and the model predictions; the inaccuracies are well within the measuring errors of the temperature, humidity and the air and solution flow rates. The tests further indicate satisfactory performance of the unit in independently controlling the air temperature and humidity inside the conditioned space. In order to prevent carryover of solution particles into the environment, eliminators are used at the outlets of the absorber unit and the regenerator. An alternative method of preventing carryover is the use of indirect cooling, in which the supply air does not contact the solution; this method can also be used to produce potable water from atmospheric air in remote areas. The liquid desiccant system can be used in the HVAC industry, either as a packaged roof-top air conditioner or as an air handler unit for commercial applications. The system could also be used for space heating in winter, since desiccants release heat when wetted.
Surface enhanced Raman spectroscopic studies on aspirin : An experimental and theoretical approach
Premkumar, R.; Premkumar, S.; Rekha, T. N.; Parameswari, A.; Mathavan, T.; Benial, A. Milton Franklin
2016-05-01
Surface enhanced Raman scattering (SERS) studies of the aspirin molecule adsorbed on silver nanoparticles (AgNPs) were carried out using an experimental and density functional theory (DFT) approach. The AgNPs were synthesized by the solution-combustion method and characterized by X-ray diffraction and high resolution transmission electron microscopy. The average particle size of the synthesized AgNPs was calculated as ~55 nm. The normal Raman spectrum (nRs) and SERS spectrum of aspirin were recorded. The molecular structures of aspirin and of aspirin adsorbed on a silver cluster were optimized by the DFT/B3PW91 method with the LanL2DZ basis set. The vibrational frequencies were calculated and assigned on the basis of a potential energy distribution calculation. The calculated nRs and SERS frequencies correlated well with the observed frequencies. A flat-on orientation of aspirin adsorbed on the AgNPs was predicted from the nRs and SERS spectra. Hence, the present studies lead to an understanding of the adsorption of aspirin on AgNPs, which paves the way for biomedical applications.
Boi, Luciano
2008-01-01
This paper is aimed at exploring the genome at a level beyond that of DNA sequence alone. We stress the fact that the level of genes is not the sole "reality" in the living world, for there are different epigenetic processes that profoundly affect change in living systems. Moreover, epigenetics very likely influences the course of evolution and the unfolding of life. We further attempt to investigate how the genome is dynamically organized in the nuclear space within the cell. We mainly focus on analyses of higher-order nuclear architecture and the dynamic interactions of chromatin with other nuclear components. We especially want to know how epigenetic phenomena influence gene expression and chromosome functions. The proper understanding of these processes requires that new concepts and approaches be introduced and developed. In particular, we think that research in biology has to shift from only describing molecular and local features of living systems to studying the regulatory networks of interactions among gene pathways, the folding and dynamics of chromatin structure, and how environmental factors affect the behavior of organisms. There are essential components of biological information on living organisms which cannot be portrayed in the DNA sequence alone. In the post-genomic era, the importance of the chromatin/epigenetic interface has become increasingly apparent. One purpose of current research should be to highlight the enormous impact of chromatin organization and dynamics on epigenetic phenomena and, conversely, to emphasize the important role that epigenetic phenomena play in gene expression and cell regulation.
Mechanisms-based viscoplasticity: Theoretical approach and experimental validation for steel 304L
Zubelewicz, Aleksander; Oliferuk, Wiera
2016-03-01
We propose a mechanisms-based viscoplasticity approach for metals and alloys. First, we derive a stochastic model for thermally-activated motion of dislocations and then introduce power-law flow rules. The overall plastic deformation includes local plastic slip events, each taken with an appropriate weight assigned to the angle of the plane's misorientation from the direction of maximum shear stress. As deformation progresses, the material experiences successive reorganizations of the slip systems. Because of this microstructural evolution, a portion of the energy expended on plastic deformation is dissipated and the rest is stored in the defect structures. We show that the reorganizations are stable in a homogeneously deformed material. The concept is tested for steel 304L, where we reproduce experimentally obtained stress-strain responses, construct the Frost-Ashby deformation map and predict the rate of energy storage. The storage is assessed in terms of synchronized measurements of temperature and displacement distributions on the specimen surface during tensile loading.
Vijay Kumar
2016-12-01
Full Text Available This paper outlines quality inspection strategies in a supplier-buyer supply chain under a customer return policy. It primarily focuses on product quality and quality inspection techniques to maximize the actors' and the supply chain's profits using a game-theoretic approach. The supplier-buyer setup is described as a textile manufacturer-retailer supply chain, in which quality inspection is an important aspect and product returns from customers are generally accepted. The textile manufacturer produces the product, whereas the retailer acts as a reseller who buys the products from the textile manufacturer and sells them to customers. In this context, the former invests in product quality whereas the latter invests in random quality inspection and traceability. The relationships between the textile manufacturer and the retailer are recognized as horizontal and vertical alliances and modeled using non-cooperative and cooperative games. The non-cooperative games are based on the Stackelberg and Nash equilibrium models. Further, bargaining and game-change scenarios are discussed to maximize profit under different games. To assess the appropriateness of a strategic alliance, a computational study demonstrates the textile manufacturer-retailer relationship under different game scenarios.
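The Stackelberg structure mentioned above can be sketched in a stylized form: a leader (manufacturer) sets a wholesale price anticipating the follower's (retailer's) best response under linear demand. The demand parameters and the model itself are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

A, B = 10.0, 1.0  # hypothetical linear demand d(p) = A - B*p

def retailer_best_response(w):
    """Follower maximizes (p - w) * (A - B*p); the optimum is p = (A/B + w) / 2."""
    return (A / B + w) / 2.0

def leader_profit(w):
    """Leader's profit, anticipating the follower's best response."""
    p = retailer_best_response(w)
    return w * (A - B * p)

# The leader optimizes over a grid of candidate wholesale prices.
ws = np.linspace(0.0, A / B, 2001)
w_star = ws[np.argmax([leader_profit(w) for w in ws])]
p_star = retailer_best_response(w_star)
```

For this demand curve the analytic Stackelberg solution is w* = A/(2B) and p* = 3A/(4B), which the grid search recovers.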
Tejedor, A.; Foufoula-Georgiou, E.; Longjas, A.; Zaliapin, I. V.
2014-12-01
River deltas are intricate landscapes with complex channel networks that self-organize to deliver water, sediment, and nutrients from the apex to the delta top and eventually to the coastal zone. The natural balance of material and energy fluxes which maintains a stable hydrologic, geomorphologic, and ecological state of a river delta, is often disrupted by external factors causing topological and dynamical changes in the delta structure and function. A formal quantitative framework for studying river delta topology and transport dynamics and their response to change is lacking. Here we present such a framework based on spectral graph theory and demonstrate its value in quantifying the complexity of the delta network topology, computing its steady state fluxes, and identifying upstream (contributing) and downstream (nourishment) areas from any point in the network. We use this framework to construct vulnerability maps that quantify the relative change of sediment and water delivery to the shoreline outlets in response to possible perturbations in hundreds of upstream links. This enables us to evaluate which links (hotspots) and what management scenarios would most influence flux delivery to the outlets, paving the way of systematically examining how local or spatially distributed delta interventions can be studied within a systems approach for delta sustainability.
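A minimal sketch of steady-state flux partitioning on a toy delta network follows; the paper's actual framework is spectral (built on graph-theoretic operators such as Laplacians), whereas this sketch propagates flux directly through a small hypothetical directed network, which suffices to illustrate outlet delivery and the effect of perturbing a link:

```python
import numpy as np

# Toy delta (hypothetical): apex 0 branches to nodes 1 and 2; outlets are 3 and 4.
# Each edge value is the fraction of its upstream node's flux sent down that link;
# fractions leaving any node sum to 1.
edges = {(0, 1): 0.6, (0, 2): 0.4, (1, 3): 1.0, (2, 3): 0.5, (2, 4): 0.5}

def steady_fluxes(edges, n_nodes, apex=0, inflow=1.0):
    """Propagate flux from the apex through a DAG whose node ids are
    topologically ordered (true for this toy network)."""
    flux = np.zeros(n_nodes)
    flux[apex] = inflow
    for (u, v) in sorted(edges):
        flux[v] += flux[u] * edges[(u, v)]
    return flux
```

Re-running `steady_fluxes` after perturbing one link fraction, and recording the change at each outlet, yields the kind of link-by-link vulnerability map the abstract describes.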
The role of information systems in management decision making - a theoretical approach
PhD. Associate Professor Department of Management & Informatics Mihane Berisha-Namani
2010-12-01
Full Text Available In modern conditions of globalisation and development of information technology, information-processing activities have come to be seen as essential to the success of businesses and organisations. Information has become essential for making decisions and a crucial asset in organisations, whereas information systems are the technology required for information processing. The application of information systems technology in businesses and organisations has opened up new possibilities for running and managing them, and has improved management decision making. The purpose of this paper is to give an understanding of the role that information systems have in management decision making and to discuss how managers of organisations can make the best use of information systems. The paper starts by identifying the functions of management and managerial roles and continues with information systems usage at three levels of decision making. It specifically addresses the ways information systems can help managers reduce uncertainty in decision making and includes some important implications of information systems usage for managers. Thus, this study provides a framework for effective use of information systems generally and offers an alternative approach to investigating the impact that information systems technology has on management decision making specifically.
Dezfoli, Amir Reza Ansari; Hwang, Weng-Sing; Huang, Wei-Chin; Tsai, Tsung-Wen
2017-01-01
There are serious questions about the grain structure of metals after laser melting and the ways in which it can be controlled. In this regard, the current paper explains the grain structure of metals after laser melting using a new model based on a combination of 3D finite element (FE) and cellular automaton (CA) models, validated by experimental observation. Competitive grain growth, the relation between heat flow and grain orientation, and the effect of laser scanning speed on the final microstructure are discussed in detail. The grain structure after laser melting is found to be columnar, with a tilt angle toward the direction of the laser movement. Furthermore, this investigation shows that the grain orientation is a function of the conduction heat flux at the molten pool boundary. Moreover, the use of a secondary laser heat source (SLHS) is presented as a new approach to controlling the grain structure during laser melting. The results prove that the grain structure can be controlled and improved significantly using an SLHS: the grain orientation and uniformity can be changed easily. In fact, this method can help produce materials with different local mechanical properties during laser processing, according to their application requirements.
Multi-Party Energy Management for Clusters of Roof Leased PV Prosumers: A Game Theoretical Approach
Nian Liu
2016-07-01
Full Text Available The roof-leased business mode is an important development method for distributed photovoltaic (PV) systems. In this paper, the benefits of PV energy are considered in a PV cluster (PVC) consisting of a certain number of prosumers and a PVC operator (PVCO). In order to distribute the benefits, a multi-party energy management method for the PVC is proposed, including an internal pricing model and a demand response (DR) model. First, the dynamic internal pricing model for trading between the PVCO and the prosumers is formulated according to the economic principle of the demand-supply relation. Moreover, in order to improve the local consumption of PV energy, the DR model is formulated as a non-cooperative game among the prosumers. The existence and uniqueness of the Nash Equilibrium (NE) are proved, and a distributed solving algorithm is introduced to approach the NE solution. Finally, a PVC including four prosumers is selected as the study object, and the results show that the internal pricing model and DR model can improve the benefit of both the prosumers and the PVCO, as well as the local consumption of PV energy.
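The distributed approach to the Nash Equilibrium of a non-cooperative prosumer game can be sketched with a stylized quadratic-utility demand-response game solved by best-response iteration. The utility function, price rule, and parameter values below are illustrative assumptions, not the paper's model:

```python
def nash_by_best_response(n=4, a=1.0, c=0.1, iters=200):
    """Stylized DR game: prosumer i chooses load x_i to maximize
        u_i = a*x_i - 0.5*x_i**2 - price*x_i,  with price = c * (total load).
    Setting du_i/dx_i = 0 gives the best response
        x_i = (a - c * sum_{j != i} x_j) / (1 + 2*c).
    Iterating best responses converges to the NE because the mapping is a
    contraction when c*(n-1)/(1+2*c) < 1.
    """
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            others = sum(x) - x[i]
            x[i] = (a - c * others) / (1.0 + 2.0 * c)
    return x
```

For the symmetric case the fixed point solves x = (a - c(n-1)x)/(1+2c), i.e. x* = a / (1 + 2c + c(n-1)); with n=4, a=1, c=0.1 that is 2/3.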
A Survey of Game Theoretic Approaches to Modelling Decision-Making in Information Warfare Scenarios
Kathryn Merrick
2016-07-01
Full Text Available Our increasing dependence on information technologies and autonomous systems has escalated international concern for information- and cyber-security in the face of politically, socially and religiously motivated cyber-attacks. Information warfare tactics that interfere with the flow of information can challenge the survival of individuals and groups. It is increasingly important that both humans and machines can make decisions that ensure the trustworthiness of information, communication and autonomous systems. Subsequently, an important research direction is concerned with modelling decision-making processes. One approach to this involves modelling decision-making scenarios as games using game theory. This paper presents a survey of information warfare literature, with the purpose of identifying games that model different types of information warfare operations. Our contribution is a systematic identification and classification of information warfare games, as a basis for modelling decision-making by humans and machines in such scenarios. We also present a taxonomy of games that map to information warfare and cyber crime problems as a precursor to future research on decision-making in such scenarios. We identify and discuss open research questions including the role of behavioural game theory in modelling human decision making and the role of machine decision-making in information warfare scenarios.
Yoshida, Sanichiro
2015-01-01
This book introduces a comprehensive theory of deformation and fracture to engineers and applied scientists. The author explains the gist of local symmetry (gauge invariance) intuitively so that engineers and applied physicists can digest it easily, rather than describing physical or mathematical details of the principle. Applications of the theory to practical engineering are also described, such as nondestructive testing, in particular with the use of an optical interferometric technique called ESPI (Electronic Speckle-Pattern Interferometry). The book provides information on how to apply physical concepts to engineering applications. This book also: · Describes a highly original way to reveal the loading hysteresis of a given specimen · Presents a fundamentally new approach to deformation and fracture, which offers potential for new applications · Introduces the unique application of Electronic Speckle-Pattern Interferometry: reading fringe patterns to connect...
TWSME of a NiTi strip in free bending conditions: experimental and theoretical approach
A. Fortini
2014-07-01
Full Text Available This paper deals with the two-way shape memory effect (TWSME) induced in a strip of a near-equiatomic NiTi alloy by means of the shape-memory-cycling training method. This procedure is based on deformation in the martensite state to reach the desired cold shape, followed by cycling the temperature from above Af to below Mf. To this end, the sample was thermally treated to memorise a bent shape, thermomechanically trained as described, and thermally cycled under unloaded conditions in order to study the stability of the induced TWSME. Heating to Af was achieved by a hot air stream, whereas cooling to Mf was achieved through natural convection. The evolution of the curvature with increasing number of cycles was evaluated. The thermomechanical behaviour of the strip undergoing uniform bending was simulated using a one-dimensional phenomenological model with stress and temperature as external control variables. The martensite and austenite volume fractions were chosen as internal parameters, and kinetic laws were used to describe their evolution during phase transformations. The experimental findings are compared with the model simulation and with a numerical prediction based on the approach proposed in [25].
A Game-Theoretical Approach for Spectrum Efficiency Improvement in Cloud-RAN
Zhuofu Zhou
2016-01-01
Full Text Available As a tremendous number of mobile devices will access the Internet in the future, cells that can provide high data rates and more capacity are expected to be deployed. Specifically, in the next generation of mobile communication, 5G, cloud computing is expected to be applied to the radio access network. In the cloud radio access network (Cloud-RAN), the traditional base station is divided into two parts: remote radio heads (RRHs) and baseband units (BBUs). RRHs are geographically distributed and densely deployed, so as to achieve high data rates and low latency. However, the ultradense deployment inevitably deteriorates spectrum efficiency due to more severe intercell interference among RRHs. In this paper, the downlink spectrum efficiency is improved through cooperative transmission based on forming coalitions of RRHs. We formulate the problem as a coalition formation game in partition form. In the process of coalition formation, each RRH can join or leave a coalition to maximize its own individual utility while taking the coalition utility into account at the same time. Moreover, the convergence and stability of the resulting coalition structure are studied. The numerical simulation results demonstrate that the proposed approach based on a coalition formation game is superior to the non-cooperative method in terms of aggregate coalition utility.
Revenue sharing in semiconductor industry supply chain: Cooperative game theoretic approach
Bikram K Bahinipati; Arun Kanda; S G Deshmukh
2009-06-01
This paper defines cooperation as the process of coordinating the objectives and activities of supply chain (SC) members. It also focuses on cooperation as a solution for a hybrid coordination mechanism to form the basis for semiconductor industry supply chain management. In the complex and competitive environment of the semiconductor industry supply chain, independent system members face the difficult task of providing/sharing incentives resulting from e-market activities in a fair and equitable manner, so various other activities are necessary for the e-market to make revenue-sharing operations more stable and reliable. In this context, the importance of coalition in enhancing the e-market's capability for revenue generation and sharing is used to develop a possible mechanism for financial compensation to the supply chain members. Interpreting the supply chain as a cooperative game, the concepts of the Shapley value are used in this paper for analysing the revenue-sharing problem. The motivation behind such a scheme is to align the supply chain members' cost structure with the bidding value during auction and bargaining for e-procurement. The appropriateness of the Shapley value is verified to ensure that a stable solution exists. The practical implication of this paper is how to make the right decisions about revenue sharing. The principal contribution of this approach is the establishment of a pooling coalition that provides a stable and cooperative solution.
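The Shapley-value allocation applied above can be sketched by direct enumeration over player orderings (exponential in the number of players, so only practical for tiny games). The two-player characteristic function below is a hypothetical illustration, not the paper's data:

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley value of a characteristic function v: frozenset -> worth.

    Averages each player's marginal contribution over all join orders.
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            phi[p] += v[grown] - v[coalition]
            coalition = grown
    n_fact = factorial(len(players))
    return {p: phi[p] / n_fact for p in players}

# Hypothetical worths: neither member earns alone; cooperating yields 100.
v = {frozenset(): 0.0,
     frozenset({'supplier'}): 0.0,
     frozenset({'buyer'}): 0.0,
     frozenset({'supplier', 'buyer'}): 100.0}
```

The allocation is efficient by construction: the values sum to the worth of the grand coalition, which is the fairness property motivating its use for revenue sharing.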
A THEORETICAL AND POLICY APPROACH OF CLUSTERS, AS SUPPORTING INNOVATION AND DEVELOPMENT
Dalina Maria Andrei
2014-12-01
Full Text Available The concept of clusters seems to belong to the present, whereas it is actually an old economic concept: even the classics (e.g. David Ricardo) and the old neoclassics (e.g. Alfred Marshall) contain references still significant today, while the present has produced a favorable enough environment for it. This paper develops some of these references concerning economic thinking, description and policy issues. Moreover, the renewed need for clusters arises from the scenery that the economy exposes and preserves today; the risks, disadvantages and shadows of this current character and its actors are not only inherent but keep their place in context. Specific cluster policy, in its turn, explicitly identifies the national-level interests, the barriers to achieving those goals, and how the cluster approach can help overcome these problems. It equally weighs the relative merits of active intervention from the national level versus framework conditions and facilitation, vis-à-vis theories and models.
The Analysis of Theoretical Approaches to Identification of Factors of Regional Economic Growth
Kosyrieva Olena V.
2016-05-01
Full Text Available The article is devoted to the investigation of current approaches to identifying factors of economic growth in the regions. The theory of new economic geography, based on the works and studies of P. Krugman, is considered. The key issues of regional economic growth requiring in-depth study and consideration in the elaboration of strategies for sustainable development of the regions are highlighted. The views of leading domestic and foreign scholars, as well as experts of the OECD, the World Bank, the National Statistics Service of the United Kingdom and others, on the main drivers of economic growth are analyzed. On the basis of the study, the factors of regional economic growth most commonly encountered in theory and practice are generalized. It is shown that scientists and analysts most often classify as factors of regional economic growth those characterizing human potential, scientific-technical and innovative activity, and management and institutional capacity. Factors less frequently associated with regional economic growth include those characterizing the environment, business, and sociopolitical conditions; this is connected with the difficulty of quantifying them, but in no way diminishes their importance.
Development of theoretical approaches to the control of a water-treatment system
Chiam, Tze Chao; Yih, Yuehwern; Mitchell, Cary
2010-12-01
The Advanced Life Support/NASA Specialized Center of Research and Training (ALS/NSCORT) focuses on research and development of technologies to support human habitation during space missions. This research was done as part of an effort to maintain crewmembers' water supply in a closed life-support system. The water subsystem was the primary focus of this study because water is one of the most expensive and important resources for human survival. This paper focuses on the application of previously-developed methodologies for controlling a water-treatment system. The water-treatment and usage model was set up based on the Baseline Values and Assumptions Document (BVAD) published by NASA. The conditions, or "states", of the system are captured hourly in the model. Control policies are defined in terms of treatment efficiencies. One control policy is to be used for the system at the beginning of each hour. A simulation is developed to study the impact of different control policies on this model. A Markov Decision Process (MDP) is used to guide the choice of which policy to use. In order for the MDP to pick the best policy, a state-space exploration technique, the Controlled Random Walk, is used. This technique, together with re-initialization of initial system conditions, allows the system to visit a larger portion of the state space than some other techniques. In order to describe the state transitions, transition-probability matrices need to be computed. A probability-convergence technique is developed to minimize bias while computing these matrices. This technique is based on the Kullback-Leibler divergence measure, but modified for this application. A test case is constructed to assess the performance of the best policies. Three scenarios with different levels of fluctuation, or disturbances, of water consumption and treatment efficiencies are introduced in the test case. Results show that the best policies outperformed cases
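The Kullback-Leibler-based convergence idea can be sketched in a few lines: re-estimate an empirical transition-probability row as next-state samples accumulate, and watch the KL divergence between successive estimates shrink. This is a minimal illustration only, not the paper's modified divergence measure; the three-state chain, the check interval, and the threshold below are all hypothetical.

```python
import math
import random

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def estimate_row(samples, n_states):
    """Empirical transition-probability row from observed next-state samples."""
    counts = [0] * n_states
    for s in samples:
        counts[s] += 1
    return [c / len(samples) for c in counts]

random.seed(0)
true_row = [0.7, 0.2, 0.1]        # unknown "true" transition probabilities
samples, prev = [], [1 / 3] * 3   # start from a uniform prior estimate
converged_at = None
for step in range(1, 5001):
    samples.append(random.choices(range(3), weights=true_row)[0])
    if step % 500 == 0:           # re-estimate periodically
        cur = estimate_row(samples, 3)
        if converged_at is None and kl_divergence(cur, prev) < 1e-4:
            converged_at = step
        prev = cur
```

The periodic KL check stops the sampling effort once successive row estimates agree, which is the spirit of minimizing bias while limiting the number of state visits needed.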
A THEORETICAL APPROACH OF THE CONCEPT OF SOCIAL CAPITAL IN SUPPORTING ECONOMIC PERFORMANCE
Boldea Monica
2012-07-01
Full Text Available Any activity requires the presence of labor resources. If centuries ago the concept was that their presence was enough, now it takes a lot more. Moreover, since Aristotle the issue has been to take into consideration all aspects of community life that can lead to a "better life". In the current conditions we may consider resources in the broader context of the human factor and of the relations established within a society. Thus social capital was conceptualized. As the opportunities for economic growth based purely on the quantitative aspects of the determinants were limited, the need occurred to reconsider the qualitative and structural components. Social capital considers a number of the integrative components of social life. These refer to the relations established from the family level up to the level of societal institutions. It is necessary that these relationships be well established, and for proper performance it is necessary that aspects of education and health be properly valued and assessed. This helps set up strong institutions. Developed countries have the ability to create a proper environment for the manifestations of social capital; in these countries one can observe the growing importance of formal and more impersonal relations. But this just reinforces the occurrence and development of economic activities based on efficiency criteria leading to the countries' economic development. The interpretations of economic development issues have undergone changes in recent decades. If previously it was considered that the essential difference between rich and poor countries is reflected in the amount of physical capital per person, later on the concept of capital was expanded to include human capital as well, the lack of which was considered a serious obstacle to development, particularly in the case of poor countries. And given the fact that the transactions within an economic system take place in an
Tan, Xuesong Jonathan; Li, Liang; Guo, Wei
One important issue in cognitive transmission is for multiple secondary users to dynamically acquire spare spectrum from the single primary user. The existing spectrum sharing scheme adopts a deterministic Cournot game to formulate this problem, whose solution is the Nash equilibrium. This formulation is based on two implicit assumptions. First, each secondary user is willing to fully exchange transmission parameters with all others and hence knows their complete information. Second, the unused spectrum of the primary user for spectrum sharing is always larger than the total frequency demand of all secondary users at the Nash equilibrium. However, both assumptions may not be true in general. To remedy this, the present paper considers a more realistic assumption of incomplete information, i.e., each secondary user may choose to conceal its private information to achieve higher transmission benefit. Following this assumption, and given that the unused bandwidth of the primary user is large enough, we adopt a probabilistic Cournot game to formulate an opportunistic spectrum sharing scheme for maximizing the total benefit of all secondary users. Bayesian equilibrium is considered as the solution of this game. Moreover, we prove that a secondary user can improve its expected benefit by actively hiding its transmission parameters and increasing their variance. On the other hand, when the unused spectrum of the primary user is smaller than the maximal total frequency demand of all secondary users at the Bayesian equilibrium, we formulate a constrained optimization problem for the primary user to maximize its profit in spectrum sharing and revise the proposed spectrum sharing scheme to solve this problem heuristically. This provides a unified approach to overcome the aforementioned two limitations of the existing spectrum sharing scheme.
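For readers unfamiliar with the deterministic baseline the authors extend, a Cournot game with a linear inverse price can be solved by iterating best responses until the Nash equilibrium is reached. The linear price p = a - b(q1 + q2) and the symmetric marginal cost c below are illustrative assumptions, not parameters from the paper.

```python
def best_response(q_other, a, b, c):
    """Profit for user i is (a - b*(q_i + q_other) - c) * q_i;
    maximising over q_i gives this best-response quantity."""
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

def cournot_nash(a=10.0, b=1.0, c=1.0, iters=100):
    """Iterate best responses; for a duopoly this converges to the
    symmetric Nash equilibrium q* = (a - c) / (3b)."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = best_response(q2, a, b, c)
        q2 = best_response(q1, a, b, c)
    return q1, q2

q1, q2 = cournot_nash()  # both converge to (10 - 1) / 3 = 3.0
```

Under complete information both users announce these quantities; the paper's probabilistic (Bayesian) variant replaces the known parameters of each opponent with distributions.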
French, Simon D; Green, Sally E; O'Connor, Denise A; McKenzie, Joanne E; Francis, Jill J; Michie, Susan; Buchbinder, Rachelle; Schattner, Peter; Spike, Neil; Grimshaw, Jeremy M
2012-04-24
There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a
French Simon D
2012-04-01
Full Text Available Abstract Background There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. Methods The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? Results A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. Conclusions We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be
Lenich, Andreas
2011-04-22
Abstract Background Since cut-out still remains one of the major clinical challenges in the field of osteoporotic proximal femur fractures, remarkable developments have been made in improving treatment concepts. However, the mechanics of these complications have not been fully understood. We hypothesize, using the experimental data and a theoretical model, that a previous rotation of the femoral head due to de-central implant positioning can initiate a cut-out. Methods In this investigation we analysed our experimental data using two common screws (DHS/Gamma 3) and helical blades (PFN A/TFN) for the fixation of femur fractures in a simple theoretical model applying typical gait patterns on de-centrally positioned implants. In previous tests during a forced implant rotation by a biomechanical testing machine in a human femoral head, the two screws showed failure symptoms (2-6 Nm) at the same magnitude as torques acting in the hip during daily activities with de-central implant positioning, while the helical blades showed better stability (10-20 Nm). To calculate the torque of the head around the implant, only the force and the lever arm are needed (N [Nm] = F [N] * x [m]). The force F is the product of the mass M [kg] multiplied by the acceleration g [m/s2]. The lever arm is the distance between the center of the femoral head and the implant center on a horizontal line. Results Using 50% of 75 kg body weight, a torque of 0.37 Nm for the 1 mm decentralized position and 1.1 Nm for the 3 mm decentralized position of the implant was calculated. At 250% BW, appropriate to a normal step, torques of 1.8 Nm (1 mm) and 5.5 Nm (3 mm) have been calculated. Comparison of the experimental and theoretical results shows that both screws fail at the same magnitude as the torques that occur with a more than 3 mm de-centrally positioned implant. Conclusion We conclude that the center-center position in the femoral head of any kind of lag screw or blade is to be achieved to minimize rotation of the femoral head.
Nerlich Michael
2011-04-01
Full Text Available Abstract Background Since cut-out still remains one of the major clinical challenges in the field of osteoporotic proximal femur fractures, remarkable developments have been made in improving treatment concepts. However, the mechanics of these complications have not been fully understood. We hypothesize, using the experimental data and a theoretical model, that a previous rotation of the femoral head due to de-central implant positioning can initiate a cut-out. Methods In this investigation we analysed our experimental data using two common screws (DHS/Gamma 3) and helical blades (PFN A/TFN) for the fixation of femur fractures in a simple theoretical model applying typical gait patterns on de-centrally positioned implants. In previous tests during a forced implant rotation by a biomechanical testing machine in a human femoral head, the two screws showed failure symptoms (2-6 Nm) at the same magnitude as torques acting in the hip during daily activities with de-central implant positioning, while the helical blades showed better stability (10-20 Nm). To calculate the torque of the head around the implant, only the force and the lever arm are needed (N [Nm] = F [N] * x [m]). The force F is the product of the mass M [kg] multiplied by the acceleration g [m/s2]. The lever arm is the distance between the center of the femoral head and the implant center on a horizontal line. Results Using 50% of 75 kg body weight, a torque of 0.37 Nm for the 1 mm decentralized position and 1.1 Nm for the 3 mm decentralized position of the implant was calculated. At 250% BW, appropriate to a normal step, torques of 1.8 Nm (1 mm) and 5.5 Nm (3 mm) have been calculated. Comparison of the experimental and theoretical results shows that both screws fail at the same magnitude as the torques that occur with a more than 3 mm de-centrally positioned implant. Conclusion We conclude that the center-center position in the femoral head of any kind of lag screw or blade is to be achieved to minimize rotation of the
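The torque model quoted in the Methods (N = F * x with F = M * g) is easy to verify numerically; the sketch below reproduces the figures reported in the Results for a 75 kg patient.

```python
G = 9.81  # gravitational acceleration, m/s^2

def head_torque(body_mass_kg, load_fraction, lever_arm_m):
    """Torque about a de-central implant: N = F * x, with F = M * g."""
    return body_mass_kg * load_fraction * G * lever_arm_m

t1 = head_torque(75, 0.5, 0.001)  # 50% BW, 1 mm offset -> ~0.37 Nm
t2 = head_torque(75, 0.5, 0.003)  # 50% BW, 3 mm offset -> ~1.1 Nm
t3 = head_torque(75, 2.5, 0.001)  # 250% BW, 1 mm offset -> ~1.8 Nm
t4 = head_torque(75, 2.5, 0.003)  # 250% BW, 3 mm offset -> ~5.5 Nm
```

These computed torques sit squarely in the 2-6 Nm failure range of the tested screws once the offset exceeds about 3 mm, which is the comparison the Results section draws.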
An information-theoretic machine learning approach to expression QTL analysis.
Tao Huang
Full Text Available Expression Quantitative Trait Locus (eQTL) analysis is a powerful tool to study the biological mechanisms linking the genotype with gene expression. Such analyses can identify genomic locations where genotypic variants influence the expression of genes, both in close proximity to the variant (cis-eQTL) and on other chromosomes (trans-eQTL). Many traditional eQTL methods are based on a linear regression model. In this study, we propose a novel method by which to identify eQTL associations with information theory and machine learning approaches. Mutual Information (MI) is used to describe the association between a genetic marker and gene expression. MI can detect both linear and non-linear associations. What's more, it can capture the heterogeneity of the population. Advanced feature selection methods, Maximum Relevance Minimum Redundancy (mRMR) and Incremental Feature Selection (IFS), were applied to optimize the selection of the genes affected by the genetic marker. When we applied our method to a study of apoE-deficient mice, it was found that the cis-acting eQTLs are stronger than trans-acting eQTLs but there are more trans-acting eQTLs than cis-acting eQTLs. We compared our results (mRMR.eQTL) with R/qtl and MatrixEQTL (modelLINEAR and modelANOVA). In female mice, 67.9% of mRMR.eQTL results can be confirmed by at least two other methods while only 14.4% of R/qtl results can be confirmed by at least two other methods. In male mice, 74.1% of mRMR.eQTL results can be confirmed by at least two other methods while only 18.2% of R/qtl results can be confirmed by at least two other methods. Our methods provide a new way to identify the association between genetic markers and gene expression. Our software is available from the supporting information.
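As a minimal illustration of the MI statistic used here, mutual information between a discrete genotype and a discretized expression level can be estimated directly from co-occurrence counts. The toy genotype/expression vectors below are invented for illustration; they are not the paper's data or its full mRMR/IFS pipeline.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in nats) between two discrete variables,
    estimated from paired observations."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# A marker that perfectly separates expression levels carries log(2) nats;
# a marker unrelated to expression carries none.
genotype    = [0, 0, 1, 1]
linked      = ["low", "low", "high", "high"]
independent = ["low", "high", "low", "high"]
mi_linked   = mutual_information(genotype, linked)       # log(2) ~ 0.693
mi_unlinked = mutual_information(genotype, independent)  # 0.0
```

Because MI is estimated from joint frequencies rather than a fitted line, it picks up non-linear and heterogeneous marker-expression relationships that a linear regression can miss.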
Qi, Huan
Direct metal deposition (DMD), a laser-cladding based solid freeform fabrication technique, is capable of depositing multiple materials at desired composition which makes this technique a flexible method to fabricate heterogeneous components or functionally-graded structures. The inherently rapid cooling rate associated with the laser cladding process enables extended solid solubility in nonequilibrium phases, offering the possibility of tailoring new materials with advanced properties. This technical advantage opens the area of synthesizing a new class of materials designed by topology optimization method which have performance-based material properties. For better understanding of the fundamental phenomena occurring in multi-material laser cladding with coaxial powder injection, a self-consistent 3-D transient model was developed. Physical phenomena including laser-powder interaction, heat transfer, melting, solidification, mass addition, liquid metal flow, and species transportation were modeled and solved with a controlled-volume finite difference method. Level-set method was used to track the evolution of liquid free surface. The distribution of species concentration in cladding layer was obtained using a nonequilibrium partition coefficient model. Simulation results were compared with experimental observations and found to be reasonably matched. Multi-phase material microstructures which have negative coefficients of thermal expansion were studied for their DMD manufacturability. The pixel-based topology-optimal designs are boundary-smoothed by Bezier functions to facilitate toolpath design. It is found that the inevitable diffusion interface between different material-phases degrades the negative thermal expansion property of the whole microstructure. A new design method is proposed for DMD manufacturing. Experimental approaches include identification of laser beam characteristics during different laser-powder-substrate interaction conditions, an
Arantes, Lucas M; Varejão, Eduardo V V; Pelizzaro-Rocha, Karin J; Cereda, Cíntia M S; de Paula, Eneida; Lourenço, Maicon P; Duarte, Hélio A; Fernandes, Sergio A
2014-05-01
The aim of this work was to study the interaction between the local anesthetic benzocaine and p-sulfonic acid calix[n]arenes using NMR and theoretical calculations and to assess the effects of complexation on the cytotoxicity of benzocaine. The architectures of the complexes were proposed according to ¹H NMR data (Job plot, binding constants, and ROESY), indicating details of the insertion of benzocaine into the cavity of the calix[n]arenes. The proposed inclusion compounds were optimized using the PM3 semiempirical method, and the electronic plus nuclear repulsion energy contributions were computed at the DFT level using the PBE exchange-correlation functional and the 6-311G(d) basis set. The remarkable agreement between the experimental and theoretical approaches adds support to their use in the structural characterization of the inclusion complexes. In vitro cytotoxicity tests showed that complexation intensifies the intrinsic toxicity of benzocaine, possibly by increasing the water solubility of the anesthetic and favoring its partitioning inside biomembranes.
Yu, Donghai; Rong, Chunying; Lu, Tian; Chattaraj, Pratim K; De Proft, Frank; Liu, Shubin
2017-07-19
Even though the concepts of aromaticity and antiaromaticity are extremely important and widely used, there still exist many controversies in the literature, which are believed to originate from the fact that many types of aromaticity have been discovered and, at the same time, many aromaticity indexes have been proposed. In this work, using seven series of substituted fulvene derivatives as an example and with the information-theoretic approach in density functional reactivity theory, we examine these concepts from a different perspective. We investigate the changing patterns of Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy on the ring carbon atoms of these systems. Meanwhile, we also consider the variation trends of four representative aromaticity indexes: FLU, HOMA, ASE and NICS. Statistical analyses among these quantities show that, with the same ring structure of the derivatives, both the information-theoretic quantities and the aromaticity indexes obey the same changing pattern, which is valid across all seven systems studied. However, cross correlations between these two sets of quantities yield two completely opposite patterns. These ring-structure dependent correlations are in good agreement with Hückel's 4n + 2 rule of aromaticity and 4n rule of antiaromaticity. Our results should provide a novel and complementary viewpoint on how aromaticity and antiaromaticity should be appreciated and categorized. More studies are in progress to further our understanding of the matter.
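Of the information-theoretic quantities listed, Shannon entropy is the simplest to illustrate: S = -∫ ρ ln ρ dr over the (normalized) electron density. The 1D Gaussian grid below is a purely illustrative stand-in for a molecular density, not a calculation from the paper.

```python
import math

def shannon_entropy(rho, dx):
    """S = -∫ rho ln(rho) dx, evaluated on a discretized 1D density grid."""
    return -sum(r * math.log(r) * dx for r in rho if r > 0.0)

# Illustrative "density": a unit-normalised Gaussian sampled on [-5, 5].
dx = 0.01
grid = [i * dx for i in range(-500, 501)]
rho = [math.exp(-x * x) / math.sqrt(math.pi) for x in grid]
S = shannon_entropy(rho, dx)
# Analytic value for this density: (1 + ln(pi)) / 2 ~ 1.0724
```

In the paper's setting the integral runs over atomic basins of the ring carbons in 3D, but the discretized-sum structure of the estimate is the same.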
Oesterling, Sven; Schalk, Oliver; Geng, Ting; Thomas, Richard D; Hansson, Tony; de Vivie-Riedle, Regina
2017-01-18
For the series furan, furfural and β-furfural we investigated the effect of substituents and their positioning on the photoinduced relaxation dynamics in a combined theoretical and experimental approach. Using time-resolved photoelectron spectroscopy with a high-intensity probe pulse, we can, for the first time, follow the whole deactivation process of furan through a two-photon probe signal. Using the extended 2-electron 2-orbital model [Nenov et al., J. Chem. Phys., 2011, 135, 034304] we explain the formation of one central conical intersection and predict the influence of the aldehyde group of the derivatives on its geometry. This, as well as the relaxation mechanisms from photoexcitation to the final outcome, was investigated using a variety of theoretical methods. Complete active space self-consistent field was used for on-the-fly calculations, while complete active space perturbation theory and coupled cluster theory were used to accurately describe critical configurations. Experiment and theory show the relaxation dynamics of furfural and β-furfural to be slowed down, and together they disclose an additional deactivation pathway, which is attributed to the nO lone-pair state introduced with the aldehyde group.
Presseau, Justin; Schwalm, J D; Grimshaw, Jeremy M; Witteman, Holly O; Natarajan, Madhu K; Linklater, Stefanie; Sullivan, Katrina; Ivers, Noah M
2016-12-20
Despite evidence-based recommendations, adherence with secondary prevention medications post-myocardial infarction (MI) remains low. Taking medication requires behaviour change, and using behavioural theories to identify what factors determine adherence could help to develop novel adherence interventions. We compared the utility of different behaviour theory-based approaches for identifying modifiable determinants of medication adherence post-MI that could be targeted by interventions. Two studies were conducted with patients 0-2, 3-12, 13-24 or 25-36 weeks post-MI. Study 1: 24 patients were interviewed about barriers and facilitators to medication adherence. Interviews were conducted and coded using the Theoretical Domains Framework. Study 2: 201 patients answered a telephone questionnaire assessing Health Action Process Approach constructs to predict intention and medication adherence (MMAS-8). Study 1: the domains identified were Beliefs about Consequences, Memory/Attention/Decision Processes, Behavioural Regulation, Social Influences and Social Identity. Study 2: 64, 59, 42 and 58% reported high adherence at 0-2, 3-12, 13-24 and 25-36 weeks, respectively. Social Support and Action Planning predicted adherence at all time points, though the relationship between Action Planning and adherence decreased over time. Using two behaviour theory-based approaches provided complementary findings and identified modifiable factors that could be targeted to help translate intention into action to improve medication adherence post-MI.
Evans, Jason; Sullivan, Jack
2011-01-01
A priori selection of models for use in phylogeny estimation from molecular sequence data is increasingly important as the number and complexity of available models increases. The Bayesian information criterion (BIC) and the derivative decision-theoretic (DT) approaches rely on a conservative approximation to estimate the posterior probability of a given model. Here, we extended the DT method by using reversible jump Markov chain Monte Carlo approaches to directly estimate model probabilities for an extended candidate pool of all 406 special cases of the general time reversible + Γ family. We analyzed 250 diverse data sets in order to evaluate the effectiveness of the BIC approximation for model selection under the BIC and DT approaches. Model choice under DT differed between the BIC approximation and direct estimation methods for 45% of the data sets (113/250), and differing model choice resulted in significantly different sets of trees in the posterior distributions for 26% of the data sets (64/250). The model with the lowest BIC score differed from the model with the highest posterior probability in 30% of the data sets (76/250). When the data indicate a clear model preference, the BIC approximation works well enough to result in the same model selection as with directly estimated model probabilities, but a substantial proportion of biological data sets lack this characteristic, which leads to selection of underparametrized models.
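The BIC score against which the direct estimates are compared has the standard closed form BIC = k ln n - 2 ln L (lower is better). The log-likelihoods in the sketch below are invented for illustration; they are not from the 250 data sets the study analyzed.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical comparison of two substitution models on one alignment:
simple = bic(log_likelihood=-5120.0, n_params=2, n_obs=1000)
rich   = bic(log_likelihood=-5112.0, n_params=10, n_obs=1000)
# Eight extra parameters cost more than eight log-likelihood units buy back,
# so BIC prefers the simpler model here.
```

The study's point is precisely that this penalized-likelihood shortcut and directly estimated posterior model probabilities can disagree when no model is clearly preferred.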
Efficient and Low-Cost 3D Structured Light System Based on a Modified Number-Theoretic Approach
Salvi Joaquim
2010-01-01
Full Text Available Abstract 3D scanning based on structured light (SL) has been proven to be a powerful tool to measure the three-dimensional shape of surfaces, especially in biomechanics. We define a set of conditions that an optimal SL strategy should fulfill in the case of static scenes and then we present an efficient solution based on improving the number-theoretic approach (NTA). The proposal is compared to the well-known Gray code (GC) plus phase shift (PS) technique and the original NTA, all satisfying the same set of conditions but obtaining significant improvements with our implementation. The technique is validated in biomechanical applications such as the scanning of a footprint left on a "foam box" typically made for that purpose, where one of the ultimate goals could be the production of a shoe insole.
Cali F. Bartholomeusz
2012-01-01
Full Text Available Improving functional outcome, in addition to alleviating psychotic symptoms, is now a major treatment objective in schizophrenia research. Given the large body of evidence suggesting pharmacological treatments generally have minimal effects on indices of functioning, research has turned to psychosocial rehabilitation programs. Among these, neurocognitive and social cognitive interventions are at the forefront of this field and are argued to target core deficits inherent to the schizophrenia illness. However, to date, research trials have primarily focused on chronic schizophrenia populations, neglecting the early psychosis groups who are often as severely impaired in social and occupational functioning. This theoretical paper will outline the rationale for investigating adjunctive cognitive-based interventions in the early phases of psychotic illness, critically examine the current approach strategies used in these interventions, and assess the evidence supporting certain training programs for improving functional outcome in early psychosis. Potential pathways for future research will be discussed.
Efficient and Low-Cost 3D Structured Light System Based on a Modified Number-Theoretic Approach
Tomislav Pribanić
2010-01-01
Full Text Available 3D scanning based on structured light (SL) has been proven to be a powerful tool to measure the three-dimensional shape of surfaces, especially in biomechanics. We define a set of conditions that an optimal SL strategy should fulfill in the case of static scenes and then we present an efficient solution based on improving the number-theoretic approach (NTA). The proposal is compared to the well-known Gray code (GC) plus phase shift (PS) technique and the original NTA, all satisfying the same set of conditions but obtaining significant improvements with our implementation. The technique is validated in biomechanical applications such as the scanning of a footprint left on a “foam box” typically made for that purpose, where one of the ultimate goals could be the production of a shoe insole.
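The number-theoretic core of such an SL strategy is that wrapped stripe indices under pairwise-coprime fringe periods determine the absolute projector column via the Chinese Remainder Theorem. The sketch below uses assumed periods of 7 and 9 pixels, not the paper's actual pattern design.

```python
from functools import reduce

def crt(residues, moduli):
    """Chinese Remainder Theorem: recover x mod prod(moduli) from the
    residues x mod m_i (moduli must be pairwise coprime)."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

# Two fringe patterns with coprime periods of 7 and 9 px uniquely encode
# 7 * 9 = 63 projector columns from their wrapped stripe indices.
column = 40
recovered = crt([column % 7, column % 9], [7, 9])  # -> 40
```

Measuring only the local (wrapped) position under each pattern and inverting with the CRT is what lets a small number of projections code a wide projector range.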
Fahim-Al-Fattah, Md.; Rahman, Md. Tawabur; Islam, Md. Sherajul; Bhuiyan, Ashraful G.
2016-02-01
This paper presents a detailed study of the theoretical performance of the graphene field effect transistor (GFET) using an analytical approach. The GFET shows promising performance in terms of faster saturation as well as an extremely high cutoff frequency (3.9 THz). A significant shift of the Dirac point as well as asymmetrical ambipolar behavior is observed in the transfer characteristics. Similarly, approximately symmetrical capacitance-voltage (C-V) characteristics are obtained, and their consistency is supported by the clear saturation in both the accumulation and inversion regions. In addition, a high transconductance of 6800 µS at a small channel length (20 nm), along with the high cutoff frequency (3.9 THz), has been observed, which is promising for high-speed field effect devices.
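The reported transconductance and cutoff frequency are linked by the standard relation f_T = gm / (2π C_g). The sketch below backs out the lumped gate capacitance implied by the quoted figures; treating the device as a single lumped gate capacitance is a simplifying assumption of this illustration, not the paper's full analytical model.

```python
import math

def cutoff_frequency(gm_siemens, c_gate_farads):
    """Intrinsic cutoff frequency: f_T = gm / (2 * pi * C_g)."""
    return gm_siemens / (2.0 * math.pi * c_gate_farads)

# Back out the gate capacitance implied by the reported numbers
# (gm = 6800 uS, f_T = 3.9 THz): roughly 0.28 fF.
gm = 6800e-6                          # S
c_g = gm / (2.0 * math.pi * 3.9e12)   # F
f_t = cutoff_frequency(gm, c_g)       # recovers 3.9e12 Hz
```

A sub-femtofarad effective gate capacitance is of the right order for a 20 nm channel, which is why such THz-range cutoff frequencies are plausible for GFETs.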
Votchenko, E.S.
2016-04-01
Full Text Available This scientific article touches a vital topic of contemporary relations between business and government: public-private partnerships (PPP) in the system of public discourse. The article discusses various modern theoretical approaches to the study of the social aspects of interaction between business and government in modern political science. The author considers the concepts and models of foreign public-private partnerships, social investment and corporate citizenship. In the end, the author concludes that a new institutional paradigm of PPP, the practice of corporate citizenship, has formed and become stable in the modern scientific community. Corporate social responsibility in the narrow sense of the definition goes beyond charity and philanthropy, and today it is expressed in a broad sense, as corporate citizenship, which implies the mutual responsibility of business and government to the public.
Reniers, Genserik; Dullaert, Wout; Karel, Soudan
2009-08-15
Every company situated within a chemical cluster faces domino effect risks, whose magnitude depends on every company's own risk management strategies and on those of all others. Preventing domino effects is therefore very important to avoid catastrophes in the chemical process industry. Given that chemical companies are interlinked by domino effect accident links, there is some likelihood that even if certain companies fully invest in domino effects prevention measures, they can nonetheless experience an external domino effect caused by an accident which occurred in another chemical enterprise of the cluster. In this article a game-theoretic approach to interpret and model behaviour of chemical plants within chemical clusters while negotiating and deciding on domino effects prevention investments is employed.
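The strategic structure described, where each plant's exposure depends on every neighbour's prevention decision, can be sketched as a two-player game in normal form. The cost numbers below are entirely hypothetical; they only illustrate how a mutual-investment Nash equilibrium can arise, not the paper's actual model.

```python
from itertools import product

# Hypothetical expected annual costs (lower is better) for two neighbouring
# plants choosing to Invest ("I") or Not invest ("N") in domino prevention.
PAYOFF = {  # (action_A, action_B) -> (cost_A, cost_B)
    ("I", "I"): (10, 10),  # both invest: prevention cost only
    ("I", "N"): (18, 25),  # A still faces B's external domino risk
    ("N", "I"): (25, 18),
    ("N", "N"): (40, 40),  # full mutual exposure
}

def is_nash(a, b):
    """Neither plant can lower its own cost by deviating unilaterally."""
    cost_a, cost_b = PAYOFF[(a, b)]
    return (cost_a == min(PAYOFF[(x, b)][0] for x in "IN") and
            cost_b == min(PAYOFF[(a, y)][1] for y in "IN"))

equilibria = [(a, b) for a, b in product("IN", repeat=2) if is_nash(a, b)]
# With these made-up costs, mutual investment is the unique equilibrium.
```

The interesting cases in practice are cost structures where non-investment is individually tempting even though mutual investment is collectively best, which is exactly the negotiation problem the article models.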
Arko, Leopold; Quach, Eric; Sukul, Vishad; Desai, Anuj; Gassie, Kelly; Erkmen, Kadir
2015-07-01
We present surgical clipping of a giant middle cerebral artery aneurysm. The patient is a 64-year-old woman who suffered subarachnoid hemorrhage in 2005. She was treated with coiling of the aneurysm at an outside institution. She presented to our clinic with headaches and was found on angiography to have giant recurrence of the aneurysm. To allow adequate exposure for clipping, we performed the surgery through a cranio-orbito-zygomatic (COZ) skull base approach, which is demonstrated. The surgery was performed in an operating room/angiography hybrid suite allowing for high quality intraoperative angiography. The technique and room flow are also demonstrated. The video can be found here: http://youtu.be/eePcyOMi85M.
Boffeli, Troy J; Reinking, Ryan
2014-01-01
Transfer ulcers beneath the second metatarsal head are common after diabetes-related partial first ray amputation. Subsequent osteomyelitis of the second ray can further complicate this difficult situation. We present 2 cases depicting our plantar rotational flap technique for revision surgery involving conversion to either panmetatarsal head resection or transmetatarsal amputation (TMA). These cases are presented to demonstrate our indications, procedure selection criteria, flap technique, operative pearls, and staging protocol. The goals of this surgical approach are to excise and close the plantar ulcer beneath the second metatarsal head, remove any infected bone, allow staged surgery if needed, remove all remaining metatarsal heads to decrease the likelihood of repeat transfer ulcers, preserve the toes when practical, avoid excessive shortening of the foot, avoid multiple longitudinal dorsal incisions, and create a functional and cosmetically appealing foot. The flap is equally suited for either panmetatarsal head resection or TMA. The decision to pursue panmetatarsal head resection versus TMA largely depends on the condition of the remaining toes. Involvement of osteomyelitis in the base of the second proximal phalanx, the soft tissue viability of the remaining toes, the presence of a preoperative digital deformity, and the likelihood that saving the lesser toes will be beneficial from a cosmetic or footwear standpoint are factors we consider when deciding between panmetatarsal head resection and TMA. Retrospective chart review identified prompt healing of the flap in both patients. Neither patient experienced recurrent ulcers or required subsequent surgery within the first 12 months postoperatively.
O.A. Bilovodska
2010-12-01
Full Text Available This article presents a comparative analysis of existing approaches to evaluating enterprise strategies. In addition, the theoretical and methodological approach to evaluating enterprise strategy is improved by taking into account the strategic aim of the enterprise as well as the interests of both the manufacturer and the consumer of goods.
Florencio E. Hernández
2011-04-01
Full Text Available Many phenomena, including life itself and its biochemical foundations, are fundamentally rooted in chirality. Combinatorial methodologies for catalyst discovery and optimization remain an invaluable tool for gaining access to enantiomerically pure compounds in the development of pharmaceuticals, agrochemicals, and flavors. Some exotic metamaterials exhibiting negative refractive index at optical frequencies are based on chiral structures. Chiroptical activity is commonly quantified in terms of circular dichroism (CD) and optical rotatory dispersion (ORD). However, the linear nature of these effects limits their application in the far and near-UV region in highly absorbing and scattering biological systems. In order to surmount this barrier, in recent years we made important advancements on a novel nonlinear, low-scatter, long-wavelength CD approach called two-photon absorption circular dichroism (TPACD). Herein we present a descriptive analysis of the optics principles behind the experimental measurement of TPACD, i.e., the double L-scan technique, and its significance using pulsed lasers. We also make an instructive examination and discuss the reliability of our theoretical-computational approach, which uses modern analytical response theory, within a Time-Dependent Density Functional Theory (TD-DFT) approach. In order to illustrate the potential of this novel spectroscopic tool, we first present the experimental and theoretical results obtained in C2-symmetric, axially chiral R-(+)-1,1'-bi(2-naphthol), R-BINOL, a molecule studied at the beginning of our investigation in this field. Next, we reveal some preliminary results obtained for (R)-3,3'-diphenyl-2,2'-bi-1-naphthol, R-VANOL, and (R)-2,2'-diphenyl-3,3'-(4-biphenanthrol), R-VAPOL. This family of optically active compounds has been proven to be a suitable model for the structure-property relationship study of TPACD, because its members are highly conjugated yet photo-stable, and easily
Hernández, Florencio E; Rizzo, Antonio
2011-04-18
Many phenomena, including life itself and its biochemical foundations, are fundamentally rooted in chirality. Combinatorial methodologies for catalyst discovery and optimization remain an invaluable tool for gaining access to enantiomerically pure compounds in the development of pharmaceuticals, agrochemicals, and flavors. Some exotic metamaterials exhibiting negative refractive index at optical frequencies are based on chiral structures. Chiroptical activity is commonly quantified in terms of circular dichroism (CD) and optical rotatory dispersion (ORD). However, the linear nature of these effects limits their application in the far and near-UV region in highly absorbing and scattering biological systems. In order to surmount this barrier, in recent years we made important advancements on a novel nonlinear, low-scatter, long-wavelength CD approach called two-photon absorption circular dichroism (TPACD). Herein we present a descriptive analysis of the optics principles behind the experimental measurement of TPACD, i.e., the double L-scan technique, and its significance using pulsed lasers. We also make an instructive examination and discuss the reliability of our theoretical-computational approach, which uses modern analytical response theory, within a Time-Dependent Density Functional Theory (TD-DFT) approach. In order to illustrate the potential of this novel spectroscopic tool, we first present the experimental and theoretical results obtained in C(2)-symmetric, axially chiral R-(+)-1,1'-bi(2-naphthol), R-BINOL, a molecule studied at the beginning of our investigation in this field. Next, we reveal some preliminary results obtained for (R)-3,3'-diphenyl-2,2'-bi-1-naphthol, R-VANOL, and (R)-2,2'-diphenyl-3,3'-(4-biphenanthrol), R-VAPOL. This family of optically active compounds has been proven to be a suitable model for the structure-property relationship study of TPACD, because its members are highly conjugated yet photo-stable, and easily derivatized at the 5
Limkumnerd, Surachate
2014-03-01
Interest in thin-film fabrication for industrial applications has driven both theoretical and computational aspects of modeling its growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models by repeatedly applying the local diffusion rules for each hop to simulate a large diffusion length can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
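The substochastic-matrix inverse mentioned at the end is the workhorse of such many-step transition calculations. A minimal sketch (hop probabilities and lattice size are illustrative, not from the paper): a biased walker on a row of transient sites eventually attaches at one of two traps, and the absorption probabilities after arbitrarily many hops follow from the fundamental-matrix identity.

```python
import numpy as np

# Illustrative absorbing random walk: 4 transient lattice sites, hop right with
# probability p and left with 1 - p; the two ends attach to trap sites.
# Q holds transient->transient hop probabilities (a substochastic matrix),
# R holds transient->trap probabilities; each row of [Q | R] sums to 1.
p = 0.6
n = 4
Q = np.zeros((n, n))
R = np.zeros((n, 2))          # column 0: left trap, column 1: right trap
for i in range(n):
    if i > 0:
        Q[i, i - 1] = 1 - p
    else:
        R[i, 0] = 1 - p       # leftmost site can fall into the left trap
    if i < n - 1:
        Q[i, i + 1] = p
    else:
        R[i, 1] = p           # rightmost site can fall into the right trap

# Probability of ending in each trap after a possibly infinite number of hops:
# B = (I - Q)^{-1} R, the fundamental-matrix identity for absorbing chains.
B = np.linalg.solve(np.eye(n) - Q, R)
assert np.allclose(B.sum(axis=1), 1.0)   # absorption is certain
print(B)
```

Solving the linear system once replaces the naive repeated application of local hop rules, which is the computational point the abstract makes.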
余永亮; 童秉纲; 马晖扬
2003-01-01
Numerous studies of the aerodynamics of insect wing flapping have been carried out through flight investigations, model experiments, and numerical simulations, but theoretical modeling remains to be explored. In the present paper, an analytic approach is presented to model the flow interactions of wing flapping in air for small insects, for which the surrounding flow fields are highly unsteady and highly viscous. The model of wing flapping is a 2-D flat plate, which makes plunging and pitching oscillations as well as quick rotations reversing its positions of leading and trailing edges, respectively, during stroke reversals. It contains three simplified aerodynamic assumptions: (i) unsteady potential flow; (ii) discrete vortices shed from both leading and trailing edges of the wing; (iii) Kutta conditions applied at both edges. The problem is then reduced to the solution of the unsteady Laplace equation, using distributed singularities, i.e., sources/sinks and vortices, in the field. To validate the proposed physical model and analytic method against benchmark examples, two elemental motions in wing flapping and a case of whole flapping cycles are analyzed, and the predicted results agree well with available experimental and numerical data. This verifies that the present analytical approach may give qualitatively correct and quantitatively reasonable results. Furthermore, the total fluid-dynamic force in the present method can be decomposed into three parts: one due to the added inertial (or mass) effect, and the other two due to the induction of vortices shed from the leading and trailing edges and their images, respectively; this helps to reveal the flow control mechanisms in insect wing flapping.
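One elementary building block of such a singularity-based potential-flow model is the velocity induced by discrete 2-D point vortices. The sketch below is that single ingredient only, not the authors' full wing model, written in the standard complex-variable form of 2-D potential flow.

```python
import numpy as np

def induced_velocity(x, vortex_x, gamma):
    """Velocity (u, v) at complex field point x induced by 2-D point vortices
    of strengths gamma at complex positions vortex_x. Uses the complex
    velocity dw/dz = u - i*v = sum_k -i*gamma_k / (2*pi*(x - x_k))."""
    w = np.sum(-1j * gamma / (2 * np.pi * (x - vortex_x)))
    return w.real, -w.imag   # conjugate velocity gives u - i*v

# A single counterclockwise vortex of strength 2*pi at the origin induces a
# purely tangential velocity (0, 1) at the point (1, 0).
u, v = induced_velocity(1 + 0j, np.array([0j]), np.array([2 * np.pi]))
```

Summing such contributions over all shed vortices and their images, plus the source/sink distribution, is what reduces the unsteady Laplace problem to algebra over the singularity strengths.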
Iida, Kenji; Noda, Masashi; Nobusada, Katsuyuki
2017-02-01
We have developed a theoretical approach for describing the electronic properties of hetero-interface systems under an applied electrode bias. The finite-temperature density functional theory is employed for controlling the chemical potential in their interfacial region, and thereby the electronic charge of the system is obtained. The electric field generated by the electronic charging is described as a saw-tooth-like electrostatic potential. Because of the continuum approximation of dielectrics sandwiched between electrodes, we treat dielectrics with thicknesses in a wide range from a few nanometers to more than several meters. Furthermore, the approach is implemented in our original computational program named grid-based coupled electron and electromagnetic field dynamics (GCEED), facilitating its application to nanostructures. Thus, the approach is capable of comprehensively revealing electronic structure changes in hetero-interface systems with an applied bias that are practically useful for experimental studies. We calculate the electronic structure of a SiO2-graphene-boron nitride (BN) system in which an electrode bias is applied between the graphene layer and an electrode attached on the SiO2 film. The electronic energy barrier between graphene and BN is varied with an applied bias, and the energy variation depends on the thickness of the BN film. This is because the density of states of graphene is so low that the graphene layer cannot fully screen the electric field generated by the electrodes. We have demonstrated that the electronic properties of hetero-interface systems are well controlled by the combination of the electronic charging and the generated electric field.
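The saw-tooth-like electrostatic potential can be illustrated with a one-dimensional electrostatics sketch. The layer thicknesses and permittivities below are assumed, generic values for a SiO2/BN-like stack, not parameters from the paper: with no free charge at the internal interfaces the displacement field D is uniform, so the field in each layer is D/(eps0*eps_i) and the potential is piecewise linear with a different slope per layer.

```python
import numpy as np

# Illustrative biased dielectric stack: two slabs between ideal electrodes.
eps0 = 8.854e-12                    # vacuum permittivity, F/m
thick = np.array([2e-9, 5e-9])      # layer thicknesses, m (assumed values)
eps_r = np.array([3.9, 4.0])        # relative permittivities (nominal values)
V_bias = 1.0                        # applied bias across the stack, V

# V = sum_i E_i * t_i with E_i = D/(eps0*eps_i) fixes the uniform D:
D = V_bias / np.sum(thick / (eps0 * eps_r))
E = D / (eps0 * eps_r)              # field in each layer, V/m
drops = E * thick                   # potential drop per layer (the saw-tooth slopes)
assert np.isclose(drops.sum(), V_bias)
```

The higher-permittivity layer carries the smaller field, which is the piecewise-linear "saw-tooth" shape the continuum dielectric approximation produces between electrodes.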
Suflita, Joseph M.; Duncan, Kathleen E.
2010-08-14
The Joint United States - European Union Theoretical and Practical Course on Molecular Approaches for in situ Biodegradation was held May 24 through June 7 at The University of Oklahoma. Twenty-four graduate and postgraduate students from both the United States and the European Union attended the course. Nine states and ten European countries were represented. Students were assigned living quarters and laboratory partners to maximize interactions between US and EU participants as well as to mix people with different technical backgrounds together. The students used the latest methods in molecular biology to characterize beneficial microorganisms and genes involved in the biodegradation of pollutants at a nearby landfill as well as an active hydrocarbon-producing site, part of which is undergoing bioremediation. Seminars by distinguished scientists were organized to expose the students to the breadth of the environmental field, including field assay and engineering applications, laboratory scale bioreactors, microbiology, genetics, regulation, pathway analysis, design of recombinant bacteria, and application of the associated techniques to the field. Lectures were given by various OU faculty on the principles behind the techniques to be used in the laboratory. These lectures included troubleshooting hints and encouraged questions and comments from the audience. The laboratory experiments covered chemical, microbiological, and molecular genetic analyses of soils; bioavailability of contaminants; enrichment cultures; gene probing; PCR amplification of known genes and gene families; identification of microbes based on traditional and nontraditional approaches, nutritional capabilities, and 16S rRNA sequence; mRNA detection; and enzyme assays. Field trips were made to the USGS landfill field sampling site, and to the Tall Grass Prairie Preserve, a Nature Conservancy site which also featured long-term studies of bioremediation of crude oil and brine spills by one of the
Yi Tang
2017-05-01
Full Text Available In a competitive electricity market with substantial involvement of renewable electricity, maximizing profits by optimizing bidding strategies is crucial to different power producers, including conventional power plants and renewable ones. This paper proposes a game-theoretic bidding optimization method based on bi-level programming, where power producers are at the upper level and utility companies are at the lower level. The competition among the multiple power producers is formulated as a non-cooperative game in which bidding curves are their strategies, while uniform clearing pricing is considered for utility companies represented by an independent system operator. Consequently, based on the formulated game model, the bidding strategies for power producers are optimized for the day-ahead market and the intraday market, considering the properties of renewable energy; and the clearing pricing for the utility companies, with respect to the power quantity from different power producers, is optimized simultaneously. Furthermore, a distributed algorithm is provided to search for the solution of the generalized Nash equilibrium. Finally, simulations were performed and the results discussed to verify the feasibility and effectiveness of the proposed non-cooperative game-based bi-level optimization approach.
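A minimal stand-in for such a distributed equilibrium search is sequential best-response iteration in a linear-price quantity game. This is not the paper's bi-level model; demand parameters and producer costs below are purely illustrative.

```python
# Illustrative quantity-bidding game: inverse demand P = a - b * total quantity,
# three producers with constant marginal costs c_i (all numbers hypothetical).
a, b = 100.0, 1.0
c = [10.0, 10.0, 20.0]

def best_response(i, q):
    """Producer i's profit-maximizing quantity given the others' quantities."""
    others = sum(q) - q[i]
    return max(0.0, (a - c[i] - b * others) / (2 * b))

q = [0.0] * len(c)
for _ in range(200):             # sequential (Gauss-Seidel) best-response sweeps
    for i in range(len(c)):
        q[i] = best_response(i, q)

# At a Nash equilibrium no producer gains by deviating unilaterally
assert all(abs(q[i] - best_response(i, q)) < 1e-9 for i in range(len(c)))
print([round(x, 2) for x in q])  # -> [25.0, 25.0, 15.0]
```

Each producer repeatedly re-optimizes against the others' latest bids, and the iteration settles at the unique equilibrium of this toy market; the paper's algorithm plays the same fixed-point game over bidding curves rather than scalar quantities.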
Painter, K. J.; Hunt, G. S.; Wells, K. L.; Johansson, J. A.; Headon, D. J.
2012-01-01
In his seminal 1952 paper, ‘The Chemical Basis of Morphogenesis’, Alan Turing lays down a milestone in the application of theoretical approaches to understand complex biological processes. His deceptively simple demonstration that a system of reacting and diffusing chemicals could, under certain conditions, generate spatial patterning out of homogeneity provided an elegant solution to the problem of how one of nature's most intricate events occurs: the emergence of structure and form in the developing embryo. The molecular revolution that has taken place during the six decades following this landmark publication has now placed this generation of theoreticians and biologists in an excellent position to rigorously test the theory and, encouragingly, a number of systems have emerged that appear to conform to some of Turing's fundamental ideas. In this paper, we describe the history and more recent integration between experiment and theory in one of the key models for understanding pattern formation: the emergence of feathers and hair in the skins of birds and mammals. PMID:23919127
Painter, K J; Hunt, G S; Wells, K L; Johansson, J A; Headon, D J
2012-08-06
In his seminal 1952 paper, 'The Chemical Basis of Morphogenesis', Alan Turing lays down a milestone in the application of theoretical approaches to understand complex biological processes. His deceptively simple demonstration that a system of reacting and diffusing chemicals could, under certain conditions, generate spatial patterning out of homogeneity provided an elegant solution to the problem of how one of nature's most intricate events occurs: the emergence of structure and form in the developing embryo. The molecular revolution that has taken place during the six decades following this landmark publication has now placed this generation of theoreticians and biologists in an excellent position to rigorously test the theory and, encouragingly, a number of systems have emerged that appear to conform to some of Turing's fundamental ideas. In this paper, we describe the history and more recent integration between experiment and theory in one of the key models for understanding pattern formation: the emergence of feathers and hair in the skins of birds and mammals.
Yin, W.; Peyton, A. J.; Stefani, F.; Gerbeth, G.
2009-10-01
A completely contactless flow measurement technique based on the principle of EM induction measurements—contactless inductive flow tomography (CIFT)—has been previously reported by a team based at Forschungszentrum Dresden-Rossendorf (FZD). This technique is suited to the measurement of velocity fields in high conductivity liquids, and the possible applications range from monitoring metal casting and silicon crystal growth in industry to gaining insights into the working of the geodynamo. The forward problem, i.e. calculating the induced magnetic field from a known velocity profile, can be described as a linear relationship when the magnetic Reynolds number is small. Previously, an integral equation method was used to formulate the forward problem; however, although the sensitivity matrices were calculated, they were not explicitly expressed and computation involved the solution of an ill-conditioned system of equations using a so-called deflation method. In this paper, we present the derivation of the sensitivity matrix directly from electromagnetic field theory and the results are expressed very concisely as the cross product of two field vectors. A numerical method based on a finite difference method has also been developed to verify the formulation. It is believed that this approach provides a simple yet fast route to the forward solution of CIFT. Furthermore, a method for sensor design selection based on eigenvalue analysis is presented.
Taylor, Natalie; Long, Janet C; Debono, Deborah; Williams, Rachel; Salisbury, Elizabeth; O'Neill, Sharron; Eykman, Elizabeth; Braithwaite, Jeffrey; Chin, Melvin
2016-03-12
Lynch syndrome is an inherited disorder associated with a range of cancers and found in 2-5% of colorectal cancers. Lynch syndrome is diagnosed through a combination of significant family and clinical history and pathology. The definitive diagnostic germline test requires formal patient consent after genetic counselling. If diagnosed early, carriers of Lynch syndrome can undergo increased surveillance for cancers, which in turn can prevent late-stage cancers, optimise treatment and decrease mortality for themselves and their relatives. However, over the past decade, international studies have reported that only a small proportion of individuals with suspected Lynch syndrome were referred for genetic consultation and possible genetic testing. The aim of this project is to use behaviour change theory and implementation science approaches to increase the number and speed of healthcare professional referrals of colorectal cancer patients with a high-likelihood risk of Lynch syndrome to appropriate genetic counselling services. The six-step Theoretical Domains Framework Implementation (TDFI) approach will be used at two large metropolitan hospitals treating colorectal cancer patients. The steps are: 1) form local multidisciplinary teams to map current referral processes; 2) identify target behaviours that may lead to increased referrals, using discussion supported by a retrospective audit; 3) identify barriers to those behaviours using the validated Influences on Patient Safety Behaviours Questionnaire and TDFI-guided focus groups; 4) co-design interventions to address barriers using focus groups; 5) co-implement interventions; and 6) evaluate intervention impact. Chi-square analysis will be used to test the difference in the proportion of high-likelihood risk Lynch syndrome patients being referred for genetic testing before and after intervention implementation. A paired t-test will be used to assess the mean time from the pathology test results to referral for high
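The planned chi-square analysis of referral proportions can be sketched with the standard shortcut formula for a 2x2 table. The audit counts below are hypothetical, chosen only to illustrate the calculation, not results from this protocol.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]:
    chi2 = n*(a*d - b*c)^2 / ((a+b)*(c+d)*(a+c)*(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 12 of 40 eligible patients referred before the
# intervention, 28 of 40 after (rows: period; columns: referred / not referred).
chi2 = chi_square_2x2(12, 28, 28, 12)
print(chi2)  # -> 12.8, well above 3.84, the 5% critical value at 1 df
```

A statistic this large would reject the hypothesis that referral rates were unchanged; the real analysis would of course use the observed hospital counts.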
Korzhynska Tetiana
2015-03-01
Full Text Available The article deals with the content of future customs officers’ professional training. Such notions as “training”, “specialists’ professional training” and “customs officers’ professional training” are defined. The article presents an overview of the scientific literature and a theoretical analysis of materials concerning future customs officers’ professional training in the higher educational institutions of Ukraine and the Republic of Poland, an issue that remains open. The major scientific approaches of leading domestic and foreign scientists to the study of future customs officers’ professional training have been characterized, in particular the competence approach, functional approach, instrumental approach, learner-centred approach, axiological approach, historical approach, and systematic and comprehensive approaches, together with theoretical generalization. The conception of the research problem in the educational sphere of higher educational institutions has been theoretically generalized. The article stresses that the professional training of future customs specialists is a process of learning and understanding the specific knowledge, skills and proficiencies that enable future customs officers to perform the functions of their future professional activities in accordance with their professional duties during customs service. It is also noted that the professional training of future customs specialists is shaped by personal, practical, functional and historical factors, which is important both for understanding this phenomenon and for research and development.
Pabst, Stefan Ulf
2013-04-15
The concept of atoms as the building blocks of matter has existed for over 3000 years. A revolution in the understanding and the description of atoms and molecules has occurred in the last century with the birth of quantum mechanics. After the electronic structure was understood, interest in studying the dynamics of electrons, atoms, and molecules increased. However, time-resolved investigations of these ultrafast processes were not possible until recently. The typical time scale of atomic and molecular processes is in the picosecond to attosecond realm. Tremendous technological progress in recent years makes it possible to generate light pulses on these time scales. With such ultrashort pulses, atomic and molecular dynamics can be triggered, watched, and controlled. Simultaneously, the need rises for theoretical models describing the underlying mechanisms. This doctoral thesis focuses on the development of theoretical models which can be used to study the dynamical behavior of electrons, atoms, and molecules in the presence of ultrashort light pulses. Several examples are discussed illustrating how light pulses can trigger and control electronic, atomic, and molecular motions. In the first part of this work, I focus on the rotational motion of asymmetric molecules, which happens on picosecond and femtosecond time scales. Here, the aim is to align all three axes of the molecule as well as possible. To investigate alignment dynamics theoretically, I developed a program that can describe alignment motion ranging from the impulsive to the adiabatic regime. The asymmetric molecule SO{sub 2} is taken as an example to discuss strategies of optimizing 3D alignment without the presence of an external field (i.e., field-free alignment). Field-free alignment is particularly advantageous because subsequent experiments on the aligned molecule are not perturbed by the aligning light pulse. Well-aligned molecules in the gas phase are suitable for diffraction experiments. From the
Arran Gare
2008-10-01
, they are approaching the question: What is Life? from different directions. Focussing on the work of Robert Rosen, in this paper I will try to show what revisions in our understanding of science theoretical biologists need to accept in order to do justice to the insights of the philosophical biologists. I will suggest that these revisions should be accepted, and spell out some of the implications of such a science.
Cissé, Cheickna; Mathieu, Sophie V; Abeih, Mohamed B Ould; Flanagan, Lindsey; Vitale, Sylvia; Catty, Patrice; Boturyn, Didier; Michaud-Soret, Isabelle; Crouzy, Serge
2014-12-19
The FUR protein (ferric uptake regulator) is an iron-dependent global transcriptional regulator. Specific to bacteria, FUR is an attractive antibacterial target since virulence is correlated to iron bioavailability. Recently, four anti-FUR peptide aptamers, composed of 13 amino acid variable loops inserted into a thioredoxinA scaffold, were identified, which were able to interact with Escherichia coli FUR (EcFUR), inhibit its binding to DNA and to decrease the virulence of pathogenic E. coli in a fly infection model. The first characterization of anti-FUR linear peptides (pF1, 6 to 13 amino acids) derived from the variable part of the F1 anti-FUR peptide aptamer is described herein. Theoretical and experimental approaches, in original combination, were used to study interactions of these peptides with FUR in order to understand their mechanism of inhibition. After modeling EcFUR by homology, docking with Autodock was combined with molecular dynamics simulations in implicit solvent to take into account the flexibility of the partners. All calculations were cross-checked either with other programs or with experimental data. As a result, reliable structures of EcFUR and its complex with pF1 are given and an inhibition pocket formed by the groove between the two FUR subunits is proposed. The location of the pocket was validated through experimental mutation of key EcFUR residues at the site of proposed peptide interaction. Cyclisation of pF1, mimicking the peptide constraint in F1, improved inhibition. The details of the interactions between peptide and protein were analyzed and a mechanism of inhibition of these anti-FUR molecules is proposed.
Hill, Jason E.; Matlock, Kevin; Pal, Ranadip; Nutter, Brian; Mitra, Sunanda
2016-03-01
Magnetic Resonance Imaging (MRI) is a vital tool in the diagnosis and characterization of multiple sclerosis (MS). MS lesions can be imaged with relatively high contrast using either Fluid Attenuated Inversion Recovery (FLAIR) or Double Inversion Recovery (DIR). Automated segmentation and accurate tracking of MS lesions from MRI remain a challenging problem. Here, an information theoretic approach to clustering the voxels in pseudo-colorized multispectral MR data (FLAIR, DIR, T2-weighted) is utilized to automatically segment MS lesions of various sizes and noise levels. The Improved Jump Method (IJM) clustering, assisted by edge suppression, is applied to the segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF) and MS lesions, if present, in a subset of slices determined to be the best MS lesion candidates via Otsu's method. From this preliminary clustering, the modal data values for the tissues can be determined. A Euclidean distance is then used to estimate the fuzzy memberships of each brain voxel for all tissue types and their 50/50 partial volumes. From these estimates, binary discrete and fuzzy MS lesion masks are constructed. Validation is provided using three synthetic MS lesion brains (mild, moderate and severe) with labeled ground truths. The MS lesions of mild, moderate and severe designations were detected with sensitivities of 83.2%, 88.5%, and 94.5%, with corresponding Dice similarity coefficients (DSC) of 0.7098, 0.8739, and 0.8266, respectively. The effect of MRI noise is also examined using simulated noise and the application of a bilateral filter in preprocessing.
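The Dice similarity coefficient used for validation above is straightforward to compute from binary masks; a minimal sketch with toy masks (not the study's data):

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity coefficient DSC = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

# Toy 2x3 masks: 3 overlapping voxels, |seg| = 3, |truth| = 4 -> DSC = 6/7
seg   = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 1], [1, 1, 0]])
print(round(dice(seg, truth), 3))  # -> 0.857
```

DSC rewards overlap relative to the combined mask sizes, which is why it can drop (as in the severe case above) even as voxel-level sensitivity rises.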
Cristina Raluca Popescu
2015-05-01
Full Text Available In the paper “The Assessment Methodology RADAR – A Theoretical Approach of a Methodology for Coordinating the Efforts to Improve the Organizational Processes to Achieve Excellence” the authors present the basic features of the assessment methodology RADAR that is designed to coordinate the efforts to improve the organizational processes in order to achieve excellence.
Clifford, S. M.
2005-12-01
The abundance and distribution of water on Mars has important implications for understanding the planet's geologic, hydrologic, and climatic history; the potential origin and continued survival of life; and the accessibility of a critical in-situ resource for sustaining future human explorers. For this reason, the search for water has become a key objective of NASA's Mars Exploration Program. Evidence of water, both past and present, is found almost everywhere, but most persuasively in the form of the planet's outflow channels -- broad scoured depressions hundreds of kilometers long that emerge abruptly from large areas of collapsed and disrupted terrain, the apparent result of a massive release of subpermafrost groundwater. Based on a conservative estimate of the volume of water required to erode the channels, Carr (Icarus, 68, 187-216, 1986) has estimated that Mars may possess a total planetary inventory of water equivalent to a global ocean 0.5 - 1 km deep. Of this global inventory, ~0.000001% is found in the atmosphere, while ~5-10% is visible as ice in the perennial polar caps. This leaves ~90-95% of the planetary inventory of water unaccounted for, the vast bulk of which is believed to reside, as ground ice and groundwater, within the planet's crust. Theoretical and geomorphic approaches to assessing the current distribution and state of subsurface water on Mars face numerous obstacles -- thus geophysical techniques hold the most promise. The first such investigation, the Gamma-Ray Neutron Spectrometer aboard the Mars Odyssey Orbiter, arrived at Mars in 2001. It revealed that the top half-meter of the Martian regolith is rich in hydrogen at latitudes above ~40-degrees, an observation consistent with the presence of near-surface ground ice. Assessing the distribution of water at greater depths (up to several kilometers) is one of the chief objectives of the MARSIS experiment on ESA's Mars Express spacecraft. MARSIS is a low-frequency (1-5 MHz) orbital radar
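Carr's "global equivalent layer" figure quoted above is easy to turn into a volume with back-of-envelope arithmetic (the radius value is the standard mean radius of Mars; the depths are the quoted 0.5-1 km range):

```python
import math

# Volume of a global layer of water 0.5-1 km deep over Mars's surface
R_mars = 3389.5e3                          # mean radius of Mars, m
area = 4 * math.pi * R_mars ** 2           # surface area, about 1.44e14 m^2
for depth_km in (0.5, 1.0):
    volume_km3 = area * (depth_km * 1e3) / 1e9   # convert m^3 to km^3
    print(f"{depth_km} km layer -> {volume_km3:.2e} km^3")
```

This puts the conservative inventory at roughly 7e7 to 1.4e8 cubic kilometers of water, the bulk of which the abstract argues must be hidden in the crust.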
IGNACIO TAMAYO TORRES
2008-06-01
Full Text Available This paper discusses, from a theoretical perspective, the role the Internet may play as a key factor in fostering cultural convergence among countries. In this regard, we reflect on a central theoretical proposal, taking the dynamic constructivist approach as the main reference. The paper examines how shared values generated in the online communication processes developed by individuals (consumers) from different cultures, either with each other or between them and the medium, can be passed on to every culture of origin, fostering their rapprochement in the long run.
张菀心; 刘文俊
2014-01-01
For centuries, foreign-language teachers have dreamed of teaching a foreign language the way a mother tongue is learned. In 1983, Krashen and Terrell introduced the Natural Approach. The Natural Approach lays stress on comprehensible input, setting goals for different stages, and paying attention to learners' interests and feelings. This paper describes the theoretical basis of the Natural Approach and the writer's practice of applying some of its ideas to her teaching of oral English, taking Zun Yi Normal College as an example.
New theoretical approach to the RF-dynamics of soft magnetic FeTaN films for CMOS components
Seemann, K. E-mail: klaus.seemann@imf.fzk.de; Leiste, H.; Bekker, V
2004-07-01
A new theoretical description of the RF behavior of magnetic films possessing an in-plane uniaxial anisotropy was carried out by combining the Landau-Lifschitz and Maxwell equations, which model spin dynamics and eddy-current loss respectively, into a comprehensive mathematical expression. This makes it possible to compute the complex permeability over a wide frequency range. The theory was used to describe the dynamic response of soft magnetic FeTaN films deposited by reactive RF magnetron sputtering for microelectronic RF components such as micro-inductors. A 6-inch target consisting of Fe{sub 90}Ta{sub 10} was used to grow the films on oxidized silicon substrates with a titanium seed layer for better film adhesion. The nitrogen concentration could be adjusted by a flow-control system. With regard to aluminum CMOS processes carried out at around 400 deg. C, and in order to induce an in-plane uniaxial anisotropy, the films were annealed at this temperature in a static magnetic field of 50 mT. It was found that the nitrogen content of the films, in conjunction with the required temperatures, played an important role in obtaining a marked uniaxial anisotropy in the film plane. Consequently, an annealing procedure had to be elaborated to obtain the in-plane uniaxial anisotropy at the relevant CMOS temperatures. With a view to application in RF components and to eddy-current losses, the magnetic FeTaN films were investigated at different thicknesses, which delivered information that fed into a multilayer design. Multilayers, i.e. magnetic single layers of sufficient thickness separated by insulating inter-layers made of Si{sub 3}N{sub 4}, were necessary to decrease eddy currents at radio frequencies. This led to improved frequency behavior. The introduced theory was then compared satisfactorily with the frequency-dependent permeability of FeTaN films of various thicknesses. It is concluded that an optimum film thickness can be roughly calculated in
Mikeš, Daniel
2010-05-01
Theoretical geology. Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e. can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method has a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore the following: how can the behaviour of a sedimentary system be predicted? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that all interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world, and it is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same
Oprisan, Sorinel Adrian [Department of Psychology, University of New Orleans, New Orleans, LA (United States)]. E-mail: soprisan@uno.edu
2001-11-30
There has been increased theoretical and experimental research interest in autonomous mobile robots exhibiting cooperative behaviour. This paper provides consistent quantitative measures of the organizational degree of a two-dimensional environment. We proved, by way of numerical simulations, that the theoretically derived values of the feature are reliable measures of aggregation degree. The slope of the feature's dependence on memory radius leads to an optimization criterion for stochastic functional self-organization. We also described the intellectual heritage that guided our research, as well as possible future developments. (author)
Sluyters, J.H.; Sluyters-Rehbach, M.; Struys, J.
1984-01-01
The theoretical expressions for the faradaic admittance and the faradaic demodulation voltage are rewritten, introducing the thermodynamic restrictions proposed by Reinmuth in 1972 and without any specification of the mechanism of the electrode reaction. The result is applied to general fi
Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang
2012-01-01
Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as the pathophysiology of neurological disorders. It can also be of great importance for the evaluation of hypoxia within tumors of the brain and other organs. A fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened the possibility of using this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting the MRI signal to brain structure and functioning, and designing experimental techniques allowing MR measurement of the salient features of those models. In this review we discuss several such theoretical models and experimental methods for the quantification of brain hemodynamic and metabolic properties. Our review aims mostly at methods for measuring the oxygen extraction fraction, OEF, based on measuring blood oxygenation level. Combining a measurement of OEF with a measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail the magnetic properties of blood – magnetic susceptibility and MR relaxation – and theoretical models of the intravascular contribution to the MR signal under different experimental conditions. Then we describe a "through-space" effect – the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on MR signal formation. Further, we describe several experimental techniques taking advantage of these theoretical models. Some of these techniques – MR susceptometry and T2-based quantification of OEF – utilize the intravascular MR signal. Another technique – qBOLD – evaluates OEF by making use of through-space effects. In this review we target both scientists just entering the MR field and more experienced MR researchers.
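The OEF/CBF/CMRO2 relationship invoked above is the Fick principle. A minimal sketch with typical resting gray-matter values; the arterial oxygen content of ~8 μmol O2 per ml blood is an assumed textbook figure, not a value from this review:

```python
def cmro2(cbf, oef, ca_o2=8.0):
    """Fick principle: CMRO2 = CBF * OEF * CaO2.
    cbf:   cerebral blood flow (ml blood / 100 g / min)
    oef:   oxygen extraction fraction (dimensionless, ~0.3-0.45 at rest)
    ca_o2: arterial oxygen content (umol O2 / ml blood, assumed value)
    Returns CMRO2 in umol O2 / 100 g / min."""
    return cbf * oef * ca_o2

# Typical resting values: CBF ~50 ml/100 g/min, OEF ~0.40
print(cmro2(50.0, 0.40))  # 160.0
```

This is why measuring OEF (via blood oxygenation) together with CBF suffices to estimate oxygen consumption, as the review states.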
Marie Stiborová
2014-06-01
Full Text Available This review summarizes the results of studies investigating the enzymatic activation of two genotoxic nitro-aromatics, the environmental pollutant and carcinogen 3-nitrobenzanthrone (3-NBA) and the natural plant nephrotoxin and carcinogen aristolochic acid I (AAI), to reactive species forming covalent DNA adducts. Experimental and theoretical approaches determined the reasons why human NAD(P)H:quinone oxidoreductase (NQO1) and cytochromes P450 (CYP) 1A1 and 1A2 have the potential to reductively activate both nitro-aromatics. The results also contributed to the elucidation of the molecular mechanisms of these reactions. The contribution of conjugation enzymes such as N,O-acetyltransferases (NATs) and sulfotransferases (SULTs) to the activation of 3-NBA and AAI was also examined. The results indicated differences in the abilities of 3-NBA and AAI metabolites to be further activated by these conjugation enzymes. The formation of DNA adducts generated by both carcinogens during their reductive activation by the NQO1 and CYP1A1/2 enzymes was investigated with pure enzymes, enzymes present in subcellular cytosolic and microsomal fractions, selective inhibitors, and animal models (including knock-out and humanized animals). For the theoretical approaches, flexible in silico docking methods as well as ab initio calculations were employed. The results summarized in this review demonstrate that a combination of experimental and theoretical approaches is a useful tool to study the enzyme-mediated reaction mechanisms of 3-NBA and AAI reduction.
Eklund, Per; Dahlqvist, Martin; Tengstrand, Olof; Hultman, Lars; Lu, Jun; Nedfors, Nils; Jansson, Ulf; Rosén, Johanna
2012-07-01
Since the advent of theoretical materials science some 60 years ago, there has been a drive to predict and design new materials in silico. Mathematical optimization procedures to determine phase stability can be generally applied to complex ternary or higher-order materials systems where the phase diagrams of the binary constituents are sufficiently well known. Here, we employ a simplex-optimization procedure to predict new compounds in the ternary Nb-Ge-C system. Our theoretical results show that the hypothetical phase Nb2GeC is stable and exclude all reasonably conceivable competing hypothetical phases. We verify the existence of the Nb2GeC phase by thin-film synthesis using magnetron sputtering. This hexagonal nanolaminated phase has a and c lattice parameters of ~3.24 Å and ~12.82 Å.
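The stability criterion behind such a simplex optimization can be illustrated with a toy linear-algebra check: a candidate phase is stable only if its formation energy lies below every composition-conserving linear combination of competing phases. The compositions below are real phases in the Nb-Ge-C system, but all formation energies are illustrative placeholders, not the first-principles values from the paper:

```python
import numpy as np

# Atomic fractions (Nb, Ge, C) of three competing phases
competitors = np.array([
    [0.500, 0.000, 0.500],   # NbC
    [0.625, 0.375, 0.000],   # Nb5Ge3
    [0.000, 0.000, 1.000],   # C (graphite)
])
E_comp = np.array([-0.90, -0.35, 0.00])   # hypothetical eV/atom

target = np.array([0.50, 0.25, 0.25])     # Nb2GeC composition
E_candidate = -0.45                       # hypothetical eV/atom

# Solve for the phase fractions x reproducing the target composition:
# sum_j x_j * composition_j = target
x = np.linalg.solve(competitors.T, target)
E_hull = x @ E_comp                       # energy of the competing mixture
print(f"fractions: {x.round(3)}, hull energy: {E_hull:.3f} eV/atom")
print("stable" if E_candidate < E_hull else "unstable")
```

A real simplex optimization repeats this comparison over all candidate sets of competing phases; the candidate is declared stable only if it beats every such mixture.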
Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J
2015-12-28
A theoretically informed Monte Carlo method is proposed for the simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free-energy functionals. The free-energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite-element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free-energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite-difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid-crystal droplets with and without nanoparticle inclusions.
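The core idea (Metropolis sampling driven by a coarse-grained free energy rather than a microscopic Hamiltonian) can be shown with a deliberately stripped-down scalar model. The quartic functional below is a toy stand-in for the full Landau-de Gennes Q-tensor free energy, and every parameter value is arbitrary:

```python
import math
import random

# Toy "theoretically informed" Monte Carlo: sample a ring of scalar
# order parameters S_i from a Landau-type free-energy functional
#   F[S] = sum_i (a*S_i^2 - b*S_i^3 + c*S_i^4) + L * sum_i (S_{i+1} - S_i)^2
# Trial moves are accepted with the Metropolis rule applied to F itself.
N, a, b, c, L, T = 64, -1.0, 0.5, 1.0, 0.2, 0.1

def free_energy(S):
    bulk = sum(a * s**2 - b * s**3 + c * s**4 for s in S)
    elastic = L * sum((S[(i + 1) % N] - S[i])**2 for i in range(N))
    return bulk + elastic

random.seed(0)
S = [0.0] * N
F = free_energy(S)
for _ in range(20000):
    i = random.randrange(N)
    old = S[i]
    S[i] += random.uniform(-0.1, 0.1)
    F_new = free_energy(S)
    if F_new > F and random.random() >= math.exp(-(F_new - F) / T):
        S[i] = old        # reject: restore the old value
    else:
        F = F_new         # accept the trial move

print(f"final free energy {F:.2f}, mean |S| = {sum(abs(s) for s in S) / N:.2f}")
```

In the paper a finite-element discretization of the Landau-de Gennes functional replaces this toy lattice sum; the acceptance rule is the same.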
Stöltzner, Michael
In response to the double-edged influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative of informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of the two approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th-century mathematics, in which new structures are introduced by axiomatisation and are not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
Vietta, E P
1995-01-01
The author establishes a line of research based on a theoretical-methodological frame of reference for the qualitative investigation of psychiatric nursing and mental health. Aspects of humanist and existentialist philosophies and of personalism were evaluated and integrated into a unified perspective. In order to maintain the scientific method of research within this frame of reference, the categorization process to be adopted in this kind of investigation is explained.
Grimm, Daniel
2008-01-01
This thesis comprises detailed experimental and theoretical investigations of the transport properties of one-dimensional nanostructures. Most of the work is dedicated to the exploration of the fascinating effects occurring in single-wall carbon nanotubes (SWCNTs). These particular nanostructures have gained overwhelming interest over the past two decades due to their outstanding electronic and mechanical properties. We have investigated the properties of a novel family of carbon nanostructures, named...