WorldWideScience

Sample records for estimation theoretic approach

  1. An observer-theoretic approach to estimating neutron flux distribution

    International Nuclear Information System (INIS)

    Park, Young Ho; Cho, Nam Zin

    1989-01-01

    State feedback control provides many advantages, such as stabilization and improved transient response. However, when state feedback control is considered for the spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the spatial flux distribution using only limited flux measurements. It is based on the Luenberger observer of control theory, extended to distributed-parameter systems such as the space-time reactor dynamics equation. Application of the method to simple reactor models showed that the observer estimates the flux distribution very efficiently using information from only a few sensors.
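As a minimal illustration of the idea in the record above, the sketch below runs a discrete-time Luenberger observer on a hypothetical three-state system with a single sensor. The paper's actual model is a distributed-parameter space-time reactor equation, so the matrices A, C and the gain below are illustrative assumptions only, chosen to show the mechanism of reconstructing unmeasured states.

```python
import numpy as np

# Minimal discrete-time Luenberger observer on a hypothetical 3-state system.
# The paper treats a distributed-parameter reactor model; these matrices are
# illustrative assumptions only.
A = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])   # state dynamics x_{k+1} = A x_k
C = np.array([[1.0, 0.0, 0.0]])   # a single "flux sensor" reads state 1 only

# Observer gain chosen (by hand here) so that A - L_gain @ C is stable.
L_gain = np.array([[0.5], [0.3], [0.1]])

x = np.array([1.0, 0.5, 0.2])     # true state, unknown to the observer
xhat = np.zeros(3)                # observer starts from zero knowledge

for _ in range(300):
    y = C @ x                                            # sensor reading
    xhat = A @ xhat + (L_gain @ (y - C @ xhat)).ravel()  # predict + correct
    x = A @ x                                            # true dynamics

print(np.max(np.abs(x - xhat)))   # estimation error shrinks toward zero
```

Despite measuring only one component, the innovation term drives the full state estimate to the true state, which is the same principle the paper extends to spatial flux distributions.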

  2. Overview of the Practical and Theoretical Approaches to the Estimation of Mineral Resources. A Financial Perspective

    Directory of Open Access Journals (Sweden)

    Leontina Pavaloaia

    2012-10-01

    Mineral resources are an important natural resource whose exploitation, unless it is rational, can lead to their exhaustion and to the collapse of sustainable development. Given the importance of mineral resources and the uncertainty in estimating extant reserves, they have been analyzed by several national and international institutions. In this article we present some of the ways the reserves of mineral resources are approached at the national and international level, considering both economic aspects and those concerned with the definition, classification and aggregation of mineral reserves by various specialized institutions. At present there are attempts to harmonize practices in these respects so that correct and comparable information can be presented.

  3. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at the Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission product noble gas xenon-133. To estimate the concentration, a grab sample is collected in a 100 ml volumetric standard flask and the activity of the gas is computed by gamma spectrometry using a predetermined, experimentally estimated efficiency. A theoretical approach is used here to validate this experimental method of efficiency estimation. Two analytical models, a relative flux model and an absolute activity model, were developed independently of each other. Attention is focused on the efficiencies for 41Ar and 133Xe. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  4. A Theoretical Approach

    African Journals Online (AJOL)

    NICO

    L-rhamnose and L-fucose: A Theoretical Approach … L-rhamnose and L-fucose, by means of the Monte Carlo conformational search method. The energy of the conformers … which indicates an increased probability for the occurrence of…

  5. Impact of a financial risk-sharing scheme on budget-impact estimations: a game-theoretic approach.

    Science.gov (United States)

    Gavious, Arieh; Greenberg, Dan; Hammerman, Ariel; Segev, Ella

    2014-06-01

    As part of the process of updating the National List of Health Services in Israel, health plans (the 'payers') and manufacturers each provide estimates of the expected number of patients who will utilize a new drug. Currently, payers face major financial consequences when actual utilization is higher than the allocated budget. We suggest a risk-sharing model between the two stakeholders: if the actual number of patients exceeds the manufacturer's prediction, the manufacturer will reimburse the payers by a rebate at a rate α of the deficit. In case of under-utilization, payers will refund the government at a rate γ of the surplus budget. Our study objective was to identify the optimal early estimations of both 'players' prior to and after implementation of the risk-sharing scheme. Using a game-theoretic approach, in which both players' statements are considered simultaneously, we examined the impact of risk sharing, within a given range of rebate proportions, on the players' early budget estimations. When the manufacturer's rebate α rises above 50%, manufacturers will announce a larger number of patients, and health plans a smaller number, than they would without risk sharing, substantially decreasing the gap between their estimates. Increasing γ changes the players' estimates only slightly. In reaction to a substantial risk-sharing rebate α applied to the manufacturer, both players are expected to adjust their budget estimates toward an optimal equilibrium. Increasing α is a better vehicle for reaching the desired equilibrium than increasing γ, as the manufacturer's rebate α substantially influences both players, whereas γ has little effect on their behavior.
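The rebate/refund mechanics described in the abstract reduce to straightforward cash-flow arithmetic, which the sketch below illustrates. All figures, the cost-per-patient parameter, and the `settlement` helper are hypothetical; the paper's game-theoretic derivation of the players' optimal announcements is not reproduced here.

```python
# Illustrative sketch of the risk-sharing cash flows described above.
# All numbers are hypothetical; the paper's equilibrium analysis is not
# attempted here.

def settlement(actual, manufacturer_est, payer_budget, cost_per_patient,
               alpha, gamma):
    """Return (manufacturer_rebate, government_refund) under the scheme."""
    rebate = refund = 0.0
    if actual > manufacturer_est:                 # over-utilization: deficit
        deficit = (actual - manufacturer_est) * cost_per_patient
        rebate = alpha * deficit                  # manufacturer reimburses payers
    if actual * cost_per_patient < payer_budget:  # under-utilization: surplus
        surplus = payer_budget - actual * cost_per_patient
        refund = gamma * surplus                  # payers refund the government
    return rebate, refund

# 1200 actual patients vs. a manufacturer estimate of 1000, with alpha = 0.6:
rebate, refund = settlement(actual=1200, manufacturer_est=1000,
                            payer_budget=13_000_000, cost_per_patient=10_000,
                            alpha=0.6, gamma=0.3)
print(rebate, refund)   # 1200000.0 300000.0
```

A larger α raises the manufacturer's exposure to under-prediction, which is the lever the paper identifies as driving the estimates together.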

  6. Theoretical Approaches to Coping

    Directory of Open Access Journals (Sweden)

    Sofia Zyga

    2013-01-01

    Introduction: Dealing with stress requires conscious effort; it cannot be equated with the individual's spontaneous reactions. The intentional management of stress must not be confused with defense mechanisms. Coping differs from adjustment in that the latter is more general, has a broader meaning and includes diverse ways of facing a difficulty. Aim: An exploration of the definition of the term "coping" and of the function of the coping process, as well as its differentiation from other similar concepts, through a literature review. Methodology: Three theoretical approaches to coping are introduced: the psychoanalytic approach, the approach by characteristics (traits), and the Lazarus and Folkman interactive model. Results: The strategic methods of the coping approaches are described, and the article ends with a review of the approaches, including the functioning of the stress-coping process, the classification of types of coping strategies in stress-inducing situations, and a criticism of coping approaches. Conclusions: The comparison of coping in different situations is difficult, if not impossible. The coping process is a slow process; an individual may select one method of coping under one set of circumstances and a different strategy at some other time. Such selection of strategies takes place as the situation changes.

  7. Theoretical approaches to elections defining

    OpenAIRE

    Natalya V. Lebedeva

    2011-01-01

    Theoretical approaches to defining elections develop the nature, essence and content of elections, and help to determine their place and role as one of the major institutions of national law in a democratic system.

  8. Theoretical approaches to elections defining

    Directory of Open Access Journals (Sweden)

    Natalya V. Lebedeva

    2011-01-01

    Theoretical approaches to defining elections develop the nature, essence and content of elections, and help to determine their place and role as one of the major institutions of national law in a democratic system.

  9. Theoretical Approaches to Political Communication.

    Science.gov (United States)

    Chesebro, James W.

    Political communication appears to be emerging as a theoretical and methodological academic area of research within both speech communication and political science. Five complementary approaches to political science (Machiavellian, iconic, ritualistic, confirmational, and dramatistic) may be viewed as a series of variations which emphasize the…

  10. A theoretical approach to the problem of dose-volume constraint estimation and their impact on the dose-volume histogram selection

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Stavrev, Pavel; Stavreva, Nadia; Fallone, B. Gino

    2006-01-01

    This paper outlines a theoretical approach to the problem of estimating and choosing dose-volume constraints. Following this approach, a method of choosing dose-volume constraints based on biological criteria is proposed. This method is called "reverse normal tissue complication probability (NTCP) mapping into dose-volume space" and may be used as general guidance for the problem of dose-volume constraint estimation. Dose-volume histograms (DVHs) are randomly simulated, and those resulting in clinically acceptable levels of complication, such as an NTCP of 5±0.5%, are selected and averaged, producing a mean DVH that is proven to result in the same level of NTCP. The points from the averaged DVH are proposed to serve as physical dose-volume constraints. The population-based critical volume and Lyman NTCP models, with parameter sets taken from literature sources, were used for the NTCP estimation. The impact of the prescribed value of the maximum dose to the organ, D_max, on the averaged DVH and the dose-volume constraint points is investigated. Constraint points for 16 organs are calculated. The impact of the number of constraints to be fulfilled, based on the likelihood that a DVH satisfying them will result in an acceptable NTCP, is also investigated. It is theoretically proven that radiation treatment optimization based on physical objective functions can restrict the dose to the organs at risk sufficiently well, resulting in sufficiently low NTCP values, through the employment of several appropriate dose-volume constraints. At the same time, the purely physical approach to optimization is self-restrictive due to the preassignment of acceptable NTCP levels, thus excluding possible better solutions to the problem.
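The reverse-mapping idea, simulating random DVHs and averaging those that land at an acceptable complication level, can be sketched as follows. The Lyman-Kutcher-Burman parameters (TD50, m, n), the dose grid, and the way random cumulative DVHs are generated are all illustrative assumptions here, not the paper's organ-specific values.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical Lyman-Kutcher-Burman (LKB) parameters; the paper takes
# organ-specific values from the literature, these are illustrative only.
TD50, m, n = 65.0, 0.14, 0.25     # tolerance dose (Gy), slope, volume effect
D_max = 70.0                      # prescribed maximum organ dose (Gy)
dose = np.linspace(0.0, D_max, 36)

def ntcp(diff_vols, doses):
    """LKB NTCP of a differential DVH via the generalized-EUD reduction."""
    eud = np.sum(diff_vols * doses ** (1.0 / n)) ** n
    t = (eud - TD50) / (m * TD50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Randomly simulate cumulative DVHs (monotone, V(0) = 1), keep those whose
# NTCP is 5 +/- 0.5 %, and average them; points on the averaged DVH serve as
# candidate dose-volume constraints.
accepted = []
for _ in range(20000):
    steps = rng.random(dose.size - 1)
    cum = np.concatenate(([1.0], 1.0 - np.cumsum(steps) / steps.sum()))
    diff_vols = -np.diff(cum)                 # differential volumes, sum to 1
    if abs(ntcp(diff_vols, dose[1:]) - 0.05) < 0.005:
        accepted.append(cum)

mean_dvh = np.mean(accepted, axis=0)
print(len(accepted), np.round(mean_dvh[::7], 3))
```

The averaged curve inherits monotonicity from the accepted DVHs; whether it reproduces the target NTCP exactly depends on the NTCP model, which is the property the paper proves for its models.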

  11. Theoretical Approaches to Nuclear Proliferation

    Directory of Open Access Journals (Sweden)

    Konstantin S. Tarasov

    2015-01-01

    This article analyses the discussions between representatives of three schools in the theory of international relations (realism, liberalism and constructivism) on the driving factors of nuclear proliferation. The paper examines the major theoretical approaches, outlined in the studies of Russian and foreign scientists, to the causes of nuclear weapons development, while unveiling their advantages and limitations. Much of the article is devoted to alternative approaches, particularly the role of mathematical modeling in assessing proliferation risks. The analysis also reveals a variety of approaches to nuclear weapons acquisition, as well as the absence of a comprehensive proliferation theory. Based on the research results, the study uncovers major factors both favoring and impeding nuclear proliferation. The author shows that the lack of consensus between realists, liberals and constructivists on the nature of proliferation has led a number of scientists to attempt to explain nuclear rationale by drawing on the insights of more than one school in IR theory. Detailed study of the proliferation puzzle contributes to a greater understanding of contemporary international realities, helps to identify mechanisms that are most likely to deter states from obtaining nuclear weapons, and is of the utmost importance in predicting the short- and long-term security environment. Furthermore, analysis of the existing scientific literature on nuclear proliferation helps to determine the future research agenda of the subject at hand.

  12. Theoretical Approaches to Lignin Chemistry

    OpenAIRE

    Shevchenko, Sergey M.

    1994-01-01

    A critical review is presented of the applications of theoretical methods to studies of the structure and chemical reactivity of lignin, including simulation of macromolecular properties, conformational calculations, and quantum chemical analyses of electronic structure, spectra and chemical reactivity. Modern concepts of the spatial organization and chemical reactivity of lignins are discussed.

  13. Machine learning a theoretical approach

    CERN Document Server

    Natarajan, Balas K

    2014-01-01

    This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation…

  14. Managerial Leadership - A Theoretical Approach

    Directory of Open Access Journals (Sweden)

    Felicia Cornelia MACARIE

    2007-06-01

    The paper endeavors to offer an overview of the major theories of leadership and of the way in which leadership influences the management of contemporary organizations. Numerous scholars highlight that there are many overlaps between the concepts of management and leadership. For this reason, the first section of the paper provides an extensive overview of the literature regarding the meaning of the two concepts. The second section addresses in more depth the concepts of leadership and managerial leadership, and focuses on the ideal profile of the leader. The last section of the paper critically discusses various types of leadership and, more specifically, modern approaches to the concept and practice of leadership.

  15. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to demonstrate empirically equifinal paths to maturity. Specifically… methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research, with demonstrations of their application on three datasets. The thesis is a collection of six research papers written in a sequential manner. The first paper…

  16. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models… characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration…

  17. Game theoretic approaches for spectrum redistribution

    CERN Document Server

    Wu, Fan

    2014-01-01

    This brief examines issues of spectrum allocation for the limited resource of radio spectrum. It uses a game-theoretic perspective, in which the nodes in the wireless network are rational and always pursue their own objectives. It provides a systematic study of the approaches that can guarantee the system's convergence at an equilibrium state, in which the system performance is optimal or sub-optimal. The author provides a short tutorial on game theory, explains game-theoretic channel allocation in clique and in multi-hop wireless networks, and explores challenges in designing game-theoretic m…

  18. Partial discharge transients: The field theoretical approach

    DEFF Research Database (Denmark)

    McAllister, Iain Wilson; Crichton, George C

    1998-01-01

    Up until the mid-1980s the theory of partial discharge transients was essentially static. This situation had arisen because of the fixation with the concept of void capacitance and the use of circuit theory to address what is in essence a field problem. Pedersen rejected this approach and instead began to apply field theory to the problem of partial discharge transients. In the present paper, the contributions of Pedersen using the field theoretical approach are reviewed and discussed.

  19. New Theoretical Approach Integrated Education and Technology

    Science.gov (United States)

    Ding, Gang

    2010-01-01

    The paper focuses on exploring new theoretical approach in education with development of online learning technology, from e-learning to u-learning and virtual reality technology, and points out possibilities such as constructing a new teaching ecological system, ubiquitous educational awareness with ubiquitous technology, and changing the…

  20. Radiotherapy problem under fuzzy theoretic approach

    International Nuclear Information System (INIS)

    Ammar, E.E.; Hussein, M.L.

    2003-01-01

    A fuzzy set-theoretic approach is used for a radiotherapy problem. The problem involves two goals: the first is to maximize the fraction of surviving normal cells, and the second is to minimize the fraction of surviving tumor cells. The theory of fuzzy sets has been employed to formulate and solve the problem. A linguistic-variable approach is used for treating the first goal. The solutions obtained by the modified approach are always efficient, best-compromise solutions. A sensitivity analysis of the solutions with respect to the differential weights is given.
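Two conflicting goals of this kind are classically combined via the Bellman-Zadeh max-min fuzzy decision. The sketch below illustrates that generic construction with hypothetical survival curves; it does not reproduce the paper's cell-survival model or its linguistic-variable treatment of the first goal.

```python
import numpy as np

# Bellman-Zadeh max-min compromise for two conflicting goals, as a generic
# sketch with hypothetical dose-response curves (not the paper's model).
d = np.linspace(0.0, 1.0, 1001)          # normalized dose

normal_survival = np.exp(-2.0 * d)       # goal 1: spare normal cells
tumor_kill = 1.0 - np.exp(-4.0 * d)      # goal 2: kill tumor cells

# Membership functions: degree (0..1) to which each goal is satisfied.
mu_normal = (normal_survival - normal_survival.min()) / np.ptp(normal_survival)
mu_tumor = (tumor_kill - tumor_kill.min()) / np.ptp(tumor_kill)

# The fuzzy decision is the intersection (pointwise min) of the two goals;
# the best compromise dose maximizes that minimum satisfaction.
mu_decision = np.minimum(mu_normal, mu_tumor)
best = d[np.argmax(mu_decision)]
print(best, mu_decision.max())
```

The maximizer sits where the two membership curves cross, which is the intuitive "balanced" dose between sparing normal tissue and controlling the tumor.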

  1. Theoretical estimation of Z′ boson mass

    International Nuclear Information System (INIS)

    Maji, Priya; Banerjee, Debika; Sahoo, Sukadev

    2016-01-01

    The discovery of the Higgs boson at the LHC brings a renewed perspective to particle physics. With the help of the Higgs mechanism, the standard model (SM) allows the generation of particle masses. The ATLAS and CMS experiments at the LHC have measured the mass of the Higgs boson as m_H = 125-126 GeV. Recently, it has been claimed that the Higgs boson might interact with dark matter and that a relation exists between the Higgs boson and dark matter (DM). Hertzberg has predicted a correlation between the Higgs mass and the abundance of dark matter. His theoretical result is in good agreement with current data. He has predicted the mass of the Higgs boson as GeV. The Higgs boson could be coupled to the particle that constitutes all or part of the dark matter in the universe. A light Z′ boson could have important implications in dark matter phenomenology.

  2. Theoretical and methodological approaches in discourse analysis.

    Science.gov (United States)

    Stevenson, Chris

    2004-01-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers in nursing, where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid in anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  3. Theoretical and methodological approaches in discourse analysis.

    Science.gov (United States)

    Stevenson, Chris

    2004-10-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers in nursing, where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid in anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  4. A gauge-theoretic approach to gravity.

    Science.gov (United States)

    Krasnov, Kirill

    2012-08-08

    Einstein's general relativity (GR) is a dynamical theory of the space-time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much more simple and compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both the gravity and Yang-Mills arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach.

  5. A sign-theoretic approach to biotechnology

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio

    …semiotic networks across hierarchical levels and for relating the different emergent codes in living systems. I consider this an important part of the work because there I define some of the main concepts that will help me to analyse different codes and semiotic processes in living systems in order to exemplify the relevance of a sign-theoretic approach to biotechnology. In particular, I introduce the notion of digital-analogical consensus as a semiotic pattern for the creation of complex logical products that constitute specific signs. The chapter ends with some examples of conspicuous semiotic… to exemplify how a semiotic approach can help in organising the knowledge that leads to understanding the relevance, role and position of signal transduction networks in relation to the larger semiotic networks in which they function, i.e., in the hierarchical formal processes of mapping…

  6. Potential benefits of remote sensing: Theoretical framework and empirical estimate

    Science.gov (United States)

    Eisgruber, L. M.

    1972-01-01

    A theoretical framework is outlined for estimating the social returns from research on and application of remote sensing. The approximate dollar magnitude is given for a particular application of remote sensing, namely estimates of corn, soybean, and wheat production. Finally, some comments are made on the limitations of this procedure and on the implications of the results.

  7. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are…
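AIC itself is simple to compute: AIC = 2k − 2 ln(L), which for least-squares fits with Gaussian errors reduces to n·ln(RSS/n) + 2k up to an additive constant. The sketch below compares a linear and a quintic fit on synthetic data (not from the book; the data-generating model and seed are arbitrary assumptions).

```python
import numpy as np

# AIC = 2k - 2 ln(L); for Gaussian least-squares this is n*ln(RSS/n) + 2k
# up to an additive constant. Synthetic data, illustrative only.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, x.size)   # the truth is linear

def gaussian_aic(y, yhat, k):
    """AIC (up to a constant) for a least-squares fit with k parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

aics = {}
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    yhat = np.polyval(coeffs, x)
    # k counts the degree+1 polynomial coefficients plus the error variance.
    aics[degree] = gaussian_aic(y, yhat, degree + 2)
print(aics)   # the quintic's lower RSS is typically not worth its penalty
```

The small-sample correction AICc = AIC + 2k(k+1)/(n−k−1), emphasized by Burnham and Anderson, matters when n/k is small.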

  8. Recent Theoretical Approaches to Minimal Artificial Cells

    Directory of Open Access Journals (Sweden)

    Fabio Mavelli

    2014-05-01

    Minimal artificial cells (MACs) are self-assembled chemical systems able to mimic the behavior of living cells at a minimal level, i.e., to exhibit self-maintenance, self-reproduction and the capability of evolution. The bottom-up approach to the construction of MACs is mainly based on the encapsulation of chemically reacting systems inside lipid vesicles, i.e., chemical systems enclosed (compartmentalized) by a double-layered lipid membrane. Several researchers are currently interested in synthesizing such simple cellular models for biotechnological purposes or for investigating origin-of-life scenarios. Within this context, the properties of lipid vesicles (e.g., their stability, permeability, growth dynamics, potential to host reactions or undergo division processes…) play a central role, in combination with the dynamics of the encapsulated chemical or biochemical networks. Thus, from a theoretical standpoint, it is very important to develop kinetic equations in order to explore first, and specify later, the conditions that allow the robust implementation of these complex chemically reacting systems, as well as their controlled reproduction. Because they are compartmentalized in small volumes, the population of reacting molecules can be very low in terms of the number of molecules, and their behavior therefore becomes highly affected by stochastic effects, both in the time course of reactions and in the occupancy distribution among the vesicle population. In this short review we report our mathematical approaches to modeling artificial cell systems in this complex scenario, giving a summary of three recent simulation studies on primitive cell (protocell) systems.

  9. Theoretical approaches to social innovation – A critical literature review

    NARCIS (Netherlands)

    Butzin, A.; Davis, A.; Domanski, D.; Dhondt, S.; Howaldt, J.; Kaletka, C.; Kesselring, A.; Kopp, R.; Millard, J.; Oeij, P.; Rehfeld, D.; Schaper-Rinkel, P.; Schwartz, M.; Scoppetta, A.; Wagner-Luptacik, P.; Weber, M.

    2014-01-01

    The SI-DRIVE report "Theoretical Approaches to Social Innovation – A Critical Literature Review" delivers a comprehensive overview of the state of the art of theoretically relevant building blocks for advancing a theoretical understanding of social innovation. It collects different theoretical…

  10. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    Energy Technology Data Exchange (ETDEWEB)

    M Weimar

    1998-12-10

    This publication summarizes, and contains the original documentation for understanding, why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings, and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for, and an estimate of, the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (a cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D summarizes the approach for the DOE Richland Operations Office (RL) estimate of what it would cost an M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal, which indicates that the current approach is still better than the alternative.

  11. Child Language Acquisition: Contrasting Theoretical Approaches

    Science.gov (United States)

    Ambridge, Ben; Lieven, Elena V. M.

    2011-01-01

    Is children's language acquisition based on innate linguistic structures or built from cognitive and communicative skills? This book summarises the major theoretical debates in all of the core domains of child language acquisition research (phonology, word-learning, inflectional morphology, syntax and binding) and includes a complete introduction…

  12. A game theoretic approach to assignment problems

    NARCIS (Netherlands)

    Klijn, F.

    2000-01-01

    Game theory deals with the mathematical modeling and analysis of conflict and cooperation in the interaction of multiple decision makers. This thesis adopts two game-theoretic methods to analyze a range of assignment problems that arise in various economic situations. The first method has as…

  13. Game Theoretical Approach to Supply Chain Microfinance

    OpenAIRE

    Sim, Jaehun; Prabhu, Vittaldas

    2013-01-01

    Part 1: Sustainable Production. This paper considers a supply chain microfinance model in which a manufacturer acts as a lender and a raw material supplier as a borrower. Using a game-theoretic analysis, the study investigates how investment levels, raw material prices, and profit margins are influenced by loan interest rates under two types of decentralized channel policies: manufacturer Stackelberg and vertical Nash games. In addition, the study shows how the profit…

  14. Theoretical and experimental estimates of the Peierls stress

    CSIR Research Space (South Africa)

    Nabarro, FRN

    1997-03-01

    …considered in its original derivation. It is argued that the conditions of each type of experiment determine whether the P-N or the H formula is appropriate. Peierls's original estimate was based on a simple cubic lattice with elastic isotropy and Poisson's ratio ν. The result was σ ≈ 20μ exp[−4π/(1−ν)]. (1) This value is so small that a detailed discussion of its accuracy would be pointless. … Nabarro (1947) corrected an algebraic error in Peierls's calculation…
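Reading the garbled expression in the excerpt above as σ ≈ 20μ exp[−4π/(1−ν)] (a reconstruction, not confirmed by the source), a quick numerical check with a typical Poisson's ratio ν = 0.3 shows why the original estimate was regarded as too small to discuss in detail:

```python
from math import exp, pi

# Evaluate the reconstructed Peierls estimate sigma/mu = 20*exp[-4*pi/(1-nu)]
# at a typical Poisson's ratio; the prefactor 20 is taken from the excerpt's
# reconstruction and is an assumption.
nu = 0.3
ratio = 20.0 * exp(-4.0 * pi / (1.0 - nu))
print(f"sigma/mu ~ {ratio:.2e}")   # a few parts in 10^7 of the shear modulus
```

Real crystals show Peierls stresses orders of magnitude larger than this, which is part of why the corrected Peierls-Nabarro exponent matters.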

  15. A group theoretic approach to quantum information

    CERN Document Server

    Hayashi, Masahito

    2017-01-01

    This textbook is the first to address quantum information from the viewpoint of group symmetry. Quantum systems have a group-symmetrical structure, which enables quantum information processing to be handled systematically. However, no other textbook focuses on group symmetry for quantum information, although many textbooks on group representation exist. After the mathematical preparation of quantum information, this book discusses quantum entanglement and its quantification using group symmetry. Group symmetry drastically simplifies the calculation of several entanglement measures, although their calculation is usually very difficult to handle. This book treats optimal information processes including quantum state estimation, quantum state cloning, estimation of group action, quantum channels, etc. It is usually very difficult to derive optimal quantum information processes without an asymptotic setting of these topics. However, group symmetry allows one to derive these optimal solu...

  16. UNCERTAINTY IN NEOCLASSICAL AND KEYNESIAN THEORETICAL APPROACHES: A BEHAVIOURAL PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Sinziana BALTATESCU

    2015-11-01

    Full Text Available The "mainstream" neoclassical assumptions about human economic behaviour are currently challenged both by behavioural research on human behaviour and by other theoretical approaches which, in the context of the recent economic and financial crisis, find arguments to reinforce their theoretical statements. The neoclassical "perfect rationality" assumption is the most criticized, and provokes the mainstream theoretical approach into revisiting its theoretical framework in order to re-state the validity of its economic models. Uncertainty seems, in this context, to be the concept that allows other theoretical approaches to consider an individual who is more realistic from the psychological perspective. This paper presents a comparison between the neoclassical and Keynesian approaches to uncertainty, considering the behavioural arguments and challenges addressed to the mainstream theory.

  17. THE NETWORKS IN TOURISM: A THEORETICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Maria TĂTĂRUȘANU

    2016-12-01

    Full Text Available The economic world in which tourism companies act today is in a continuous process of change. The most important factor in these changes is the globalization of their environment, in its economic, social, natural and cultural aspects. Tourism companies can benefit from the opportunities brought by globalization, but they can also be threatened by the new context. How can companies react to these changes in order to create and maintain a long-term competitive advantage for their business? In the present paper we review the literature on a new approach to the tourism business: networks, a result and/or a means of exploiting the opportunities or, on the contrary, of keeping a company's current position on the market. It is a qualitative approach, and the research methods used are analysis, synthesis and abstraction, which are considered the most appropriate for achieving the objective of the paper.

  18. Theoretical approach to the scanning tunneling microscope

    International Nuclear Information System (INIS)

    Noguera, C.

    1990-01-01

    Within a one-electron approach based on a Green's-function formalism, a nonperturbative expression for the tunneling current is obtained and used to discuss which spectroscopic information may be deduced from a scanning-tunneling-microscope experiment. It is shown within which limits the voltage dependence of the tunneling current reproduces the local density of states at the surface, and how the reflection coefficients of the electronic waves at the surface may modify it

  19. Information-theoretic approach to uncertainty importance

    International Nuclear Information System (INIS)

    Park, C.K.; Bari, R.A.

    1985-01-01

    A method is presented for importance analysis in probabilistic risk assessments (PRA) for which the results of interest are characterized by full uncertainty distributions and not just point estimates. The method is based on information theory in which entropy is a measure of uncertainty of a probability density function. We define the relative uncertainty importance between two events as the ratio of the two exponents of the entropies. For the log-normal and log-uniform distributions the importance measure is comprised of the median (central tendency) and of the logarithm of the error factor (uncertainty). Thus, if accident sequences are ranked this way, and the error factors are not all equal, then a different rank order would result than if the sequences were ranked by the central tendency measure alone. As an illustration, the relative importance of internal events and in-plant fires was computed on the basis of existing PRA results
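The entropy-based importance measure described above can be sketched in a few lines for the log-normal case. The 1.645 factor below assumes the error factor is defined as the ratio of the 95th percentile to the median; both functions are illustrative, not the paper's own code.

```python
import math

def entropy_exponent(median, error_factor):
    """exp(H) for a log-normal with the given median and error factor.
    Assumes EF = (95th percentile) / median, so sigma = ln(EF) / 1.645."""
    sigma = math.log(error_factor) / 1.645
    # Differential entropy of a log-normal: H = ln(median) + ln(sigma*sqrt(2*pi*e))
    entropy = math.log(median) + math.log(sigma * math.sqrt(2.0 * math.pi * math.e))
    return math.exp(entropy)  # equals median * sigma * sqrt(2*pi*e)

def relative_uncertainty_importance(median_a, ef_a, median_b, ef_b):
    """Ratio of the exponents of the two entropies (event A relative to B)."""
    return entropy_exponent(median_a, ef_a) / entropy_exponent(median_b, ef_b)

# Event A has the lower median but the wider uncertainty band.
print(relative_uncertainty_importance(1e-5, 10.0, 2e-5, 3.0))
```

The constant √(2πe) cancels in the ratio, leaving (median_a · ln EF_a)/(median_b · ln EF_b): the measure combines central tendency with the logarithm of the error factor, so a ranking by this measure can differ from a ranking by median alone, as the abstract notes.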

  20. MEDICAL BRAIN DRAIN - A THEORETICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Boncea Irina

    2013-07-01

    Full Text Available Medical brain drain is defined as the migration of health personnel from developing countries to developed countries, and between industrialized nations, in search of better opportunities. This phenomenon has become a growing global concern due to its impact on both the donor and the destination countries. This article aims to present the main theoretical contributions from 1950 until today, and the historical evolution, in an attempt to relate the particular case of medical brain drain to the theory and evolution of brain drain in general. The article raises questions and offers answers, identifies the main issues and looks for possible solutions to reduce the emigration of medical doctors. Factors of influence include push factors (low income, poor working conditions, the absence of job openings and social recognition, an oppressive political climate) and pull factors (better remuneration and working conditions, prospects for career development, job satisfaction, security). Developing countries are confronted with the loss of their most valuable intellectuals, and of the investment in their education, to the benefit of developed nations. An ethical debate arises as the disparities between countries increase, with industrialized nations filling gaps in their health systems with professionals from countries already facing shortages. However, recent literature emphasizes the possibility of a "beneficial brain drain" through the education incentives created by the prospect of emigration. Other sources of "brain gain" for the donor country are remittances, scientific networks and return migration. Measures to stem the medical brain drain require a common effort and collaboration between developing and developed countries and international organizations. Measures adopted by donor countries include higher salaries, better working conditions, security, career opportunities, and incentives to stimulate return migration. Destination

  1. The dynamics of alliances. A game theoretical approach

    NARCIS (Netherlands)

    Ridder, A. de

    2007-01-01

    In this dissertation, Annelies de Ridder presents a game theoretical approach to strategic alliances. More specifically, the dynamics of and within alliances have been studied. To do so, four new models have been developed in the game theoretical tradition. Both coalition theory and strategic game

  2. Social representations: a theoretical approach in health

    Directory of Open Access Journals (Sweden)

    Isaiane Santos Bittencourt

    2011-03-01

    Full Text Available Objective: To present the theory of social representations, situating its epistemology and introducing the basic concepts of its approach as a structural unit of knowledge for health studies. Justification: The use of this theory stems from the need to understand social events under the lens of the meanings constructed by the community. Data Synthesis: This was a descriptive literature review, which used as its data sources the classical authors of social representations, supported by articles from an electronic search of the Virtual Health Library (VHL). The definition and discussion of the collected data made it possible to introduce two themes, concerning the history and epistemology of representations and the structural approach of representations in health studies. Conclusion: This review highlighted the importance of situating the objects of study with regard to contextual issues of individual and collective histories, valuing the plurality of relations, in order to come closer to the reality that is represented by the subjects.

  3. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  4. A relevance theoretic approach to intertextuality in print advertising

    African Journals Online (AJOL)

    Anonymous vs. acknowledged intertexts: A relevance theoretic approach to intertextuality in print advertising. ... make intertextual references to texts from mass media genres other than advertising as part of an ...

  5. Theoretical approaches to lightness and perception.

    Science.gov (United States)

    Gilchrist, Alan

    2015-01-01

    … Evidence for and against these approaches is reviewed.

  6. Dramaturgical and Music-Theoretical Approaches to Improvisation Pedagogy

    Science.gov (United States)

    Huovinen, Erkki; Tenkanen, Atte; Kuusinen, Vesa-Pekka

    2011-01-01

    The aim of this article is to assess the relative merits of two approaches to teaching musical improvisation: a music-theoretical approach, focusing on chords and scales, and a "dramaturgical" one, emphasizing questions of balance, variation and tension. Adult students of music pedagogy, with limited previous experience in improvisation,…

  7. Preservation of Newspapers: Theoretical Approaches and Practical Achievements

    Science.gov (United States)

    Hasenay, Damir; Krtalic, Maja

    2010-01-01

    The preservation of newspapers is the main topic of this paper. A theoretical overview of newspaper preservation is given, with an emphasis on the importance of a systematic and comprehensive approach. Efficient newspaper preservation implies understanding the meaning of preservation in general, as well as understanding specific approaches,…

  8. One-dimensional barcode reading: an information theoretic approach

    Science.gov (United States)

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-01

    In the context of the convergence of identification technology and data transmission, the barcode has found its place as the simplest and most pervasive solution for new uses, especially within mobile commerce, bringing new youth to this long-lived technology. From a communication-theory point of view, a barcode is a singular code based on a graphical representation of the information to be transmitted. We present an information-theoretic approach to the analysis of 1D image-based barcode reading. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information. On the basis of this theoretical criterion for reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
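The average mutual information criterion can be illustrated on a toy discrete channel. The binary symmetric channel below is a stand-in for the paper's blur-and-noise acquisition model, not its actual channel:

```python
import math

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete memoryless channel.
    p_x: input distribution; p_y_given_x[i][j] = P(Y=j | X=i)."""
    n_out = len(p_y_given_x[0])
    # Output distribution p(y) = sum_x p(x) p(y|x)
    p_y = [sum(p_x[i] * p_y_given_x[i][j] for i in range(len(p_x)))
           for j in range(n_out)]
    info = 0.0
    for i, px in enumerate(p_x):
        for j, pyx in enumerate(p_y_given_x[i]):
            if px > 0.0 and pyx > 0.0:
                info += px * pyx * math.log2(pyx / p_y[j])
    return info

# Bar/space symbols read through a noisy imager, modeled here as a
# binary symmetric channel with a 10% symbol error rate.
print(mutual_information([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]))
```

For this channel the result is 1 − H(0.1) ≈ 0.53 bits per symbol; in the paper's setting the channel instead models optical blur and sensor noise as functions of the camera-to-barcode distance, which is what yields a *theoretical* depth of field.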

  9. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  10. Molecular approach of uranyl/mineral surfaces: theoretical approach

    International Nuclear Information System (INIS)

    Roques, J.

    2009-01-01

    As the migration of radiotoxic elements through the geosphere is one of the processes which may affect the safety of a radioactive waste storage site, the author shows that numerical modelling supports the exploitation of experimental results and allows the development of new interpretation and prediction codes. He shows that molecular modelling can be used to study processes of interaction between an actinide ion (notably the uranyl ion) and a mineral surface (a TiO2 substrate). He also reports a predictive theoretical study of the interaction between a uranyl ion and a gibbsite substrate

  11. BEHAVIORAL INPUTS TO THE THEORETICAL APPROACH OF THE ECONOMIC CRISIS

    Directory of Open Access Journals (Sweden)

    Sinziana BALTATESCU

    2015-09-01

    Full Text Available The current economic and financial crisis gave room for theoretical debates to reemerge. The economic reality challenged the mainstream neoclassical approach, leaving an opportunity for the Austrian School, Post-Keynesianism and Institutionalists to bring forward theories that seem to better explain the economic crisis and thus leave space for more efficient economic policies. In this context, the main assumptions of the mainstream theoretical approach are challenged and reevaluated; behavioural economics is one of the main challengers. Without yet developing into an integrated school of thought, behavioural economics brings new elements into the framework of economic thinking. How the main theoretical approaches are integrating these new elements, and whether this process will narrow the theory or enrich it to be more comprehensive, are questions which this paper tries to answer, or at least to leave room for an answer.

  12. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly
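A minimal sketch of the kriging estimator described here, assuming an exponential covariance model with illustrative sill and range parameters (the report's own models and data are not reproduced):

```python
import math

def solve_linear(a, b):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ordinary_krige(samples, target, sill=1.0, corr_range=100.0):
    """Ordinary kriging estimate at `target` from ((x, y), z) samples,
    assuming an exponential covariance C(h) = sill * exp(-h / corr_range)."""
    cov = lambda p, q: sill * math.exp(-math.dist(p, q) / corr_range)
    n = len(samples)
    # Kriging system: covariances between samples, plus the constraint row.
    a = [[cov(samples[i][0], samples[j][0]) for j in range(n)] + [1.0]
         for i in range(n)]
    a.append([1.0] * n + [0.0])          # unbiasedness: weights sum to 1
    b = [cov(s[0], target) for s in samples] + [1.0]
    weights = solve_linear(a, b)[:n]
    return sum(w * s[1] for w, s in zip(weights, samples))

# Water-table elevations at three wells; estimate at an unsampled location.
samples = [((0.0, 0.0), 1.0), ((10.0, 0.0), 2.0), ((0.0, 10.0), 3.0)]
print(ordinary_krige(samples, (5.0, 5.0)))
```

With no nugget effect the estimator interpolates the data exactly, and the unbiasedness constraint makes it reproduce a constant field everywhere, two properties that make contour maps built from the estimates consistent with the borehole data.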

  13. Twistor-theoretic approach to topological field theories

    International Nuclear Information System (INIS)

    Ito, Kei.

    1991-12-01

    The two-dimensional topological field theory which describes a four-dimensional self-dual space-time (gravitational instanton) as a target space, which we constructed before, is shown to be deeply connected with Penrose's 'twistor theory'. The relations are presented in detail. Thus our theory offers a 'twistor theoretic' approach to topological field theories. (author)

  14. SOCIOLOGICAL UNDERSTANDING OF INTERNET: THEORETICAL APPROACHES TO THE NETWORK ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. E. Dobrinskaya

    2016-01-01

    Full Text Available The network is an efficient device for the analysis of social structure for contemporary sociologists. It offers broad opportunities for detailed and fruitful research into different patterns of ties and social relations, using quantitative analytical methods and the visualization of network models. The network metaphor is used as the most representative tool for describing a new type of society, characterized by flexibility, decentralization and individualization, in which the network has become the dominant organizational form. The network is also used as a mode of inquiry. Three theoretical network approaches are the most relevant to research on the Internet: social network analysis, "network society" theory and actor-network theory. Every theoretical approach has its own notion of the network, and their particular methodological and theoretical features contribute to Internet studies in different ways. The article presents a brief overview of these network approaches. This overview demonstrates the absence of a unified semantic space for the notion of the "network" category. This fact, in turn, points to the need for a detailed analysis of these approaches to reveal their theoretical and empirical possibilities in application to Internet studies.

  15. Multiple stakeholders in road pricing: A game theoretic approach

    NARCIS (Netherlands)

    Ohazulike, Anthony; Still, Georg J.; Kern, Walter; van Berkum, Eric C.; Hausken, Kjell; Zhuang, Jun

    2015-01-01

    We investigate a game theoretic approach as an alternative to the standard multi-objective optimization models for road pricing. Assuming that various, partly conflicting traffic externalities (congestion, air pollution, noise, safety, etcetera) are represented by corresponding players acting on a

  16. An Activity Theoretical Approach to Social Interaction during Study Abroad

    Science.gov (United States)

    Shively, Rachel L.

    2016-01-01

    This case study examines how one study abroad student oriented to social interaction during a semester in Spain. Using an activity theoretical approach, the findings indicate that the student not only viewed social interaction with his Spanish host family and an expert-Spanish-speaking age peer as an opportunity for second language (L2) learning,…

  17. A theoretical signal processing framework for linear diffusion MRI: Implications for parameter estimation and experiment design.

    Science.gov (United States)

    Varadarajan, Divya; Haldar, Justin P

    2017-11-01

    The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
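The Fourier relationship at the heart of this framework can be sketched in one dimension: for Gaussian diffusion the measured signal E(q) is the Fourier transform of a Gaussian EAP, so a simple discrete inverse transform of the sampled signal recovers the propagator. The sampling parameters below are arbitrary illustrative choices, not a scheme from the paper:

```python
import math

def eap_from_q_samples(signal, dq, r):
    """Inverse Fourier transform (Riemann sum) of a real, even q-space signal.
    signal[k] = E(k * dq) for k = 0..K; symmetry supplies the negative-q half."""
    value = signal[0] * dq
    for k in range(1, len(signal)):
        value += 2.0 * signal[k] * math.cos(2.0 * math.pi * k * dq * r) * dq
    return value

# Gaussian diffusion: the EAP is N(0, sigma^2), so E(q) = exp(-2*pi^2*sigma^2*q^2).
sigma = 10.0                    # rms displacement (e.g. micrometres)
dq, num_samples = 0.002, 51     # illustrative uniform q-space sampling
signal = [math.exp(-2.0 * math.pi ** 2 * sigma ** 2 * (k * dq) ** 2)
          for k in range(num_samples)]
print(eap_from_q_samples(signal, dq, 0.0))  # ~ 1/(sigma*sqrt(2*pi)), the EAP peak
```

Because the reconstruction is linear in the samples, classical Fourier sampling theory bounds its accuracy and resolution, which is the kind of analysis the proposed framework generalizes to arbitrary linear estimation methods and q-space schemes.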

  18. A queer-theoretical approach to community health psychology.

    Science.gov (United States)

    Easpaig, Bróna R Nic Giolla; Fryer, David M; Linn, Seònaid E; Humphrey, Rhianna H

    2014-01-01

    Queer-theoretical resources offer ways of productively rethinking how central concepts such as 'person-context', 'identity' and 'difference' may be understood for community health psychologists. This would require going beyond consideration of the problems with which queer theory is popularly associated to cautiously engage with the aspects of this work relevant to the promotion of collective practice and engaging with processes of marginalisation. In this article, we will draw upon and illustrate the queer-theoretical concepts of 'performativity' and 'cultural intelligibility' before moving towards a preliminary mapping of what a queer-informed approach to community health psychology might involve.

  19. Monoenergetic approximation of a polyenergetic beam: a theoretical approach

    International Nuclear Information System (INIS)

    Robinson, D.M.; Scrimger, J.W.

    1991-01-01

    There are numerous occasions on which it is desirable to approximate the polyenergetic beams employed in radiation therapy by a beam of photons of a single energy. In some instances, commonly used rules of thumb for the selection of an appropriate energy may be valid. A more accurate approximate energy, however, may be determined by an analysis which takes into account both the spectral qualities of the beam and the material through which it passes. The theoretical basis of this method of analysis is presented in this paper. Experimental agreement with theory for a range of materials and beam qualities is also presented and demonstrates the validity of the theoretical approach taken. (author)
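One common flavour of such an analysis matches the transmission of the polyenergetic beam through a given thickness to that of a single energy. The spectrum weights and attenuation coefficients below are hypothetical placeholders, not data from the paper:

```python
import math

def effective_mu(weights, mus, thickness):
    """Attenuation coefficient of the monoenergetic beam that gives the same
    transmission as the weighted polyenergetic beam at this thickness."""
    transmission = (sum(w * math.exp(-mu * thickness) for w, mu in zip(weights, mus))
                    / sum(weights))
    return -math.log(transmission) / thickness

# Hypothetical two-component spectrum: relative weights, mu in 1/cm.
weights, mus = [0.5, 0.5], [0.35, 0.15]
print(effective_mu(weights, mus, 1.0))   # near the mean mu at shallow depth
print(effective_mu(weights, mus, 30.0))  # shifts toward the penetrating component
```

The matching single energy is then read off a tabulated μ(E) curve. Because the effective μ depends on depth (beam hardening), the best monoenergetic approximation depends on both the spectrum and the material traversed, which is why simple rules of thumb can fail.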

  20. A theoretical approach to artificial intelligence systems in medicine.

    Science.gov (United States)

    Spyropoulos, B; Papagounos, G

    1995-10-01

    The various theoretical models of disease, the nosology accepted by the medical community, and the prevalent logic of diagnosis determine both the medical approach and the development of the relevant technology, including the structure and function of the A.I. systems involved. A.I. systems in medicine, in addition to the specific parameters which enable them to reach a diagnostic and/or therapeutic proposal, implicitly entail theoretical assumptions and socio-cultural attitudes which prejudice the orientation and the final outcome of the procedure. The various models (causal, probabilistic, case-based, etc.) are critically examined and their ethical and methodological limitations are brought to light. The lack of a self-consistent theoretical framework in medicine, the multi-faceted character of the human organism, and the non-explicit nature of the theoretical assumptions involved in A.I. systems restrict them to the role of decision-supporting "instruments" rather than decision-making "devices". This supporting role, and especially the important function which A.I. systems should have in the structure, methods and content of medical education, underscores the need for further research into the theoretical aspects and the actual development of such systems.

  1. Child education and management: theoretical approaches on legislation

    Directory of Open Access Journals (Sweden)

    Rúbia Borges

    2017-11-01

    Full Text Available The aim of this work was to investigate theoretical approaches to daycare centers and their management, considering childhood education for different audiences, such as children and babies, from the childhood perspective. Using a qualitative approach, this bibliographical research reflects on official documents about the theme. The research was developed through an analysis of Brazilian educational laws, starting with the Federal Constitution (FC), the Law of Guidelines and Bases for National Education (LGB), the National Curriculum Guidelines and the National Education Plan (ENP). The results point to generalist legislation that allows a certain autonomy in education. However, there is a need to deepen theoretical and practical studies of the reality of institutions whose paramount purpose is education, in order to offer quality education that attends to the needs of the audiences of these institutions.

  2. Blogging in Higher Education: Theoretical and Practical Approach

    OpenAIRE

    Gulfidan CAN; Devrim OZDEMIR

    2006-01-01

    In this paper the blogging method, which includes new forms of writing, is put forward as an alternative approach to frequently asserted problems in higher education, such as product-oriented assessment and the lack of value given to students' writing as a contribution to the discourse of academic disciplines. Both theoretical and research background information is provided to clarify the rationale for using this method in higher education. Furthermore, a recommended way of using this met...

  3. Theoretical and expert system approach to photoionization theories

    Directory of Open Access Journals (Sweden)

    Petrović Ivan D.

    2016-01-01

    Full Text Available The influence of the ponderomotive and Stark shifts on the tunneling transition rate was examined, for a non-relativistic linearly polarized laser field acting on alkali atoms, with three different theoretical models: the Keldysh theory; the Perelomov, Popov and Terent'ev (PPT) theory; and the Ammosov, Delone and Krainov (ADK) theory. We showed that the aforementioned shifts affect the transition rate differently in the different approaches. Finally, we presented a simple expert system for the analysis of photoionization theories.

  4. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology proves to be very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
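As a small taste of such information-theoretic estimation, the sketch below fits the maximum-entropy distribution on a finite support subject to a single mean constraint (the book's methods are considerably more general). The bisection solves for the Lagrange multiplier of the constrained entropy maximization:

```python
import math

def maxent_pmf(values, target_mean, tol=1e-10):
    """Maximum-entropy pmf on `values` with the given mean: p_i ~ exp(lam*x_i).
    target_mean must lie strictly between min(values) and max(values)."""
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0            # bracket for the Lagrange multiplier
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:   # mean is increasing in lam
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# A mean constraint below the midpoint tilts the pmf toward small values.
print(maxent_pmf([0.0, 1.0, 2.0], 0.8))
```

When the constrained mean equals the unconstrained one, the multiplier is zero and the uniform distribution (the entropy maximizer with no constraints) is recovered.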

  5. Principle-theoretic approach of kondo and construction-theoretic formalism of gauge theories

    International Nuclear Information System (INIS)

    Jain, L.C.

    1986-01-01

    Einstein classified the various theories in physics as principle-theories and constructive theories. In this lecture, Kondo's approach to microscopic and macroscopic phenomena is analysed for its principle-theoretic pursuit, followed by construction. The fundamentals of his theory may be recalled as the Tristimulus principle, the Observation principle, Kawaguchi spaces, empirical information, the epistemological point of view, unitarity, intrinsicality, and dimensional analysis, subject to logical and geometrical achievement. On the other hand, various physicists have evolved constructive gauge theories from a phenomenological, often collective, point of view. Their synthetic method involves fibre bundles and connections, path integrals, as well as other hypothetical structures. These lead towards clarity, completeness and adaptability

  6. Theoretical orientations in environmental planning: An inquiry into alternative approaches

    Science.gov (United States)

    Briassoulis, Helen

    1989-07-01

    In the process of devising courses of action to resolve problems arising at the society-environment interface, a variety of planning approaches are followed, whose adoption is influenced by, among other things, the characteristics of environmental problems, the nature of the decision-making context, and the intellectual traditions of the disciplines contributing to the study of these problems. This article provides a systematic analysis of six alternative environmental planning approaches: comprehensive/rational, incremental, adaptive, contingency, advocacy, and participatory/consensual. The relative influence of the above factors is examined, the occurrence of these approaches in real-world situations is noted, and their environmental soundness and political realism are evaluated. Because of the disparity between plan formulation and implementation, and between theoretical form and empirical reality, a synthetic view of environmental planning approaches is taken: "approaches in action", which characterize the totality of the planning process from problem definition to plan implementation, are identified, as well as "approaches in the becoming", which may be on the horizon of the environmental planning of tomorrow. Suggested future research directions include case studies to verify and detail the presence of the approaches discussed, the development of measures of the success of a given approach in a given decision setting, and an intertemporal analysis of environmental planning approaches.

  7. An attempt of classification of theoretical approaches to national identity

    Directory of Open Access Journals (Sweden)

    Milošević-Đorđević Jasna S.

    2003-01-01

    Full Text Available It is compulsory that complex social concepts should be defined in different ways and approached from the perspective of different science disciplines. Therefore, it is difficult to precisely define them without overlapping of meaning with other similar concepts. This paper has made an attempt towards theoretical classification of the national identity and differentiate that concept in comparison to the other related concepts (race, ethnic group, nation, national background, authoritativeness, patriarchy. Theoretical assessments are classified into two groups: ones that are dealing with nature of national identity and others that are stating one or more dimensions of national identity, crucial for its determination. On the contrary to the primordialistic concept of national identity, describing it as a fundamental, deeply rooted human feature, there are many numerous contemporary theoretical approaches (instrumentalist, constructivist, functionalistic, emphasizing changeable, fluid, instrumentalist function of the national identity. Fundamental determinants of national identity are: language, culture (music, traditional myths, state symbols (territory, citizenship, self-categorization, religion, set of personal characteristics and values.

  8. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended for developing the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically to support decommissioning fund provisioning, so that the plan is ready for use when the need arises
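A probabilistic cost and schedule risk analysis of the kind mentioned here is commonly done by Monte Carlo simulation over per-work-package ranges. A minimal sketch with made-up numbers (not from the report):

```python
import random

def simulate_total_cost(work_packages, n_trials=20000, seed=1):
    """Monte Carlo total-cost distribution from triangular (low, likely, high)
    estimates per work package; returns the P50 and P80 totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, likely) for low, likely, high in work_packages)
        for _ in range(n_trials))
    return totals[n_trials // 2], totals[int(0.8 * n_trials)]

# Hypothetical work packages: (low, most likely, high) cost in $M.
packages = [(8.0, 10.0, 14.0), (4.0, 5.0, 7.0), (2.0, 3.0, 6.0)]
p50, p80 = simulate_total_cost(packages)
print(f"P50 = {p50:.1f}, P80 = {p80:.1f}")
```

Funding to a high percentile such as P80, rather than the median, is one way such an analysis feeds into decommissioning fund provisioning.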

  9. A System Theoretical Inspired Approach to Knowledge Construction

    DEFF Research Database (Denmark)

    Mathiasen, Helle

    2008-01-01

    Abstract The aim of this paper is to discuss the relation between teaching and learning. The point of departure is that teaching environments (communication forums) are potential facilitators of learning processes and knowledge construction. The paper presents a theoretical framework to discuss the student's knowledge construction, in the light of operative constructivism, inspired by the German sociologist N. Luhmann's system theoretical approach to epistemology. Taking observations as operations based on distinction and indication (selection), contingency becomes a fundamental condition in learning processes, and a condition which teaching must address as far as teaching strives to stimulate non-random learning outcomes. Thus learning outcomes, understood as the individual learner's knowledge construction, cannot be directly predicted from events and characteristics in the environment. This has...

  10. Information Ergonomics A theoretical approach and practical experience in transportation

    CERN Document Server

    Sandl, Peter

    2012-01-01

    The variety and increasing availability of hypermedia information systems, which are used in stationary applications like operators' consoles as well as in mobile systems, e.g. driver information and navigation systems in automobiles, form a foundation for the mediatization of society. From the human engineering point of view, this development and the ensuing increased importance of information systems for economic and private needs require careful deliberation on the derivation and application of ergonomics methods, particularly in the field of information systems. This book consists of two closely intertwined parts. The first, theoretical part defines the concept of an information system, followed by an explanation of action regulation as well as cognitive theories to describe human-information system interaction. A comprehensive description of information ergonomics concludes the theoretical approach. In the second, practically oriented part of this book, authors from industry as well as from academic institu...

  11. Uncertainty estimates for theoretical atomic and molecular data

    International Nuclear Information System (INIS)

    Chung, H-K; Braams, B J; Bartschat, K; Császár, A G; Drake, G W F; Kirchner, T; Kokoouline, V; Tennyson, J

    2016-01-01

    Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structures and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering. (topical review)

  12. Multifractal rainfall extremes: Theoretical analysis and practical estimation

    International Nuclear Information System (INIS)

    Langousis, Andreas; Veneziano, Daniele; Furcolo, Pierluigi; Lepore, Chiara

    2009-01-01

    We study the extremes generated by a multifractal model of temporal rainfall and propose a practical method to estimate the Intensity-Duration-Frequency (IDF) curves. The model assumes that rainfall is a sequence of independent and identically distributed multiplicative cascades of the beta-lognormal type, with common duration D. When properly fitted to data, this simple model was found to produce accurate IDF results [Langousis A, Veneziano D. Intensity-duration-frequency curves from scaling representations of rainfall. Water Resour Res 2007;43. (doi:10.1029/2006WR005245)]. Previous studies also showed that the IDF values from multifractal representations of rainfall scale with duration d and return period T under either d → 0 or T → ∞, with different scaling exponents in the two cases. We determine the regions of the (d, T)-plane in which each asymptotic scaling behavior applies in good approximation, find expressions for the IDF values in the scaling and non-scaling regimes, and quantify the bias when estimating the asymptotic power-law tail of rainfall intensity from finite-duration records, as was often done in the past. Numerically calculated exact IDF curves are compared to several analytic approximations. The approximations are found to be accurate and are used to propose a practical IDF estimation procedure.
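The scaling behaviour described in the record, IDF values following power laws in duration d and return period T in the asymptotic regimes, can be illustrated with a toy power-law IDF relation (the coefficients below are hypothetical and not taken from the paper):

```python
import math

# Toy scaling IDF relation: i(d, T) = c * T**kappa * d**(-eta)
# (hypothetical coefficients, chosen only to illustrate the power-law form).
c, kappa, eta = 20.0, 0.3, 0.7

def idf(d, T):
    """Rainfall intensity for duration d (hours) and return period T (years)."""
    return c * T ** kappa * d ** (-eta)

# In a scaling regime, log(i) is linear in log(d) at fixed T,
# so the log-log slope recovers the duration exponent -eta.
slope = (math.log(idf(4.0, 50)) - math.log(idf(1.0, 50))) / math.log(4.0)
```

As the paper stresses, such power-law behaviour holds only asymptotically (d → 0 or T → ∞); fitting it to finite-duration records outside the scaling regions introduces the bias the authors quantify.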

  13. SOCIOLOGICAL UNDERSTANDING OF INTERNET: THEORETICAL APPROACHES TO THE NETWORK ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. E. Dobrinskaya

    2016-01-01

    Full Text Available Internet studies are carried out by various scientific disciplines and from different research perspectives. Sociological studies of the Internet deal with a new technology, a revolutionary means of mass communication, and a social space. There is a set of research difficulties associated with the Internet: firstly, the high speed and wide spread of the development of Internet technologies; secondly, the collection and filtering of materials for Internet studies; lastly, the development of new conceptual categories able to reflect the impact of the Internet's development on the contemporary world. In that regard, the question of how the category of "network" is used is essential. On the one hand, the network is the basis of the Internet's functioning; on the other hand, the network is the ground for almost all social interactions in modern society, which is why such a society is called a network society. Three theoretical network approaches are most relevant to Internet research: network society theory, social network analysis, and actor-network theory. Each of these theoretical approaches contributes to the study of the Internet. They shape various images of interactions between human beings in their entirety and dynamics, and they also provide information about the nature of these interactions.

  14. Theoretical approaches to determining the financial provision of public transportation

    Directory of Open Access Journals (Sweden)

    O.A. Vygovska

    2018-03-01

    Full Text Available The work is devoted to improving theoretical approaches to determining the financial provision of transportation by public transport at the regional level. The author summarizes the concept of «financial security» and defines its main difference from the term «financing». The key differences between the financial provision of a transport company and that of other financial entities of the economic sector at the national and regional levels are systematized. The disadvantages and advantages of sources of financial support are analyzed. The purpose of the article is to study theoretical approaches to determining the financial provision of transportation by public transport at the regional level. The prospects for further scientific research lie in the need to identify new scientific approaches and techniques to substantiate and elaborate the concept of «financial provision of transportation by public transport». The practical application of the research should take the form of a detailed analysis of cash flows in the system «state – regional authority – economic entity». The financial provision of transportation by public transport at the regional level has not been given sufficient attention in scientific research within the country, a fact that confirms the need for a thorough analysis of the transport industry as a whole.

  15. MEDIATIC NARRATIVES AND IDENTIFICATION PROCESSES. A THEORETICAL AND METHODOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Salomé Sola Morales

    2013-04-01

    Full Text Available This theoretical and argumentative article lays the conceptual and methodological basis for studying the link between media narrative identity and the identification processes undertaken by individuals and groups. The formation of national, professional, religious or gender identifications is here proposed as the result of the dialectic between the 'media narrative identity' which the media produce and convey, and the identification processes that individuals and groups perform. Furthermore, we propose the use of the biographical method as a form of empirical approach to this psycho-social phenomenon.

  16. Success Determination by Innovation: A Theoretical Approach in Marketing

    Directory of Open Access Journals (Sweden)

    Raj Kumar Gautam

    2012-10-01

    Full Text Available The paper aims to identify the main issues in marketing that need the immediate attention of marketers. The importance of innovation in marketing is highlighted, and the elements of the marketing mix are related to innovative and creative ideas. The study is based on secondary data: various research papers and articles were studied to develop an innovative approach to marketing. Innovative marketing ideas relating to business lead generation, product, price, distribution, promotion of the product, and revenue generation are highlighted in the paper. All the suggestions are theoretical and may have relevance and implications for marketers.

  18. Vanadium supersaturated silicon system: a theoretical and experimental approach

    Science.gov (United States)

    Garcia-Hemme, Eric; García, Gregorio; Palacios, Pablo; Montero, Daniel; García-Hernansanz, Rodrigo; Gonzalez-Diaz, Germán; Wahnon, Perla

    2017-12-01

    The effect of high dose vanadium ion implantation and pulsed laser annealing on the crystal structure and sub-bandgap optical absorption features of V-supersaturated silicon samples has been studied through the combination of experimental and theoretical approaches. Interest in V-supersaturated Si focusses on its potential as a material having a new band within the Si bandgap. Rutherford backscattering spectrometry measurements and formation energies computed through quantum calculations provide evidence that V atoms are mainly located at interstitial positions. The response of sub-bandgap spectral photoconductance is extended far into the infrared region of the spectrum. Theoretical simulations (based on density functional theory and many-body perturbation in GW approximation) bring to light that, in addition to V atoms at interstitial positions, Si defects should also be taken into account in explaining the experimental profile of the spectral photoconductance. The combination of experimental and theoretical methods provides evidence that the improved spectral photoconductance up to 6.2 µm (0.2 eV) is due to new sub-bandgap transitions, for which the new band due to V atoms within the Si bandgap plays an essential role. This enables the use of V-supersaturated silicon in the third generation of photovoltaic devices.

  19. Towards a Set Theoretical Approach to Big Data Analytics

    DEFF Research Database (Denmark)

    Mukkamala, Raghava Rao; Hussain, Abid; Vatrapu, Ravi

    2014-01-01

    Formal methods, models and tools for social big data analytics are largely limited to graph theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the application of this technique to the data analysis of big social data collected from the Facebook page of the fast fashion company, H&M.
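The set-theoretical modeling of social data described in the record can be sketched with plain set operations over actors and action types; the names and actions below are invented for illustration and are not the paper's actual Facebook data:

```python
# Hypothetical sets of actors who performed each action type on a page.
liked     = {"ann", "bob", "eve", "joe"}
commented = {"bob", "eve", "kim"}
shared    = {"eve", "joe"}

engaged = liked | commented | shared    # union: anyone with any engagement
core    = liked & commented & shared    # intersection: most committed actors
passive = liked - (commented | shared)  # liked only, no further action
```

Queries such as "core actors" or "passive likers" then reduce to unions, intersections, and differences, which is precisely the appeal of a set-theoretical formal model.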

  20. Introduction to superfluidity field-theoretical approach and applications

    CERN Document Server

    Schmitt, Andreas

    2015-01-01

    Superfluidity – and closely related to it, superconductivity – are very general phenomena that can occur on vastly different energy scales. Their underlying theoretical mechanism of spontaneous symmetry breaking is even more general and applies to a multitude of physical systems.  In these lecture notes, a pedagogical introduction to the field-theory approach to superfluidity is presented. The connection to more traditional approaches, often formulated in a different language, is carefully explained in order to provide a consistent picture that is useful for students and researchers in all fields of physics. After introducing the basic concepts, such as the two-fluid model and the Goldstone mode, selected topics of current research are addressed, such as the BCS-BEC crossover and Cooper pairing with mismatched Fermi momenta.

  1. How cells engulf: a review of theoretical approaches to phagocytosis

    Science.gov (United States)

    Richards, David M.; Endres, Robert G.

    2017-12-01

    Phagocytosis is a fascinating process whereby a cell surrounds and engulfs particles such as bacteria and dead cells. This is crucial both for single-cell organisms (as a way of acquiring nutrients) and as part of the immune system (to destroy foreign invaders). This whole process is hugely complex and involves multiple coordinated events such as membrane remodelling, receptor motion, cytoskeleton reorganisation and intracellular signalling. Because of this, phagocytosis is an excellent system for theoretical study, benefiting from biophysical approaches combined with mathematical modelling. Here, we review these theoretical approaches and discuss the recent mathematical and computational models, including models based on receptors, models focusing on the forces involved, and models employing energetic considerations. Along the way, we highlight a beautiful connection to the physics of phase transitions, consider the role of stochasticity, and examine links between phagocytosis and other types of endocytosis. We cover the recently discovered multistage nature of phagocytosis, showing that the size of the phagocytic cup grows in distinct stages, with an initial slow stage followed by a much quicker second stage starting around half engulfment. We also address the issue of target shape dependence, which is relevant to both pathogen infection and drug delivery, covering both one-dimensional and two-dimensional results. Throughout, we pay particular attention to recent experimental techniques that continue to inform the theoretical studies and provide a means to test model predictions. Finally, we discuss population models, connections to other biological processes, and how physics and modelling will continue to play a key role in future work in this area.

  2. The Theoretical and Empirical Approaches to the Definition of Audit Risk

    Directory of Open Access Journals (Sweden)

    Berezhniy Yevgeniy B.

    2017-12-01

    Full Text Available The risk category is one of the key factors in planning an audit and assessing its results. The article is aimed at generalizing the theoretical and empirical approaches to the definition of audit risk and the methods of its reduction. The structure of audit risk was analyzed, and it was determined that each researcher has approached the structuring of audit risk from a subjective point of view. The author's own model of audit risk is proposed. The basic methods of assessing audit risk are generalized and the theoretical and empirical approaches to its definition are identified; it is also noted that any of the given models is suitable for approximate estimation rather than exact calculation of audit risk, as each is accompanied by certain shortcomings.

  3. Nuclear Fermi Dynamics: physical content versus theoretical approach

    International Nuclear Information System (INIS)

    Griffin, J.J.

    1977-01-01

    Those qualitative properties of nuclei, and of their energetic collisions, which seem of most importance for the flow of nuclear matter are listed and briefly discussed. It is suggested that nuclear matter flow is novel among fluid dynamical problems. The name, Nuclear Fermi Dynamics, is proposed as an appropriate unambiguous label. The Principle of Commensurability, which suggests the measurement of the theoretical content of an approach against its expected predictive range is set forth and discussed. Several of the current approaches to the nuclear matter flow problem are listed and subjected to such a test. It is found that the Time-Dependent Hartree-Fock (TDHF) description, alone of all the major theoretical approaches currently in vogue, incorporates each of the major qualitative features within its very concise single mathematical assumption. Some limitations of the conventional TDHF method are noted, and one particular defect is discussed in detail: the Spurious Cross Channel Correlations which arise whenever several asymptotic reaction channels must be simultaneously described by a single determinant. A reformulated Time-Dependent-S-Matrix Hartree-Fock Theory is proposed, which obviates this difficulty. It is noted that the structure of TD-S-HF can be applied to a more general class of non-linear wave mechanical problems than simple TDHF. Physical requirements minimal to assure that TD-S-HF represents a sensible reaction theory are utilized to prescribe the definition of acceptable asymptotic channels. That definition, in turn, defines the physical range of the TD-S-HF theory as the description of collisions of certain mathematically well-defined objects of mixed quantal and classical character, the ''TDHF droplets.''

  4. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate convergence.
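A minimal random-walk Metropolis-Hastings sampler of the kind discussed in the record might look as follows; the Gaussian proposal, step size, and toy standard-normal target are assumptions for illustration, not the article's actual model:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_iter, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)            # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal log-density (up to an additive constant).
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0, n_iter=20000)
post_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
```

The `step` parameter is the proposal calibration the record highlights: too small and the chain mixes slowly, too large and most proposals are rejected.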

  5. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  6. Analytic game-theoretic approach to ground-water extraction

    Science.gov (United States)

    Loáiciga, Hugo A.

    2004-09-01

    The roles of cooperation and non-cooperation in the sustainable exploitation of a jointly used groundwater resource have been quantified mathematically using an analytical game-theoretic formulation. Cooperative equilibrium arises when ground-water users respect water-level constraints and consider mutual impacts, which allows them to derive economic benefits from ground-water indefinitely, that is, to achieve sustainability. This work shows that cooperative equilibrium can be obtained from the solution of a quadratic programming problem. For cooperative equilibrium to hold, however, enforcement must be effective. Otherwise, according to the commonized costs-privatized profits paradox, there is a natural tendency towards non-cooperation and non-sustainable aquifer mining, of which overdraft is a typical symptom. Non-cooperative behavior arises when at least one ground-water user neglects the externalities of his adopted ground-water pumping strategy. In this instance, water-level constraints may be violated in a relatively short time and the economic benefits from ground-water extraction fall below those obtained with cooperative aquifer use. One example illustrates the game theoretic approach of this work.
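The cooperative equilibrium, obtained in the record from a quadratic programming problem, can be sketched for the simplest case: quadratic user benefits and a single sustainable-yield constraint, solved in closed form from the first-order (KKT) conditions. All coefficients below are hypothetical, and the paper's full formulation also includes water-level constraints:

```python
def cooperative_extraction(a, b, Q):
    """Pumping rates maximizing joint quadratic benefits under a yield cap.

    User i gets benefit a[i]*q - 0.5*b[i]*q**2; total pumping equals the
    sustainable yield Q.  The KKT conditions equalize marginal benefits:
    a[i] - b[i]*q[i] = lam for every user i.
    """
    lam = (sum(ai / bi for ai, bi in zip(a, b)) - Q) / sum(1.0 / bi for bi in b)
    return [(ai - lam) / bi for ai, bi in zip(a, b)]

# Two hypothetical users sharing an aquifer with sustainable yield Q = 9.
q = cooperative_extraction(a=[10.0, 8.0], b=[1.0, 2.0], Q=9.0)
```

At the cooperative optimum all users face the same shadow price `lam` of water, which is why neglecting that externality (non-cooperation) pushes total pumping above the sustainable yield.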

  7. Factors determining early internationalization of entrepreneurial SMEs: Theoretical approach

    Directory of Open Access Journals (Sweden)

    Agne Matiusinaite

    2015-12-01

    Full Text Available Purpose – This study extends the scientific discussion of the early internationalization of SMEs. The main purpose of this paper is to develop a theoretical framework for investigating the factors determining the early internationalization of international new ventures. Design/methodology/approach – The conceptual framework is built on the analysis and synthesis of the scientific literature. Findings – This paper presents the different factors which determine the early internationalization of international new ventures. These factors are divided into entrepreneurial, organizational and contextual factors. We argue that the early internationalization of international new ventures is defined by the entrepreneurial characteristics and previous experience of the entrepreneur, opportunity recognition and exploitation, risk tolerance, the specifics of the organization, involvement in networks, and contextual factors. The study showed that only the interaction between factors and categories has an effect on business development and the successful implementation of early internationalization. Research limitations/implications – The research was conducted on the theoretical basis of the scientific literature; future studies could include a practical confirmation or denial of this allocation of factors. Originality/value – The originality of this study lies in the finding that a factor by itself has a limited effect on early internationalization; only the interplay of categories and factors has a positive impact on the early internationalization of entrepreneurial SMEs.

  8. Investigations on Actuator Dynamics through Theoretical and Finite Element Approach

    Directory of Open Access Journals (Sweden)

    Somashekhar S. Hiremath

    2010-01-01

    Full Text Available This paper presents a new approach for modeling the fluid-structure interaction of a servovalve component-actuator. The analyzed valve is a precision flow control valve: a jet-pipe electrohydraulic servovalve. The positioning of an actuator depends upon the flow rate from the control ports, which in turn depends on the spool position. Theoretical investigations are made for the no-load and load conditions of the actuator, and these are used in the finite element modeling of the actuator. The fluid-structure interaction (FSI) is established between the piston and the fluid cavities at the piston end. The fluid cavities were modeled with special-purpose hydrostatic fluid elements while the piston is modeled with brick elements. The finite element method is used to simulate the variation of cavity pressure, cavity volume, mass flow rate, and actuator velocity. The finite element analysis is extended to study the system's linearized response to harmonic excitation using direct-solution steady-state dynamics. It was observed from the analysis that the natural frequency of the actuator depends upon the position of the piston in the cylinder, with close agreement between the theoretical and simulation results. The effect of bulk modulus is also presented in the paper.

  9. Towards a capability approach to child growth: A theoretical framework.

    Science.gov (United States)

    Haisma, Hinke; Yousefzadeh, Sepideh; Boele Van Hensbroek, Pieter

    2018-04-01

    Child malnutrition is an important cause of under-5 mortality and morbidity around the globe. Despite the partial success of (inter)national efforts to reduce child mortality, under-5 mortality rates continue to be high. The multidimensional approaches of the Sustainable Development Goals may suggest new directions for rethinking strategies for reducing child mortality and malnutrition. We propose a theoretical framework for developing a "capability" approach to child growth. The current child growth monitoring practices are based on 2 assumptions: (a) that anthropometric and motor development measures are the appropriate indicators; and (b) that child growth can be assessed using a single universal standard that is applicable around the world. These practices may be further advanced by applying a capability approach to child growth, whereby growth is redefined as the achievement of certain capabilities (of society, parents, and children). This framework is similar to the multidimensional approach to societal development presented in the seminal work of Amartya Sen. To identify the dimensions of healthy child growth, we draw upon theories from the social sciences and evolutionary biology. Conceptually, we consider growth as a plural space and propose assessing growth by means of a child growth matrix in which the context is embedded in the assessment. This approach will better address the diversities and the inequalities in child growth. Such a multidimensional measure will have implications for interventions and policy, including prevention and counselling, and could have an impact on child malnutrition and mortality. © 2017 The Authors. Maternal and Child Nutrition Published by John Wiley & Sons, Ltd.

  10. A theoretical approach to photosynthetically active radiation silicon sensor

    International Nuclear Information System (INIS)

    Tamasi, M.J.L.; Martínez Bogado, M.G.

    2013-01-01

    This paper presents a theoretical approach for the development of low cost radiometers to measure photosynthetically active radiation (PAR). Two alternatives are considered: a) glass optical filters attached to a silicon sensor, and b) dielectric coating on a silicon sensor. The devices proposed are based on radiometers previously developed by the Argentine National Atomic Energy Commission. The objective of this work is to adapt these low cost radiometers to construct reliable instruments for measuring PAR. The transmittance of optical filters and sensor response have been analyzed for different dielectric materials, number of layers deposited, and incidence angles. Uncertainties in thickness of layer deposition were evaluated. - Highlights: • Design of radiometers to measure photosynthetically active radiation • The study has used a filter and a Si sensor to modify spectral response. • Dielectric multilayers on glass and silicon sensor • Spectral response related to different incidence angles, materials and spectra

  11. Decommissioning Cost Estimating -The ''Price'' Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project-specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily through greater detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects, from contaminated car parks to nuclear reactors; and provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs.

  12. Consistency of extreme flood estimation approaches

    Science.gov (United States)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimates of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods, but a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested for two different Swiss catchments. The results and some intermediate variables are used for assessing the potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.
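The purely statistical approach (ordinary extreme value statistics) can be sketched as a Gumbel fit to annual maxima by the method of moments; the discharge series below is invented for illustration, and real studies would typically also consider GEV fits and uncertainty bounds:

```python
import math

def gumbel_return_level(annual_maxima, T):
    """T-year return level from a Gumbel fit by the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.57721566 * beta           # location (Euler-Mascheroni constant)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical annual peak discharges (m3/s) for a small catchment.
peaks = [310, 450, 290, 520, 380, 610, 330, 470, 400, 550]
q100 = gumbel_return_level(peaks, T=100)
```

Extrapolating a 100-year flood from a 10-year record, as here, is exactly the situation where the consistency checks against SCHADEX and PMF estimates discussed in the record become valuable.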

  13. Online adaptive approach for a game-theoretic strategy for complete vehicle energy management

    NARCIS (Netherlands)

    Chen, H.; Kessels, J.T.B.A.; Weiland, S.

    2015-01-01

    This paper introduces an adaptive approach for a game-theoretic strategy on Complete Vehicle Energy Management. The proposed method enhances the game-theoretic approach such that the strategy is able to adapt to real driving behavior. The classical game-theoretic approach relies on one probability

  14. An integrated theoretical and practical approach for teaching hydrogeology

    Science.gov (United States)

    Bonomi, Tullia; Fumagalli, Letizia; Cavallin, Angelo

    2013-04-01

    Hydrogeology as an earth science intersects the broader disciplines of geology, engineering, and environmental studies, but it does not overlap fully with any of them. It is focused on its own range of problems and over time has developed a rich variety of methods and approaches. The resolution of many hydrogeological problems requires knowledge of elements of geology, hydraulics, physics and chemistry; moreover, in recent years knowledge of modelling techniques has become a necessary skill. Successful transfer of all this knowledge to the students depends on the breadth of material taught in courses, the natural skills of the students and any practical experience the students can obtain. In the Department of Earth and Environmental Sciences of the University of Milano-Bicocca, the teaching of hydrogeology is developed in three inter-related courses: 1) general hydrogeology, 2) applied hydrogeology, 3) groundwater pollution and remediation. The sequence focuses on both groundwater flux and contaminant transport, supplemented by workshops involving case studies and computer labs, which provide the students with a practical translation of the theoretical aspects of the science into the world of work. A second key aspect of the program utilizes the students' skill at learning through online approaches, and this is done in three ways: A) by developing the courses on a University e-learning platform that allows the students to download lectures, articles, and teacher comments, and to participate in online forums; B) by carrying out exercises in computer labs where the students analyze and process hydrogeological data by means of different numerical codes, which in turn enable them to manage databases and to perform aquifer test analysis, geostatistical analysis, and flux and transport modelling in both the unsaturated and saturated zones. 
These exercises are of course preceded by theoretical lectures on codes and software, highlighting their features and

  15. Optimization of Investment Planning Based on Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Butsenko

    2018-03-01

    Full Text Available The game-theoretic approach has a vast potential in solving economic problems. On the other hand, the theory of games itself can be enriched by studies of real decision-making problems. Hence, this study aims to develop and test a game-theoretic technique for optimizing the management of investment planning. The technique makes it possible to forecast the results and manage the processes of investment planning, and allows choosing the best development strategy for an enterprise. It uses the “game with nature” model together with the Wald (maximin) criterion, the maximax criterion and the Hurwicz criterion. The article presents a new algorithm for constructing the proposed econometric method to optimize investment project management. This algorithm combines the methods of matrix games. Furthermore, I show the implementation of this technique in a block diagram. The algorithm includes the formation of initial data and the elements of the payment matrix, as well as the definition of maximin, maximax, compromise and optimal management strategies. The methodology is tested on the example of the passenger transportation enterprise of the Sverdlovsk Railway in Ekaterinburg. The application of the proposed methodology and the corresponding algorithm yielded an optimal price strategy for transporting passengers in one direction of traffic. This price strategy contributes to an increase in the company’s income with minimal risk from the launch of this direction. The results and conclusions show the effectiveness of the developed methodology for optimizing the management of investment processes in an enterprise, and can serve as a basis for developing an appropriate tool applicable by any economic entity in its investment activities.
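As a rough illustration of the decision criteria the abstract names (not the paper's actual data), the sketch below applies the Wald (maximin), maximax and Hurwicz criteria to a hypothetical "game with nature" payoff matrix; all numbers are invented.

```python
# Hypothetical payoff matrix: rows = pricing strategies, columns = "states
# of nature" (demand scenarios); entries are profits. Numbers are illustrative.
payoffs = [
    [30, 10, -5],   # strategy A
    [20, 18, 12],   # strategy B
    [40, 5, -15],   # strategy C
]

def wald(m):
    """Wald (maximin): pick the strategy with the best worst-case payoff."""
    return max(range(len(m)), key=lambda i: min(m[i]))

def maximax(m):
    """Maximax: pick the strategy with the best best-case payoff."""
    return max(range(len(m)), key=lambda i: max(m[i]))

def hurwicz(m, alpha):
    """Hurwicz: weighted compromise between best and worst case."""
    return max(range(len(m)),
               key=lambda i: alpha * max(m[i]) + (1 - alpha) * min(m[i]))

print(wald(payoffs))          # → 1 (strategy B: worst cases are -5, 12, -15)
print(maximax(payoffs))       # → 2 (strategy C: best cases are 30, 20, 40)
print(hurwicz(payoffs, 0.5))  # → 1 (strategy B balances best and worst case)
```

Different criteria can recommend different strategies on the same matrix, which is why the article's algorithm reports maximin, maximax and compromise strategies side by side before selecting the optimal one.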

  16. A Theoretical Explanation of Marital Conflicts by Paradigmatic Approach

    Directory of Open Access Journals (Sweden)

    اسماعیل جهانی دولت آباد

    2017-06-01

    Full Text Available Due to the economic, social and cultural changes of recent decades, and the consequent alterations in the form and duties of families and in individuals' expectations of marriage, the institutions of family and marriage face challenges and conflicts quite different from those of the past. Fragile marital relationships, conflicts and divorce are the results of this situation in Iran. Accordingly, the present study, designed as a meta-analysis based on concept analysis and reconceptualization of recent studies, seeks to present a distinct paradigm for explaining marital conflicts. This paradigm draws on various theoretical approaches, particularly symbolic interactionism as the main explanatory means, and applies the concept of “Marital Paradigm” as the missing element in previous studies in this field. It explains marital conflicts between couples as paradigmatic conflicts; its main idea is that marital conflict is not the result of one or more fixed, specified factors, but the product of an encounter between opposing (or different) paradigms.

  17. Intelligent cognitive radio jamming - a game-theoretical approach

    Science.gov (United States)

    Dabcevic, Kresimir; Betancourt, Alejandro; Marcenaro, Lucio; Regazzoni, Carlo S.

    2014-12-01

    Cognitive radio (CR) promises to be a solution for the spectrum underutilization problem. However, security issues pertaining to cognitive radio technology are still an understudied topic. One of the prevailing such issues is intelligent radio frequency (RF) jamming attacks, where adversaries are able to exploit the on-the-fly reconfigurability and learning mechanisms of cognitive radios in order to devise and deploy advanced jamming tactics. In this paper, we use a game-theoretical approach to analyze jamming/anti-jamming behavior between cognitive radio systems. A non-zero-sum game with incomplete information on the opponent's strategy and payoff is modelled as an extension of a Markov decision process (MDP). Learning algorithms based on adaptive payoff play and fictitious play are considered. A combination of frequency hopping and power alteration is deployed as an anti-jamming scheme. A real-life software-defined radio (SDR) platform is used to perform measurements useful for quantifying the jamming impacts, as well as to infer relevant hardware-related properties. The results of these measurements are then used as parameters for the modelled jamming/anti-jamming game and are compared to the Nash equilibrium of the game. Simulation results indicate, among other things, the benefit provided to the jammer when it is equipped with the spectrum sensing algorithm in proactive frequency hopping and power alteration schemes.

  18. On Algebraic Approach for MSD Parametric Estimation

    OpenAIRE

    Oueslati , Marouene; Thiery , Stéphane; Gibaru , Olivier; Béarée , Richard; Moraru , George

    2011-01-01

    This article addresses the identification problem of the natural frequency and the damping ratio of a second-order continuous system whose input is a sinusoidal signal. An algebra-based approach for identifying the parameters of a Mass Spring Damper (MSD) system is proposed and compared to the Kalman-Bucy filter. The proposed estimator uses the algebraic parametric method in the frequency domain, yielding exact formulae which, when placed in the time domain, identify the unknown parameters. We focus ...

  19. Distinguishing prognostic and predictive biomarkers: An information theoretic approach.

    Science.gov (United States)

    Sechidis, Konstantinos; Papangelou, Konstantinos; Metcalfe, Paul D; Svensson, David; Weatherall, James; Brown, Gavin

    2018-05-02

    The identification of biomarkers to support decision-making is central to personalised medicine, in both clinical and research scenarios. The challenge can be seen in two halves: identifying predictive markers, which guide the development/use of tailored therapies; and identifying prognostic markers, which guide other aspects of care and clinical trial planning, i.e. prognostic markers can be considered as covariates for stratification. Mistakenly assuming a biomarker to be predictive, when it is in fact largely prognostic (and vice-versa) is highly undesirable, and can result in financial, ethical and personal consequences. We present a framework for data-driven ranking of biomarkers on their prognostic/predictive strength, using a novel information theoretic method. This approach provides a natural algebra to discuss and quantify the individual predictive and prognostic strength, in a self-consistent mathematical framework. Our contribution is a novel procedure, INFO+, which naturally distinguishes the prognostic vs predictive role of each biomarker and handles higher order interactions. In a comprehensive empirical evaluation INFO+ outperforms more complex methods, most notably when noise factors dominate, and biomarkers are likely to be falsely identified as predictive, when in fact they are just strongly prognostic. Furthermore, we show that our methods can be 1-3 orders of magnitude faster than competitors, making it useful for biomarker discovery in 'big data' scenarios. Finally, we apply our methods to identify predictive biomarkers on two real clinical trials, and introduce a new graphical representation that provides greater insight into the prognostic and predictive strength of each biomarker. R implementations of the suggested methods are available at https://github.com/sechidis. konstantinos.sechidis@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
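The prognostic/predictive distinction can be illustrated, in a highly simplified way, with plain mutual information estimates. The toy sketch below is not the INFO+ procedure itself, only the underlying information-theoretic idea: a predictive marker's association with outcome is concentrated within treatment arms, so conditioning on treatment boosts it, while a prognostic marker's association holds regardless of treatment. All data are synthetic.

```python
from collections import Counter
from math import log2

def mi(xs, ys):
    """Mutual information I(X;Y) in bits, from empirical counts."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def cmi(xs, ys, ts):
    """Conditional mutual information I(X;Y|T) in bits."""
    n = len(xs)
    total = 0.0
    for t in set(ts):
        idx = [i for i in range(n) if ts[i] == t]
        total += len(idx) / n * mi([xs[i] for i in idx],
                                   [ys[i] for i in idx])
    return total

# Toy trial: treatment T, markers A and B, outcome Y = A OR (B AND T).
# A influences the outcome regardless of treatment (prognostic); B matters
# only under treatment (predictive).
data = [(a, b, t, a | (b & t))
        for a in (0, 1) for b in (0, 1) for t in (0, 1)]
A, B, T, Y = (list(col) for col in zip(*data))

print(mi(A, Y) > mi(B, Y))       # → True: A is the stronger prognostic marker
print(cmi(B, Y, T) / mi(B, Y))   # B's association is mostly treatment-dependent
```

In this toy example the ratio I(X;Y|T)/I(X;Y) is much larger for the predictive marker B than for the prognostic marker A, which is the kind of contrast a data-driven ranking procedure can exploit.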

  20. Bioinspired Computational Approach to Missing Value Estimation

    Directory of Open Access Journals (Sweden)

    Israel Edem Agbehadji

    2018-01-01

    Full Text Available Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries, and if they are not handled properly, data analysis may produce inaccurate results. Although a traditional method such as maximum likelihood can extrapolate missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. The paper describes the behavior and characteristics of the Kestrel bird and models them into an algorithm for estimating missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms, and the results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The Wilcoxon test indicates that time does not have a significant effect on performance and that the difference in estimation quality between the paired algorithms is significant; the Friedman test ranked KSA as the best evolutionary algorithm.
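As a minimal illustration of the evaluation metric used above, the sketch below scores two generic imputation baselines (mean imputation and last-observation-carried-forward) by MAE; these baselines merely stand in for KSA and its competitors, and the data are invented.

```python
def mae(actual, estimated):
    """Mean absolute error between true and imputed values."""
    return sum(abs(a - e) for a, e in zip(actual, estimated)) / len(actual)

# Toy series with two missing entries (None); the true values are known
# here only so that the imputations can be scored.
observed = [4.0, 5.0, None, 6.0, None, 8.0]
truth    = [4.0, 5.0, 5.2, 6.0, 7.0, 8.0]

known = [v for v in observed if v is not None]
mean_value = sum(known) / len(known)
mean_imputed = [v if v is not None else mean_value for v in observed]

# Last-observation-carried-forward as a second baseline.
locf_imputed, last = [], None
for v in observed:
    last = v if v is not None else last
    locf_imputed.append(last)

print(mae(truth, mean_imputed), mae(truth, locf_imputed))
```

On this particular toy series LOCF happens to score better; on real data the ranking of imputation methods is exactly what the Wilcoxon and Friedman tests in the paper are used to decide.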

  1. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include fault diagnosis (fault estimation, FE) for systems with model uncertainties, FE for systems with parametric faults, and FE for a class of nonlinear systems.

  2. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich Kf parameter is correlated to sediment surface area (r² = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
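The Freundlich fit described above can be sketched as an ordinary least-squares fit in log space, since S = Kf·Cⁿ linearizes to log S = log Kf + n·log C. The concentrations below are illustrative stand-ins, not the INEEL measurements.

```python
import math

# Hypothetical batch-isotherm data: aqueous concentration C (ug/L) and
# adsorbed concentration S (ug/g); values are illustrative only.
C = [1.0, 5.0, 10.0, 50.0, 100.0]
S = [2.1, 6.8, 10.5, 33.0, 52.0]

# Freundlich isotherm S = Kf * C**n; linearize as log S = log Kf + n log C
# and fit by ordinary least squares.
x = [math.log10(c) for c in C]
y = [math.log10(s) for s in S]
mx, my = sum(x) / len(x), sum(y) / len(y)
n_exp = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
kf = 10 ** (my - n_exp * mx)

print(round(n_exp, 3), round(kf, 3))  # fitted Freundlich n and Kf
```

Repeating such a fit per sediment sample and regressing the resulting Kf values on measured surface area is, in outline, how a finding like r² = 0.80 between Kf and surface area would be obtained.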

  3. SUSTAINABLE TOURISM AND ITS FORMS - A THEORETICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Bac Dorin

    2013-07-01

    Full Text Available From the second half of the twentieth century, the importance of the tourism industry to the world economy continued to grow, reaching today impressive figures: receipts of almost $ 1,000 billion and direct employment for over 70 million people (WTTC 2012), without taking into account the multiplier effect (according to the same WTTC statistics, when the multiplier effect is considered, the values are $ 5,990 billion in tourism receipts and 253.5 million jobs. We can say that tourism: has a higher capacity to generate and distribute incomes compared to other sectors; has a high multiplier effect; and drives a high level of consumption of varied products and services. In this context, voices began to emerge that pointed out the problems and challenges generated by tourism activity. Many regions face real problems generated by tourism entrepreneurs and the tourists who visit the community. Therefore, at the end of the last century, some authors sought to define a new form of tourism that would eliminate the negative impacts and increase the positive ones. As a generic term they used alternative tourism, but because of the ambiguity of that term, they tried to find a more precise one that would define the concept more easily. Thus emerged ecotourism, rural tourism, Pro-Poor Tourism etc. All these forms have been brought under the umbrella concept of sustainable tourism. In the present paper we take a theoretical approach in order to present some forms of sustainable tourism. During our research we covered the ideas and concepts promoted by several authors and academics, as well as some international organizations with a focus on tourism. We considered these forms of tourism because they respect all the rules of sustainable tourism and some of them have great potential to grow in both developed and emerging countries. 
The forms of sustainable tourism we identified are: ecotourism, pro-poor tourism, volunteer tourism and slow tourism. In

  4. Resilience or Flexibility – A Theoretical Approach on Romanian Development Regions

    Directory of Open Access Journals (Sweden)

    Roxana Voicu-Dorobanțu

    2015-09-01

    Full Text Available The paper describes a theoretical contextualization of flexibility, sustainability, durability and resilience, in the context of the sustainable development goals. The main purpose is to identify the theoretical handles that may be used in the creation of a flexibility indicator. Thus, research questions related to the theoretical differentiation between durable and sustainable, flexible and resilient are answered. Further on, the paper describes the situation of the Romanian regions in terms of development indicators, based on Eurostat data, as a premise for further research on the possibility of their leapfrogging. This work was financially supported through the project “Routes of academic excellence in doctoral and post-doctoral research- REACH” co-financed through the European Social Fund, by Sectoral Operational Programme Human Resources Development 2007-2013, contract no POSDRU/59/1.5/S/137926.

  5. Game-theoretic interference coordination approaches for dynamic spectrum access

    CERN Document Server

    Xu, Yuhua

    2016-01-01

    Written by experts in the field, this book is based on recent research findings in dynamic spectrum access for cognitive radio networks. It establishes a game-theoretic framework and presents cutting-edge technologies for distributed interference coordination. With game-theoretic formulation and the designed distributed learning algorithms, it provides insights into the interactions between multiple decision-makers and the converging stable states. Researchers, scientists and engineers in the field of cognitive radio networks will benefit from the book, which provides valuable information, useful methods and practical algorithms for use in emerging 5G wireless communication.

  6. Theoretical and methodological approaches to economic competitiveness (Part I

    Directory of Open Access Journals (Sweden)

    Macari Vadim

    2013-01-01

    Full Text Available The article, based on the study of several representative bibliographical sources, tries to examine and to order, from a logical and scientific point of view, some of the most common theoretical and methodological understandings of the essence, definition, phenomenon, types, characteristics and indices of economic competitiveness.

  7. Theoretical and methodological approaches to economic competitiveness (part II

    Directory of Open Access Journals (Sweden)

    Macari Vadim

    2013-02-01

    Full Text Available This article, on the basis of the study of many representative bibliographic sources, examines and tries to order, from a logical and scientific point of view, some of the most common theoretical and methodological treatments of the essence, definition, phenomenon, types, characteristics and indices of economic competitiveness.

  10. Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach.

    Science.gov (United States)

    Cobden, David S; Niessen, Louis W; Rutten, Frans Fh; Redekop, W Ken

    2010-09-07

    While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporate adherence in cost-effectiveness analysis. We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM) and oral (OAD) medications. Two analyses were performed, one which ignored adherence (analysis 1) and one which incorporated it (analysis 2). Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios. In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness of $12,097 per quality-adjusted life-year (QALY) gained versus OAD. Incorporation of adherence resulted in a slightly higher ratio ($16,241/QALY). This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM. Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be considered in health care decision-making. Future work on the impact of adherence on health economic outcomes, and validation of different approaches to modeling adherence, is warranted.
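The comparison between the two analyses reduces to incremental cost-effectiveness ratios (ICER = incremental cost / incremental QALYs). The sketch below uses hypothetical cost and QALY inputs chosen only to mimic the direction of the reported effect, not the study's actual model outputs.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Illustrative lifetime discounted costs and QALYs per patient. The
# adherence-adjusted analysis raises the injectable arm's costs and narrows
# the QALY gap, mimicking the direction (not the magnitudes) reported above.
base     = icer(cost_new=61_000, qaly_new=9.5, cost_ref=37_000, qaly_ref=7.5)
adjusted = icer(cost_new=64_000, qaly_new=9.3, cost_ref=37_500, qaly_ref=7.6)

print(round(base), round(adjusted))  # adherence raises the cost per QALY
```

Because the ratio's denominator shrinks and its numerator grows once adherence is modeled, the adjusted ICER exceeds the naive one, which is exactly the qualitative shift ($12,097 to $16,241 per QALY) the study reports.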

  11. Group theoretic approaches to nuclear and hadronic collective motion

    International Nuclear Information System (INIS)

    Biedenharn, L.C.

    1982-01-01

    Three approaches to nuclear and hadronic collective motion are reviewed, compared and contrasted: the standard symmetry approach as typified by the Interacting Boson Model, the kinematic symmetry group approach of Gell-Mann and Tomonaga, and the recent direct construction by Buck. 50 references

  12. Scientific-theoretical research approach to practical theology in ...

    African Journals Online (AJOL)

    All of them work with practical theological hermeneutics. The basic hermeneutic approach of Daniël Louw is widened with an integrated approach by Richard R. Osmer in which practical theology as a hermeneutic discipline also includes the empirical aspect which the action theory approach has contributed to the ...

  14. THEORETICAL AND METHODOLOGICAL APPROACHES TO REGIONAL COMPETITION INVESTIGATION

    Directory of Open Access Journals (Sweden)

    A.I. Tatarkin

    2006-03-01

    Full Text Available The article is dedicated to theoretical and methodological issues in the investigation of regional economic competitiveness. The economic essence of regional competitiveness is analyzed and its definition is given. The factors that determine competitive relations at the meso and macro levels are substantiated. The basic differences between world-economic and inter-regional relations are formulated. The specific features of globalization processes as a form of competitive struggle are considered.

  15. Representing electrons a biographical approach to theoretical entities

    CERN Document Server

    Arabatzis, Theodore

    2006-01-01

    Both a history and a metahistory, Representing Electrons focuses on the development of various theoretical representations of electrons from the late 1890s to 1925 and the methodological problems associated with writing about unobservable scientific entities. Using the electron, or rather its representation, as a historical actor, Theodore Arabatzis illustrates the emergence and gradual consolidation of its representation in physics, its career throughout old quantum theory, and its appropriation and reinterpretation by chemists. As Arabatzis develops this novel biographical

  16. We need theoretical physics approaches to study living systems

    Science.gov (United States)

    Blagoev, Krastan B.; Shukla, Kamal; Levine, Herbert

    2013-08-01

    Living systems, as created initially by the transition from assemblies of large molecules to self-reproducing information-rich cells, have for centuries been studied via the empirical toolkit of biology. This has been a highly successful enterprise, bringing us from the vague non-scientific notions of vitalism to the modern appreciation of the biophysical and biochemical bases of life. Yet, the truly mind-boggling complexity of even the simplest self-sufficient cells, let alone the emergence of multicellular organisms, of brain and consciousness, and of ecological communities and human civilizations, calls out for a complementary approach. In this editorial, we propose that theoretical physics can play an essential role in making sense of living matter. When faced with a highly complex system, a physicist builds simplified models. Quoting Philip W Anderson's Nobel prize address, 'the art of model-building is the exclusion of real but irrelevant parts of the problem and entails hazards for the builder and the reader. The builder may leave out something genuinely relevant and the reader, armed with too sophisticated an experimental probe, may take literally a schematized model. Very often such a simplified model throws more light on the real working of nature....' In his formulation, the job of a theorist is to get at the crux of the system by ignoring details and yet to find a testable consequence of the resulting simple picture. This is rather different from the predilection of the applied mathematician, who wants to include all the known details in the hope of a quantitative simulacrum of reality. Such efforts may be practically useful, but do not usually lead to increased understanding. To illustrate how this works, we can look at a non-living example of complex behavior that was afforded by spatiotemporal patterning in the Belousov-Zhabotinsky reaction [1]. 
Physicists who worked on this system did not attempt to determine all the relevant chemical intermediates

  17. Strength of wood versus rate of testing - A theoretical approach

    DEFF Research Database (Denmark)

    Nielsen, Lauge Fuglsang

    2007-01-01

    Strength of wood is normally measured in ramp load experiments. Experience shows that strength increases with increasing rate of testing. This feature is considered theoretically in this paper. It is shown that the influence of testing rate is a phenomenon which depends on the quality of the considered wood. Low-quality wood shows a lesser influence of testing rate. This observation agrees with the well-known statement made by Borg Madsen that weak wood subjected to a constant load has a longer lifetime than strong wood. In general, the influence of testing rate on strength increases ...

  18. Theoretical approaches to digital services and digital democracy

    DEFF Research Database (Denmark)

    Hoff, Jens Villiam; Scheele, Christian Elling

    2014-01-01

    The purpose of this paper is to develop a theoretical framework which can be used in the analysis of all types of (political-administrative) web applications. Through a discussion and criticism of the social construction of technology (SCOT), an earlier version of this model based on new medium theory ... are translated into specific (political-administrative) practices, and how these practices are produced through the interplay between discourses, actors and technology. However, the new version places practices more firmly at the centre of the model, as practices, following Reckwitz, are seen as the enactment ...

  19. A Representation-Theoretic Approach to Reversible Computation with Applications

    DEFF Research Database (Denmark)

    Maniotis, Andreas Milton

    Reversible computing is a sub-discipline of computer science that helps to understand the foundations of the interplay between physics, algebra, and logic in the context of computation. Its subjects of study are computational devices and abstract models of computation that satisfy the constraint of information conservation. Such machine models, which are known as reversible models of computation, have been examined both from a theoretical perspective and from an engineering perspective. While a bundle of many isolated successful findings and applications concerning reversible computing exists, there is still no uniform and consistent theory that is general in the sense of giving a model-independent account of the field.

  20. The aggregation of climate change damages. A welfare theoretic approach

    International Nuclear Information System (INIS)

    Fankhauser, S.; Pearce, D.W.; Tol, R.S.J.

    1997-01-01

    The economic value of environmental goods is commonly determined using the concepts of willingness to pay (WTP) or willingness to accept (WTA). However, the WTP/WTA observed in different countries (or between individuals) will differ according to socio-economic characteristics, in particular income. This notion of differentiated values for otherwise identical goods (say, a given reduction in mortality risk) has been criticized as unethical, most recently in the context of the 'social cost' chapter of the IPCC Second Assessment Report. These critics argue that, being a function of income, WTP/WTA estimates reflect the unfairness in the current income distribution, and for equity reasons uniform per-unit values should therefore be applied across individuals and countries. This paper analyses the role of equity in the aggregation of climate change damage estimates, using basic tools of welfare economics. It shows one way in which WTP/WTA estimates can be corrected in aggregation if the underlying income distribution is considered unfair. It proposes that in the aggregation process individual estimates be weighted with an equity factor derived from the social welfare and utility functions. Equity weighting can significantly increase aggregate (global) damage figures, although some specifications of weighting functions also imply reduced estimates. The paper also shows that while the postulate of uniform per-unit values is compatible with a wide range of 'reasonable' utility and welfare specifications, there are also cases where the common-value notion is not compatible with defensible welfare concepts. 3 tabs., 32 refs.

  1. The production of scientific videos: a theoretical approach

    Directory of Open Access Journals (Sweden)

    Carlos Ernesto Gavilondo Rodriguez

    2016-12-01

    Full Text Available The article presents the results of theoretical research on the production of scientific videos and its application to the teaching-learning process in schools in the city of Guayaquil, Ecuador. It falls within the audiovisual production and communication research line (creation of scientific videos) of the Communication major with a concentration in audiovisual and multimedia production at the Salesian Polytechnic University. Writing the article required key terms that later guided data collection: audiovisual production, understood as the production of content for audiovisual media; audiovisual communication, recognized as the process in which messages are exchanged through an audible and/or visual system; and scientific video, which uses audiovisual resources to obtain relevant and reliable information. As part of the theoretical results, a methodological proposal for video production for educational purposes is presented. In conclusion, it is established, first, that communication research in recent times shows that current social relations constitute a fertile context of possibilities for education to generate meeting points between the everyday world and knowledge. Another finding validated as part of the investigation is that the teachers surveyed use the potential of audiovisual media and, supported by them, deploy alternatives for their use. 

  2. Field-theoretic approach to fluctuation effects in neural networks

    International Nuclear Information System (INIS)

    Buice, Michael A.; Cowan, Jack D.

    2007-01-01

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginzburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.
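
    The mean-field limit mentioned above can be illustrated with a minimal Wilson-Cowan-style rate equation, da/dt = (−a + f(wa + I))/τ; the sigmoidal gain, coupling, and input used here are assumptions for the sketch, not parameters from the paper:

```python
import math

def f(x):
    """Sigmoidal firing-rate (gain) function -- an assumed standard form."""
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(a0=0.1, w=2.0, I=-1.0, tau=1.0, dt=0.01, steps=2000):
    """Forward-Euler integration of da/dt = (-a + f(w*a + I)) / tau."""
    a = a0
    for _ in range(steps):
        a += (dt / tau) * (-a + f(w * a + I))
    return a

a_star = wilson_cowan()   # relaxes to the fixed point a = f(w*a + I) = 0.5
```

    Fluctuation corrections of the kind the field-theoretic expansion supplies would appear as systematic terms around this deterministic trajectory.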

  3. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, where we combine ideas that lead to very fast estimation procedures with another approach called the zero-variance approximation. Together these ideas produce a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error property. Some examples illustrate the results.
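
    To make the rare-event difficulty concrete, here is a generic importance-sampling estimator for a tail probability; this only illustrates variance reduction under a change of measure, not the zero-variance scheme of the talk itself, and the distributions and tilt are assumptions for the sketch:

```python
import math, random

def is_estimate(t, lam=None, n=100_000, seed=1):
    """Estimate p = P(X > t) for X ~ Exp(1) by sampling X ~ Exp(lam),
    lam < 1, and reweighting by the likelihood ratio f(x)/g(x)."""
    random.seed(seed)
    lam = lam if lam is not None else 1.0 / t      # heuristic tilt
    total = 0.0
    for _ in range(n):
        x = random.expovariate(lam)
        if x > t:
            # f(x)/g(x) = exp(-x) / (lam * exp(-lam*x))
            total += math.exp((lam - 1.0) * x) / lam
    return total / n

p_hat = is_estimate(t=20.0)   # true value exp(-20) ~ 2.1e-9; crude Monte
                              # Carlo would need ~1e9 samples to see one hit
```

    The relative error of this estimator stays controlled as t grows, which is the kind of robustness (Bounded Relative Error) the talk's method guarantees by construction.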

  4. A new theoretical approach to adsorption desorption behavior of Ga on GaAs surfaces

    Science.gov (United States)

    Kangawa, Y.; Ito, T.; Taguchi, A.; Shiraishi, K.; Ohachi, T.

    2001-11-01

    We propose a new theoretical approach for studying the adsorption-desorption behavior of atoms on semiconductor surfaces. The new theoretical approach, based on ab initio calculations, incorporates the free energy of the gas phase; we can therefore calculate how adsorption and desorption depend on growth temperature and beam equivalent pressure (BEP). The versatility of the new theoretical approach was confirmed by calculating the Ga adsorption-desorption transition temperatures and transition BEPs on the GaAs(0 0 1)-(4×2)β2 Ga-rich surface. This new approach makes it feasible to predict how adsorption and desorption depend on the growth conditions.

  5. A Game-Theoretical Approach to Multimedia Social Networks Security

    Science.gov (United States)

    Liu, Enqiang; Liu, Zengliang; Shao, Fei; Zhang, Zhiyong

    2014-01-01

    Content access and sharing in multimedia social networks (MSNs) mainly rely on access control models and mechanisms. Simple adoption of security policies from the traditional access control model cannot effectively establish a trust relationship among parties. This paper proposes a novel two-party trust architecture (TPTA) applicable to a generic MSN scenario. Under this architecture, security policies are adopted through game-theoretic analyses and decisions. Based on formalized utilities of security policies and security rules, the choice of security policies in content access is described as a game between the content provider and the content requester. By gaming over the combined utility of security policies and its influence on each party's benefits, a Nash equilibrium is reached, that is, an optimal and stable combination of security policies, to establish and enhance trust among stakeholders. PMID:24977226
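
    As a toy illustration of the game-theoretic decision step, the snippet below enumerates pure-strategy Nash equilibria of a 2×2 policy game between provider and requester; the payoff numbers are hypothetical and merely stand in for the paper's formalized policy utilities:

```python
# payoff[i][j] = (provider utility, requester utility) when the provider
# plays policy i (0 = strict, 1 = lenient) and the requester plays j
# (0 = comply, 1 = deviate). Values are illustrative only.
payoff = [[(3, 2), (1, 0)],
          [(2, 3), (0, 1)]]

def pure_nash(payoff):
    """Return all (i, j) where neither player gains by deviating alone."""
    eq = []
    for i in range(2):
        for j in range(2):
            p, r = payoff[i][j]
            if (all(payoff[k][j][0] <= p for k in range(2)) and
                    all(payoff[i][l][1] <= r for l in range(2))):
                eq.append((i, j))
    return eq

eq = pure_nash(payoff)   # [(0, 0)]: strict provider, compliant requester
```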

  6. A theoretical approach for energy saving in industrial steam boilers

    International Nuclear Information System (INIS)

    Sabry, T.I.; Mohamed, N.H.; Elghonimy, A.M.

    1993-01-01

    Optimization of the performance characteristics of industrial steam boilers has been analyzed theoretically. Suitable thermodynamic relations have been utilized to construct a computer model that computes the boiler performance characteristics at different operating parameters (e.g. amount of excess air, fuel type, rate of blowdown, preheating of combustion air and flue gas temperature). The results demonstrate that this computer model can be used successfully in selecting the different operating parameters of the steam boiler at varying loads, considering the most economical operation. Besides, this model can be used to investigate the sensitivity of the performance characteristics to deviations of the boiler operating parameters from their optimum values. It was also found that changing the operating parameters, as well as the type of fuel, affects the boiler's performance characteristics. 3 figs

  7. Theoretical information measurement in nonrelativistic time-dependent approach

    Science.gov (United States)

    Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.

    2018-02-01

    The information-theoretic measures of the time-dependent Schrödinger equation are investigated via the Shannon information entropy, variance and local Fisher quantities. In our calculations, we consider the first two states n = 0,1 and obtain the position Sx (t) and momentum Sp (t) Shannon entropies as well as the Fisher information Ix (t) in position and Ip (t) in momentum space. Using the Fourier-transformed wave function, we obtain the results in momentum space. Some interesting features of the information entropy densities ρs (x,t) and γs (p,t), as well as the probability densities ρ (x,t) and γ (p,t) for time-dependent states are demonstrated. We establish a general relation between variance and Fisher information. The Bialynicki-Birula-Mycielski inequality is tested and verified for the states n = 0,1.
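
    The entropic uncertainty relation tested in the paper, S_x + S_p ≥ 1 + ln π (with ħ = 1), can be checked numerically for a Gaussian state, which saturates the bound; the width below is an arbitrary choice for the sketch:

```python
import math

def shannon_entropy(density, xs, dx):
    """S = -integral of rho*ln(rho), by simple Riemann summation."""
    return -sum(density(x) * math.log(density(x)) for x in xs) * dx

sigma = 0.7                          # position-space width (arbitrary)
sig_p = 1.0 / (2.0 * sigma)          # conjugate momentum width, hbar = 1

rho = lambda x: math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
gam = lambda p: math.exp(-p * p / (2 * sig_p ** 2)) / (sig_p * math.sqrt(2 * math.pi))

dx = 0.001
grid = [i * dx for i in range(-10000, 10001)]
S_x = shannon_entropy(rho, grid, dx)
S_p = shannon_entropy(gam, grid, dx)

bound = 1.0 + math.log(math.pi)      # Bialynicki-Birula-Mycielski bound
# for a Gaussian, S_x + S_p equals the bound up to quadrature error
```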

  8. A game-theoretic approach to donor kidney sharing.

    Science.gov (United States)

    O'Brien, B J

    1988-01-01

    Graft survival in renal transplantation is a function, amongst other things, of the degree of human leukocyte antigen (HLA) tissue matching achieved between donor and recipient. Yet a donor procured at centre A might match a transplant candidate at centre B and vice versa. This raises the question of whether, and under what circumstances, surgeons will offer and exchange donor kidneys and gain from such trade in terms of graft survival. We analyse the problem in a game-theoretic framework where the choice of strategy 'to offer or not?' is evaluated in the context of the uncertainty of reciprocation by the other player(s) in the game. The equilibrium solution to a number of variations of the game is predicted to be non-cooperation, resulting in collectively sub-optimal graft survival rates. Some policy options for improving cooperation are considered, including exchange incentives and coercive measures.
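
    The predicted non-cooperative equilibrium has the structure of a prisoner's dilemma, which can be made explicit with hypothetical graft-survival payoffs (the numbers are illustrative, not from the paper):

```python
OFFER, WITHHOLD = 0, 1
# payoff[a][b] = expected graft-survival gain to centre A when A plays a
# and centre B plays b. Hypothetical values with the dilemma's ordering:
# free-riding (4) > mutual offering (3) > mutual withholding (1) >
# being the sole offerer (0).
payoff = [[3, 0],
          [4, 1]]

def dominant_strategy(payoff):
    """Return the strategy that is best against every reply, if one exists."""
    if all(payoff[WITHHOLD][b] >= payoff[OFFER][b] for b in (OFFER, WITHHOLD)):
        return WITHHOLD
    if all(payoff[OFFER][b] >= payoff[WITHHOLD][b] for b in (OFFER, WITHHOLD)):
        return OFFER
    return None

best = dominant_strategy(payoff)   # WITHHOLD: the collectively sub-optimal
                                   # equilibrium the abstract predicts
```

    Exchange incentives of the kind the paper considers amount to changing these payoffs so that offering becomes the dominant strategy.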

  9. A theoretical approach on controlling agricultural pest by biological controls.

    Science.gov (United States)

    Mondal, Prasanta Kumar; Jana, Soovoojeet; Kar, T K

    2014-03-01

    In this paper we propose and analyze a prey-predator type dynamical system for pest control, where the prey population is treated as the pest. We consider two classes of pest, namely susceptible and infected, and the predator population is the natural enemy of the pest. We also consider an average delay in both predation rates, i.e. predation of the susceptible and of the infected pest. Considering a subsystem of the original system in the absence of infection, we analyze the existence of all possible non-negative equilibria and their stability criteria for both the subsystem and the original system. We present the conditions for transcritical bifurcation and Hopf bifurcation in the disease-free system. The theoretical evaluations are demonstrated through numerical simulations.
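
    A minimal numerical sketch of such a susceptible-infected pest system with a predator follows; the functional forms and parameters are generic assumptions, and the paper's average predation delays are omitted for brevity:

```python
def simulate(S=40.0, I=5.0, P=2.0, dt=0.001, steps=50_000,
             r=1.0, K=100.0, beta=0.02, a1=0.02, a2=0.03, c=0.01, d=0.2):
    """Forward-Euler integration of a susceptible (S) / infected (I) pest
    population with a predator (P); logistic pest growth to capacity K."""
    for _ in range(steps):
        dS = r * S * (1 - (S + I) / K) - beta * S * I - a1 * S * P
        dI = beta * S * I - a2 * I * P
        dP = c * (a1 * S + a2 * I) * P - d * P
        S, I, P = S + dS * dt, I + dI * dt, P + dP * dt
    return S, I, P

S, I, P = simulate()
# with these rates the predator's intake never covers its death rate d,
# so P decays -- the kind of equilibrium/stability question the paper
# treats analytically is explored here numerically
```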

  10. A Theoretical Approach to Engineering a New Enzyme

    International Nuclear Information System (INIS)

    Anderson, Greg; Gomatam, Ravi; Behera, Raghu N.

    2016-01-01

    Density functional theory, a subfield of quantum mechanics (QM), in combination with molecular mechanics (MM) has opened the way to engineering new artificial enzymes. Herein, we report theoretical calculations done using QM/MM to examine whether the regioselectivity and rate of chlorination of the enzyme chloroperoxidase can be improved by replacing the vanadium of this enzyme with niobium through dialysis. Our calculations show that a niobium-substituted chloroperoxidase will be able to enter the initial steps of the catalytic cycle for chlorination. Although the protonation state of the niobium-substituted enzyme is calculated to be different from that of the natural vanadium enzyme, our calculations show that the catalytic cycle can still proceed forward. Using natural bond orbitals, we analyse the electronic differences between the niobium-substituted enzyme and the natural enzyme. We conclude by briefly examining how good a model QM/MM provides for understanding the mechanism of catalysis of chloroperoxidase. (paper)

  11. Laser debonding of ceramic orthodontic brackets: a theoretical approach

    Science.gov (United States)

    Kearney, Kristine L.; Marangoni, Roy D.; Rickabaugh, Jeff L.

    1992-06-01

    Ceramic brackets are an esthetic substitute for conventional stainless steel brackets in orthodontic patients. However, ceramic brackets are more brittle and have higher bond strengths which can lead to bracket breakage and enamel damage during debonding. It has been demonstrated that various lasers can facilitate ceramic bracket removal. One mechanism with the laser is through the softening of the bracket adhesive. The high energy density from the laser on the bracket and adhesive can have a resultant deleterious thermal effect on the pulp of the tooth which may lead to pulpal death. A theoretical computer model of bracket, adhesive, enamel and dentin has been generated for predicting heat flow through this system. Heat fluxes at varying intensities and modes have been input into the program and the resultant temperatures at various points or nodes were determined. Further pursuit should lead to optimum parameters for laser debonding which would have minimal effects on the pulp.

  12. Tourism and international business: The theoretical approach and practical experiences

    Directory of Open Access Journals (Sweden)

    Jovičić Dobrica

    2011-01-01

    Full Text Available The paper discusses the relationships between tourism and international business. The research is based upon combining various theoretical concepts, significant empirical experiences and the authors' own views. The key conclusion of the paper is that, despite partial progress in understanding tourism businesses, the study of the relationship between tourism and international business needs additional stimulus; in other words, more complete research in the related domains is needed in the future. Any understanding of tourism is inadequate without appreciating the contributions that international business might bring, yet at the same time international business is incomplete in its coverage of international trade unless tourism is considered. The consumption-driven agenda of much tourism research has been favoured over supply-side discourses of the production process itself. That is why the paper focuses on the role of major transnational companies, which set the trends that other types of firms in the tourism sector follow.

  13. Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue

    Science.gov (United States)

    González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.

    2013-04-01

    Impedance measurements based on magnetic induction for breast cancer detection have been proposed in several studies. This study evaluates, theoretically and experimentally, the use of a non-invasive technique based on magnetic induction for detecting patho-physiological conditions in breast cancer tissue associated with its volumetric electrical conductivity changes, through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model, measured with the circular coils as inductor and sensor elements. Experimentally, five female volunteer patients with infiltrating ductal carcinoma, previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army, were measured with an experimental inductive spectrometer and an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01 MHz in both theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated with cancer by non-invasive monitoring. Further complementary studies are warranted to confirm the observations.

  14. A filtering approach to edge preserving MAP estimation of images.

    Science.gov (United States)

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining and refining this segmentation is described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means of dealing with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and inverse filtering, depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
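
    The classical Wiener filter underlying the method can be sketched in one dimension; the blur kernel, SNR, and naive DFT below are assumptions chosen to keep the example self-contained (the paper applies the filter per statistically independent image region):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT -- adequate for a short demo signal."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def wiener_deconvolve(y, h, snr):
    """W(k) = conj(H)/(|H|^2 + 1/SNR): inverse filter regularized by noise."""
    Y, H = dft(y), dft(h)
    X = [Hk.conjugate() * Yk / (abs(Hk) ** 2 + 1.0 / snr)
         for Hk, Yk in zip(H, Y)]
    return idft(X)

h = [0.6, 0.2, 0, 0, 0, 0, 0, 0.2]        # circular 3-tap blur kernel
x = [0.0] * 8; x[3] = 1.0                  # unknown sharp signal: an impulse
y = idft([Hk * Xk for Hk, Xk in zip(dft(h), dft(x))])   # blurred observation
x_hat = wiener_deconvolve(y, h, snr=1e6)   # ~ recovers the impulse
```

    As the abstract notes, the regularization term 1/SNR is what separates Wiener filtering from plain inverse filtering (the snr → ∞ limit).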

  15. Social self-esteem: theoretical and methodological approaches to research

    Directory of Open Access Journals (Sweden)

    Usova E.N.

    2017-09-01

    Full Text Available The case for analyzing the phenomenon of the social well-being of patients with chronic diseases from the standpoint of the sociology of medicine is substantiated. Modern approaches to the study of social well-being within sociological and psychological frameworks are singled out. The importance of studying the levels of social well-being (institutional and individual) is presented to explain the behavioral strategies an individual chooses in the situation of illness. The vectors of operationalization of the category of social well-being within the sociology of medicine are indicated.

  16. Enhanced diffusion under alpha self-irradiation in spent nuclear fuel: Theoretical approaches

    International Nuclear Information System (INIS)

    Ferry, Cecile; Lovera, Patrick; Poinssot, Christophe; Garcia, Philippe

    2005-01-01

    Various theoretical approaches have been developed in order to estimate the enhanced diffusion coefficient of fission products under alpha self-irradiation in spent nuclear fuel. These simplified models calculate the effects of alpha particles and recoil atoms on the mobility of uranium atoms in UO₂. They lead to a diffusion coefficient which is proportional to the volume alpha activity, with a proportionality factor of about 10⁻⁴⁴ m⁵. However, the same models applied to fission lead to a radiation-enhanced diffusion coefficient which is approximately two orders of magnitude lower than values reported in the literature for U and Pu. Other models are based on an extrapolation of radiation-enhanced diffusion measured either in reactors or under heavy ion bombardment. These models lead to a proportionality factor between the alpha self-irradiation enhanced diffusion coefficient and the volume alpha activity of 2 × 10⁻⁴¹ m⁵

  17. Photodynamic therapy: Theoretical and experimental approaches to dosimetry

    Science.gov (United States)

    Wang, Ken Kang-Hsin

    Singlet oxygen (¹O₂) is the major cytotoxic species generated during photodynamic therapy (PDT), and ¹O₂ reactions with biological targets define the photodynamic dose at the most fundamental level. We have developed a theoretical model for rigorously describing the spatial and temporal dynamics of oxygen (³O₂) consumption and transport and microscopic ¹O₂ dose deposition during PDT in vivo. Using experimentally established physiological and photophysical parameters, the mathematical model allows computation of the dynamic variation of hemoglobin-³O₂ saturation within vessels, irreversible photosensitizer degradation due to photobleaching, therapy-induced blood flow decrease and the microscopic distributions of ³O₂ and ¹O₂ dose deposition under various irradiation conditions. mTHPC, a promising photosensitizer for PDT, is approved in Europe for the palliative treatment of head and neck cancer. Using the theoretical model and informed by intratumor sensitizer concentrations and distributions, we calculated photodynamic dose depositions for mTHPC-PDT. Our results demonstrate that the ¹O₂ dose to the tumor volume does not track even qualitatively with long-term tumor responses. Thus, in this evaluation of mTHPC-PDT, any PDT dose metric that is proportional to singlet oxygen creation and/or deposition would fail to predict the tumor response. In situations like this one, other reporters of biological response to therapy would be necessary. In addition to the case study of mTHPC-PDT, we also use the mathematical model to simulate clinical photobleaching data, informed by a possible blood flow reduction during treatment. In a recently completed clinical trial at Roswell Park Cancer Institute, patients with superficial basal cell carcinoma received topical application of 5-aminolevulinic acid (ALA) and were irradiated with 633 nm light at 10-150 mW cm⁻². Protoporphyrin IX (PpIX) photobleaching in the lesion and the adjacent perilesion normal margin was monitored by

  18. THEORETICAL APPROACHES TO THE DEFINITION OF THE "INFORMATION RESOURCE"

    OpenAIRE

    I. Netreba

    2014-01-01

    Existing approaches to determining the nature of the category "information resource" are detailed and systematized. The relationships between the categories "information resource", "information technology" and "information management system" are revealed. The importance of information resources for the production process at the enterprise is determined.

  19. Game-theoretic approaches to optimal risk sharing

    NARCIS (Netherlands)

    Boonen, T.J.

    2014-01-01

    This Ph.D. thesis studies optimal risk capital allocation and optimal risk sharing. The first chapter deals with the problem of optimally allocating risk capital across divisions within a financial institution. To do so, an asymptotic approach is used to generalize the well-studied Aumann-Shapley

  20. Fuzzy set theoretic approach to fault tree analysis | Tyagi ...

    African Journals Online (AJOL)

    This approach can be widely used to improve the reliability and to reduce the operating cost of a system. The proposed techniques are discussed and illustrated by taking an example of a nuclear power plant. Keywords: Fault tree, Triangular and Trapezoidal fuzzy number, Fuzzy importance, Ranking of fuzzy numbers ...

  1. Towards a capability approach to child growth : A theoretical framework

    NARCIS (Netherlands)

    Haisma, Hinke; Yousefzadeh Faal Daghati, Sepideh; Boele Van Hensbroek, Pieter

    Child malnutrition is an important cause of under-five mortality and morbidity around the globe. Despite the partial success of (inter)national efforts to reduce child mortality, under-five mortality rates continue to be high. The multidimensional approaches of the Sustainable Development Goals may

  2. Hydrogen storage in lithium hydride: A theoretical approach

    Science.gov (United States)

    Banger, Suman; Nayak, Vikas; Verma, U. P.

    2018-04-01

    First-principles calculations have been carried out to analyze the structural stability of lithium hydride (LiH) in the NaCl phase using the full potential linearized augmented plane wave (FP-LAPW) method within the framework of density functional theory (DFT). Calculations have been extended to the physisorbed hydrogen compounds LiH·H2, LiH·3H2 and LiH·4H2. The obtained results are discussed in the paper. The results for LiH are in excellent agreement with earlier reported data. The obtained direct energy band gap of LiH is 3.0 eV, which is in excellent agreement with the earlier reported theoretical band gap. The electronic band structure plots of the hydrogen-adsorbed compounds show metallic behavior. The elastic constants, anisotropy factor, shear modulus, Young's modulus, Poisson's ratio and cohesive energies of all the compounds are calculated. Calculations of the optical spectra, such as the real and imaginary parts of the dielectric function, optical reflectivity, absorption coefficient, optical conductivity, refractive index, extinction coefficient and electron energy loss, are performed for the energy range 0-15 eV. The obtained results for LiH·H2, LiH·3H2 and LiH·4H2 are reported for the first time. This study has been made in search of materials for hydrogen storage. It is concluded that LiH is a promising material for hydrogen storage.

  3. Exploring Job Satisfaction of Nursing Faculty: Theoretical Approaches.

    Science.gov (United States)

    Wang, Yingchen; Liesveld, Judy

    2015-01-01

    The Future of Nursing report identified the shortage of nursing faculty as one of the barriers to nursing education. In light of this, it is becoming increasingly important to understand the work-life of nursing faculty. The current research focused on job satisfaction of nursing faculty from four theoretical perspectives: human capital theory, which emphasizes the expected monetary and nonmonetary returns for any career choices; structural theory, which emphasizes the impact of institutional features on job satisfaction; the positive extrinsic environment of self-determination theory, which asserts that a positive extrinsic environment promotes competency and effective outcomes at work; and psychological theory, which emphasizes the proposed relationship between job performance and satisfaction. In addition to the measures for human capital theory, institutional variables (from structural theory and self-determination theory), and productivity measures (from psychological theory), the authors also selected sets of variables for personal characteristics to investigate their effects on job satisfaction. The results indicated that variables related to human capital theory, especially salary, contributed the most to job satisfaction, followed by those related to institutional variables. Personal variables and productivity variables as a whole contributed as well. The only other variable with marginal significance was faculty's perception of institutional support for teaching. Published by Elsevier Inc.

  4. A Game Theoretic Approach to Cyber Attack Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Peng Liu

    2005-11-28

    The area investigated by this project is cyber attack prediction. With a focus on correlation-based prediction, current attack prediction methodologies overlook the strategic nature of cyber attack-defense scenarios. As a result, current cyber attack prediction methodologies are very limited in predicting strategic behaviors of attackers in enforcing nontrivial cyber attacks such as DDoS attacks, and may result in low accuracy in correlation-based predictions. This project develops a game theoretic framework for cyber attack prediction, where an automatic game-theory-based attack prediction method is proposed. Being able to quantitatively predict the likelihood of (sequences of) attack actions, our attack prediction methodology can predict fine-grained strategic behaviors of attackers and may greatly improve the accuracy of correlation-based prediction. To the best of our knowledge, this project develops the first comprehensive framework for incentive-based modeling and inference of attack intent, objectives, and strategies; and this project develops the first method that can predict fine-grained strategic behaviors of attackers. The significance of this research and the benefit to the public can be demonstrated to a certain extent by (a) the severe threat of cyber attacks to the critical infrastructures of the nation, including many infrastructures overseen by the Department of Energy, (b) the importance of cyber security to critical infrastructure protection, and (c) the importance of cyber attack prediction to achieving cyber security.

  5. A NETWORK-THEORETICAL APPROACH TO UNDERSTANDING INTERSTELLAR CHEMISTRY

    International Nuclear Information System (INIS)

    Jolley, Craig C.; Douglas, Trevor

    2010-01-01

    Recent years have seen dramatic advances in computational models of chemical processes in the interstellar medium (ISM). Typically, these models have been used to calculate changes in chemical abundances with time; the calculated abundances can then be compared with chemical abundances derived from observations. In this study, the output from an astrochemical simulation has been used to generate directed graphs with weighted edges; these have been analyzed with the tools of network theory to uncover whole-network properties of reaction systems in dark molecular clouds. The results allow the development of a model in which global network properties can be rationalized in terms of the basic physical properties of the reaction system. The ISM network exhibits an exponential degree distribution, which is likely to be a generic feature of chemical networks involving a broad range of reaction rate constants. While species abundances span several orders of magnitude, the formation and destruction rates for most species are approximately balanced; departures from this rule indicate species (such as CO) that play a critical role in shaping the dynamics of the system. Future theoretical or observational studies focusing on individual molecular species will be able to situate them in terms of their role in the complete system or quantify the degree to which they deviate from the typical system behavior.

  6. Conflicts about nuclear power safety: a decision theoretic approach

    International Nuclear Information System (INIS)

    Winterfeldt, D.V.; Rios, M.

    1980-01-01

    A series of psychological studies indicate that people's judgements of risks from energy production in general and nuclear power plants in particular deviate from technical and statistical estimates because social and psychological variables influence people's risk perception. After reviewing these studies a decision analytic methodology is outlined which incorporates such social and psychological variables in a formal analysis of the risks and benefits of nuclear energy production. The methodology is intended to identify groups with differing risk-benefit perceptions and to elicit and quantify their values and concerns. Such group and value structures are presented for the problem of choosing between a nuclear plant, a coal plant, and a conservation strategy

  7. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    Science.gov (United States)

    Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.

    2012-02-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency.
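
    As one concrete piece of the calibration pipeline, hard-iron (constant offset) estimation is commonly posed as a linear least-squares sphere fit; the snippet below is a generic sketch with synthetic data, not the game-theoretically tuned estimator of the paper:

```python
import math

def sphere_fit(samples):
    """Fit |m - c|^2 = R^2. Linearizing gives 2*m.c + (R^2 - |c|^2) = |m|^2,
    a linear system in (cx, cy, cz, k); solve its normal equations."""
    A, y = [], []
    for (mx, my, mz) in samples:
        A.append([2 * mx, 2 * my, 2 * mz, 1.0])
        y.append(mx * mx + my * my + mz * mz)
    M = [[sum(row[r] * row[c] for row in A) for c in range(4)] for r in range(4)]
    v = [sum(row[r] * yi for row, yi in zip(A, y)) for r in range(4)]
    for col in range(4):                         # Gaussian elimination
        piv = max(range(col, 4), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 4):
            fct = M[r][col] / M[col][col]
            M[r] = [a - fct * b for a, b in zip(M[r], M[col])]
            v[r] -= fct * v[col]
    p = [0.0] * 4
    for r in (3, 2, 1, 0):
        p[r] = (v[r] - sum(M[r][c] * p[c] for c in range(r + 1, 4))) / M[r][r]
    cx, cy, cz, k = p
    return (cx, cy, cz), math.sqrt(k + cx * cx + cy * cy + cz * cz)

# synthetic data: true field 50 uT on a sphere offset by bias (12, -7, 3)
pts = [(12 + 50 * math.sin(ph) * math.cos(th),
        -7 + 50 * math.sin(ph) * math.sin(th),
         3 + 50 * math.cos(ph))
       for th in (math.pi * i / 6 for i in range(12))
       for ph in (math.pi * (j + 0.5) / 6 for j in range(6))]
bias, radius = sphere_fit(pts)   # bias ~ (12, -7, 3), radius ~ 50
```

    Good 3D coverage of the sphere (the "best maneuvers" question in the paper) is exactly what keeps this fit well conditioned.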

  8. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    International Nuclear Information System (INIS)

    Siddharth, S; Ali, A S; El-Sheimy, N; Goodall, C L; Syed, Z F

    2012-01-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency. (paper)

  9. Success Determination by Innovation: A Theoretical Approach in Marketing

    OpenAIRE

    Raj Kumar Gautam

    2012-01-01

    The paper aims to identify the main issues in marketing that need the immediate attention of marketers. The importance of innovation in marketing is highlighted, and the marketing mix is related to innovative and creative ideas. The study is based on secondary data; various research papers and articles have been studied to develop an innovative approach to marketing. Marketing innovative ideas relating to business lead generation, product, price, distribution, pro...

  10. Mapping between Classical Risk Management and Game Theoretical Approaches

    OpenAIRE

    Rajbhandari , Lisa; Snekkenes , Einar ,

    2011-01-01

    Part 2: Work in Progress; International audience; In a typical classical risk assessment approach, the probabilities are usually guessed, and not much guidance is provided on how to get them right. When coming up with probabilities, people are generally not well calibrated, and history may not always be a good teacher. Hence, in this paper, we explain how game theory can be integrated into classical risk management. Game theory puts emphasis on collecting representative data on h...

  11. Theoretical approach of the seismic behaviour of a LMFBR core

    International Nuclear Information System (INIS)

    Parent, J.M.

    1983-01-01

    The vibratory behaviour of two one-degree-of-freedom mechanical systems with known coupling and damping features is investigated by transposition to two inductively coupled electrical circuits. The approach proceeds in two steps: the first supposes that both circuits are identical, while the second considers the general case where they differ. It is shown that one or two resonant frequencies may occur, depending on the coupling and damping conditions

  12. A measure theoretic approach to traffic flow optimization on networks

    OpenAIRE

    Cacace, Simone; Camilli, Fabio; De Maio, Raul; Tosin, Andrea

    2018-01-01

    We consider a class of optimal control problems for measure-valued nonlinear transport equations describing traffic flow problems on networks. The objective is to minimise/maximise macroscopic quantities, such as traffic volume or average speed, by controlling a few agents, for example smart traffic lights and automated cars. The measure-theoretic approach allows us to study local and nonlocal driver interactions in the same setting and to consider the control variables as additional measures interacting ...

  13. The Economic Security of Bank: Theoretical Basis and Systemic Approach

    Directory of Open Access Journals (Sweden)

    Gavlovska Nataliia I.

    2017-07-01

    The article analyzes the existing approaches to interpreting the category of «economic security of bank». The authors propose their own definition of the concept of «economic security of bank», to be understood as the condition of protecting the vital interests of the bank, achieved by harmonizing relationships with the entities of external influence and optimizing internal system processes, thus enabling efficient functioning as well as development by means of an adaptation mechanism. A number of approaches to understanding the substance of the above concept have been identified and their main characteristics provided. The need to study the specifics of the interaction of banking institutions with the external environment, in the context of interaction between State agents and market actors, is underlined. Features of the formation of the term «system» are defined, and three main groups of approaches to its interpretation are provided. The authors also propose their own definition of the concept of «economic security system of bank», and concrete principles for building an economic security system of bank are provided.

  14. Theoretical approach of complex DNA lesions: from formation to repair

    International Nuclear Information System (INIS)

    Bignon, Emmanuelle

    2017-01-01

    This thesis focuses on the theoretical modelling of DNA damage, from formation to repair. Several projects were carried out in this framework, which can be sorted into three parts. On the one hand, we studied complex DNA reactivity. This included a study of the mechanisms of formation of 8-oxo-7,8-dihydroguanine (8oxoG), a project on the endogenous photosensitizer features of the UV-induced pyrimidine 6-4 pyrimidone (6-4PP), and another on DNA photosensitization by nonsteroidal anti-inflammatory drugs (i.e. ketoprofen and ibuprofen). On the other hand, we investigated mechanical properties of damaged DNA. The structural signature of a DNA lesion is of major importance for its repair; unfortunately, only a few NMR and X-ray structures of such systems are available. To gain insight into their dynamical structure, we investigated a series of complex damages: clustered abasic sites, interstrand cross-links, and the 6-4PP photolesion. Likewise, we studied the interaction modes of DNA with several polyamines, which are well known to interact with the double helix, with the further perspective of modelling DNA-protein cross-links. The third part concerned the study of DNA interactions with repair enzymes. In line with the structural study of clustered abasic sites, we investigated the dynamics of the same system, this time interacting with the APE1 endonuclease. We also studied interactions of the Fpg glycosylase with oligonucleotides containing tandem 8-oxoG lesions on one hand and 8-oxoG - abasic site multiply damaged sites on the other. We thus shed new light on damaged-DNA reactivity, structure and repair, which provides perspectives for biomedicine and for understanding life's mechanisms as we begin to describe nucleosomal DNA. (author)

  15. Theoretical estimates of maximum fields in superconducting resonant radio frequency cavities: stability theory, disorder, and laminates

    Science.gov (United States)

    Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.

    2017-03-01

    Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-Tc materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.
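    The "simple estimates" mentioned above can be sketched numerically. A common rule of thumb (not taken from this paper) is H_sh ≈ 0.84 H_c in the large-Ginzburg-Landau-parameter limit, with H_sh ≈ 1.2 H_c in the low-kappa, niobium-like regime; the critical-field numbers below are illustrative textbook values.

```python
def superheating_field(H_c_mT, kappa):
    """Crude H_sh estimate (mT): large-kappa asymptote H_sh ~ 0.84 H_c,
    with a low-kappa (Nb-like) ballpark of H_sh ~ 1.2 H_c."""
    if kappa >= 10:
        return 0.84 * H_c_mT
    return 1.2 * H_c_mT

# Illustrative thermodynamic critical fields (mT): Nb ~ 200, Nb3Sn ~ 520
print(superheating_field(520, 30))   # Nb3Sn-like material
print(superheating_field(200, 1.5))  # Nb-like material
```

    Such a one-line estimate only brackets the field scale; the anisotropy, disorder and laminate effects discussed in the abstract require the full calculations the authors summarize.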

  16. A representation theoretic approach to the WZW Verlinde formula

    CERN Document Server

    Fuchs, J

    1997-01-01

    By exploring the description of chiral blocks in terms of co-invariants, a proof of the Verlinde formula for WZW models is obtained which is entirely based on the representation theory of affine Lie algebras. In contrast to other proofs of the Verlinde formula, this approach works for all untwisted affine Lie algebras. As a by-product we obtain a homological interpretation of the Verlinde multiplicities, as Euler characteristics of complexes built from invariant tensors of finite-dimensional simple Lie algebras.

  17. Theoretical approach to the magnetocaloric effect with hysteresis

    International Nuclear Information System (INIS)

    Basso, V.; Bertotti, G.; LoBue, M.; Sasso, C.P.

    2005-01-01

    In this paper a thermodynamic model with internal variables is presented and applied to ferromagnetic hysteresis. The out-of-equilibrium Gibbs free energy of a magnetic system is expressed as a function of the internal state of the Preisach model. Expressions for the system entropy and the entropy production are derived. With this approach it is possible to reproduce the characteristic features of the experimentally observed temperature changes (of the order of 10^-4 K around room temperature) induced by the magnetic field along the hysteresis loop performed in iron under adiabatic conditions

  18. Decision-theoretic approaches to non-knowledge in economics

    OpenAIRE

    Svetlova, Ekaterina; van Elst, Henk

    2014-01-01

    We review two strands of conceptual approaches to the formal representation of a decision maker's non-knowledge at the initial stage of a static one-person, one-shot decision problem in economic theory. One focuses on representations of non-knowledge in terms of probability measures over sets of mutually exclusive and exhaustive consequence-relevant states of Nature, the other deals with unawareness of potentially important events by means of sets of states that are less complete than the ful...

  19. Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach

    Directory of Open Access Journals (Sweden)

    David S Cobden

    2010-08-01

    David S Cobden1, Louis W Niessen2, Frans FH Rutten1, W Ken Redekop11Department of Health Policy and Management, Section of Health Economics – Medical Technology Assessment (HE-MTA, Erasmus MC, Erasmus University Rotterdam, The Netherlands; 2Department of International Health, Johns Hopkins University School of Public Health, Johns Hopkins Medical Institutions, Baltimore, MD, USAAims: While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporating adherence in cost-effectiveness analysis.Methods: We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM and oral (OAD medications. Two analyses were performed, one which ignored adherence (analysis 1 and one which incorporated it (analysis 2. Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios.Results: In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness ratio of $12,097 per quality-adjusted life-year (QALY gained versus OAD. Incorporation of adherence resulted in a slightly higher ratio ($16,241/QALY. This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM.Conclusions: Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be considered in health care decision-making. Future work on the impact of adherence on health
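    The two-analysis comparison described above reduces, in its simplest form, to computing an incremental cost-effectiveness ratio with and without an adherence multiplier on the QALY gain. The sketch below uses invented numbers; the paper's Markov model over diabetic health states is far more detailed.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio in $ per QALY gained."""
    return delta_cost / delta_qaly

d_cost = 20000.0   # extra lifetime cost of IDM vs OAD (hypothetical)
d_qaly = 1.5       # extra QALYs assuming full adherence (hypothetical)

# Analysis 1: adherence ignored
print(round(icer(d_cost, d_qaly)))

# Analysis 2: scale the QALY gain by IDM adherence relative to OAD
rel_adherence = 0.80  # hypothetical: oral drugs adhered to better than injections
print(round(icer(d_cost, d_qaly * rel_adherence)))
```

    As in the abstract, shrinking the effectiveness gain of the more costly arm pushes the ratio up, which is the qualitative direction of the $12,097 to $16,241/QALY shift reported.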

  20. Participatory approaches for environmental governance: theoretical justifications and practical effects

    International Nuclear Information System (INIS)

    Van den Hove, Sybille

    2003-01-01

    A key justification for the rapid development of participatory approaches for environment and sustainable development governance stems from the characteristics of environmental issues. Environmental issues - and radioactive waste disposal is a good example - typically present four important physical characteristics: complexity, uncertainty, large temporal and spatial scales, and irreversibility, which all have consequences on what can be called the social characteristics of environmental issues. These include: social complexity and conflicts of interests, transversality, diffuse responsibilities and impacts, no clear division between micro- and macro-levels, and short-term costs of dealing with the issue associated with benefits which might occur only in the long-term. In turn, these physical and social characteristics determine the type of problem-solving processes needed to tackle environmental issues. It appears that the problem-solving processes best suited to confront global environmental issues will be dynamic processes of capacity-building, - aiming at innovative, flexible and adjustable answers, - allowing for the progressive integration of information as it becomes available, and of different value judgements and logics, - involving various actors from different backgrounds and levels. In promoting more democratic practices, these processes additionally should supersede traditional politics and allow co-ordination across different policy areas. It is deemed that participatory approaches have the potential to meet these problem-solving requirements

  1. Theoretical estimation of Photons flow rate Production in quark gluon interaction at high energies

    Science.gov (United States)

    Al-Agealy, Hadi J. M.; Hamza Hussein, Hyder; Mustafa Hussein, Saba

    2018-05-01

    Photons emitted from high-energy collisions in a quark-gluon system are studied theoretically on the basis of color quantum theory. A simple model for photon emission in a quark-gluon system is investigated. In this model, we use quantum considerations that improve the description of the quark system. The photon current rates are estimated for two systems at different fugacity coefficients. We discuss the behavior of the photon rate and the properties of the quark-gluon system at different photon energies within a Boltzmann model. The dependence of the photon rate on the anisotropy coefficient, the strong coupling constant, the photon energy, the color number, the fugacity parameter, the thermal energy and the critical energy of the system is also discussed.

  2. Theoretical triangulation as an approach for revealing the complexity of a classroom discussion

    NARCIS (Netherlands)

    van Drie, J.; Dekker, R.

    2013-01-01

    In this paper we explore the value of theoretical triangulation as a methodological approach for the analysis of classroom interaction. We analyze an excerpt of a whole-class discussion in history from three theoretical perspectives: interactivity of the discourse, conceptual level raising and

  3. Consumer Perception of Competitiveness – Theoretical-Instrumental Approach

    Directory of Open Access Journals (Sweden)

    Duralia Oana

    2016-04-01

    The behaviorist economic approach has made a quantum leap in a relatively short period of time, and studying the relationship between consumer behavior and companies' strategic decisions based on market competitiveness is no longer an unknown area. However, the issue remains topical given that, during the purchase decision process, consumers do not always behave rationally, as they are the only ones who can judge whether the offer of the company, in terms of range, quality, price and auxiliary services, meets their needs or not. In this context, this paper aims to deepen the existing interconnection between the market decisions of the enterprise and consumer behavior, as a standard of measure for the competitiveness of a firm on a certain market.

  4. Group-theoretical approach to relativistic eikonal physics

    Energy Technology Data Exchange (ETDEWEB)

    Leon, J; Quiros, M [Instituto de Estructura de la Materia, C.S.I.C., Madrid (Spain); Departamento de Matematica, Universidad Complutense, Campus de Alcala (Spain)]; Ramirez Mittelbrunn, J [Instituto de Estructura de la Materia, C.S.I.C., Madrid (Spain)]

    1977-09-01

    A contraction of the Poincare group is performed leading to the eikonal approximation. Invariants, one-particle states, spinning particles and some interaction problems are studied with the following results: momenta of ultrarelativistic particles behave as lightlike, the little group being E_2, spin behaves as that of zero-mass particles, helicity being conserved in the presence of interactions. The full eikonal results are rederived for Green's functions, wave functions, etc. The way for computing corrections due to transverse momenta and spin-dependent interactions is outlined. A parallel analysis is made for the infinite-momentum frame, the similarities and differences between this formalism and the eikonal approach being disclosed.

  5. Prospecting theoretical approaches to understand internationalization of creative economy firms

    Directory of Open Access Journals (Sweden)

    Sílvio Luís de Vasconcellos

    2017-12-01

    We argue that the internationalization process of firms in the creative economy has particular aspects that distinguish it from internationalization of firms in traditional economic sectors. We explore ways in which the international business literature might be helpful for understanding how internationalization takes place in firms whose core business is creation of ideas. We conducted a case study using a focus group technique to investigate a creative economy firm specialized in computer graphics. The firm already does business internationally as a producer of electronic mockup models, but is transitioning to the computer-generated video production industry. Our results suggest that behavioral approaches to international business related to entrepreneurship, as well as country origin effects and networks theory could be useful to expanding knowledge about the internationalization process in such firms, in which creativity is a critical resource.

  6. Thermochemical study of cyanopyrazines: Experimental and theoretical approaches

    International Nuclear Information System (INIS)

    Miranda, Margarida S.; Morais, Victor M.F.; Matos, M. Agostinha R.

    2006-01-01

    The standard (p° = 0.1 MPa) molar energy of combustion, at T = 298.15 K, of crystalline 2,3-dicyanopyrazine was measured by static bomb calorimetry in an oxygen atmosphere. The standard molar enthalpy of sublimation, at T = 298.15 K, was obtained by Calvet microcalorimetry, allowing the calculation of the standard molar enthalpy of formation of the compound in the gas phase at T = 298.15 K: Δ_f H_m°(g) = (518.7 ± 3.4) kJ·mol⁻¹. In addition, the geometries of all cyanopyrazines were obtained using density functional theory with the B3LYP functional and two basis sets: 6-31G* and 6-311G**. These calculations were then used for a better understanding of the relation between structure and energetics of the cyanopyrazine systems. They also reproduce the measured standard molar enthalpies of formation with some accuracy and provide estimates of this thermochemical parameter for those compounds that could not be studied experimentally, namely the tri- and tetracyanopyrazines: the strongly electron-withdrawing cyano groups on the pyrazine ring make cyanopyrazines highly destabilized compounds

  7. Mechanisms of Neurofeedback: A Computation-theoretic Approach.

    Science.gov (United States)

    Davelaar, Eddy J

    2018-05-15

    Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting - the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
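    The abstract likens neurofeedback to a search that uses importance sampling to weight representations by their feedback success. The toy sketch below mimics that idea on a discrete space: each hypothetical "representation" has a mean peak-alpha value, samples from a uniform proposal are weighted by whether a noisy observation exceeds the feedback threshold, and the normalized weights play the role of a posterior. All values (means, threshold, noise) are invented, not the paper's striatal model.

```python
import random

random.seed(0)
reps = {"A": 9.5, "B": 10.0, "C": 10.8}   # mean peak-alpha (Hz) per representation
threshold = 10.2                           # feedback threshold (Hz)

weights = {r: 0.0 for r in reps}
n = 2000
for _ in range(n):
    r = random.choice(list(reps))          # uniform proposal over representations
    alpha = random.gauss(reps[r], 0.5)     # noisy EEG observation
    if alpha > threshold:                  # positive feedback received
        weights[r] += 1.0

total = sum(weights.values())
posterior = {r: w / total for r, w in weights.items()}
best = max(posterior, key=posterior.get)
print(best)  # the representation whose alpha clears the threshold most often
```

    Raising or lowering `threshold` reshapes the weighting, which is the threshold-setting effect the simulation in the paper investigates.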

  8. [Systemic inflammation: theoretical and methodological approaches to description of general pathological process model. Part 3. Backgroung for nonsyndromic approach].

    Science.gov (United States)

    Gusev, E Yu; Chereshnev, V A

    2013-01-01

    Theoretical and methodological approaches to the description of systemic inflammation as a general pathological process are discussed. It is shown that there is a need to integrate a wide range of research types in order to develop a model of systemic inflammation.

  9. Field-theoretic approach to gravity in the flat space-time

    Energy Technology Data Exchange (ETDEWEB)

    Cavalleri, G [Centro Informazioni Studi Esperienze, Milan (Italy); Milan Univ. (Italy). Ist. di Fisica); Spinelli, G [Istituto di Matematica del Politecnico di Milano, Milano (Italy)

    1980-01-01

    This paper discusses how the field-theoretical approach to gravity, starting from flat space-time, is broader than the Einstein approach. The flat approach is able to predict the structure of the observable space as a consequence of the behaviour of the particle proper masses. The field equations are formally equal to Einstein's equations without the cosmological term.

  10. Prevention of treatable infectious diseases: A game-theoretic approach.

    Science.gov (United States)

    Jijón, Sofía; Supervie, Virginie; Breban, Romulus

    2017-09-25

    We model outcomes of voluntary prevention using an imperfect vaccine, which confers protection only to a fraction of vaccinees for a limited duration. Our mathematical model combines a single-player game for the individual-level decision to get vaccinated, and a compartmental model for the epidemic dynamics. Mathematical analysis yields a characterization of the effective vaccination coverage as a function of the relative cost of prevention versus treatment; note that cost may involve monetary as well as non-monetary aspects. Three behaviors are possible. First, the relative cost may be too high, so individuals do not get vaccinated. Second, the relative cost may be moderate, such that some individuals get vaccinated and voluntary vaccination alleviates the epidemic. In this case, the vaccination coverage grows steadily with decreasing relative cost of vaccination versus treatment. Unlike previous studies, we find a third case where relative cost is sufficiently low that epidemics may be averted through the use of prevention, even for an imperfect vaccine. However, we also found that disease elimination is only temporary (no equilibrium exists for the individual strategy in this third case) and, with increasing perceived cost of vaccination versus treatment, the situation may be reversed toward the epidemic edge, where the effective reproductive number is 1. Thus, maintaining relative cost sufficiently low will be the main challenge to maintain disease elimination. Furthermore, our model offers insight on vaccine parameters, which are otherwise difficult to estimate. We apply our findings to the epidemiology of measles. Copyright © 2017 Elsevier Ltd. All rights reserved.
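    The epidemic-side arithmetic behind the model can be sketched as follows: with a vaccine that protects a fraction `eff` of vaccinees, coverage p scales the effective reproductive number, and elimination requires R0(1 - eff·p) < 1. This is standard compartmental-model reasoning, not the paper's game-theoretic coupling; the R0 and efficacy numbers are illustrative.

```python
def effective_R(R0, coverage, eff):
    """Effective reproductive number under imperfect vaccination."""
    return R0 * (1.0 - eff * coverage)

def critical_coverage(R0, eff):
    """Coverage needed for elimination, or None if the vaccine is too weak."""
    p = (1.0 - 1.0 / R0) / eff
    return p if p <= 1.0 else None

print(effective_R(15.0, 0.9, 0.95))   # measles-like R0, high uptake: still > 1
print(critical_coverage(15.0, 0.95))  # very high coverage needed
print(critical_coverage(15.0, 0.80))  # None: elimination impossible at this efficacy
```

    The third regime in the abstract corresponds to the relative cost being low enough that voluntary coverage transiently exceeds this critical value, which the game dynamics cannot sustain at equilibrium.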

  11. Theoretical estimates of spherical and chromatic aberration in photoemission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Fitzgerald, J.P.S., E-mail: fit@pdx.edu; Word, R.C.; Könenkamp, R.

    2016-01-15

    We present theoretical estimates of the mean coefficients of spherical and chromatic aberration for low energy photoemission electron microscopy (PEEM). Using simple analytic models, we find that the aberration coefficients depend primarily on the difference between the photon energy and the photoemission threshold, as expected. However, the shape of the photoelectron spectral distribution impacts the coefficients by up to 30%. These estimates should allow more precise correction of aberration in PEEM in experimental situations where the aberration coefficients and precise electron energy distribution cannot be readily measured. - Highlights: • Spherical and chromatic aberration coefficients of the accelerating field in PEEM. • Compact, analytic expressions for coefficients depending on two emission parameters. • Effect of an aperture stop on the distribution is also considered.

  12. Theoretical estimation and validation of radiation field in alkaline hydrolysis plant

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Sanjay; Krishnamohanan, T.; Gopalakrishnan, R.K., E-mail: singhs@barc.gov.in [Radiation Safety Systems Division, Bhabha Atomic Research Centre, Mumbai (India); Anand, S. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai (India); Pancholi, K. C. [Waste Management Division, Bhabha Atomic Research Centre, Mumbai (India)

    2014-07-01

    Spent organic solvent (30% TBP + 70% n-Dodecane) from the reprocessing facility is treated at ETP in the Alkaline Hydrolysis Plant (AHP) and Organic Waste Incineration (ORWIN) Facility. In AHP-ORWIN, there are three horizontal cylindrical tanks of 2.0 m^3 operating capacity used for waste storage and transfer: the Aqueous Waste Tank (AWT), the Waste Receiving Tank (WRT) and the Dodecane Waste Tank (DWT). These tanks are housed in a shielded room in this facility. The Monte Carlo N-Particle (MCNP) radiation transport code was used to estimate ambient radiation field levels when the storage tanks contain hold-up volumes at the desired specific activity levels. In this paper the theoretically estimated values of the radiation field are compared with the actual measured dose.

  13. Theoretical approaches to x-ray absorption fine structure

    International Nuclear Information System (INIS)

    Rehr, J. J.; Albers, R. C.

    2000-01-01

    Dramatic advances in the understanding of x-ray absorption fine structure (XAFS) have been made over the past few decades, which have led ultimately to a highly quantitative theory. This review covers these developments from a unified multiple-scattering viewpoint. The authors focus on extended x-ray absorption fine structure (EXAFS) well above an x-ray edge, and, to a lesser extent, on x-ray absorption near-edge structure (XANES) closer to an edge. The discussion includes both formal considerations, derived from a many-electron formulation, and practical computational methods based on independent-electron models, with many-body effects lumped into various inelastic losses and energy shifts. The main conceptual issues in XAFS theory are identified and their relative importance is assessed; these include the convergence of the multiple-scattering expansion, curved-wave effects, the scattering potential, inelastic losses, self-energy shifts, and vibrations and structural disorder. The advantages and limitations of current computational approaches are addressed, with particular regard to quantitative experimental comparisons. (c) 2000 The American Physical Society

  14. Relativistic kinematics and dynamics: a new group theoretical approach

    International Nuclear Information System (INIS)

    Giovannini, N.

    1983-01-01

    The author reanalyzes the relationships between physical states and space-time symmetries with a view to describing relativistic extended and interacting systems. For this description he proposes to introduce, in space-time, an additional observable, related to a natural notion of simultaneity. The introduction of this new observable is justified on the basis of the operational meaning of the relations between state descriptions and symmetries in this case. The Poincare transformations are correspondingly split into two parts: the first one, kinematical, related to the symmetries of the description of the states, the other one, dynamical, related to the possible forms for the evolution. It is shown that the kinematical symmetries lead in a straightforward way to the expected classical and quantal state spaces for single particles of arbitrary spin and the author shows how the remaining symmetries can be related to the derivation of the possible forms for the dynamics. He finds as a particular case the usual dynamics of single particles in external fields (with some satisfactory improvements due to the corresponding new interpretation) and extends the method to the dynamics of N interacting particles. He also shows why this new approach and interpretation of relativistic states is necessary and how it allows a covariant description in the problems raised by the (recently measured) quantum correlations at-a-distance concerning the Einstein-Podolsky-Rosen paradox, something which seems quite impossible in the usual frameworks. (Auth.)

  15. A new approach to estimate Angstrom coefficients

    International Nuclear Information System (INIS)

    Abdel Wahab, M.

    1991-09-01

    A simple quadratic equation for estimating global solar radiation, with coefficients depending on some physical atmospheric parameters, is presented. The importance of the second-order term and the sensitivity to some climatic variations are discussed. (author). 8 refs, 4 figs, 2 tabs
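    A quadratic Angstrom-type correlation of the kind the abstract describes extends the classical Angstrom-Prescott form H/H0 = a + b(S/S0) with a second-order term c(S/S0)^2. The coefficients below are generic placeholder values, not the ones this paper derives from atmospheric parameters.

```python
def global_radiation(H0, s, a=0.25, b=0.45, c=0.05):
    """Quadratic Angstrom-type estimate of global solar radiation.

    H0: extraterrestrial radiation on a horizontal surface (e.g. MJ m^-2 day^-1)
    s:  relative sunshine duration S/S0, between 0 and 1
    a, b, c: regression coefficients (placeholders here)
    """
    return H0 * (a + b * s + c * s * s)

print(global_radiation(30.0, 0.6))  # illustrative mid-latitude day
```

    The paper's contribution is tying a, b and c to physical atmospheric parameters rather than treating them as pure regression constants.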

  16. A theoretical approach to the re-suspension factor

    Directory of Open Access Journals (Sweden)

    Magnoni M.

    2012-04-01

    The atmospheric re-suspension of radionuclides is a well-known phenomenon consisting in the re-injection into the atmosphere of previously deposited radioactivity. The process is driven by the action of wind on surfaces and can act as an additional source of radiation exposure by inhalation after deposition has ceased. The re-suspension factor is thus defined as a parameter K, generally considered a time-dependent function, given by the ratio of Ca, the volumetric air activity concentration (Bq m−3), to I0 (Bq m−2), the radioactivity deposited at time zero. The re-suspension factor concept is very useful in radioprotection for estimating the inhalation of radionuclides re-suspended from contaminated surfaces when direct atmospheric measurements are lacking or difficult to perform. However, the choice of proper values of K is usually not a simple task, being quite site-specific and related to the meteorological, geomorphologic and environmental characteristics of the area under study. Moreover, several investigations have shown clearly that the values of K decrease with time. For that reason, K values span several orders of magnitude: typical values in the range 10−5–10−10 m−1 are reported in the literature for different environmental conditions and times elapsed since the deposition event. The currently available models for the re-suspension factor are based on empirical formulas whose parameters are highly site-dependent and cannot easily be related to physical quantities. In this paper a simple physical model for the re-suspension factor is proposed and tested with available environmental radioactivity data (137Cs, collected since 1986, Chernobyl fallout). The new model not only describes the experimental data as satisfactorily as the current empirical models do, but is also able to connect the K values to quantities with a physical meaning (such as, for example a diffusion
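    The definition K = Ca/I0 and a commonly quoted empirical time-dependence (a decaying exponential plus a long-term floor, of the kind this paper's physical model aims to replace) can be sketched directly; the fit constants below are illustrative order-of-magnitude values from the literature, not the paper's model.

```python
import math

def resuspension_factor(Ca, I0):
    """K (m^-1) from air concentration Ca (Bq m^-3) and deposit I0 (Bq m^-2)."""
    return Ca / I0

def empirical_K(t_days, K0=1e-6, lam=0.01, K_inf=1e-9):
    """Garland-type empirical fit: K(t) = K0*exp(-lam*t) + K_inf.
    All constants are illustrative; real fits are highly site-specific."""
    return K0 * math.exp(-lam * t_days) + K_inf

print(resuspension_factor(2e-5, 40.0))  # 5e-7 m^-1, within the quoted range
print(empirical_K(0.0))                 # shortly after deposition: ~1e-6 m^-1
print(empirical_K(3650.0))              # a decade later: near the 1e-9 floor
```

    The spread from 1e-6 down to 1e-9 m^-1 reproduces the "several orders of magnitude" decrease the abstract attributes to elapsed time since deposition.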

  17. An Information-Theoretic Approach to PMU Placement in Electric Power Systems

    OpenAIRE

    Li, Qiao; Cui, Tao; Weng, Yang; Negi, Rohit; Franchetti, Franz; Ilic, Marija D.

    2012-01-01

    This paper presents an information-theoretic approach to address the phasor measurement unit (PMU) placement problem in electric power systems. Different from the conventional 'topological observability' based approaches, this paper advocates a much more refined, information-theoretic criterion, namely the mutual information (MI) between the PMU measurements and the power system states. The proposed MI criterion can not only include the full system observability as a special case, but also ca...
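Under a linearized measurement model with jointly Gaussian states and noise, the MI criterion reduces to a log-determinant, which makes a greedy placement heuristic easy to sketch. The toy matrix, noise level, and greedy strategy below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def mutual_information(H_sel, sigma_x, noise_var):
    """MI (nats) between measurements z = H x + v and states x, for
    jointly Gaussian x ~ N(0, sigma_x) and v ~ N(0, noise_var * I)."""
    m = H_sel.shape[0]
    gram = H_sel @ sigma_x @ H_sel.T / noise_var
    _, logdet = np.linalg.slogdet(np.eye(m) + gram)
    return 0.5 * logdet

def greedy_pmu_placement(H, sigma_x, noise_var, budget):
    """Pick `budget` candidate buses one at a time, each maximizing the MI gain."""
    chosen, remaining = [], list(range(H.shape[0]))
    for _ in range(budget):
        best = max(remaining, key=lambda i: mutual_information(
            H[chosen + [i]], sigma_x, noise_var))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy example: 6 candidate locations, 4 system states
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))   # each row: one candidate PMU's measurement
sigma_x = np.eye(4)           # prior state covariance
selection = greedy_pmu_placement(H, sigma_x, noise_var=0.1, budget=3)
print(selection)
```

Because this Gaussian MI objective is submodular when measurement noises are independent given the state, greedy selection of this kind carries the well-known (1 − 1/e) approximation guarantee.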

  18. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    Science.gov (United States)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.

  19. A novel approach for absolute radar calibration: formulation and theoretical validation

    Directory of Open Access Journals (Sweden)

    C. Merker

    2015-06-01

    Full Text Available The theoretical framework of a novel approach for absolute radar calibration is presented and its potential analysed by means of synthetic data to lay out a solid basis for future practical application. The method presents the advantage of an absolute calibration with respect to the directly measured reflectivity, without needing a previously calibrated reference device. It requires a setup comprising three radars: two devices oriented towards each other, measuring reflectivity along the same horizontal beam and operating within a strongly attenuated frequency range (e.g. K or X band), and one vertical reflectivity and drop size distribution (DSD) profiler below this connecting line, which is to be calibrated. The absolute determination of the calibration factor is based on attenuation estimates. Using synthetic, smooth and geometrically idealised data, calibration is found to perform best using homogeneous precipitation events with rain rates high enough to ensure a distinct attenuation signal (reflectivity above ca. 30 dBZ). Furthermore, the choice of the interval width (in measuring range gates) around the vertically pointing radar, needed for attenuation estimation, is found to have an impact on the calibration results. Further analysis is done by means of synthetic data with realistic, inhomogeneous precipitation fields taken from measurements. A calibration factor is calculated for each considered case using the presented method. Based on the distribution of the calculated calibration factors, the most probable value is determined by estimating the mode of a fitted shifted logarithmic normal distribution function. After filtering the data set with respect to rain rate and inhomogeneity and choosing an appropriate length of the considered attenuation path, the estimated uncertainty of the calibration factor is of the order of 1 to 11 %, depending on the chosen interval width. Considering stability and accuracy of the method, an interval of
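The last step described above, estimating the most probable calibration factor as the mode of a log-normal fitted to the per-case factors, can be sketched as follows. For robustness of this illustration the shift parameter is fixed at zero (the paper fits a shifted log-normal), and the synthetic factors are an assumption:

```python
import numpy as np
from scipy import stats

def most_probable_calibration(factors):
    """Mode of a log-normal fitted to per-case calibration factors.
    The shift (loc) is fixed at zero here for a robust sketch; the
    paper fits a shifted log-normal instead."""
    s, loc, scale = stats.lognorm.fit(factors, floc=0)
    return loc + scale * np.exp(-s ** 2)   # mode of a log-normal density

# synthetic per-case calibration factors scattered around 1
rng = np.random.default_rng(1)
factors = rng.lognormal(mean=0.0, sigma=0.3, size=2000)
print(round(most_probable_calibration(factors), 2))
```

The mode (rather than the mean) is used because the skewed distribution of per-case factors makes the mean biased toward occasional large values.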

  20. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    Science.gov (United States)

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two model peptides: Ac-K5-NHMe in a 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences

  1. Theoretical epidemiology applied to health physics: estimation of the risk of radiation-induced breast cancer

    International Nuclear Information System (INIS)

    Sutherland, J.V.

    1983-01-01

    Indirect estimation of low-dose radiation hazards is possible using the multihit model of carcinogenesis. This model is based on cancer incidence data collected over many decades on tens of millions of people. Available data on human radiation effects can be introduced into the modeling process without the requirement that these data precisely define the model to be used. This reduction in the information demanded from the limited data on human radiation effects allows a more rational approach to estimation of low-dose radiation hazards and helps to focus attention on research directed towards understanding the process of carcinogenesis, rather than on repeating human or animal experiments that cannot provide sufficient data to resolve the low-dose estimation problem. Assessment of the risk of radiation-induced breast cancer provides an excellent example of the utility of multihit modeling procedures

  2. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate

    Science.gov (United States)

    Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.

    2017-08-01

    The paper presents a theoretical model for estimation of yield strength of fiber metal laminate. Principles of elasticity and formulation of residual stress are employed to determine the stress state in metal layer of the laminate that is found to be higher than the stress applied over the laminate resulting in reduced yield strength of the laminate in comparison with that of the metal layer. The model is tested over 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both the Glare variants are found to be less than that of aluminum alloy with use of S-Glass fiber resulting in higher laminate yield strength than with the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted over the laminates substantiate the theoretical model.

  3. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Directory of Open Access Journals (Sweden)

    Weiqiang Pan

    2015-03-01

    Full Text Available In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only the flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next the least-square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-square problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimate, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
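The outer/inner iterative structure can be illustrated with a simplified sketch in which the search over the Doppler coefficient is replaced by a grid search, while the channel gain is resolved in closed form for each trial shift. The signal model, parameters, and the grid search itself are assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np

def estimate_doppler(rx, train, fs, f_grid):
    """Data-aided LS Doppler estimation on a flat channel.
    Model: rx[n] = a * train[n] * exp(j*2*pi*fd*n/fs) + noise.
    For each trial fd the LS-optimal gain a has a closed form;
    the (fd, a) pair with the smallest residual wins."""
    n = np.arange(len(train))
    best = (None, None, np.inf)
    for fd in f_grid:
        ref = train * np.exp(2j * np.pi * fd * n / fs)   # Doppler-shifted replica
        a = np.vdot(ref, rx) / np.vdot(ref, ref)         # LS gain given fd
        err = np.linalg.norm(rx - a * ref) ** 2
        if err < best[2]:
            best = (fd, a, err)
    return best[0], best[1]

# toy example: recover a 40 Hz Doppler shift from 256 training symbols
fs, fd_true = 8000.0, 40.0
rng = np.random.default_rng(2)
train = rng.choice([-1.0, 1.0], size=256) + 0j           # BPSK training sequence
n = np.arange(256)
rx = 0.8 * train * np.exp(2j * np.pi * fd_true * n / fs)
rx += 0.01 * (rng.normal(size=256) + 1j * rng.normal(size=256))
fd_hat, a_hat = estimate_doppler(rx, train, fs, np.arange(-100.0, 101.0, 1.0))
print(fd_hat)
```

In practice the grid step bounds the achievable accuracy, which is why the paper refines the estimate iteratively rather than exhaustively.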

  4. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only the flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next the least-square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-square problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimate, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  5. BWR level estimation using Kalman Filtering approach

    International Nuclear Information System (INIS)

    Garner, G.; Divakaruni, S.M.; Meyer, J.E.

    1986-01-01

    Work is in progress on development of a system for Boiling Water Reactor (BWR) vessel level validation and failure detection. The levels validated include the liquid level both inside and outside the core shroud. This work is a major part of a larger effort to develop a complete system for BWR signal validation. The demonstration plant is the Oyster Creek BWR. Liquid level inside the core shroud is not directly measured during full power operation. This level must be validated using measurements of other quantities and analytic models. Given the available sensors, analytic models for level that are based on mass and energy balances can contain open integrators. When such a model is driven by noisy measurements, the model predicted level will deviate from the true level over time. To validate the level properly and to avoid false alarms, the open integrator must be stabilized. In addition, plant parameters will change slowly with time. The respective model must either account for these plant changes or be insensitive to them to avoid false alarms and maintain sensitivity to true failures of level instrumentation. Problems are addressed here by combining the extended Kalman Filter and Parity Space Decision/Estimator. The open integrator is stabilized by integrating from the validated estimate at the beginning of each sampling interval, rather than from the model predicted value. The model is adapted to slow plant/sensor changes by updating model parameters on-line
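A minimal scalar sketch of the stabilization idea, integrating the mass-balance model from the validated estimate at each step rather than letting the open integrator run free, might look as follows (the one-state model and all numbers are illustrative assumptions, not the Oyster Creek system):

```python
import numpy as np

def kalman_level(z, u, dt, q, r, x0, p0):
    """Scalar Kalman filter for a level driven by net in/out flow u.
    Restarting the integration from the validated estimate each step
    keeps the open integrator from drifting away from the true level."""
    x, p = x0, p0
    out = []
    for zk, uk in zip(z, u):
        # predict: integrate the mass balance one step from the last estimate
        x = x + uk * dt
        p = p + q
        # update with the noisy level measurement
        k = p / (p + r)
        x = x + k * (zk - x)
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# toy run: slow fill, noisy level sensor
rng = np.random.default_rng(3)
dt, steps = 1.0, 200
true = 5.0 + 0.01 * dt * np.arange(steps)          # true level trajectory
z = true + rng.normal(scale=0.5, size=steps)       # noisy measurements
u = np.full(steps, 0.01)                           # net inflow rate
est = kalman_level(z, u, dt, q=1e-4, r=0.25, x0=5.0, p0=1.0)
print(round(float(np.abs(est[-1] - true[-1])), 2))
```

Driving the same integrator open-loop with the noisy flow signal would accumulate error without bound; the measurement update is what bounds it.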

  6. Approaches to estimating humification indicators for peat

    Directory of Open Access Journals (Sweden)

    M. Klavins

    2008-10-01

    Full Text Available Degree of decomposition is an important property of the organic matter in soils and other deposits which contain fossil carbon. It describes the intensity of transformation, or the humification degree (HD, of the original living organic matter. In this article, approaches to the determination of HD are thoroughly described and 14C dated peat columns extracted from several bogs in Latvia are investigated and compared. A new humification indicator is suggested, namely the quantity of humic substances as a fraction of the total amount of organic matter in the peat.

  7. Theoretical and numerical investigations of TAP experiments. New approaches for variable pressure conditions

    Energy Technology Data Exchange (ETDEWEB)

    Senechal, U.; Breitkopf, C. [Technische Univ. Dresden (Germany). Inst. fuer Energietechnik

    2011-07-01

    Temporal analysis of products (TAP) is a valuable tool for the characterization of porous catalytic structures. Established TAP modeling requires a spatially constant diffusion coefficient and neglects convective flows, which is only valid in the Knudsen diffusion regime. Therefore, in experiments, the number of molecules per pulse must be chosen accordingly. New approaches for variable process conditions are highly required. Thus, a new theoretical model is developed for estimating the number of molecules per pulse that meets these requirements under any conditions and at any time. The void volume is calculated as the biggest sphere fitting between three pellets. The total number of pulsed molecules is assumed to fill the first void volume at the inlet immediately. Molecule numbers from these calculations can be understood as the maximum possible number of molecules at any time in the reactor to remain in the Knudsen diffusion regime, i.e., above a Knudsen number of 2. Moreover, a new methodology for generating a full three-dimensional geometrical representation of beds is presented and used for numerical simulations to investigate spatial effects. Based on a freely available open-source game physics engine library (BULLET), beds of arbitrary-sized pellets can be generated and transformed to CFD-usable geometry. In the CFD software (ANSYS CFX) a transient diffusive transport equation with time-dependent inlet boundary conditions is solved. Three different pellet diameters were investigated with 1e18 molecules per pulse, which is higher than the limit from the theoretical calculation. Spatial and temporal distributions of the transported species show regions inside the reactor where non-Knudsen conditions exist. From these results, the distance from the inlet can be calculated at which the theoretical pressure limit (Knudsen number equals 2) is reached, i.e., from this point to the end of the reactor the Knudsen regime can be assumed.
Due to linear dependency of pressure and concentration (assuming ideal
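The pulse-size limit described above can be sketched numerically. The throat radius for three equal touching pellets, the hard-sphere mean-free-path formula, and all numeric inputs are assumptions made for illustration, not values from the paper:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def max_molecules_per_pulse(pellet_d, temp, mol_d, kn_min=2.0):
    """Upper bound on molecules per TAP pulse so that the first void at
    the inlet stays in the Knudsen regime (Kn >= kn_min).
    Void: biggest sphere fitting between three equal touching pellets,
    radius (2/sqrt(3) - 1) * pellet radius."""
    r_void = (2 / np.sqrt(3) - 1) * pellet_d / 2        # throat radius
    v_void = 4 / 3 * np.pi * r_void ** 3                # void volume
    d_void = 2 * r_void                                 # characteristic length
    # Kn = lambda / d_void >= kn_min, with hard-sphere mean free path
    # lambda = k_B T / (sqrt(2) * pi * mol_d^2 * P)  ->  max pressure:
    p_max = K_B * temp / (np.sqrt(2) * np.pi * mol_d ** 2 * kn_min * d_void)
    return p_max * v_void / (K_B * temp)                # ideal gas: N = PV/kT

# 200 um pellets, room temperature, N2-sized molecule (~0.37 nm)
print(f"{max_molecules_per_pulse(2e-4, 300.0, 3.7e-10):.2e}")
```

With these inputs the bound comes out many orders of magnitude below the 1e18 molecules per pulse used in the simulations, consistent with the text's observation that such pulses leave the Knudsen regime near the inlet.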

  8. A Markov game theoretic data fusion approach for cyber situational awareness

    Science.gov (United States)

    Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik

    2007-04-01

    This paper proposes an innovative data-fusion/data-mining game theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion based on a Markov game model and Hierarchical Entity Aggregation (HEA) is proposed to refine the primitive predictions generated by adaptive feature/pattern recognition and capture new unknown features. A Markov (stochastic) game method is used to estimate the belief of each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies and vice versa. Also, Markov game theory deals with uncertainty and incompleteness of available information. A software tool is developed to demonstrate the performance of the high-level information fusion for cyber network defense situations, and a simulation example shows the enhanced understanding of cyber-network defense.

  9. A game theoretic approach to a finite-time disturbance attenuation problem

    Science.gov (United States)

    Rhee, Ihnseok; Speyer, Jason L.

    1991-01-01

    A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.

  10. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
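The mixture idea can be sketched with stand-ins for the two components: a quadratic polynomial as the parametric model and a Nadaraya-Watson smoother as the nonparametric one, with a simple residual-based weight. The paper's actual models and weighting scheme differ; everything below is illustrative:

```python
import numpy as np

def kernel_smooth(x, y, grid, bw):
    """Nadaraya-Watson nonparametric curve estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bw) ** 2)
    return (w @ y) / w.sum(axis=1)

def semiparametric_curve(x, y, grid, bw=0.5):
    """Mixture estimator: weighted average of a parametric fit (here a
    quadratic, standing in for any dose-response model) and a kernel
    smoother, with more weight on whichever component fits better."""
    coef = np.polyfit(x, y, 2)
    rss_par = np.sum((y - np.polyval(coef, x)) ** 2)
    rss_npar = np.sum((y - kernel_smooth(x, y, x, bw)) ** 2)
    lam = rss_npar / (rss_par + rss_npar)   # weight on the parametric part
    return lam * np.polyval(coef, grid) + (1 - lam) * kernel_smooth(x, y, grid, bw)

# toy dose-response data from a logistic curve plus noise
rng = np.random.default_rng(4)
dose = np.linspace(0, 4, 60)
resp = 1.0 / (1.0 + np.exp(-(dose - 2.0))) + rng.normal(scale=0.05, size=60)
grid = np.linspace(0, 4, 9)
print(np.round(semiparametric_curve(dose, resp, grid), 2))
```

When the parametric family is close to the truth its residuals shrink, lam approaches 1, and the estimator inherits the parametric efficiency; under misspecification the weight shifts toward the smoother, mirroring the consistency argument in the abstract.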

  11. A theoretical model for estimating the vacancies produced in graphene by irradiation

    International Nuclear Information System (INIS)

    Codorniu Pujals, Daniel; Aguilera Corrales, Yuri

    2011-01-01

    The award of the Nobel Prize in Physics 2010 to the scientists who isolated graphene is clear evidence of the great interest that this system has raised among physicists. This quasi-two-dimensional material, whose electrons behave as massless Dirac particles, presents sui generis properties that seem very promising for diverse practical applications. At the same time, the system poses new theoretical challenges for scientists of very different branches, from Materials Science to Relativistic Quantum Mechanics. A topic of great current interest in graphene research is the search for ways to control the number and distribution of the defects in its crystal lattice, in order to achieve certain physical properties. One of these ways can be irradiation with different kinds of particles. However, irradiation processes in two-dimensional systems have been insufficiently studied. The classic models of interaction of radiation with solids are based on three-dimensional structures, so they must be modified before being applied to graphene. In the present work we discuss, from the theoretical point of view, the features of the processes that occur in the two-dimensional structure of monolayer graphene under irradiation with different kinds of particles. In that context, some mathematical expressions that allow estimation of the concentration of the vacancies created during these processes are presented. We also discuss the possible use of the information obtained from the model to design structures of topological defects with certain elastic deformation fields, as well as their influence on the electronic properties. (Author)

  12. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    Science.gov (United States)

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins. However, the description is only qualitative under biologically relevant conditions because utilized theoretical mean-field models neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  13. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  14. Theoretical Estimation of Thermal Effects in Drilling of Woven Carbon Fiber Composite

    Directory of Open Access Journals (Sweden)

    José Díaz-Álvarez

    2014-06-01

    Full Text Available Carbon Fiber Reinforced Polymer (CFRP) composites are extensively used in structural applications due to their attractive properties. Although the components are usually made near net shape, machining processes are needed to achieve dimensional tolerance and assembly requirements. Drilling is a common operation required for further mechanical joining of the components. CFRPs are vulnerable to processing-induced damage, mainly delamination, fiber pull-out, and thermal degradation, drilling-induced defects being one of the main causes of component rejection during manufacturing processes. Despite the importance of analyzing the thermal phenomena involved in the machining of composites, only a few authors have focused their attention on this problem, most of them using an experimental approach. The temperature at the workpiece could affect the surface quality of the component, and its measurement during processing is difficult. The estimation of the amount of heat generated during drilling is important; however, numerical modeling of drilling processes involves a high computational cost. This paper presents a combined approach to the thermal analysis of composite drilling, using both an analytical estimation of the heat generated during drilling and numerical modeling for heat propagation. Promising results for indirect detection of the risk of thermal damage, through the measurement of thrust force and cutting torque, are obtained.
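The indirect estimate mentioned at the end, heat generation inferred from measured thrust force and cutting torque, amounts to a mechanical power balance. The formula and all numbers below are a generic sketch under the usual assumption that most cutting power converts to heat, not the paper's specific model:

```python
import math

def drilling_heat_power(torque_nm, rpm, thrust_n, feed_mm_rev):
    """Rough mechanical power converted to heat while drilling:
    torque * spindle speed plus thrust force * feed velocity."""
    omega = 2 * math.pi * rpm / 60.0                 # spindle speed, rad/s
    feed_velocity = feed_mm_rev * 1e-3 * rpm / 60.0  # axial feed, m/s
    return torque_nm * omega + thrust_n * feed_velocity

# illustrative cut: 0.5 N*m torque, 3000 rpm, 60 N thrust, 0.1 mm/rev feed
print(round(drilling_heat_power(0.5, 3000, 60.0, 0.1), 1))  # power in W
```

Note that the torque term dominates by orders of magnitude, which is why torque is the more sensitive indicator of thermal damage risk.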

  15. Concept of the Cooling System of the ITS for ALICE: Technical Proposals, Theoretical Estimates, Experimental Results

    CERN Document Server

    Godisov, O N; Yudkin, M I; Gerasimov, S F; Feofilov, G A

    1994-01-01

    Contradictory demands raised by the application of different types of sensitive detectors in the 5 layers of the Inner Tracking System (ITS) for ALICE stipulate the simultaneous use of different schemes of heat drain: gaseous cooling of the 1st layer (uniform heat production over the sensitive surface) and evaporative cooling for the 2nd-5th layers (localised heat production). The latter system is also required for the thermostabilization of Si-drift detectors within 0.1 degree C. Theoretical estimates of gaseous, evaporative and liquid cooling systems are done for all ITS layers. The results of the experiments done for evaporative and liquid heat drain systems are presented and discussed. The major technical problems of the evaporative systems' design are considered: i) control of liquid supply; ii) vapour pressure control. Two concepts of the evaporative systems are proposed: 1) a one-channel system for joint transfer of the two phases (liquid + gas); 2) a two-channel system with separate transfer of the phases. Both sy...

  16. EVOLUTION OF THEORETICAL APPROACHES TO THE DEFINITION OF THE CATEGORY “PERSONNEL POTENTIAL”

    Directory of Open Access Journals (Sweden)

    Alexandra Deshchenko

    2016-02-01

    Full Text Available The article describes the evolution of theoretical approaches to the definition of the category «personnel potential», based on an analysis of approaches to the conceptual apparatus of labor economics, including such categories as labor force, labor resources, labor potential, human resources and human capital, as treated by different authors. The evolution of the terms is analysed in accordance with the stages of development of society.

  17. The decays Psub(c) -> VP in the group theoretical and quark diagrammatic approaches

    International Nuclear Information System (INIS)

    Tuan, S.F.; Xiaoyuan Li.

    1983-08-01

    Decays of charmed mesons into one vector meson and one pseudoscalar meson, Psub(c) -> VP, are considered in both the group theoretical and quark diagrammatic approaches. A complete decay amplitude analysis is given. The presently available experimental data can be accommodated if the contributions from exotic final states and the exotic piece of the weak Hamiltonian are also taken into account. (orig.)

  18. THE REPURCHASE OF SHARES - ANOTHER FORM OF REWARDING INVESTORS - A THEORETICAL APPROACH

    Directory of Open Access Journals (Sweden)

    PRISACARIU Maria

    2013-06-01

    Full Text Available Among shareholder remuneration policies, share repurchases have been gaining more and more ground in recent years. Like any other financial phenomenon or practice, repurchases have not lacked theories to explain their motivation, effects and controversies. This paper proposes a theoretical approach to the subject by summarizing relevant research in order to highlight the motivations behind this decision and its implications.

  19. THEORETICAL AND METHODOLOGICAL APPROACHES TO THE STUDY OF PROTEST ACTIVITY IN THE WESTERN SOCIOLOGICAL THOUGHT

    OpenAIRE

    Купрєєва, Ю. О.

    2015-01-01

    In this article the author discusses the main theoretical and methodological approaches to the study of protest activity, among them the theory of collective behavior, the relative deprivation theory, the new social movements theory and the resource mobilization theory. Their strengths and weaknesses are highlighted. Attention is focused on the new direction of protest studies connected with the development of the Internet.

  20. Estimating the elasticity of trade: the trade share approach

    OpenAIRE

    Mauro Lanati

    2013-01-01

    Recent theoretical work on international trade emphasizes the importance of the trade elasticity as the fundamental statistic needed to conduct welfare analysis. Eaton and Kortum (2002) proposed a two-step method to estimate this parameter, where exporter fixed effects are regressed on proxies for technology and wages. Within the same Ricardian model of trade, the trade share provides an alternative source of identification for the elasticity of trade. Following Santos Silva and Tenreyro (2006) bot...

  1. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
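Two of the single-administration methods named above, Cronbach's alpha and Guttman's lambda-2, are straightforward to compute from an item-score matrix. The simulated parallel-items data below are an assumption for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha from an (n_persons, k_items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def guttman_lambda2(scores):
    """Guttman's lambda-2, which is never smaller than alpha."""
    k = scores.shape[1]
    cov = np.cov(scores, rowvar=False)
    total_var = scores.sum(axis=1).var(ddof=1)
    off = cov - np.diag(np.diag(cov))                 # off-diagonal covariances
    lam1 = 1 - np.trace(cov) / total_var
    return lam1 + np.sqrt(k / (k - 1) * (off ** 2).sum()) / total_var

# simulated test: 300 persons, 5 roughly parallel items driven by one ability
rng = np.random.default_rng(5)
ability = rng.normal(size=(300, 1))
items = ability + rng.normal(scale=0.8, size=(300, 5))
a, l2 = cronbach_alpha(items), guttman_lambda2(items)
print(round(a, 2), round(l2, 2))
```

Both coefficients are lower bounds to test-score reliability; the latent class approach of the study aims at the reliability itself rather than a lower bound.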

  2. The person-oriented approach: A short theoretical and practical guide

    Directory of Open Access Journals (Sweden)

    Lars R. Bergman

    2014-05-01

    Full Text Available A short overview of the person-oriented approach is given as a guide for the researcher interested in carrying out person-oriented research. Theoretical, methodological, and practical considerations of the approach are discussed. First, some historical roots are traced, followed by a description of the holistic-interactionistic research paradigm, which provided the general framework for the development of the modern person-oriented approach. The approach has both a theoretical and a methodological facet and, after presenting its key theoretical tenets, an overview is given of some common person-oriented methods. Central to the person-oriented approach is a system view with its components together forming a pattern regarded as indivisible. This pattern should be understood and studied as a whole, not broken up into pieces (variables) that are studied as separate entities. Hence, usually methodological tools are used by which whole patterns are analysed (e.g. cluster analysis). An empirical example is given where the pattern development of school grades is studied.

  3. Theoretical Approaches in Evolutionary Ecology: Environmental Feedback as a Unifying Perspective.

    Science.gov (United States)

    Lion, Sébastien

    2018-01-01

    Evolutionary biology and ecology have a strong theoretical underpinning, and this has fostered a variety of modeling approaches. A major challenge of this theoretical work has been to unravel the tangled feedback loop between ecology and evolution. This has prompted the development of two main classes of models. While quantitative genetics models jointly consider the ecological and evolutionary dynamics of a focal population, a separation of timescales between ecology and evolution is assumed by evolutionary game theory, adaptive dynamics, and inclusive fitness theory. As a result, theoretical evolutionary ecology tends to be divided among different schools of thought, with different toolboxes and motivations. My aim in this synthesis is to highlight the connections between these different approaches and clarify the current state of theory in evolutionary ecology. Central to this approach is to make explicit the dependence on environmental dynamics of the population and evolutionary dynamics, thereby materializing the eco-evolutionary feedback loop. This perspective sheds light on the interplay between environmental feedback and the timescales of ecological and evolutionary processes. I conclude by discussing some potential extensions and challenges to our current theoretical understanding of eco-evolutionary dynamics.

  4. Understanding employee motivation and organizational performance: Arguments for a set-theoretic approach

    Directory of Open Access Journals (Sweden)

    Michael T. Lee

    2016-09-01

    Full Text Available Empirical evidence demonstrates that motivated employees mean better organizational performance. The objective of this conceptual paper is to articulate the progress that has been made in understanding employee motivation and organizational performance, and to suggest how theory concerning employee motivation and organizational performance may be advanced. We acknowledge the existing limitations of theory development and suggest an alternative research approach. Current motivation theory development is based on conventional quantitative analysis (e.g., multiple regression analysis, structural equation modeling). Since researchers are interested in context and in understanding this social phenomenon holistically, they think in terms of combinations and configurations of a set of pertinent variables. We suggest that researchers take a set-theoretic approach to complement existing conventional quantitative analysis. To advance current thinking, we propose a set-theoretic approach to leverage employee motivation for organizational performance.

  5. Topological charge on the lattice: a field theoretical view of the geometrical approach

    International Nuclear Information System (INIS)

    Rastelli, L.; Rossi, P.; Vicari, E.

    1997-01-01

    We construct sequences of "field theoretical" lattice topological charge density operators which formally approach geometrical definitions in 2D CP(N-1) models and 4D SU(N) Yang-Mills theories. The analysis of these sequences of operators suggests a new way of looking at the geometrical method, showing that geometrical charges can be interpreted as limits of sequences of field theoretical (analytical) operators. In perturbation theory, renormalization effects formally tend to vanish along such sequences. But, since the perturbative expansion is asymptotic, this does not necessarily lead to well-behaved geometrical limits. It indeed leaves open the possibility that non-perturbative renormalizations survive. (orig.)

  6. Merged ontology for engineering design: Contrasting empirical and theoretical approaches to develop engineering ontologies

    DEFF Research Database (Denmark)

    Ahmed, Saeema; Storga, M

    2009-01-01

    This paper presents a comparison of two previous and separate efforts to develop an ontology in the engineering design domain, together with an ontology proposal from which ontologies for a specific application may be derived. The research contrasts an empirical, user-centered approach to developing the ontology engineering design integrated taxonomies (EDIT) with a theoretical approach in which concepts and relations are elicited from an engineering design theories ontology (DO). The limitations and advantages of each approach are discussed. The research methodology adopted is to map...

  7. Dimensional accuracy of ceramic self-ligating brackets and estimates of theoretical torsional play.

    Science.gov (United States)

    Lee, Youngran; Lee, Dong-Yul; Kim, Yoon-Ji R

    2016-09-01

    To ascertain the dimensional accuracies of some commonly used ceramic self-ligation brackets and the amount of torsional play in various bracket-archwire combinations. Four types of 0.022-inch slot ceramic self-ligating brackets (upper right central incisor), three types of 0.018-inch ceramic self-ligating brackets (upper right central incisor), and three types of rectangular archwires (0.016 × 0.022-inch beta-titanium [TMA] (Ormco, Orange, Calif), 0.016 × 0.022-inch stainless steel [SS] (Ortho Technology, Tampa, Fla), and 0.019 × 0.025-inch SS (Ortho Technology)) were measured using a stereomicroscope to determine slot widths and wire cross-sectional dimensions. The mean acquired dimensions of the brackets and wires were applied to an equation devised by Meling to estimate torsional play angle (γ). In all bracket systems, the slot tops were significantly wider than the slot bases (P brackets, and Clippy-Cs (Tomy, Futaba, Fukushima, Japan) among the 0.018-inch brackets. The Damon Clear (Ormco) bracket had the smallest dimensional error (0.542%), whereas the 0.022-inch Empower Clear (American Orthodontics, Sheboygan, Wis) bracket had the largest (3.585%). The largest amount of theoretical play is observed using the Empower Clear (American Orthodontics) 0.022-inch bracket combined with the 0.016 × 0.022-inch TMA wire (Ormco), whereas the least amount occurs using the 0.018 Clippy-C (Tomy) combined with 0.016 × 0.022-inch SS wire (Ortho Technology).

  8. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Fredriksson, Anders; Olofsson, Isabelle [Golder Associates AB, Uppsala (Sweden)

    2005-12-15

    The present report summarises the theoretical approach to estimating the mechanical properties of the rock mass in relation to the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The theoretical approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and of rock fractures. To estimate the mechanical properties of the rock mass, a load test on a rock block with fractures is simulated with the numerical code 3DEC. The locations and sizes of the fractures are given by DFN realisations. The rock block was loaded under plane strain conditions. From the calculated relationship between stresses and deformations, the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN realisations. The material properties of the intact rock and the fractures were kept constant, set equal to the mean value of each measured material property. The influence of variations in the properties of the intact rock and in the mechanical properties of the fractures was estimated by analysing numerical load tests on one specific block (one DFN realisation) with combinations of properties for intact rock and fractures. Each parameter was varied from its lowest value to its highest value while the rest of the parameters were held constant, equal to their mean values. The resulting distribution was expressed as a variation around the value determined with mean values for all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass, a Monte-Carlo simulation was performed by generating values from the two distributions independently of each other. The two values were added and the statistical properties of the resulting distribution were determined.

  9. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    International Nuclear Information System (INIS)

    Fredriksson, Anders; Olofsson, Isabelle

    2005-12-01

    The present report summarises the theoretical approach to estimating the mechanical properties of the rock mass in relation to the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The theoretical approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and of rock fractures. To estimate the mechanical properties of the rock mass, a load test on a rock block with fractures is simulated with the numerical code 3DEC. The locations and sizes of the fractures are given by DFN realisations. The rock block was loaded under plane strain conditions. From the calculated relationship between stresses and deformations, the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN realisations. The material properties of the intact rock and the fractures were kept constant, set equal to the mean value of each measured material property. The influence of variations in the properties of the intact rock and in the mechanical properties of the fractures was estimated by analysing numerical load tests on one specific block (one DFN realisation) with combinations of properties for intact rock and fractures. Each parameter was varied from its lowest value to its highest value while the rest of the parameters were held constant, equal to their mean values. The resulting distribution was expressed as a variation around the value determined with mean values for all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass, a Monte-Carlo simulation was performed by generating values from the two distributions independently of each other. The two values were added and the statistical properties of the resulting distribution were determined.
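
    The final Monte-Carlo step described above, drawing independently from the two variability distributions (geometry-driven and material-driven) and adding the draws, can be sketched as follows. All numbers are hypothetical placeholders, not values from the report, and Gaussian deviations are a simplifying assumption:

```python
import random
import statistics

random.seed(42)

# Hypothetical spreads of the rock-mass deformation modulus, expressed as
# deviations around the value obtained with mean input parameters:
GEOMETRY_SD = 2.0   # GPa, spread across DFN realisations (illustrative)
MATERIAL_SD = 1.5   # GPa, spread from intact-rock/fracture properties (illustrative)
N = 100_000

# Monte-Carlo combination: sample each source independently and add.
totals = [random.gauss(0.0, GEOMETRY_SD) + random.gauss(0.0, MATERIAL_SD)
          for _ in range(N)]

mean_dev = statistics.fmean(totals)
sd_dev = statistics.stdev(totals)

# For independent sources the variances add: sd ≈ sqrt(2.0**2 + 1.5**2) = 2.5
print(f"mean deviation ≈ {mean_dev:.2f} GPa, std ≈ {sd_dev:.2f} GPa")
```

    The printed standard deviation recovers the root-sum-of-squares of the two inputs, which is the statistical property the report extracts from the combined distribution.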

  10. A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems

    Science.gov (United States)

    2005-05-01

    A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems, by Gary W. Kinney Jr., B.G.S., M.S. Dissertation presented to The University of Texas at Austin, May 2005. Approved for public release; distribution unlimited.

  11. Field theoretical approach to proton-nucleus reactions: II-Multiple-step excitation process

    International Nuclear Information System (INIS)

    Eiras, A.; Kodama, T.; Nemes, M.

    1989-01-01

    A field theoretical formulation of the multiple-step excitation process in proton-nucleus collisions within the context of a relativistic eikonal approach is presented. A closed-form expression for the double differential cross section is obtained, whose structure is very simple and makes the physics transparent. Glauber's formulation of the same process is obtained as a limit of ours, and the necessary approximations are studied and discussed. (author)

  12. Theoretical approach to the destruction or sterilization of drugs in aqueous solution

    International Nuclear Information System (INIS)

    Slegers, Catherine; Tilquin, Bernard

    2005-01-01

    Two novel applications in the radiation processing of aqueous solutions of drugs are the sterilization of injectable drugs and the decontamination of hospital wastewaters by ionizing radiation. The parameters influencing the destruction of the drug in aqueous solutions are studied with a computer simulation program. This theoretical approach has revealed that the dose rate is the most important parameter that can be easily varied in order to optimize the destruction or the protection of the drug

  13. A THEORETICAL APPROACH TO THE TRANSITION FROM A RESOURCE BASED TO A KNOWLEDGE-ECONOMY

    Directory of Open Access Journals (Sweden)

    Diana GIOACASI

    2015-09-01

    Full Text Available Economic development and the emergence of new technologies have changed the perspective on the factors that generate added value. The transition from a resource-dependent economy to one focused on intangible non-financial factors has progressed gradually, under the influence of globalization and the internet boom. The aim of this article is to provide a theoretical approach to this phenomenon from the perspective of the temporal evolution of enterprise resources.

  14. A Theoretical Assessment of the Formation of IT clusters in Kazakhstan: Approaches and Positive Effects

    OpenAIRE

    Anel A. Kireyeva

    2016-01-01

    Abstract The aim of this research is to develop new theoretical approaches to the formation of IT clusters in order to strengthen the trend of innovative industrialization and the competitiveness of the country. In keeping with the previous literature, this study is motivated by the novelty of the problem concerning the formation of IT clusters, which can become a driving force of transformation through interaction, improved efficiency, and the introduction of advanced technology. In this research,...

  15. A Balanced Theoretical and Empirical Approach for the Development of a Design Support Tool

    DEFF Research Database (Denmark)

    Jensen, Thomas Aakjær; Hansen, Claus Thorp

    1996-01-01

    The introduction of a new design support system may change the engineering designer's work situation. Therefore, it may not be possible to derive all the functionalities for a design support system solely from empirical studies of manual design work. Alternatively, the design support system could ... system, indicating a proposal for how to balance a theoretical and an empirical approach. The result of this research will be utilized in the development of a Designer's Workbench to support the synthesis activity in mechanical design.

  16. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method for estimating the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, missing levels in the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained with this method. The calculation for s-wave resonances has been carried out and a comparison with other work is presented.

  17. Comparison of theoretical estimates and experimental measurements of fatigue crack growth under severe thermal shock conditions (part two - theoretical assessment and comparison with experiment)

    International Nuclear Information System (INIS)

    Green, D.; Marsh, D.; Parker, R.

    1984-01-01

    This paper reports the theoretical assessment of the cracking which may occur when a severe cycle comprising alternate upshocks and downshocks is applied to an axisymmetric feature with an internal, partial penetration weld and crevice. The experimental observations of cracking are reported separately. Good agreement was noted even though extensive cyclic plasticity occurred at the location of cracking. It is concluded that the LEFM solution correlated with the experiment mainly because of the axisymmetric geometry, which allows a large hydrostatic stress to exist at the internal weld crevice end. Thus the stress at the crevice can approach the singular solution required for LEFM correlations without contributing to yielding.

  18. Decision support models for solid waste management: Review and game-theoretic approaches

    International Nuclear Information System (INIS)

    Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

    2013-01-01

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed

  19. Decision support models for solid waste management: Review and game-theoretic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Karmperis, Athanasios C., E-mail: athkarmp@mail.ntua.gr [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece); Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence (Greece); Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece)

    2013-05-15

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.

  20. A game-theoretic approach to real-time system testing

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Li, Shuhao

    2008-01-01

    This paper presents a game-theoretic approach to the testing of uncontrollable real-time systems. By modelling the systems with Timed I/O Game Automata and specifying the test purposes as Timed CTL formulas, we employ a recently developed timed game solver, UPPAAL-TIGA, to synthesize winning strategies, and then use these strategies to conduct black-box conformance testing of the systems. The testing process is proved to be sound and complete with respect to the given test purposes. Case study and preliminary experimental results indicate that this is a viable approach to uncontrollable timed system testing.

  1. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.
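
    The core idea, approximating the reconstructed images by a Gaussian model, resampling directly in the image domain, and pushing each synthetic replicate through the nonlinear parametric mapping, can be illustrated with a toy one-dimensional sketch. Independent pixels and a log mapping are simplifying assumptions here, not the paper's mixture-analysis model:

```python
import math
import random
import statistics

random.seed(0)

# Toy 1-D "reconstructed image": per-pixel mean and standard deviation of the
# Gaussian approximation (illustrative values).
mu = [1.0, 2.0, 4.0, 8.0]
sigma = [0.1, 0.2, 0.3, 0.4]

def parametric_map(x):
    """Stand-in for the nonlinear functional-imaging step."""
    return math.log(x)

R = 20_000  # number of synthetic realizations

# Synthetic resampling in the image domain: no sinogram simulation and no
# image reconstruction is needed per replicate.
samples = [[parametric_map(random.gauss(m, s)) for m, s in zip(mu, sigma)]
           for _ in range(R)]

# Per-pixel variability estimate of the parametric image.
est_sd = [statistics.stdev(pixel) for pixel in zip(*samples)]

# Delta-method sanity check: sd[log X] ≈ sigma/mu when sigma/mu is small.
approx_sd = [s / m for m, s in zip(mu, sigma)]
print(est_sd, approx_sd)
```

    The resampled standard deviations agree closely with the first-order analytic approximation, which is the kind of consistency check the post-processing step in the paper targets.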

  2. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  3. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Full Text Available Abstract Background Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  4. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
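
    The conventional parameter-estimation step that the method ends with, fitting rate constants of a reaction network to time-resolved data, can be sketched for a one-reaction toy model. The variational augmentation itself is not reproduced here; all numbers, and the crude grid search, are illustrative only:

```python
import math
import random

random.seed(1)

# Toy reaction network: single decay reaction, dx/dt = -k * x, x(0) = X0.
K_TRUE, X0 = 0.7, 1.0
times = [0.1 * i for i in range(1, 21)]
# Synthetic time-resolved data: exact solution plus small measurement noise.
data = [X0 * math.exp(-K_TRUE * t) + random.gauss(0, 0.005) for t in times]

def simulate(k, t_end, dt=1e-3):
    """Explicit-Euler integration of dx/dt = -k*x from x(0) = X0."""
    x, t = X0, 0.0
    while t < t_end - 1e-12:
        x += dt * (-k * x)
        t += dt
    return x

def sse(k):
    """Sum of squared residuals between model and data."""
    return sum((simulate(k, t) - y) ** 2 for t, y in zip(times, data))

# Crude least-squares grid search over the rate constant (a real
# implementation would use a gradient-based optimizer).
k_hat = min((0.5 + 0.01 * i for i in range(41)), key=sse)
print(f"estimated k = {k_hat:.3f} (true {K_TRUE})")
```

    The recovered rate constant lands close to the true value; in the paper's setting the same least-squares machinery is applied to the augmented ODE system so that unconstrained input courses are estimated alongside the parameters.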

  5. ON THE APPLICATION OF PARTIAL BARRIERS FOR SPINNING MACHINE NOISE CONTROL: A THEORETICAL AND EXPERIMENTAL APPROACH

    Directory of Open Access Journals (Sweden)

    M. R. Monazzam, A. Nezafat

    2007-04-01

    Full Text Available Noise is one of the most serious challenges in modern communities. In some industries, owing to the nature of the process, this challenge is more threatening. This paper describes a means of noise control for spinning machines based on experimental measurements, together with the advantages and disadvantages of the control procedure. Different factors which may affect the performance of the barrier in this situation are also mentioned. To provide a good estimation of the control measure, a theoretical formula is also described and compared with the field data. Good agreement between the results of field measurements and the presented theoretical model was achieved. No obvious noise reduction was seen with partial indoor barriers in low-absorbent enclosed spaces, since reflection from multiple hard surfaces is the dominant factor in the tested environment. Finally, the environmental conditions and the standards necessary for attaining the ideal results are explained.

  6. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  7. Comparative study of approaches to estimate pipe break frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K.; Pulkkinen, U.; Talja, H.; Saarenheimo, A.; Karjalainen-Roikonen, P. [VTT Industrial Systems (Finland)

    2002-12-01

    The report describes a comparative study of two approaches to estimating pipe leak and rupture frequencies. One method is based on a probabilistic fracture mechanics (PFM) model, while the other is based on statistical estimation of rupture frequencies from a large database. In order to compare the approaches and their results, the rupture frequencies of selected welds have been estimated using both methods. This paper highlights the differences in methods and input data, the need for and use of plant-specific information, and the need for expert judgement. The study focuses on one specific degradation mechanism, namely intergranular stress corrosion cracking (IGSCC). This is the major degradation mechanism in old stainless steel piping in the BWR environment, and its growth is influenced by material properties, stresses and water chemistry. (au)

  8. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    Full Text Available To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities for improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models developed from subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and a direct estimate, was compared to direct estimates and “true” livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes and the multimodel approach presented here, indicating a more efficient use of financial resources.
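
    The ensemble idea, combining a noisy direct estimate with a spatial-model prediction and comparing median relative errors, can be sketched with simulated data. The noise levels are hypothetical, and a simple per-area average stands in for the paper's regression-tree-forest modelling:

```python
import random
import statistics

random.seed(7)

N_AREAS = 500
# "True" livestock counts per administrative area (illustrative).
true = [random.uniform(100, 1000) for _ in range(N_AREAS)]

# Direct survey estimates: unbiased but noisy because of small sample sizes.
direct = [t * (1 + random.gauss(0, 0.30)) for t in true]
# Spatial-model predictions: assumed here to carry less noise.
model = [t * (1 + random.gauss(0, 0.15)) for t in true]
# Ensemble: average the two independent estimates per area.
ensemble = [(d + m) / 2 for d, m in zip(direct, model)]

def median_rel_err(est):
    return statistics.median(abs(e - t) / t for e, t in zip(est, true))

err_direct = median_rel_err(direct)
err_ens = median_rel_err(ensemble)
print(f"median relative error: direct {err_direct:.3f}, ensemble {err_ens:.3f}")
```

    Because the two error sources are independent, averaging them shrinks the median relative error below that of the direct estimate alone, mirroring the accuracy gain reported in the abstract.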

  9. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    Science.gov (United States)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has different estimation/simulation variance reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. In the medium of multiple simulations, the dispersion variances of blocks can be thought to regard technical uncertainties. However, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign to generate more homogenous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under budget constraint. Uncertainty measure of the optimization process in this paper is interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance i.e., minimizing interpolation variance of the block generating maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates to use two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configuration with minimization of interpolation variance and drill hole simulations with maximization of interpolation variance. Two-space interacts to find a minmax solution

  10. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    Science.gov (United States)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. Here we use an alternative approach, tailored to remotely sensed rainfall estimates, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured `ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short records of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
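
    The MEVD idea sketched above, compounding each year's pdf of ordinary events over that year's number of wet days, can be written down compactly. The sketch below assumes Weibull-distributed ordinary daily rainfall with synthetic, purely illustrative per-year parameters; it is not fitted to TRMM data.

```python
import math
import random

random.seed(1)

# Synthetic "record": per-year Weibull parameters of ordinary daily
# rainfall and the number of wet days n_j (all values hypothetical).
years = []
for _ in range(10):
    scale = random.uniform(8.0, 14.0)   # Weibull scale C_j (mm)
    shape = random.uniform(0.7, 0.9)    # Weibull shape w_j
    n_wet = random.randint(80, 140)     # number of wet days in the year
    years.append((scale, shape, n_wet))

def weibull_cdf(x, scale, shape):
    return 1.0 - math.exp(-((x / scale) ** shape))

def mevd_cdf(x):
    # MEVD: average, over years, of the ordinary-rainfall CDF raised
    # to the number of ordinary events in that year.
    return sum(weibull_cdf(x, c, w) ** n for c, w, n in years) / len(years)

def quantile(p, lo=0.0, hi=2000.0, tol=1e-6):
    # Invert the MEVD CDF by bisection to get a return-level estimate.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mevd_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x100 = quantile(1.0 - 1.0 / 100.0)  # ~100-year daily rainfall (mm)
```

    The per-year averaging is what lets the short satellite record contribute every ordinary event, rather than only one annual maximum per year.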

  11. Reflective practice and vocational training: theoretical approaches in the field of Health and Nursing

    Directory of Open Access Journals (Sweden)

    Luciana Netto

    2018-02-01

    Abstract Objective: A theoretical reflection that uses Reflexivity as its theoretical reference, with the objective of approaching Donald Schön's reflective thinking and interrelating it with the innovative curriculum. Method: The writings of Schön and of other authors who addressed these themes were used. Results: The innovative curriculum, as an expression of dissatisfaction with the fragmentation paradigm, may favor reflective practice, since reflexivity must be mobilized for actions and contexts that are unpredictable in the field of health promotion. Conclusions: The innovative curriculum favors, and is favored by, reflective practice and the development of competencies for the promotion of health. Implications for practice: The findings apply to the practice of nurses in dealing with the conditioning factors and determinants of the health-disease process.

  12. Forming Limits in Sheet Metal Forming for Non-Proportional Loading Conditions - Experimental and Theoretical Approach

    International Nuclear Information System (INIS)

    Ofenheimer, Aldo; Buchmayr, Bruno; Kolleck, Ralf; Merklein, Marion

    2005-01-01

    The influence of strain paths (loading history) on material formability is well known in sheet forming processes. Sophisticated experimental methods are used to determine the entire shape of the strain paths of forming limits for the aluminum alloy AA6016-T4. Forming limits are presented for sheet metal in the as-received condition as well as after different pre-deformations. A theoretical approach based on Arrieux's intrinsic Forming Limit Stress Curve (FLSC) concept is employed to numerically predict the influence of loading history on forming severity. The detailed experimental strain paths, rather than simplified linear or bilinear loading histories, are used in the theoretical study to demonstrate the predictive quality of the forming limits in stress space.

  13. Alternative sources of power generation, incentives and regulatory mandates: a theoretical approach to the Colombian case

    International Nuclear Information System (INIS)

    Zapata, Carlos M; Zuluaga, Monica M; Dyner, Isaac

    2005-01-01

    Alternative energy generation sources are becoming relevant in several countries worldwide because of technological improvements and environmental concerns. In this paper, the most common problems of renewable energy sources are reviewed, different incentives and regulatory mandates from several countries are presented, and a first theoretical approach to an incentive system for renewable energies in Colombia is discussed. The paper is based fundamentally on theoretical aspects of, and international experience with, incentives for renewable energies that accelerate their diffusion; these features are analyzed with a view to a special incentive system for renewable energies in Colombia. The conclusion is that Colombia is likely to apply indirect incentives, such as low interest rates and tax exemptions, although such incentives provide only limited support to electricity production by generating organizations.

  14. A short course in quantum information theory: an approach from theoretical physics

    CERN Document Server

    Diosi, Lajos

    2011-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the author has succeeded in adding many new topics while preserving the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapters to deal with weak and time-continuous measurements, period-finding quantum algorithms, and quantum error correction. From the reviews of the first edition...

  15. Exploring super-gaussianity towards robust information-theoretical time delay estimation

    DEFF Research Database (Denmark)

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos

    2013-01-01

    the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced...

  16. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    International Nuclear Information System (INIS)

    Shafii, Mohammad Ali; Meidianti, Rahma; Wildian; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-01-01

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation using the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with a spatial mesh based on a non-flat flux approach: the neutron flux at any point in the nuclear fuel cell is considered to differ from point to point, following a quadratic flux distribution. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational calculation

  17. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Mohammad Ali, E-mail: mashafii@fmipa.unand.ac.id; Meidianti, Rahma, E-mail: mashafii@fmipa.unand.ac.id; Wildian, E-mail: mashafii@fmipa.unand.ac.id; Fitriyani, Dian, E-mail: mashafii@fmipa.unand.ac.id [Department of Physics, Andalas University Padang West Sumatera Indonesia (Indonesia); Tongkukut, Seni H. J. [Department of Physics, Sam Ratulangi University Manado North Sulawesi Indonesia (Indonesia); Arkundato, Artoto [Department of Physics, Jember University Jember East Java Indonesia (Indonesia)

    2014-09-30

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation using the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with a spatial mesh based on a non-flat flux approach: the neutron flux at any point in the nuclear fuel cell is considered to differ from point to point, following a quadratic flux distribution. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational calculation.

  18. What is the optimal value of the g-ratio for myelinated fibers in the rat CNS? A theoretical approach.

    Directory of Open Access Journals (Sweden)

    Taylor Chomiak

    2009-11-01

    The biological process underlying axonal myelination is complex and often prone to injury and disease. The ratio of the inner axonal diameter to the total outer diameter, or g-ratio, is widely utilized as a functional and structural index of optimal axonal myelination. Based on the speed of fiber conduction, Rushton was the first to derive a theoretical estimate of the optimal g-ratio of 0.6 [1]. This theoretical limit nicely explains the experimental data for myelinated axons obtained for some peripheral fibers but appears significantly lower than that found for CNS fibers. This is, however, hardly surprising given that in the CNS, axonal myelination must achieve multiple goals including reducing conduction delays, promoting conduction fidelity, lowering energy costs, and saving space. In this study we explore the notion that a balanced set-point can be achieved at a functional level as the micro-structure of individual axons becomes optimized, particularly for the central system where axons tend to be smaller and their myelin sheath thinner. We used an intuitive yet novel theoretical approach based on the fundamental biophysical properties describing axonal structure and function to show that an optimal g-ratio can be defined for the central nervous system (approximately 0.77). Furthermore, by reducing the influence of volume constraints on structural design by about 40%, this approach can also predict the g-ratio observed in some peripheral fibers (approximately 0.6). These results support the notion of optimization theory in nervous system design and construction and may also help explain why the central and peripheral systems have evolved different g-ratios as a result of volume constraints.

  19. What is the optimal value of the g-ratio for myelinated fibers in the rat CNS? A theoretical approach.

    Science.gov (United States)

    Chomiak, Taylor; Hu, Bin

    2009-11-13

    The biological process underlying axonal myelination is complex and often prone to injury and disease. The ratio of the inner axonal diameter to the total outer diameter or g-ratio is widely utilized as a functional and structural index of optimal axonal myelination. Based on the speed of fiber conduction, Rushton was the first to derive a theoretical estimate of the optimal g-ratio of 0.6 [1]. This theoretical limit nicely explains the experimental data for myelinated axons obtained for some peripheral fibers but appears significantly lower than that found for CNS fibers. This is, however, hardly surprising given that in the CNS, axonal myelination must achieve multiple goals including reducing conduction delays, promoting conduction fidelity, lowering energy costs, and saving space. In this study we explore the notion that a balanced set-point can be achieved at a functional level as the micro-structure of individual axons becomes optimized, particularly for the central system where axons tend to be smaller and their myelin sheath thinner. We used an intuitive yet novel theoretical approach based on the fundamental biophysical properties describing axonal structure and function to show that an optimal g-ratio can be defined for the central nervous system (approximately 0.77). Furthermore, by reducing the influence of volume constraints on structural design by about 40%, this approach can also predict the g-ratio observed in some peripheral fibers (approximately 0.6). These results support the notion of optimization theory in nervous system design and construction and may also help explain why the central and peripheral systems have evolved different g-ratios as a result of volume constraints.
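
    Rushton's peripheral-fiber value of about 0.6 quoted in both records follows from maximizing a conduction-speed proxy over the g-ratio. A minimal numerical check, using the textbook scaling v ∝ g·sqrt(ln(1/g)) for a fixed outer diameter (an assumption of this sketch, not the authors' multi-objective model), recovers g = e^(-1/2) ≈ 0.61.

```python
import math

def relative_speed(g):
    """Rushton-style proxy: for a fixed outer fiber diameter, conduction
    velocity scales as g * sqrt(ln(1/g)) for 0 < g < 1."""
    return g * math.sqrt(math.log(1.0 / g))

# Brute-force scan of the g-ratio on a fine grid.
best_g = max((i / 100000.0 for i in range(1, 100000)), key=relative_speed)

# Analytic optimum: d/dg [g^2 * ln(1/g)] = 0  =>  ln g = -1/2.
analytic = math.exp(-0.5)
```

    The CNS value of about 0.77 arises only once the extra objectives listed in the abstract (fidelity, energy, space) enter the trade-off, which this one-line proxy deliberately omits.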

  20. Theoretical implications for the estimation of dinitrogen fixation by large perennial plant species using isotope dilution

    Science.gov (United States)

    Dwight D. Baker; Maurice Fried; John A. Parrotta

    1995-01-01

    Estimation of symbiotic N2 fixation associated with large perennial plant species, especially trees, poses special problems because the process must be followed over a potentially long period of time to integrate the total amount of fixation. Estimates using isotope dilution methodology have begun to be made for trees in field studies. Because...

  1. A mechanical model of wing and theoretical estimate of taper factor ...

    Indian Academy of Sciences (India)

    Likewise, by applying linear regression and the curve estimation method to the data, as well as estimating the taper factors and the angle between the humerus and the body, we calculated the relationship between wingspan, wing area, and the speed necessary to meet the aerodynamic requirements of sustained flight. In addition ...

  2. Approaches to estimating the universe of natural history collections data

    Directory of Open Access Journals (Sweden)

    Arturo H. Ariño

    2010-10-01

    This contribution explores the problem of recognizing and measuring the universe of specimen-level data existing in Natural History Collections around the world, in the absence of a complete, world-wide census or register. Estimates of size seem necessary to plan resource allocation for digitization or data capture, and may help represent how many vouchered primary biodiversity data (in terms of collections, specimens or curatorial units) might remain to be mobilized. Three general approaches are proposed for further development, and initial estimates are given. Probabilistic models involve crossing data from a set of biodiversity datasets, finding commonalities, and estimating the likelihood of totally obscure data from the fraction of known data missing from specific datasets in the set. Distribution models aim to find the underlying distribution of collections' compositions, uncovering the hidden sector of the distributions. Finally, case studies compare digitized data from collections known to the world with the amount of data known to exist in the collection but not generally available or not digitized. Preliminary estimates range from 1.2 to 2.1 gigaunits, of which a mere 3% at most is currently web-accessible through GBIF's mobilization efforts. However, further data and analyses, along with other approaches relying more heavily on surveys, might change the picture and possibly help narrow the estimate. In particular, unknown collections that have not emerged through the literature are the major source of uncertainty.

  3. A Game Theoretical Approach to Hacktivism: Is Attack Likelihood a Product of Risks and Payoffs?

    Science.gov (United States)

    Bodford, Jessica E; Kwan, Virginia S Y

    2018-02-01

    The current study examines hacktivism (i.e., hacking to convey a moral, ethical, or social justice message) through a general game theoretic framework, that is, as a product of costs and benefits. Given the inherent risk of carrying out a hacktivist attack (e.g., legal action, imprisonment), it would be rational for the user to weigh these risks against the perceived benefits of carrying out the attack. As such, we examined computer science students' estimations of risks, payoffs, and attack likelihood through a game theoretic design. Furthermore, this study aims to construct a descriptive profile of potential hacktivists, exploring two predicted covariates of attack decision making, namely, peer prevalence of hacking and sex differences. Contrary to expectations, results suggest that participants' estimations of attack likelihood stemmed solely from expected payoffs, rather than subjective risks. Peer prevalence significantly predicted increased payoffs and attack likelihood, suggesting an underlying descriptive norm in social networks. Notably, we observed no sex differences in the decision to attack, nor in the factors predicting attack likelihood. Implications for policymakers and for the understanding and prevention of hacktivism are discussed, as are the possible ramifications of widely communicated payoffs over potential risks in hacking communities.

  4. When the Mannequin Dies: Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

    Science.gov (United States)

    Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W

    2016-06-01

    Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners, as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and the degree of presession anxiety. Using this approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.

  5. A review of the nurtured heart approach to parenting: evaluation of its theoretical and empirical foundations.

    Science.gov (United States)

    Hektner, Joel M; Brennan, Alison L; Brotherson, Sean E

    2013-09-01

    The Nurtured Heart Approach to parenting (NHA; Glasser & Easley, 2008) is summarized and evaluated in terms of its alignment with current theoretical perspectives and empirical evidence in family studies and developmental science. Originally conceived and promoted as a behavior management approach for parents of difficult children (i.e., with behavior disorders), NHA is increasingly offered as a valuable strategy for parents of any children, despite a lack of published empirical support. Parents using NHA are trained to minimize attention to undesired behaviors, provide positive attention and praise for compliance with rules, help children be successful by scaffolding and shaping desired behavior, and establish a set of clear rules and consequences. Many elements of the approach have strong support in the theoretical and empirical literature; however, some of the assumptions are more questionable, such as that negative child behavior can always be attributed to unintentional positive reinforcement by parents responding with negative attention. On balance, NHA appears to promote effective and validated parenting practices, but its effectiveness now needs to be tested empirically. © FPI, Inc.

  6. Evolution of the Theoretical Approaches to Disclosing the Economic Substance of Accumulation of Capital

    Directory of Open Access Journals (Sweden)

    Yemets Vadym V.

    2016-05-01

    The article proposes a classification of the periods in the evolution of theoretical approaches to disclosing the economic substance of accumulation of capital, taking into account the civilizational approach to the development of society. The author proposes five stages in the evolution of theoretical approaches, which are closely related to the development of the economy and stipulate the dominance of a certain form of accumulation of capital. The first stage (from the period B.C. to the 5th century) is referred to as individual-social significance of accumulation of capital; the second stage (from the 6th to the 16th century), accumulation of monetary capitals; the third stage (from the mid-17th to the end of the 18th century), industrial-production accumulation of capital; the fourth stage (from the mid-19th century to the 1970s), investment-oriented accumulation of capital; and the fifth stage (from the 1970s to the current period), globally-intensive accumulation of capital.

  7. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates.
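
    A toy version of the ensemble idea, sampling model parameters with probability ∝ exp(-C/T) where T is set by the minimum cost, and reading error bars off the spread of ensemble predictions, might look as follows. The model (a one-parameter linear fit) and the convention T = 2·C_min/N are illustrative assumptions, not details taken from the paper.

```python
import math
import random

random.seed(2)

# Toy data generated by y ≈ 2x; the "model" is y = a*x.
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]

def cost(a):
    return sum((y - a * x) ** 2 for x, y in zip(xs, ys))

# Temperature set by the minimum cost (assumed convention: T = 2*C_min/N).
a_min = min((i / 1000.0 for i in range(1000, 3001)), key=cost)
T = 2.0 * cost(a_min) / len(xs)

# Metropolis sampling of the Boltzmann-like ensemble exp(-C/T).
samples, a = [], a_min
for step in range(20000):
    a_new = a + random.gauss(0.0, 0.05)
    if random.random() < math.exp(min(0.0, -(cost(a_new) - cost(a)) / T)):
        a = a_new
    if step > 2000:                 # discard burn-in
        samples.append(a)

# Error bar on a prediction at x = 3: spread of the ensemble predictions.
preds = [s * 3.0 for s in samples]
mean_pred = sum(preds) / len(preds)
err_bar = (sum((p - mean_pred) ** 2 for p in preds) / len(preds)) ** 0.5
```

    The appeal of the approach is that the same sampled ensemble yields error bars on any derived quantity, exactly as the paper does for elastic constants and dislocation properties.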

  8. Theoretical analysis of the distribution of isolated particles in totally asymmetric exclusion processes: Application to mRNA translation rate estimation

    Science.gov (United States)

    Dao Duc, Khanh; Saleem, Zain H.; Song, Yun S.

    2018-01-01

    The Totally Asymmetric Exclusion Process (TASEP) is a classical stochastic model for describing the transport of interacting particles, such as ribosomes moving along the messenger ribonucleic acid (mRNA) during translation. Although this model has been widely studied in the past, the extent of collision between particles and the average distance between a particle to its nearest neighbor have not been quantified explicitly. We provide here a theoretical analysis of such quantities via the distribution of isolated particles. In the classical form of the model in which each particle occupies only a single site, we obtain an exact analytic solution using the matrix ansatz. We then employ a refined mean-field approach to extend the analysis to a generalized TASEP with particles of an arbitrary size. Our theoretical study has direct applications in mRNA translation and the interpretation of experimental ribosome profiling data. In particular, our analysis of data from Saccharomyces cerevisiae suggests a potential bias against the detection of nearby ribosomes with a gap distance of less than approximately three codons, which leads to some ambiguity in estimating the initiation rate and protein production flux for a substantial fraction of genes. Despite such ambiguity, however, we demonstrate theoretically that the interference rate associated with collisions can be robustly estimated and show that approximately 1% of the translating ribosomes get obstructed.
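
    The quantities discussed above (collisions and isolated particles) are easy to probe in a minimal TASEP simulation. The sketch below uses a ring geometry with random sequential updates to keep the code short, an assumption that differs from the open-boundary translation setting of the paper; for the uniform stationary state of the ring, the collision probability should be close to (N-1)/(L-1).

```python
import random

random.seed(3)

L, N = 200, 40                      # sites and particles on a ring
occ = [False] * L
for i in random.sample(range(L), N):
    occ[i] = True                   # uniform random start = stationary state

blocked = attempts = 0
for _ in range(200000):
    i = random.randrange(L)
    if occ[i]:                      # pick a particle, try to hop right
        attempts += 1
        j = (i + 1) % L
        if occ[j]:
            blocked += 1            # collision: target site occupied
        else:
            occ[i], occ[j] = False, True

collision_rate = blocked / attempts
# Fraction of particles with no particle immediately ahead of them.
isolated = sum(occ[i] and not occ[(i + 1) % L] for i in range(L)) / N
```

    At density N/L = 0.2 the measured collision rate should sit near 39/199 ≈ 0.196, illustrating why interference is non-negligible even at moderate ribosome densities.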

  9. Theoretical approach for plasma series resonance effect in geometrically symmetric dual radio frequency plasma

    International Nuclear Information System (INIS)

    Bora, B.; Bhuyan, H.; Favre, M.; Wyndham, E.; Chuaqui, H.

    2012-01-01

    The plasma series resonance (PSR) effect is well known in geometrically asymmetric capacitively coupled radio frequency plasmas. However, the plasma series resonance effect in geometrically symmetric plasmas has not been properly investigated. In this work, a theoretical approach is taken to investigate the plasma series resonance effect and its influence on Ohmic and stochastic heating in a geometrically symmetric discharge. The electrical asymmetry effect, by means of a dual-frequency voltage waveform, is applied to excite the plasma series resonance. The results show considerable variation in heating with the phase difference between the voltage waveforms, which may be applicable to controlling the plasma parameters in such plasmas.

  10. Group theoretic approach for solving the problem of diffusion of a drug through a thin membrane

    Science.gov (United States)

    Abd-El-Malek, Mina B.; Kassem, Magda M.; Meky, Mohammed L. M.

    2002-03-01

    The transformation group theoretic approach is applied to study the diffusion process of a drug through a skin-like membrane which tends to partially absorb the drug. Two cases are considered for the diffusion coefficient. The application of a one-parameter group reduces the number of independent variables by one, and consequently the partial differential equation governing the diffusion process, with its boundary and initial conditions, is transformed into an ordinary differential equation with the corresponding conditions. The resulting differential equation is solved numerically using the shooting method, and the results are illustrated graphically and in tables.
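
    The final step above, solving the reduced ordinary differential equation by the shooting method, can be sketched in a self-contained way. The boundary value problem below (y'' = y with y(0) = 0, y(1) = 1) is a stand-in for the actual similarity equation, which depends on the chosen diffusion coefficient; the shooting step is a bisection on the unknown initial slope.

```python
import math

def rk4(f, y0, t0, t1, n=200):
    """Classical fourth-order Runge-Kutta integrator for y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Model BVP standing in for the reduced diffusion ODE:
#   y'' = y on [0, 1],  y(0) = 0,  y(1) = 1.
def f(t, y):
    return [y[1], y[0]]

def endpoint(slope):
    # Integrate the IVP with a guessed initial slope; return y(1) - 1.
    return rk4(f, [0.0, slope], 0.0, 1.0)[0] - 1.0

# "Shooting": bisection on the initial slope until the far boundary hits.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if endpoint(mid) < 0.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)

exact = 1.0 / math.sinh(1.0)   # y = sinh(x)/sinh(1)  =>  y'(0) = 1/sinh(1)
```

    For the membrane problem the boundary conditions encode the partial absorption at the skin interface, but the shoot-and-correct structure is identical.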

  11. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to the noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here, and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  12. Theoretical and Experimental Investigation of Force Estimation Errors Using Active Magnetic Bearings with Embedded Hall Sensors

    DEFF Research Database (Denmark)

    Voigt, Andreas Jauernik; Santos, Ilmar

    2012-01-01

    Force estimation in active magnetic bearings (AMBs) with Hall sensors embedded in the poles, instead of mounted directly on the pole surfaces, is investigated both numerically and experimentally. A linearized version of the conventionally applied quadratic correspondence between measured Hall voltage and applied AMB force is considered. For displacements of up to ∼ 20% of the nominal air gap, the linearized force equation is found to reduce the force estimation error as compared to the quadratic force equation, which is supported by experimental results. Additionally, the FE model is employed in a comparative study of the force estimation error behavior.

  13. A theoretical approach to medication adherence for children and youth with psychiatric disorders.

    Science.gov (United States)

    Charach, Alice; Volpe, Tiziana; Boydell, Katherine M; Gearing, Robin E

    2008-01-01

    This article provides a theoretical review of treatment adherence for children and youth with psychiatric disorders where pharmacological agents are first-line interventions. Four empirically based models of health behavior are reviewed and applied to the sparse literature about medication adherence for children with attention-deficit/hyperactivity disorder and young people with first-episode psychosis. Three qualitative studies of medication use are summarized, and details from the first-person narratives are used to illustrate the theoretical models. These studies indicate, when taken together, that the clinical approach to addressing poor medication adherence in children and youth with psychiatric disorders should be guided by more than one theoretical model. Mental health experts should clarify beliefs, address misconceptions, and support exploration of alternative treatment options unless contraindicated. Recognizing the larger context of the family, allowing time for parents and children to change their attitudes, and offering opportunities for easy access to medication in the future are important ways of respecting patient preferences, while steering them toward best-evidence interventions. Future research using qualitative methods of inquiry to investigate parent, child, and youth experiences of mental health interventions should identify effective ways to improve treatment adherence.

  14. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
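
    A stripped-down version of such a comparison, using the correlation coefficient as the skill score, is sketched below. For self-containedness it compares two hand-rolled k-nearest-neighbour regressors on synthetic "feature → yield" data rather than SVM/RF/DNN on MODIS and PRISM inputs, so all data and models here are illustrative assumptions.

```python
import random

random.seed(4)

# Synthetic samples: two remote-sensing-like features and a linear
# yield signal plus noise (values purely illustrative).
data = []
for _ in range(120):
    ndvi, rain = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    yld = 6.0 + 4.0 * ndvi + 2.0 * rain + random.gauss(0.0, 0.3)
    data.append(((ndvi, rain), yld))

train, held = data[:90], data[90:]          # train / held-out split

def knn_predict(x, k):
    # k-nearest-neighbour regression: average the yields of the k
    # training samples closest to x in feature space.
    nearest = sorted(train, key=lambda d: sum((a - b) ** 2
                                              for a, b in zip(d[0], x)))[:k]
    return sum(y for _, y in nearest) / k

def corr(predict):
    # Pearson correlation between held-out predictions and true yields,
    # the comparison metric used in the study.
    ys = [y for _, y in held]
    ps = [predict(x) for x, _ in held]
    my, mp = sum(ys) / len(ys), sum(ps) / len(ps)
    num = sum((y - my) * (p - mp) for y, p in zip(ys, ps))
    den = (sum((y - my) ** 2 for y in ys)
           * sum((p - mp) ** 2 for p in ps)) ** 0.5
    return num / den

r1 = corr(lambda x: knn_predict(x, 1))   # noisier single-neighbour model
r5 = corr(lambda x: knn_predict(x, 5))   # smoother k = 5 variant
```

    Swapping in scikit-learn's SVR, RandomForestRegressor, or a neural network for `knn_predict` turns this skeleton into the actual comparison design.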

  15. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    Science.gov (United States)

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  16. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that yield the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computations. A comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.

  17. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
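
Under the hypothesis stated above, that the price follows a mean-reverting Ornstein-Uhlenbeck process, the exact discretization is an AR(1), so maximum-likelihood parameter estimates reduce to a linear regression. A sketch on a simulated path (the "true" parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, mu, sigma, dt = 2.0, 20.0, 3.0, 1.0 / 52.0  # hypothetical true values
n = 5000

# Simulate an OU path via its exact discretization
a = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * kappa))
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = a * x[t] + mu * (1 - a) + sd * rng.normal()

# ML estimation: the exact discretization is an AR(1), so OLS of x_{t+1}
# on x_t recovers the AR coefficient and intercept, from which the
# mean-reversion speed kappa and long-run level mu follow.
b, c = np.polyfit(x[:-1], x[1:], 1)
kappa_hat = -np.log(b) / dt
mu_hat = c / (1 - b)
print(kappa_hat, mu_hat)
```

The same regression applied to the model-generated price paths mentioned in the abstract would quantify the degree of mean reversion directly.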

  18. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    OpenAIRE

    Adrian Jinich; Dmitrij Rappoport; Ian Dunn; Benjamin Sanchez-Lengeling; Roberto Olivares-Amaya; Elad Noor; Arren Bar Even; Alán Aspuru-Guzik

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfe...

  19. An approach of parameter estimation for non-synchronous systems

    International Nuclear Information System (INIS)

    Xu Daolin; Lu Fangfang

    2005-01-01

    Synchronization-based parameter estimation is simple and effective but applicable only to synchronous systems. To overcome this limitation, we propose a technique by which the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on a synchronization control. The feasibility of this approach is illustrated in several chaotic systems.
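
The minimization idea can be illustrated on a much simpler example than the authors' synchronization-control scheme: here the single parameter of a chaotic logistic map is identified from its time series by minimizing the one-step prediction error (the map and the values are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Generate a time series from a logistic map with an "unknown" parameter r = 3.9
r_true = 3.9
x = np.empty(500)
x[0] = 0.4
for n in range(499):
    x[n + 1] = r_true * x[n] * (1 - x[n])

# Identify r by minimizing the one-step prediction error over the series
cost = lambda r: np.mean((x[1:] - r * x[:-1] * (1 - x[:-1])) ** 2)
res = minimize_scalar(cost, bounds=(3.5, 4.0), method="bounded")
print(res.x)  # ≈ 3.9
```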

  20. Quantum noise in the mirror–field system: A field theoretic approach

    International Nuclear Information System (INIS)

    Hsiang, Jen-Tsung; Wu, Tai-Hung; Lee, Da-Shin; King, Sun-Kun; Wu, Chun-Hsien

    2013-01-01

    We revisit the quantum noise problem in the mirror–field system by a field-theoretic approach. Here a perfectly reflecting mirror is illuminated by a single-mode coherent state of the massless scalar field. The associated radiation pressure is described by a surface integral of the stress-tensor of the field. The read-out field is measured by a monopole detector, from which the effective distance between the detector and the mirror can be obtained. In the slow-motion limit of the mirror, this field-theoretic approach allows us to identify various sources of quantum noise that together lead to uncertainty in the read-out measurement. In addition to the well-known sources, shot noise and radiation pressure fluctuations, a new source of noise is found in field fluctuations modified by the mirror's displacement. Correlation between different sources of noise can be established in the read-out measurement as a consequence of interference between the incident field and the field reflected off the mirror. In the case of negative correlation, we find that the uncertainty can be lowered below the value predicted by the standard quantum limit. Since the particle-number approach is often used in quantum optics, we compare the results obtained by both approaches and examine their validity. We also derive a Langevin equation that describes the stochastic dynamics of the mirror. The underlying fluctuation–dissipation relation is briefly mentioned. Finally, we discuss the backreaction induced by the radiation pressure. It will alter the mean displacement of the mirror, but we argue that this backreaction can be ignored for a slowly moving mirror. - Highlights: ► The quantum noise problem in the mirror–field system is revisited by a field-theoretic approach. ► Besides the shot noise and radiation pressure noise, we show that there are new sources of noise and correlations between them. ► The noise correlations can be used to suppress the overall quantum noise on the mirror.

  1. Quantum noise in the mirror-field system: A field theoretic approach

    Energy Technology Data Exchange (ETDEWEB)

    Hsiang, Jen-Tsung, E-mail: cosmology@gmail.com [Department of Physics, National Dong-Hwa University, Hua-lien, Taiwan, ROC (China); Wu, Tai-Hung [Department of Physics, National Dong-Hwa University, Hua-lien, Taiwan, ROC (China); Lee, Da-Shin, E-mail: dslee@mail.ndhu.edu.tw [Department of Physics, National Dong-Hwa University, Hua-lien, Taiwan, ROC (China); King, Sun-Kun [Institutes of Astronomy and Astrophysics, Academia Sinica, Taipei, Taiwan, ROC (China); Wu, Chun-Hsien [Department of Physics, Soochow University, Taipei, Taiwan, ROC (China)

    2013-02-15

    We revisit the quantum noise problem in the mirror-field system by a field-theoretic approach. Here a perfectly reflecting mirror is illuminated by a single-mode coherent state of the massless scalar field. The associated radiation pressure is described by a surface integral of the stress-tensor of the field. The read-out field is measured by a monopole detector, from which the effective distance between the detector and the mirror can be obtained. In the slow-motion limit of the mirror, this field-theoretic approach allows us to identify various sources of quantum noise that together lead to uncertainty in the read-out measurement. In addition to the well-known sources, shot noise and radiation pressure fluctuations, a new source of noise is found in field fluctuations modified by the mirror's displacement. Correlation between different sources of noise can be established in the read-out measurement as a consequence of interference between the incident field and the field reflected off the mirror. In the case of negative correlation, we find that the uncertainty can be lowered below the value predicted by the standard quantum limit. Since the particle-number approach is often used in quantum optics, we compare the results obtained by both approaches and examine their validity. We also derive a Langevin equation that describes the stochastic dynamics of the mirror. The underlying fluctuation-dissipation relation is briefly mentioned. Finally, we discuss the backreaction induced by the radiation pressure. It will alter the mean displacement of the mirror, but we argue that this backreaction can be ignored for a slowly moving mirror. - Highlights: ► The quantum noise problem in the mirror-field system is revisited by a field-theoretic approach. ► Besides the shot noise and radiation pressure noise, we show that there are new sources of noise and correlations between them. ► The noise

  2. A Game Theoretic Approach to Nuclear Security Analysis against Insider Threat

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyonam; Kim, So Young; Yim, Mansung [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Schneider, Erich [Univ. of Texas at Austin, Texas (United States)

    2014-05-15

    As individuals with authorized access to a facility and its systems who use their trusted position for unauthorized purposes, insiders are able to take advantage of their access rights and knowledge of a facility to bypass dedicated security measures. They can also capitalize on their knowledge to exploit any vulnerabilities in safety-related systems, with cyber security of safety-critical information technology systems offering an important example of the 3S interface. While the Probabilistic Risk Assessment (PRA) approach is appropriate for describing fundamentally random events like component failure of a safety system, it does not capture the adversary's intentions, nor does it account for adversarial response and adaptation to defensive investments. To address these issues of intentionality and interaction, this study adopts a game theoretic approach. The interaction between defender and adversary is modeled as a two-person Stackelberg game. The optimal strategy of both players is found from the equilibrium of this game. A defender strategy consists of a set of design modifications and/or post-construction security upgrades. An attacker strategy involves selection of a target as well as a pathway to that target. In this study, application of the game theoretic approach is demonstrated using a simplified test case problem. Novel to our approach is the modeling of an insider threat that affects the non-detection probability of an adversary. The game-theoretic approach has the advantage of modeling an intelligent adversary who has an intention and complete knowledge of the facility. In this study, we analyzed the expected adversarial path and security upgrades with a limited budget, with the insider threat modeled as an increase in the non-detection probability. Our test case problem categorized three groups of adversary paths assisted by insiders and derived the largest insider threat in terms of the budget for security upgrades. Certainly more work needs to be done to

  3. A Game Theoretic Approach to Nuclear Security Analysis against Insider Threat

    International Nuclear Information System (INIS)

    Kim, Kyonam; Kim, So Young; Yim, Mansung; Schneider, Erich

    2014-01-01

    As individuals with authorized access to a facility and its systems who use their trusted position for unauthorized purposes, insiders are able to take advantage of their access rights and knowledge of a facility to bypass dedicated security measures. They can also capitalize on their knowledge to exploit any vulnerabilities in safety-related systems, with cyber security of safety-critical information technology systems offering an important example of the 3S interface. While the Probabilistic Risk Assessment (PRA) approach is appropriate for describing fundamentally random events like component failure of a safety system, it does not capture the adversary's intentions, nor does it account for adversarial response and adaptation to defensive investments. To address these issues of intentionality and interaction, this study adopts a game theoretic approach. The interaction between defender and adversary is modeled as a two-person Stackelberg game. The optimal strategy of both players is found from the equilibrium of this game. A defender strategy consists of a set of design modifications and/or post-construction security upgrades. An attacker strategy involves selection of a target as well as a pathway to that target. In this study, application of the game theoretic approach is demonstrated using a simplified test case problem. Novel to our approach is the modeling of an insider threat that affects the non-detection probability of an adversary. The game-theoretic approach has the advantage of modeling an intelligent adversary who has an intention and complete knowledge of the facility. In this study, we analyzed the expected adversarial path and security upgrades with a limited budget, with the insider threat modeled as an increase in the non-detection probability. Our test case problem categorized three groups of adversary paths assisted by insiders and derived the largest insider threat in terms of the budget for security upgrades. Certainly more work needs to be done to
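
A minimal sketch of the two-person Stackelberg setup described above: the defender commits to a strategy, the attacker observes the commitment and best-responds, and the defender picks the commitment that maximizes its own payoff under that response. The payoff matrices are invented toy numbers, not values from the study:

```python
import numpy as np

# Toy payoff matrices: rows = defender strategies, cols = attacker targets.
# D[i, j] is the defender's payoff, A[i, j] the attacker's (hypothetical numbers).
D = np.array([[ 3, -2,  1],
              [ 1,  2, -1],
              [-3,  0,  2]])
A = np.array([[ 0,  4,  1],
              [ 2,  1,  3],
              [ 3,  2,  0]])

# Stackelberg (leader-follower) equilibrium by enumeration over pure strategies:
# the attacker best-responds to the observed defense, and the defender chooses
# the commitment with the best payoff under that response.
# (Ties in the attacker's best response would need a tie-breaking rule;
#  here the row maxima are unique.)
best = max(range(D.shape[0]), key=lambda i: D[i, A[i].argmax()])
print("defender plays", best, "attacker attacks", int(A[best].argmax()))
```

A realistic facility model would replace the payoff entries with path-dependent consequence and non-detection probabilities, and would also allow mixed (randomized) defender commitments.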

  4. Modified economic order quantity (EOQ) model for items with imperfect quality: Game-theoretical approaches

    Directory of Open Access Journals (Sweden)

    Milad Elyasi

    2014-04-01

    In the past decade, studying economic order quantity (EOQ) models with imperfect quality has attracted many researchers. Only a few papers have been published discussing EOQ models with imperfect items in a supply chain. In this paper, a two-echelon decentralized supply chain consisting of a manufacturer and a supplier, both of which face a just-in-time (JIT) inventory problem, is considered. We seek the optimal number of shipments and the quantity of each shipment so as to minimize both the manufacturer's and the supplier's cost functions. To the authors' best knowledge, this is the first paper that deals with imperfect items in a decentralized supply chain. Accordingly, three game-theoretical solution approaches, consisting of two non-cooperative games and a cooperative game, are proposed. Comparing the results of the three scenarios with those of the centralized model, conclusions are drawn about the best approach.
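
For context, the classic EOQ baseline that these imperfect-quality models extend balances ordering cost against holding cost; a minimal sketch with illustrative numbers:

```python
from math import sqrt

def eoq(demand_rate, order_cost, holding_cost):
    """Classic EOQ: the order quantity minimizing ordering + holding cost
    per unit time, sqrt(2*D*K/h)."""
    return sqrt(2 * demand_rate * order_cost / holding_cost)

# Hypothetical values: 1200 units/year demand, $50 per order, $2/unit/year holding
Q = eoq(demand_rate=1200.0, order_cost=50.0, holding_cost=2.0)
print(Q)  # ≈ 244.95
```

Imperfect-quality variants modify the effective demand and holding terms by the defect fraction, and the game-theoretic versions in the paper additionally split the decision between the two echelons.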

  5. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated with the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non-symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  6. Theoretical and empirical approaches to using films as a means to increase communication efficiency.

    Directory of Open Access Journals (Sweden)

    Kiselnikova, N.V.

    2016-07-01

    The theoretical framework of this analytic study is based on studies in the field of film perception. Films are considered as a communicative system that is encrypted in an ordered series of shots, and decoding proceeds during perception. The shots are the elements of a cinematic message that must be “read” by the viewer. The objective of this work is to analyze the existing theoretical approaches to using films in psychotherapy and education. An original approach to film therapy, based on teaching clients to use new communicative sets and psychotherapeutic patterns through watching films, is presented. The article specifies the main points emphasized in theories of film therapy and education. It considers the specifics of film therapy in the process of increasing the effectiveness of communication and discusses the advantages and limitations of the proposed method. The contemporary forms of film therapy and the formats of cinema clubs are criticized. The theoretical assumptions and empirical research that could serve as a basis for a method of developing effective communication by means of films are discussed. Our studies demonstrate that the use of film therapy must include an educational stage to achieve more effective and stable results. This means teaching viewers how to recognize certain psychotherapeutic and communicative patterns in the material of films, practicing the skill of finding as many examples as possible for each pattern, and transferring the acquired schemes of analyzing and recognizing patterns to one’s own life circumstances. The four stages of the film therapeutic process, as well as the effects achieved at each stage, are described in detail. In conclusion, the conditions under which the film therapy method would be most effective are discussed. Various properties of client groups and psychotherapeutic scenarios for using the method of active film therapy are described.

  7. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Bomble, Yannick J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); St. John, Peter C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Crowley, Michael F [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.

  8. Supply chain collaboration: A Game-theoretic approach to profit allocation

    Energy Technology Data Exchange (ETDEWEB)

    Ponte, B.; Fernández, I.; Rosillo, R.; Parreño, J.; García, N.

    2016-07-01

    Purpose: This paper aims to develop a theoretical framework for profit allocation, as a mechanism for aligning incentives, in collaborative supply chains. Design/methodology/approach: The issue of profit distribution is approached from a game-theoretic perspective. We use the nucleolus concept. The framework is illustrated through a numerical example based on the Beer Game scenario. Findings: The nucleolus offers a powerful perspective to tackle this problem, as it takes into consideration the bargaining power of the different echelons. We show that this framework outperforms classical alternatives. Research limitations/implications: The allocation of the overall supply chain profit is analyzed from a static perspective. Considering the dynamic nature of the problem would be an interesting next step. Practical implications: We provide evidence of drawbacks derived from classical solutions to the profit allocation problem. Real-world collaborative supply chains need robust mechanisms like the one tackled in this work to align the incentives of the various actors. Originality/value: Adopting an efficient collaborative solution is a major challenge for supply chains, since it is a wide and complex process that requires an appropriate scheme. Within this framework, profit allocation is essential.
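
To illustrate cooperative profit allocation, here is the Shapley value (a simpler allocation concept than the nucleolus used in the paper) computed on an invented three-echelon characteristic function:

```python
from itertools import permutations

# Hypothetical characteristic function of a toy 3-echelon chain:
# v[coalition] = profit the coalition can secure jointly.
v = {(): 0, (1,): 10, (2,): 10, (3,): 10,
     (1, 2): 40, (1, 3): 30, (2, 3): 30, (1, 2, 3): 80}

def shapley(players, v):
    """Shapley value: each player's marginal contribution averaged over
    all orders in which the grand coalition can form."""
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = ()
        for p in order:
            before = v[tuple(sorted(coalition))]
            coalition += (p,)
            phi[p] += v[tuple(sorted(coalition))] - before
    return {p: phi[p] / len(orders) for p in players}

phi = shapley((1, 2, 3), v)
print(phi)
```

Like the nucleolus, the Shapley value is efficient (the shares sum to the grand-coalition profit), but it weights marginal contributions rather than minimizing the maximum coalition complaint, so the two allocations generally differ.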

  9. A Game-theoretical Approach for Distributed Cooperative Control of Autonomous Underwater Vehicles

    KAUST Repository

    Lu, Yimeng

    2018-05-01

    This thesis explores a game-theoretical approach for underwater environmental monitoring applications. We first apply a game-theoretical algorithm to the multi-agent resource coverage problem in drifting environments. Furthermore, the existing utility design and learning process of the algorithm are modified to fit the specific constraints of underwater exploration/monitoring tasks. The revised approach takes into account the realities of underwater monitoring applications, such as the effect of sea currents, prior knowledge of the resource, and occasional communication between agents, and adapts to them to reach better performance. As the motivation of this thesis comes from real applications, we place strong emphasis on the implementation phase. A ROS-Gazebo simulation environment was created in preparation for actual tests; the algorithms are implemented with simulation of both the vehicle dynamics and the environment. After that, a multi-agent underwater autonomous robotic system was developed for hardware tests in real settings, with local controllers making their own decisions. These systems are used for testing the above-mentioned algorithms and for future development of other underwater projects. Finally, other robotics work carried out during this thesis is briefly mentioned, including contributions to the MBZIRC robotics competition and distributed control of UAVs in an adversarial environment.

  10. A study of brain networks associated with swallowing using graph-theoretical approaches.

    Directory of Open Access Journals (Sweden)

    Bo Luan

    Functional connectivity between brain regions during swallowing tasks is still not well understood. Understanding these complex interactions is of great interest from both a scientific and a clinical perspective. In this study, functional magnetic resonance imaging (fMRI) was utilized to study brain functional networks during voluntary saliva swallowing in twenty-two healthy adult subjects (all females, [Formula: see text] years of age). To construct these functional connections, we computed mean partial correlation matrices over ninety brain regions for each participant. Two regions were determined to be functionally connected if their correlation was above a certain threshold. These correlation matrices were then analyzed using graph-theoretical approaches. In particular, we considered several network measures for the whole brain and for swallowing-related brain regions. The results have shown that significant pairwise functional connections were mostly either local and intra-hemispheric or symmetrically inter-hemispheric. Furthermore, we showed that all of the human brain functional networks, although varying to some degree, had typical small-world properties compared to regular networks and random networks. These properties allow information transfer within the network at a relatively high efficiency. Swallowing-related brain regions also had higher values for some of the network measures in comparison to those calculated for the whole brain. The current results warrant further investigation of graph-theoretical approaches as a potential tool for understanding the neural basis of dysphagia.
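
The small-world measures mentioned above can be sketched with networkx on a synthetic 90-node network standing in for the thresholded correlation matrix (the graph parameters are illustrative, not taken from the study): the small-world signature is clustering well above that of an edge-matched random graph at a comparable characteristic path length.

```python
import networkx as nx

# Watts-Strogatz graph as a stand-in for a thresholded functional network
# (90 nodes, matching the ninety brain regions; k and p are illustrative).
G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)
R = nx.gnm_random_graph(n=90, m=G.number_of_edges(), seed=0)
# The random graph may be disconnected; measure paths on its largest component.
Rc = R.subgraph(max(nx.connected_components(R), key=len))

print("clustering:", nx.average_clustering(G), "vs random", nx.average_clustering(R))
print("path length:", nx.average_shortest_path_length(G),
      "vs random", nx.average_shortest_path_length(Rc))
```

In a real analysis the graph would come from thresholding each subject's partial correlation matrix rather than from a generative model.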

  11. Supply chain collaboration: A Game-theoretic approach to profit allocation

    International Nuclear Information System (INIS)

    Ponte, B.; Fernández, I.; Rosillo, R.; Parreño, J.; García, N.

    2016-01-01

    Purpose: This paper aims to develop a theoretical framework for profit allocation, as a mechanism for aligning incentives, in collaborative supply chains. Design/methodology/approach: The issue of profit distribution is approached from a game-theoretic perspective. We use the nucleolus concept. The framework is illustrated through a numerical example based on the Beer Game scenario. Findings: The nucleolus offers a powerful perspective to tackle this problem, as it takes into consideration the bargaining power of the different echelons. We show that this framework outperforms classical alternatives. Research limitations/implications: The allocation of the overall supply chain profit is analyzed from a static perspective. Considering the dynamic nature of the problem would be an interesting next step. Practical implications: We provide evidence of drawbacks derived from classical solutions to the profit allocation problem. Real-world collaborative supply chains need of robust mechanisms like the one tackled in this work to align incentives from the various actors. Originality/value: Adopting an efficient collaborative solution is a major challenge for supply chains, since it is a wide and complex process that requires an appropriate scheme. Within this framework, profit allocation is essential.

  12. Annual Gross Primary Production from Vegetation Indices: A Theoretically Sound Approach

    Directory of Open Access Journals (Sweden)

    María Amparo Gilabert

    2017-02-01

    A linear relationship between the annual gross primary production (GPP) and a PAR-weighted vegetation index is theoretically derived from the Monteith equation. A semi-empirical model is then proposed to estimate the annual GPP from commonly available vegetation index images and a representative PAR, which does not require actual meteorological data. A cross-validation procedure is used to calibrate and validate the model predictions against reference data. As the calibration/validation process depends on the reference GPP product, the higher the quality of the reference GPP, the better the performance of the semi-empirical model. The annual GPP has been estimated at 1-km scale from MODIS NDVI and EVI images for eight years. Two reference data sets have been used: an optimized GPP product for the study area obtained previously, and the MOD17A3 product. Different statistics show a good agreement between the estimates and the reference GPP data, with a correlation coefficient around 0.9 and a relative RMSE around 20%. The annual GPP is overestimated in semiarid areas and slightly underestimated in dense forest areas. Within the above limitations, the model provides an excellent compromise between simplicity and accuracy for the calculation of long time series of annual GPP.
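
The calibration/validation idea reduces to cross-validating a linear fit of reference GPP against a PAR-weighted vegetation index; a sketch with synthetic data in place of the MODIS imagery (all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic annual PAR-weighted vegetation index vs. "reference" GPP
# (stand-ins for the MODIS-derived data described above).
vi = rng.uniform(0.2, 0.8, size=40)
gpp_ref = 2500.0 * vi + 100.0 + rng.normal(scale=60.0, size=40)

# Leave-one-out cross-validation of the linear model GPP = a*VI + b
errors = []
for i in range(vi.size):
    mask = np.arange(vi.size) != i
    a, b = np.polyfit(vi[mask], gpp_ref[mask], 1)
    errors.append(a * vi[i] + b - gpp_ref[i])
rmse = np.sqrt(np.mean(np.square(errors)))
rel_rmse = rmse / gpp_ref.mean()
print(f"LOOCV RMSE = {rmse:.1f}, relative = {100 * rel_rmse:.1f}%")
```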

  13. Different approaches to estimation of reactor pressure vessel material embrittlement

    Directory of Open Access Journals (Sweden)

    V. M. Revka

    2013-03-01

    The surveillance test data for a nuclear power plant under operation in Ukraine have been used to estimate WWER-1000 reactor pressure vessel (RPV) material embrittlement. The beltline materials (base and weld metal) were characterized using Charpy impact and fracture toughness test methods. The fracture toughness test data were analyzed according to the ASTM E1921-05 standard. The pre-cracked Charpy specimens were tested to estimate the shift of the reference temperature T0 due to neutron irradiation. The maximum shift of the reference temperature T0 is 84 °C. A radiation embrittlement rate AF for the RPV material was estimated using the fracture toughness test data. In addition, the AF factor based on the Charpy curve shift (ΔTF) has been evaluated. A comparison of the AF values estimated according to the different approaches has shown that there is good agreement between the radiation shift of the Charpy impact and fracture toughness curves for weld metal with high nickel content (1.88 wt.%). Therefore, Charpy impact test data can be successfully applied to estimate the fracture toughness curve shift and hence the embrittlement rate. Furthermore, it was revealed that the radiation embrittlement rate for weld metal is higher than predicted by the design relationship. The enhanced embrittlement is most probably related to the simultaneously high nickel and manganese content in the weld metal.

  14. An integrated approach to sensor FDI and signal reconstruction in HTGRs – Part I: Theoretical framework

    International Nuclear Information System (INIS)

    Uren, Kenneth R.; Schoor, George van; Rand, Carel P. du; Botha, Anrika

    2016-01-01

    Highlights: • An integrated sensor fault detection and isolation method for nuclear power plants. • Utilise techniques such as non-temporal parity space and principal component analysis. • Utilise statistical methods and fuzzy systems for sensor fault isolation. • Allow the detection of multiple sensor faults. • Proposed methodology suitable for online implementation. - Abstract: Sensor fault detection and isolation (FDI) is an important element in modern nuclear power plant (NPP) diagnostic systems. In this respect, sensor FDI of generation II and III water-cooled nuclear energy systems has become an active research topic to continually improve levels of reliability, safety, and operation. However, evolutionary advances in reactor and component technology together with different energy conversion methodologies support the investigation of alternative approaches to sensor FDI. Within this context, the basic aim of this two part series is to propose, implement and evaluate an integrated approach for sensor FDI and signal reconstruction in generation IV nuclear high temperature gas-cooled reactors (HTGRs). In part I of this two part series, the methodology and theoretical background of the integrated sensor FDI and signal reconstruction approach are given. This approach combines techniques such as non-temporal parity space analysis (PSA), principal component analysis (PCA), sensor fusion and fuzzy decision systems to form a more powerful sensor FDI methodology that exploits the strengths of the individual techniques. An illustrative example of the PCA algorithm is given making use of actual data retrieved from a pilot plant called the pebble bed micro model (PBMM). This is a prototype gas turbine power plant based on the first design configuration of the pebble bed modular reactor (PBMR). In part II, the described integrated sensor fault detection approach will be evaluated by means of two case studies. In the first case study the approach will be evaluated
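
One ingredient of the integrated methodology, PCA-based sensor fault detection, can be sketched as follows: fit a principal subspace on fault-free data and flag samples whose squared prediction error (the Q statistic) is large. The data and the injected fault here are synthetic stand-ins, not plant data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Training data: 6 correlated "sensor" channels driven by 2 latent factors
latent = rng.normal(size=(500, 2))
W = rng.normal(size=(2, 6))
X = latent @ W + 0.05 * rng.normal(size=(500, 6))

# Fit PCA: retain the 2 dominant components of the standardized data
mean, std = X.mean(0), X.std(0)
Z = (X - mean) / std
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T  # loadings of the retained components

def spe(sample):
    """Squared prediction error (Q statistic) of one sample: the part of the
    standardized measurement not explained by the principal subspace."""
    z = (sample - mean) / std
    residual = z - P @ (P.T @ z)
    return float(residual @ residual)

normal = latent[0] @ W   # a fault-free sample
faulty = normal.copy()
faulty[3] += 5.0         # additive bias fault on sensor 3
print(spe(normal), spe(faulty))
```

A detection threshold on the Q statistic would normally be set from its distribution over the fault-free training data; isolation (which sensor failed) then uses the residual contributions per channel.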

  15. Lateral Load Capacity of Piles: A Comparative Study Between Indian Standards and Theoretical Approach

    Science.gov (United States)

    Jayasree, P. K.; Arun, K. V.; Oormila, R.; Sreelakshmi, H.

    2018-05-01

    As per Indian Standards, laterally loaded piles are usually analysed using the method adopted by IS 2911-2010 (Part 1/Section 2). However, practising engineers are of the opinion that the IS method is very conservative in design. This work aims at determining the extent to which the conventional IS design approach is conservative. This is done through a comparative study between the IS approach and a theoretical model based on Vesic's equation. Bore log details for six different bridges were collected from the Kerala Public Works Department. Cast-in-situ fixed-head piles embedded in three soil conditions, both end-bearing and friction piles, were considered and analyzed separately. The piles were also modelled in STAAD.Pro software based on the IS approach, and the results were validated using the Matlock and Reese (In Proceedings of fifth international conference on soil mechanics and foundation engineering, 1961) equation. The results were presented as the percentage variation in the values of bending moment and deflection obtained by the different methods. The results obtained from the mathematical model based on Vesic's equation were compared with those obtained as per the IS approach, and the IS method was found to be conservative and uneconomical.

  16. Theoretical approach in optimization of stability of the multicomponent solid waste form

    International Nuclear Information System (INIS)

    Raicevic, S.; Plecas, I.; Mandic, M.

    1998-01-01

    Chemical precipitation of radionuclides and their immobilization in a solid matrix represents an important approach in radioactive wastewater treatment. Unfortunately, because of the complexity of the system, optimization of this process in terms of its efficacy and safety poses a serious practical problem, even in the treatment of monocomponent nuclear waste. The situation is further complicated in the case of polycomponent nuclear waste because of synergic interactions between the radioactive components and the solid matrix. Recently, we proposed a general theoretical approach for optimizing the process of precipitation and immobilization of metal impurities by a solid matrix. One of the main advantages of this approach is the possibility of treating multicomponent liquid waste immobilized by the solid matrix. This approach was used here to investigate the stability of the hydroxyapatite (HAP) - Pb/Cd system, which was selected as a model multicomponent waste system. In this analysis, we used a structurally dependent term of the cohesive energy as the stability criterion. (author)

  17. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach

    International Nuclear Information System (INIS)

    Olofsson, Isabelle; Fredriksson, Anders

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. Within the frame of these Site Investigations, Site Descriptive Models are produced. These products result from the interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations, a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified while developing the methodology. Their impact on the modelling (in terms of time and quality assurance of results) was estimated to be so significant that improving the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models, but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for generating fractures in the model and for assigning mechanical fracture properties. Due to some numerical constraints, the test conditions were set up to simulate 2D plane-strain tests. Numerical simulations were conducted on the same data set as previously used for the UDEC modelling in order to estimate and validate the results from the new methodology. A true 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results, it was decided to update the theoretical model and introduce the new methodology, based on DFN models and 3DEC simulations, for establishing the Rock Mechanics Site Descriptive Model.
By separating the spatial variability into two parts, one

  18. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are also used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  19. A Note on the Effect of Data Clustering on the Multiple-Imputation Variance Estimator: A Theoretical Addendum to the Lewis et al. article in JOS 2014

    Directory of Open Access Journals (Sweden)

    He Yulei

    2016-03-01

    Full Text Available Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior still remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, and clustered sampling design. Their study results suggested that the increase in the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
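
The within- and between-imputation variance decomposition underlying this analysis follows Rubin's rules, which can be sketched directly; the estimates and variances below are illustrative numbers, not from the NAMCS analysis.

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine point estimates and variances from M imputed datasets
    using Rubin's rules: total variance T = W + (1 + 1/M) * B."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()                 # pooled point estimate
    w = variances.mean()                    # within-imputation variance
    b = estimates.var(ddof=1)               # between-imputation variance
    t = w + (1 + 1 / m) * b                 # total variance
    fmi = (1 + 1 / m) * b / t               # approximate fraction of missing information
    return qbar, t, fmi

# Five imputed-dataset estimates of a mean, each with sampling variance 0.25
qbar, t, fmi = rubin_combine([10.1, 9.8, 10.3, 10.0, 9.9], [0.25] * 5)
print(qbar, t, fmi)
```

The fraction-of-missing-information formula here is the simple large-sample approximation; refined versions include a degrees-of-freedom correction.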

  20. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
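
The bootstrap step described above can be sketched generically: resample the observed record with replacement, refit the model, and read a confidence interval off the distribution of refitted quantiles. The sketch below uses a simple Gumbel fit by the method of moments on synthetic annual maxima, not the SHYREG rainfall generator or its hydrological model.

```python
import numpy as np

RNG = np.random.default_rng(42)

def gumbel_quantile(maxima, return_period):
    """Fit a Gumbel distribution by the method of moments and return
    the flood quantile for the given return period."""
    beta = maxima.std(ddof=1) * np.sqrt(6) / np.pi
    mu = maxima.mean() - 0.5772 * beta
    p = 1 - 1 / return_period
    return mu - beta * np.log(-np.log(p))

# Synthetic 40-year record of annual maximum flows (m^3/s)
maxima = RNG.gumbel(loc=100, scale=30, size=40)

# Bootstrap: refit on resampled records to bracket the 100-year flood
boot = [gumbel_quantile(RNG.choice(maxima, size=maxima.size, replace=True), 100)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
q100 = gumbel_quantile(maxima, 100)
print(lo < q100 < hi)   # the point estimate lies inside its bootstrap interval
```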

  1. Estimation of bone mineral density by digital X-ray radiogrammetry: theoretical background and clinical testing

    DEFF Research Database (Denmark)

    Rosholm, A; Hyldstrup, L; Backsgaard, L

    2002-01-01

    A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in the radius, ulna and the three middle metacarpal bones are identified, and approximately 1800 geometrical measurements from these bones......-ray absorptiometry (r = 0.86, p ...... Relative to this age-related loss, the reported short...... sites and a precision that potentially allows for relatively short observation intervals. Publication date: 2001.

  2. A holistic approach to age estimation in refugee children.

    Science.gov (United States)

    Sypek, Scott A; Benson, Jill; Spanner, Kate A; Williams, Jan L

    2016-06-01

    Many refugee children arriving in Australia have an inaccurately documented date of birth (DOB). A medical assessment of a child's age is often requested when there is a concern that their documented DOB is incorrect. This study's aim was to assess the accuracy of a holistic age assessment tool (AAT) in estimating the age of refugee children newly settled in Australia. A holistic AAT that combines medical and non-medical approaches was used to estimate the ages of 60 refugee children with a known DOB. The tool used four components to assess age: an oral narrative, a developmental assessment, anthropometric measures and a pubertal assessment. Assessors were blinded to the true age of the child. Correlation coefficients for the actual and estimated age were calculated for the tool overall and for individual components. The correlation coefficient between the actual and estimated age from the AAT was very strong at 0.9802 (boys 0.9748, girls 0.9876). The oral narrative component of the tool performed best (R = 0.9603). Overall, 86.7% of age estimates were within 1 year of the true age. The range of differences was -1.43 to 3.92 years, with a standard deviation of 0.77 years (9.24 months). The AAT is a holistic, simple and safe instrument that can be used to estimate age in refugee children, with results comparable to the radiological methods currently used. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  3. METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE

    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина

    2013-08-01

    Full Text Available An analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people who carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to estimating an employee's professionalism, both from the standpoint of the external manifestations of the reliability and effectiveness of the work and from the standpoint of the personal characteristics of the employee that determine the results of his work. This approach includes assessing the level of qualification and motivation of the employee for each key job function, as well as the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to estimating an employee's professionalism makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker, so as to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11

  4. Theoretical approaches to maternal-infant interaction: which approach best discriminates between mothers with and without postpartum depression?

    Science.gov (United States)

    Logsdon, M Cynthia; Mittelberg, Meghan; Morrison, David; Robertson, Ashley; Luther, James F; Wisniewski, Stephen R; Confer, Andrea; Eng, Heather; Sit, Dorothy K Y; Wisner, Katherine L

    2014-12-01

    The purpose of this study was to determine which of four common approaches to coding maternal-infant interaction best discriminates between mothers with and without postpartum depression. After extensive training, four research assistants coded 83 three-minute videotapes of maternal-infant interaction at 12-month postpartum visits. Four theoretical approaches to coding (the Maternal Behavior Q-Sort, the Dyadic Mini Code, the Ainsworth Maternal Sensitivity Scale, and the Child-Caregiver Mutual Regulation Scale) were used. Twelve-month data were chosen to allow the maximum possible exposure of the infant to maternal depression during the first postpartum year. The videotapes were created in a laboratory with standard procedures. Inter-rater reliabilities for each coding method ranged from .7 to .9. The coders were blind to the depression status of the mother. Twenty-seven of the women had major depressive disorder during the 12-month postpartum period. Receiver operating characteristic analysis indicated that none of the four methods of analyzing maternal-infant interaction discriminated between mothers with and without major depressive disorder. Limitations of the study include the cross-sectional design and the low number of women with major depressive disorder. Further analysis should include data from videotapes at earlier postpartum time periods, and alternative coding approaches should be considered. Nurses should continue to examine culturally appropriate ways in which new mothers can be supported in how best to nurture their babies. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Theoretical, observational, and isotopic estimates of the lifetime of the solar nebula

    Science.gov (United States)

    Podosek, Frank A.; Cassen, Patrick

    1994-01-01

    There are a variety of isotopic data for meteorites which suggest that the protostellar nebula existed and was involved in making planetary materials for some 10(exp 7) yr or more. Many cosmochemists, however, advocate alternative interpretations of such data in order to comply with a perceived constraint, from theoretical considerations, that the nebula existed only for a much shorter time, usually stated as less than or equal to 10(exp 6) yr. In this paper, we review evidence relevant to solar nebula duration which is available through three different disciplines: theoretical modeling of star formation, isotopic data from meteorites, and astronomical observations of T Tauri stars. Theoretical models based on observations of present star-forming regions indicate that stars like the Sun form by dynamical gravitational collapse of dense cores of cold molecular clouds in the interstellar medium. The collapse to a star and disk occurs rapidly, on a time scale of the order of 10(exp 5) yr. Disks evolve by dissipating energy while redistributing angular momentum, but it is difficult to predict the rate of evolution, particularly for low-mass (compared to the star) disks which nonetheless still contain enough material to account for the observed planetary system. There is no compelling evidence, from available theories of disk structure and evolution, that the solar nebula must have evolved rapidly and could not have persisted for more than 1 Ma. In considering chronologically relevant isotopic data for meteorites, we focus on three methodologies: absolute ages by U-Pb/Pb-Pb, and relative ages by short-lived radionuclides (especially Al-26) and by evolution of Sr-87/Sr-86. Two kinds of meteoritic materials, refractory inclusions such as CAIs and differentiated meteorites (eucrites and angrites), appear to have experienced potentially dateable nebular events. In both cases, the most straightforward interpretations of the available data indicate

  6. Reflections on the conceptualization and operationalization of a set-theoretic approach to employee motivation and performance research

    Directory of Open Access Journals (Sweden)

    James Christopher Ryan

    2017-01-01

    Full Text Available The current commentary offers a reflection on the conceptualizations of Lee and Raschke's (2016) proposal for a set-theoretic approach to employee motivation and organizational performance. The commentary is informed by the current author's operationalization of set-theoretic research on employee motivation, which occurred contemporaneously with the work of Lee and Raschke. Observations on the state of current research on employee motivation, the development of motivation theory and future directions of set-theoretic approaches to employee motivation and performance are offered.

  7. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
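
The Fisher r -> Z transform mentioned above yields an approximate confidence interval for a correlation coefficient, since Z = arctanh(r) is roughly normal with standard error 1/sqrt(n - 3). A minimal sketch (the sample r and n are illustrative, not the article's dataset):

```python
import numpy as np

def r_confidence_interval(r, n, z_crit=1.96):
    """95% confidence interval for a correlation coefficient via the
    Fisher r -> Z transform; the SE of Z is 1/sqrt(n - 3)."""
    z = np.arctanh(r)                 # Fisher Z
    se = 1 / np.sqrt(n - 3)
    return np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)

lo, hi = r_confidence_interval(0.86, n=60)
print(round(lo, 3), round(hi, 3))
```

Note that the interval is asymmetric around r, which is exactly the behavior the transform is designed to capture for r near ±1.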

  8. A short course in quantum information theory. An approach from theoretical physics. 2. ed.

    International Nuclear Information System (INIS)

    Diosi, Lajos

    2011-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the author has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapters to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error corrections. From the reviews of the first edition: ''The best things about this book are its brevity and clarity. In around 100 pages it provides a tutorial introduction to quantum information theory, including problems and solutions... it's worth a look if you want to quickly get up to speed with the language and central concepts of quantum information theory, including the background classical information theory.'' (Craig Savage, Australian Physics, Vol. 44 (2), 2007). (orig.)

  9. NON-TERRITORIAL AUTONOMY IN RUSSIA: PRACTICAL IMPLICATIONS OF THEORETICAL APPROACHES

    Directory of Open Access Journals (Sweden)

    Tatiana RUDNEVA

    2012-06-01

    Full Text Available Despite the theoretical possibility of using non-territorial autonomy as a mechanism through which ethnic groups can fulfil their right to self-determination along with other minority rights, not many states have been willing to put theory into practice. The article offers an explanation of why wider applicability of NTA is problematic by arguing that the theory itself is not yet polished enough to be implemented. The study includes an examination of both theoretical approaches and empirical data from a case study of an attempt to establish NTAs in the Russian Federation. The findings suggest that inconsistencies and ambiguities in the theory do correlate with practical flaws of NTAs, which suggests that when the theory is tested empirically, the reality reveals all the flaws of the theory. The results indicate that the concept of NTA needs further refinement and development to make it more practice-oriented and applicable. As the problem of minority rights is still to be dealt with, we also propose a model of a global union of NTAs, where each ethnic group is represented by a non-governmental organisation, which seems to be more applicable than the others, alongside a number of other mechanisms that are even more essential and universal and focus on defending basic human rights

  10. Optimization of rootkit revealing system resources – A game theoretic approach

    Directory of Open Access Journals (Sweden)

    K. Muthumanickam

    2015-10-01

    Full Text Available A malicious rootkit is a collection of programs designed with the intent of infecting and monitoring the victim computer without the user's permission. After the victim has been compromised, the remote attacker can easily cause further damage. In order to infect, compromise and monitor, rootkits adopt the Native Application Programming Interface (API) hooking technique. To reveal hidden rootkits, current rootkit detection techniques check different data structures which hold references to Native APIs. Verifying these data structures requires a large amount of system resources, because the number of APIs in these data structures is quite large. Game theory is a useful mathematical tool for simulating network attacks. In this paper, a mathematical model is framed to optimize resource consumption using game theory. To the best of our knowledge, this is the first work proposed for optimizing resource consumption while revealing rootkit presence using game theory. A non-cooperative game model is used to discuss the problem. Analysis and simulation results show that our game theoretic model can effectively reduce resource consumption by selectively monitoring the number of APIs on the Windows platform.

  11. The Formation of Instruments of Management of Industrial Enterprises According to the Theoretical and Functional Approaches

    Directory of Open Access Journals (Sweden)

    Raiko Diana V.

    2018-03-01

    Full Text Available The article aims to substantiate, on the basis of an analysis of theories of the firm, the basic theoretical provisions for forming instruments of management of industrial enterprises. The article determines that in these theories the subject of research is the enterprise, the object is the process of managing potential according to the forms of business organization and the technology of partnership relations, and the goal is high financial results, stabilization of activity, and social responsibility. The publication analyses enterprise theories, defining the enterprise's essence as a socio-economic system, in the following directions: technical preparation of production, economic theory and law, systems theory, and marketing-management. As a result of the research, a general set of socio-economic functions of the enterprise has been identified, grouped as follows: information-legal, production, marketing-management, and social responsibility. When building management instruments, it is suggested to take into consideration the direct and inverse relationships of the enterprise at all levels of management: micro, meso and macro. On this basis, the authors have developed provisions on forming instruments of management of industrial enterprises according to two approaches: theoretical and functional.

  12. A Game Theoretic Approach for Balancing Energy Consumption in Clustered Wireless Sensor Networks.

    Science.gov (United States)

    Yang, Liu; Lu, Yinzhi; Xiong, Lian; Tao, Yang; Zhong, Yuanchang

    2017-11-17

    Clustering is an effective topology control method in wireless sensor networks (WSNs), since it can enhance the network lifetime and scalability. To prolong the network lifetime in clustered WSNs, an efficient cluster head (CH) optimization policy is essential to distribute the energy among sensor nodes. Recently, game theory has been introduced to model clustering. Each sensor node is considered as a rational and selfish player which will play a clustering game with an equilibrium strategy. Then it decides whether to act as the CH according to this strategy for a tradeoff between providing required services and energy conservation. However, how to get the equilibrium strategy while maximizing the payoff of sensor nodes has rarely been addressed to date. In this paper, we present a game theoretic approach for balancing energy consumption in clustered WSNs. With our novel payoff function, realistic sensor behaviors can be captured well. The energy heterogeneity of nodes is considered by incorporating a penalty mechanism in the payoff function, so the nodes with more energy will compete for CHs more actively. We have obtained the Nash equilibrium (NE) strategy of the clustering game through convex optimization. Specifically, each sensor node can achieve its own maximal payoff when it makes the decision according to this strategy. Through plenty of simulations, our proposed game theoretic clustering is proved to have a good energy balancing performance and consequently the network lifetime is greatly enhanced.
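
One common formalization of such a clustering game, not necessarily the authors' exact payoff function, treats each round of cluster-head election as a volunteer's dilemma: serving as CH costs c but yields benefit v to every node, and the symmetric mixed-strategy Nash equilibrium makes each node indifferent between volunteering and declining. A minimal sketch under that assumption (v, c and n are illustrative):

```python
def ch_equilibrium_probability(v, c, n):
    """Symmetric mixed-strategy NE of a volunteer's-dilemma style
    cluster-head game with n nodes: each node volunteers with a
    probability that equalizes the payoffs of the two actions."""
    q = (c / v) ** (1 / (n - 1))   # equilibrium probability of NOT volunteering
    return 1 - q

p = ch_equilibrium_probability(v=1.0, c=0.2, n=10)

# Verify the indifference condition numerically:
# volunteering pays v - c; declining pays v * P(someone else volunteers)
q = 1 - p
volunteer_payoff = 1.0 - 0.2
decline_payoff = 1.0 * (1 - q ** 9)
print(abs(volunteer_payoff - decline_payoff) < 1e-9)
```

Energy heterogeneity, as in the paper's penalty mechanism, would enter by making c node-dependent, so that nodes with more residual energy volunteer more often.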

  13. Numerical Methods Application for Reinforced Concrete Elements-Theoretical Approach for Direct Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Sergiu Ciprian Catinas

    2015-07-01

    Full Text Available A detailed theoretical and practical investigation of reinforced concrete elements is warranted by the recent techniques and methods being implemented in the construction market. Moreover, a theoretical study is in demand for a better and faster approach nowadays, owing to the rapid development of computational techniques. This paper presents a study on implementing the direct stiffness matrix method in a static calculation, so as to address phenomena related to different stages of loading, rapid changes of cross-section area, and physical properties. Such a method is needed because nowadays the finite element method (FEM) is the only alternative for such a calculation, and FEM is considered expensive in terms of time and computational resources. The main goal of such a method is to create the moment-curvature diagram for the cross section being analyzed. The paper presents some of the most important techniques, as well as new ideas, for creating the moment-curvature graph in the cross sections considered.
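
The core of the direct stiffness method, assembling element stiffness matrices into a global matrix, applying supports, and solving for displacements, can be sketched for the simplest 1D case. The two axial elements and stiffness values below are illustrative, and this omits the moment-curvature aspects discussed in the paper:

```python
import numpy as np

def assemble_global_stiffness(element_k, n_nodes, connectivity):
    """Assemble the global stiffness matrix from 1D two-node axial elements."""
    K = np.zeros((n_nodes, n_nodes))
    for k, (i, j) in zip(element_k, connectivity):
        ke = k * np.array([[1, -1], [-1, 1]])   # element stiffness matrix
        for a, p in enumerate((i, j)):
            for b, q in enumerate((i, j)):
                K[p, q] += ke[a, b]             # scatter into global DOFs
    return K

# Two bar elements in series: node 0 fixed, axial load of 10 at node 2
K = assemble_global_stiffness([2000.0, 1000.0], 3, [(0, 1), (1, 2)])
F = np.array([0.0, 10.0])            # loads at the free DOFs (nodes 1, 2)
Kff = K[1:, 1:]                      # apply the boundary condition u0 = 0
u = np.linalg.solve(Kff, F)
print(u)   # displacements at nodes 1 and 2
```

A nonlinear analysis of the kind the paper targets would repeat this solve with element stiffnesses updated from the current moment-curvature state at each load step.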

  14. A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    Gustavo Miranda da Silva

    2015-09-01

    Full Text Available This work addresses an important issue regarding the performance of simultaneous test procedures: the construction of multiple tests that at the same time are optimal from a statistical perspective and that also yield logically-consistent results that are easy to communicate to practitioners of statistical methods. For instance, if hypothesis A implies hypothesis B, is it possible to create optimal testing procedures that reject A whenever they reject B? Unfortunately, several standard testing procedures fail in having such logical consistency. Although this has been deeply investigated under a frequentist perspective, the literature lacks analyses under a Bayesian paradigm. In this work, we contribute to the discussion by investigating three rational relationships under a Bayesian decision-theoretic standpoint: coherence, invertibility and union consonance. We characterize and illustrate through simple examples optimal Bayes tests that fulfill each of these requisites separately. We also explore how far one can go by putting these requirements together. We show that although fairly intuitive tests satisfy both coherence and invertibility, no Bayesian testing scheme meets the desiderata as a whole, strengthening the understanding that logical consistency cannot be combined with statistical optimality in general. Finally, we associate Bayesian hypothesis testing with Bayes point estimation procedures. We prove the performance of logically-consistent hypothesis testing by means of a Bayes point estimator to be optimal only under very restrictive conditions.
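
The coherence property discussed above (if A implies B, rejecting B should force rejecting A) is easy to illustrate: tests that reject a hypothesis whenever its posterior probability falls below a fixed threshold are coherent, because A contained in B implies P(A) <= P(B). A minimal sketch, with illustrative posteriors and threshold not taken from the paper:

```python
def coherent_bayes_test(posterior, threshold=0.05):
    """Reject a hypothesis iff its posterior probability is below the
    threshold. If A implies B, then P(A) <= P(B), so rejecting B
    forces rejecting A: this test family is coherent."""
    return posterior < threshold

# Hypothetical nested hypotheses: A = "theta in (0, 0.1)", B = "theta in (0, 0.5)"
p_b = 0.04   # posterior probability of the wider hypothesis B
p_a = 0.03   # posterior of A cannot exceed p_b, since A is contained in B
print(coherent_bayes_test(p_b), coherent_bayes_test(p_a))   # B rejected, so A is too
```

The article's point is that satisfying coherence (and invertibility) in this simple way is possible, but combining all the logical requisites with statistical optimality in general is not.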

  15. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    Science.gov (United States)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as the subject of experiment, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulation results for the optical density ratio at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
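
The OD and ODR quantities at the heart of such a calibration curve can be sketched directly from the definition of optical density; the two-wavelength ratio below follows the conventional pulse-oximetry formulation, and the intensity values are hypothetical, not outputs of the Monte Carlo model:

```python
import math

def delta_od(i_diastole, i_systole):
    """Pulsatile optical density at one wavelength: the OD difference
    between diastole (more detected light) and systole (less light)."""
    return math.log10(i_diastole / i_systole)

def optical_density_ratio(red_dia, red_sys, ir_dia, ir_sys):
    """Ratio of pulsatile optical densities (red over infrared), the
    quantity conventionally mapped to SaO2 by a calibration curve."""
    return delta_od(red_dia, red_sys) / delta_od(ir_dia, ir_sys)

# Hypothetical detected intensities (arbitrary units) at two wavelengths
odr = optical_density_ratio(red_dia=1.00, red_sys=0.95, ir_dia=1.00, ir_sys=0.90)
print(round(odr, 3))
```

A calibration curve is then a fitted mapping from ODR values like this one to SaO2, which is what the paper's simulations tabulate at 20% SaO2 intervals.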

  16. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
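
An MEE-style batch update for a linear filter can be sketched as gradient ascent on the quadratic information potential of the errors, a standard Gaussian-kernel formulation of minimum error entropy. The learning rate, kernel width and synthetic data below are illustrative, and this ignores the paper's FPGA mapping entirely:

```python
import numpy as np

def info_potential(e, sigma=1.0):
    """Quadratic information potential of the errors: large when the
    error samples are concentrated (i.e., low Renyi entropy)."""
    diff = e[:, None] - e[None, :]
    return np.exp(-diff**2 / (2 * sigma**2)).mean()

def mee_step(w, X, d, sigma=1.0, lr=0.05):
    """One batch gradient-ascent step on the information potential for
    a linear filter y = X @ w (maximizing it minimizes error entropy)."""
    e = d - X @ w
    diff = e[:, None] - e[None, :]                      # pairwise error differences
    g = np.exp(-diff**2 / (2 * sigma**2)) * diff        # kernel-weighted differences
    grad = (g[:, :, None] * (X[:, None, :] - X[None, :, :])).sum(axis=(0, 1))
    grad /= sigma**2 * len(e)**2
    return w + lr * grad

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
d = X @ np.array([1.0, -2.0])        # noiseless linear teacher
w = np.zeros(2)
v0 = info_potential(d - X @ w)
for _ in range(300):
    w = mee_step(w, X, d)
print(info_potential(d - X @ w) > v0)   # errors become more concentrated
```

Note that MEE is insensitive to a constant error offset, which is why practical decoders pair it with a bias-compensation step.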

  17. Model-free information-theoretic approach to infer leadership in pairs of zebrafish.

    Science.gov (United States)

    Butail, Sachit; Mwaffo, Violet; Porfiri, Maurizio

    2016-04-01

    Collective behavior affords several advantages to fish in avoiding predators, foraging, mating, and swimming. Although fish schools have been traditionally considered egalitarian superorganisms, a number of empirical observations suggest the emergence of leadership in gregarious groups. Detecting and classifying leader-follower relationships is central to elucidate the behavioral and physiological causes of leadership and understand its consequences. Here, we demonstrate an information-theoretic approach to infer leadership from positional data of fish swimming. In this framework, we measure social interactions between fish pairs through the mathematical construct of transfer entropy, which quantifies the predictive power of a time series to anticipate another, possibly coupled, time series. We focus on the zebrafish model organism, which is rapidly emerging as a species of choice in preclinical research for its genetic similarity to humans and reduced neurobiological complexity with respect to mammals. To overcome experimental confounds and generate test data sets on which we can thoroughly assess our approach, we adapt and calibrate a data-driven stochastic model of zebrafish motion for the simulation of a coupled dynamical system of zebrafish pairs. In this synthetic data set, the extent and direction of the coupling between the fish are systematically varied across a wide parameter range to demonstrate the accuracy and reliability of transfer entropy in inferring leadership. Our approach is expected to aid in the analysis of collective behavior, providing a data-driven perspective to understand social interactions.
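
    Transfer entropy for a pair of trajectories can be estimated directly from symbolized positional data. A minimal sketch with one-step histories and median binarization; the series here are synthetic, with y copying x at one step of lag, so the leader-follower asymmetry is known in advance (the paper's estimator, operating on calibrated zebrafish-motion models, is more elaborate):

```python
import numpy as np
from collections import Counter

def transfer_entropy(src, dst):
    """TE(src -> dst) in bits with one-step histories, after binarizing
    each series by its median. Positive TE means src helps predict dst's
    next value beyond what dst's own past predicts."""
    s = (src > np.median(src)).astype(int)
    d = (dst > np.median(dst)).astype(int)
    yp, y, x = d[1:], d[:-1], s[:-1]          # future, own past, source past
    n = len(yp)
    p3 = Counter(zip(yp, y, x))
    p2_yx = Counter(zip(y, x))
    p2_ypy = Counter(zip(yp, y))
    p1_y = Counter(y)
    te = 0.0
    for (a, b, c), m in p3.items():
        # p(a,b,c) * log2[ p(a | b,c) / p(a | b) ]
        te += (m / n) * np.log2((m / p2_yx[(b, c)]) / (p2_ypy[(a, b)] / p1_y[b]))
    return te

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
y = np.empty_like(x)
y[0] = 0.0
y[1:] = x[:-1]                                # y follows x with one-step lag

te_xy = transfer_entropy(x, y)                # leader -> follower: large
te_yx = transfer_entropy(y, x)                # follower -> leader: near zero
```

    The asymmetry te_xy >> te_yx is what identifies x as the leader.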

  18. A Narrative Approach to Both Teaching and Learning About Democracy with Young Children: A Theoretical Exploration

    Directory of Open Access Journals (Sweden)

    maila dinia husni rahim

    2016-03-01

    Full Text Available As adults, we often believe that children are only interested in games and children's 'stuff'. However, research has shown that children do indeed take a great interest in the world around them, including politics, elections, and democracy. If we are to teach children about democracy, what are the best methods for doing so? Narrative is considered an effective medium for conveying messages to children and for discussing difficult subjects. This paper is a theoretical exploration of the narrative approach to teaching and learning about democracy with young children. The researchers have used a literature review to examine why narratives should be used, which narratives should be used, and how to use them.

  19. Analysing Buyers' and Sellers' Strategic Interactions in Marketplaces: An Evolutionary Game Theoretic Approach

    Science.gov (United States)

    Vytelingum, Perukrishnen; Cliff, Dave; Jennings, Nicholas R.

    We develop a new model to analyse the strategic behaviour of buyers and sellers in market mechanisms. In particular, we wish to understand how the different strategies they adopt affect their economic efficiency in the market and to understand the impact of these choices on the overall efficiency of the marketplace. To this end, we adopt a two-population evolutionary game theoretic approach, where we consider how the behaviours of both buyers and sellers evolve in marketplaces. In so doing, we address the shortcomings of the previous state-of-the-art analytical model that assumes that buyers and sellers have to adopt the same mixed strategy in the market. Finally, we apply our model in one of the most common market mechanisms, the Continuous Double Auction, and demonstrate how it allows us to provide new insights into the strategic interactions of such trading agents.
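
    The two-population dynamics can be illustrated with discrete-time replicator equations, in which buyers and sellers evolve separate mixed strategies — the point the model makes against the single-mixed-strategy assumption. The 2x2 payoff bimatrix below is hypothetical; the paper's payoffs come from strategy performance in the Continuous Double Auction, not from these numbers:

```python
import numpy as np

# Illustrative payoffs: A[i, j] = buyer payoff when a buyer plays
# strategy i against a seller playing j; B[j, i] = seller payoff.
A = np.array([[3.0, 1.0],
              [2.0, 2.5]])
B = np.array([[2.0, 3.0],
              [1.0, 2.5]])

def replicator_step(p, q, A, B, dt=0.1):
    """One Euler step of two-population replicator dynamics: a strategy's
    share grows in proportion to its payoff advantage over the population
    average, each population evolving against the other's current mix."""
    fp = A @ q                    # buyer strategy payoffs vs seller mix q
    fq = B @ p                    # seller strategy payoffs vs buyer mix p
    p = p + dt * p * (fp - p @ fp)
    q = q + dt * q * (fq - q @ fq)
    return p / p.sum(), q / q.sum()

p = np.array([0.5, 0.5])          # buyer population mix
q = np.array([0.5, 0.5])          # seller population mix
for _ in range(500):
    p, q = replicator_step(p, q, A, B)
```

    With these payoffs, the seller population converges on its dominant first strategy, after which the buyer population follows to its own best reply.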

  20. A game-theoretical approach for reciprocal security-related prevention investment decisions

    International Nuclear Information System (INIS)

    Reniers, Genserik; Soudan, Karel

    2010-01-01

    Every company situated within a chemical cluster faces important security risks originating from neighbouring companies. Investing in reciprocal security preventive measures is therefore necessary to avoid major accidents. These investments do not, however, provide a direct return on investment for the investing company, and thus plants are hesitant to invest. Moreover, there is a likelihood that even if a company has fully invested in reciprocal security prevention, its neighbour has not, and as a result the company can experience a major accident caused by an initial (minor or major) accident that occurred in an adjacent chemical enterprise. In this article we employ a game-theoretic approach to interpret and model the behaviour of two neighbouring chemical plants while they negotiate and decide on reciprocal security prevention investments.
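
    The strategic structure described — prevention investment that benefits the neighbour while costing the investor — can be illustrated as a 2x2 bimatrix game. The payoffs below are hypothetical placeholders, not the paper's model; they are chosen only to reproduce the free-rider tension the abstract describes:

```python
import itertools

# Hypothetical payoffs (plant A, plant B) for each (a, b) action pair,
# where 0 = invest in reciprocal prevention, 1 = do not invest.
C, L = 2.0, 10.0   # illustrative prevention cost and expected accident loss
payoff = {
    (0, 0): (-C, -C),            # both invest: only the cost is borne
    (0, 1): (-C - L, 0.0),       # A invests, B free-rides; A stays exposed
    (1, 0): (0.0, -C - L),
    (1, 1): (-L, -L),            # nobody invests: both expect the loss
}

def pure_nash(payoff):
    """Enumerate pure-strategy Nash equilibria of a 2x2 bimatrix game
    by checking that neither player gains from a unilateral deviation."""
    eq = []
    for a, b in itertools.product((0, 1), repeat=2):
        ua, ub = payoff[(a, b)]
        best_a = all(ua >= payoff[(a2, b)][0] for a2 in (0, 1))
        best_b = all(ub >= payoff[(a, b2)][1] for b2 in (0, 1))
        if best_a and best_b:
            eq.append((a, b))
    return eq

equilibria = pure_nash(payoff)
```

    With these numbers the unique equilibrium is mutual non-investment, even though mutual investment Pareto-dominates it — a prisoner's-dilemma structure consistent with the hesitancy the article analyses.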

  1. The game as strategy for approach to sexuality with adolescents: theoretical-methodological reflections.

    Science.gov (United States)

    Souza, Vânia de; Gazzinelli, Maria Flávia; Soares, Amanda Nathale; Fernandes, Marconi Moura; Oliveira, Rebeca Nunes Guedes de; Fonseca, Rosa Maria Godoy Serpa da

    2017-04-01

    To describe the Papo Reto [Straight Talk] game and reflect on its theoretical-methodological basis. This analytical study examined the process of elaboration of the Papo Reto online game, aimed at adolescents aged 15-18 years, with access to the game between 2014 and 2015. The interactions of 60 adolescents from Belo Horizonte and São Paulo exemplified the potential of the game to favor the approach to sexuality with adolescents through simulation of reality, invention and interaction. Based on those potentialities, four thinking categories were discussed: the game as a pedagogic device; the game as a simulation of realities; the game as a device for inventive learning; and the game as empowering interaction. By allowing adolescents to take risks in new ways, the game enables them to become creative and active in the production of meanings, in the creation of their discourses, and in their ways of thinking, feeling and acting in the field of sexuality.

  2. Dynamic Load on a Pipe Caused by Acetylene Detonations – Experiments and Theoretical Approaches

    Directory of Open Access Journals (Sweden)

    Axel Sperber

    1999-01-01

    Full Text Available The load exerted on the wall of a pipe by a detonation travelling through it is not yet well characterized. The main reasons are the limited amount of sufficiently accurate pressure-time-history data and the need to consider the dynamics of the system. Laser vibrometry measurements were performed to determine the dynamic response of the pipe wall to a detonation. Different modelling approaches were used to quantify, theoretically, the radial displacements of the pipe wall. There is good agreement between measured and predicted values of the vibration frequencies and the propagation velocities of transverse waves. Discrepancies, mainly due to wave-propagation effects, were found in the amplitudes of the radial velocities. They might be overcome by the use of a dynamic load factor or improved modelling methods.

  3. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Roh, Changhyun; Komarova, Ludmila N.; Petin, Vladislav G.

    2013-01-01

    Two or more factors can act on biological objects simultaneously, producing combined effects. This study focuses on a theoretical approach to the synergistic interaction arising from the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents is suggested for the quantitative prediction of irreversibly damaged cells after combined exposures. The model takes the synergistic interaction of agents into account and is based on the supposition that the additional effective damages responsible for the synergy are irreversible and originate from an interaction of individually ineffective sublesions. Experimental results on the irreversible component of radiation damage in diploid yeast cells simultaneously exposed to heat combined with ionizing radiation or UV light are presented. Good agreement between the experimental results and the model predictions was demonstrated. The importance of these results for interpreting the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  4. Cross-cultural undergraduate medical education in North America: theoretical concepts and educational approaches.

    Science.gov (United States)

    Reitmanova, Sylvia

    2011-04-01

    Cross-cultural undergraduate medical education in North America lacks conceptual clarity. Consequently, school curricula are unsystematic, nonuniform, and fragmented. This article provides a literature review about available conceptual models of cross-cultural medical education. The clarification of these models may inform the development of effective educational programs to enable students to provide better quality care to patients from diverse sociocultural backgrounds. The approaches to cross-cultural health education can be organized under the rubric of two specific conceptual models: cultural competence and critical culturalism. The variation in the conception of culture adopted in these two models results in differences in all curricular components: learning outcomes, content, educational strategies, teaching methods, student assessment, and program evaluation. Medical schools could benefit from more theoretical guidance on the learning outcomes, content, and educational strategies provided to them by governing and licensing bodies. More student assessments and program evaluations are needed in order to appraise the effectiveness of cross-cultural undergraduate medical education.

  5. Theoretical approaches to creation of robotic coal mines based on the synthesis of simulation technologies

    Science.gov (United States)

    Fryanov, V. N.; Pavlova, L. D.; Temlyantsev, M. V.

    2017-09-01

    Methodological approaches to the theoretical substantiation of the structure and parameters of robotic coal mines are outlined. The results of mathematical and numerical modeling revealed the features of geomechanical and gas-dynamic processes under the conditions of robotic mines. Technological solutions for the design and manufacture of technical means for robotic mines are adopted using the method of economic and mathematical modeling, in accordance with the current regulatory documents. For a comparative performance evaluation of the technological schemes of traditional and robotic mines, methods of cognitive modeling and matrix search for subsystem elements in the synthesis of a complex geotechnological system are applied. It is substantiated that the process of technical re-equipment of a traditional mine, with a phased transition to a robotic mine, will reduce unit costs by almost 1.5 times, with a significant social effect due to a reduction in the number of personnel engaged in hazardous work.

  6. A Theoretical Analysis of the Mission Statement Based on the Axiological Approach

    Directory of Open Access Journals (Sweden)

    Marius-Costel EŞI

    2016-12-01

    Full Text Available The aim of this work is a theoretical analysis of the formulation of the mission statement of business organizations in relation to the idea of the organizational axiological core. On the one hand, we consider CSR (Corporate Social Responsibility), which, in our view, must be brought into direct connection both with moral entrepreneurship (which should support the philosophical perspective of the business organization's mission statement) and with purely economic, profit-maximizing entrepreneurship (which should support the pragmatic perspective). On the other hand, an analysis of the moral concepts that should underpin business becomes fundamental, in our view, insofar as the specific social value of social entrepreneurship is evidenced. Therefore, our approach highlights a number of epistemic explanations in relation to the actual practical dimension.

  7. The self-schema model: a theoretical approach to the self-concept in eating disorders.

    Science.gov (United States)

    Stein, K F

    1996-04-01

    Over the last several decades, the self-concept has been implicated as an important determinant of eating disorders (ED). Although considerable progress has been made, questions remain unanswered about the properties of the self-concept that distinguish women with an ED from other populations, and about the mechanisms that link the self-concept to the disordered behaviors. Markus's self-schema model is presented as a theoretical approach to exploring the role of the self-concept in ED. To show how the schema model can be integrated with existing work on the self-concept in ED, a framework is proposed that addresses the number, content, and accessibility of self-schemas. More specifically, it is posited that a limited collection of positive self-schemas available in memory, in combination with a chronically and inflexibly accessible body-weight self-schema, leads to the disordered behaviors associated with anorexia nervosa and bulimia nervosa.

  8. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu; Roh, Changhyun, E-mail: jkkim@kaeri.re.kr [Korea Atomic Energy Research Institute, Jeongeup (Korea, Republic of); Komarova, Ludmila N.; Petin, Vladislav G., E-mail: vgpetin@yahoo.com [Medical Radiological Research Center, Obninsk (Russian Federation)

    2013-07-01

    Two or more factors can act on biological objects simultaneously, producing combined effects. This study focuses on a theoretical approach to the synergistic interaction arising from the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents is suggested for the quantitative prediction of irreversibly damaged cells after combined exposures. The model takes the synergistic interaction of agents into account and is based on the supposition that the additional effective damages responsible for the synergy are irreversible and originate from an interaction of individually ineffective sublesions. Experimental results on the irreversible component of radiation damage in diploid yeast cells simultaneously exposed to heat combined with ionizing radiation or UV light are presented. Good agreement between the experimental results and the model predictions was demonstrated. The importance of these results for interpreting the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  9. Region innovation and investment development: conceptual theoretical approach and business solutions

    Directory of Open Access Journals (Sweden)

    Zozulya D.M.

    2017-01-01

    Full Text Available The article describes essential problems of regional business innovation and investment development under current conditions, including overcoming crisis restrictions and forming an innovation-driven economy. The relevance of the research is defined by the need to create effective tools for the development and support of business innovation and investment, which can be applied, first, to increase the efficiency of the region's industrial activity, and then to improve production competitiveness on an innovative basis, overcome existing problems, and provide sustainable innovation development in the region. The results of the research are presented in the article, including a concept model of regional innovation and investment development drawn up by the authors on the basis of a systemic theoretical approach. The regional innovation development tools defined in the concept model are briefly reviewed; the most important of them are engineering marketing (marketing of scientific and technical innovations), strategic planning, benchmarking, place marketing and business process modeling.

  10. WOMEN, FOOTBALL AND EUROPEAN INTEGRATION. AIMS AND QUESTIONS, METHODOLOGICAL AND THEORETICAL APPROACHES

    Directory of Open Access Journals (Sweden)

    Gertrud Pfister

    2013-12-01

    Full Text Available The aim of this article is to introduce a new research topic and provide information about a European research project focusing on football as a means of European integration. Using the results of available studies by the author and other scholars, it is discussed whether and how women can participate in football cultures and contribute to a European identity. Based on theoretical approaches to national identity, gender and socialization, as well as on the analysis of various intersections between gender, football and fandom, it can be concluded that women are still outsiders in the world of football and that it is doubtful whether female players and fans will contribute decisively to Europeanization processes.

  11. A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process

    Science.gov (United States)

    Wang, Yi; Tamai, Tetsuo

    2009-01-01

    As the complexity of software systems continues to grow, engineers face two serious problems: the state-space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching-time model checking on three-valued semantics. Three-valued models and logics provide a successful abstraction that overcomes the state-space explosion problem. Game-style model checking, which generates counter-examples, can guide refinement or identify validated formulas, addressing the system debugging problem. Furthermore, the output of our game-style method gives engineers significant information for detecting where errors have occurred and what their causes are.

  12. Estimating a theoretical model of state banking competition using a dynamic panel: the Brazilian case

    Directory of Open Access Journals (Sweden)

    Fábio A. Miessi Sanches

    2009-03-01

    Full Text Available In this paper we set up a model of regional banking competition based on Bresnahan (1982), Lau (1982) and Nakane (2002). The structural model is estimated with a dynamic panel using data from eight Brazilian states. The results show that, on average, the level of competition in the Brazilian banking system is high, even though the null of perfect competition can be rejected at the usual significance levels. This result also prevails at the state level: Rio Grande do Sul, São Paulo, Rio de Janeiro, Pernambuco and Minas Gerais show a high degree of competition.

  13. Simplified approach for estimating large early release frequency

    International Nuclear Information System (INIS)

    Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.

    1998-04-01

    The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB

  14. Stress transfer from pile group in saturated and unsaturated soil using theoretical and experimental approaches

    Directory of Open Access Journals (Sweden)

    al-Omari Raid R.

    2017-01-01

    Full Text Available Piles are often used in groups, and the behavior of a pile group under applied loads generally differs from that of a single pile due to the interaction of neighboring piles. One of the main objectives of this paper is therefore to investigate the bearing capacity of pile groups and the sharing of the transferred load between pile shaft and tip, in comparison with single piles, and to determine the mechanism of load transfer from the pile group to the surrounding soil as the load increment on tip and shaft increases, for soil in saturated and unsaturated states (when there is a negative pore water pressure). Different sets of basic properties are used: (S = 90%, γd = 15 kN/m3), (S = 90%, γd = 17 kN/m3) and (S = 60%, γd = 15 kN/m3). Seven model piles were tested: a single pile (compression and pull-out tests) and 2×1, 3×1, 2×2, 3×2 and 3×3 groups. The stress was measured with a 5 cm diameter soil pressure transducer positioned at a depth of 5 cm below the pile tip for all pile groups. The stresses measured by a soil pressure transducer positioned at a depth of 0.25L (where L is the pile length) below the pile tip are compared with those calculated using theoretical and conventional approaches: the conventional 2V:1H method and a method based on the theory of elasticity. The results showed that the method of measuring soil stresses with a soil pressure transducer adopted in this study gives, in general, stress-transfer results in good agreement with those obtained from the theoretical and conventional approaches.

  15. An Information-Theoretic Approach for Indirect Train Traffic Monitoring Using Building Vibration

    Directory of Open Access Journals (Sweden)

    Susu Xu

    2017-05-01

    Full Text Available This paper introduces an indirect train traffic monitoring method to detect and infer real-time train events based on the vibration response of a nearby building. Monitoring and characterizing traffic events is important for cities seeking to improve the efficiency of transportation systems (e.g., train passings, heavy trucks, and general traffic). Most prior work falls into two categories: (1) methods that require intensive labor to manually record events, or (2) systems that require deployment of dedicated sensors. These approaches are difficult and costly to execute and maintain. In addition, most prior work uses dedicated sensors designed for a single purpose, resulting in the deployment of multiple sensor systems, which further increases costs. Meanwhile, with the increasing demands of structural health monitoring, many vibration sensors are being deployed in commercial buildings. Traffic events create ground vibration that propagates to nearby building structures, inducing noisy vibration responses. We present an information-theoretic method for train event monitoring using the vibration sensors commonly deployed for building health monitoring. The key idea is to represent the wave propagation induced in a building by train traffic as information conveyed in noisy measurement signals. Our technique first uses wavelet analysis to detect train events. Then, by analyzing the information exchange patterns of building vibration signals, we infer the category of each event (i.e., southbound or northbound train). Our algorithm is evaluated on an 11-story building near which trains pass frequently. The results show that the method robustly achieves a train event detection accuracy of up to a 93% true positive rate and an 80% true negative rate. For direction categorization, compared with a traditional signal processing method, our information-theoretic approach reduces the categorization error from 32.1% to 12.1%, a 2.5× improvement.
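
    The detection stage can be illustrated with a simplified stand-in: the paper uses wavelet analysis, but the idea of flagging windows whose vibration energy rises above the ambient floor can be sketched with short-time energy on a synthetic trace (sampling rate, noise level, and event timing below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                                  # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)              # one minute of "building vibration"
signal = 0.1 * rng.standard_normal(t.size)          # ambient noise floor
event = (t > 20) & (t < 30)               # a 10 s train passage
signal[event] += 0.5 * np.sin(2 * np.pi * 15 * t[event])

# Short-time energy in 1 s windows; an "event" is any window whose
# energy exceeds a multiple of the median (noise-floor) energy.
win = fs
energy = np.array([np.mean(signal[i:i + win] ** 2)
                   for i in range(0, signal.size - win + 1, win)])
threshold = 5.0 * np.median(energy)
detected_windows = np.flatnonzero(energy > threshold)
```

    A wavelet-based detector replaces the raw energy with band-localized coefficient energy, which is what makes the published method robust to broadband ambient noise.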

  16. Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Omitaomu, Olufemi A [ORNL; Ganguly, Auroop R [ORNL; Jiao, Yu [ORNL

    2009-01-01

    In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of the information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents in a desktop environment using aggregate data; the second, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, using fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents in one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on three leadership theories (legitimacy, coercion, and representative) and two social mobilization theories (social influence and repression). The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required, not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
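
    The two complexity measures used for the comparison are standard and easy to state in code. A sketch computing Shannon entropy and mutual information from a joint distribution table; the joint probabilities over the three behavioral modes below are illustrative, not the models' outputs:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint probability table,
    i.e. the information shared by the two marginal variables."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Illustrative joint distribution of behavioral modes (pro-insurgent,
# neutral, pro-government) across two social classes; hypothetical values.
joint = np.array([[0.20, 0.05, 0.05],
                  [0.05, 0.20, 0.05],
                  [0.05, 0.05, 0.30]])
mi = mutual_information(joint)
```

    For independent classes the mutual information vanishes; the diagonal-heavy table above shares roughly 0.4 bits between the two classes.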

  17. Theoretical approach to the institutionalization of forms of governance resource provision of innovative activity

    Directory of Open Access Journals (Sweden)

    M. S. Asmolova

    2016-01-01

    Full Text Available Research on the knowledge economy is driven by the growing role of knowledge and information. Management, its impact, and the institutionalization of the management of resource provision are meant to overcome the problems inherent in the present stage of development. An important research direction is the theoretical analysis of economic resources in the context of their emergence, development and improvement. This motivates the need to consider a theoretical approach to the institutionalization of forms of governance of resource provision for innovative activity, and to analyze and classify the existing approaches by different parameters on the basis of a large number of sources. The features of the concept of institutionalization are defined as a phenomenon in a temporal perspective, drawing on studies from different periods in the development of economic science. The analysis of numerous professional and scientific studies leads to the conclusion that knowledge and information should be regarded as a new type of economic production factor. The impact of globalization processes on the scientific and innovative sphere is analyzed separately. Particular attention is given to the problems of innovative development of the Russian economy, whose lack of resolution hampers the improvement of national economic competitiveness, inhibits the formation of regional and national innovation systems, and restrains the transition to an innovative model of development. Evidence cited includes the deepening of economic globalization, the role of new information technologies, and the formation of a single information space. Whereas science previously deepened knowledge on the basis of the social division of the sciences, in the coming century knowledge should deepen on the basis of its socialization.

  18. Photophysical characteristics of three novel benzanthrone derivatives: Experimental and theoretical estimation of dipole moments

    International Nuclear Information System (INIS)

    Siddlingeshwar, B.; Hanagodimath, S.M.; Kirilova, E.M.; Kirilov, Georgii K.

    2011-01-01

    The effect of solvents on the absorption and fluorescence spectra and the dipole moments of novel benzanthrone derivatives, namely 3-N-(N',N'-dimethylformamidino)benzanthrone (1), 3-N-(N',N'-diethylacetamidino)benzanthrone (2) and 3-morpholinobenzanthrone (3), has been studied in various solvents. The fluorescence lifetimes of the dyes (1-3) in chloroform were also recorded. The bathochromic shift observed in the absorption and fluorescence spectra of these molecules with increasing solvent polarity indicates that the transitions involved are π→π*. Using the theory of solvatochromism, the difference between the excited-state (μe) and ground-state (μg) dipole moments was estimated from the Lippert-Mataga, Bakhshiev, Kawski-Chamma-Viallet, and McRae equations, using the variation of the Stokes shift with the solvent's relative permittivity and refractive index. AM1 and PM6 semiempirical molecular calculations using MOPAC, and ab initio calculations at the B3LYP/6-31G* level of theory using the Gaussian 03 software, were carried out to estimate the ground-state dipole moments and some other physicochemical properties. Further, the change in dipole moment (Δμ) was also calculated from the variation of the Stokes shift with the molecular-microscopic empirical solvent polarity parameter (ETN). The excited-state dipole moments are larger than their ground-state counterparts, indicating a substantial redistribution of the π-electron densities in a more polar excited state for all the systems investigated.
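
    The solvatochromic slope analysis can be sketched numerically. Below is a round-trip check of the Lippert-Mataga relation in CGS units, with illustrative solvent parameters and an assumed Onsager cavity radius (the paper also applies the Bakhshiev, Kawski-Chamma-Viallet and McRae variants, which differ in the polarity function and prefactor):

```python
import numpy as np

H = 6.62607e-27     # Planck constant, erg s (CGS units)
C = 2.99792e10      # speed of light, cm/s
DEBYE = 1e-18       # esu cm

def orientation_polarizability(eps, n):
    """Lippert-Mataga solvent polarity function Delta f."""
    return (eps - 1) / (2 * eps + 1) - (n**2 - 1) / (2 * n**2 + 1)

def delta_mu_from_stokes(eps, n, stokes_cm1, a_cm):
    """Recover |mu_e - mu_g| (in Debye) from the slope of the Stokes
    shift (cm^-1) versus Delta f: slope = 2*(dmu)^2 / (h*c*a^3)."""
    df = orientation_polarizability(np.asarray(eps, float), np.asarray(n, float))
    slope, _ = np.polyfit(df, np.asarray(stokes_cm1, float), 1)
    return np.sqrt(slope * H * C * a_cm**3 / 2.0) / DEBYE

# Round-trip check with synthetic data: four solvents (illustrative eps
# and n values), an assumed 4 Angstrom Onsager cavity radius, and a
# known dipole-moment change of 5 D.
eps = [2.0, 4.8, 20.7, 37.5]
n_ref = [1.50, 1.45, 1.36, 1.34]
a = 4.0e-8                                    # cavity radius, cm
dmu_true = 5.0 * DEBYE
df = orientation_polarizability(np.array(eps), np.array(n_ref))
stokes = 2 * dmu_true**2 / (H * C * a**3) * df + 3000.0   # + constant term
dmu_est = delta_mu_from_stokes(eps, n_ref, stokes, a)
```

    With real spectra the scatter of the Stokes shifts around the fitted line, and the choice of cavity radius, dominate the uncertainty in Δμ.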

  19. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.
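
    In its simplest reading, the structural-similarity assumption implies a cube-law scaling between number density and the position of the first structure-factor peak. The sketch below illustrates only that scaling idea; the authors' actual procedure applies a similarity transformation to the full radial distribution function, not just the peak position:

```python
def density_from_peak(rho_ref, q_ref, q_new):
    """If two states are structurally similar up to a uniform rescaling
    of length, number density scales with the cube of the first
    structure-factor peak position: rho_new = rho_ref * (q_new/q_ref)**3.
    A simplified reading of the similarity assumption, not the paper's
    full radial-distribution-function transformation."""
    return rho_ref * (q_new / q_ref) ** 3

# Illustrative numbers (not from the paper): a 1% shift of the first
# peak to higher q implies a ~3% higher density.
rho = density_from_peak(rho_ref=0.0213, q_ref=2.00, q_new=2.02)
```
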

  20. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
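
    The core observability criterion can be illustrated with a numerical rank test on a toy DC state-estimation Jacobian. This is only the textbook criterion the paper builds on; the paper's contribution is an efficient row/column-transfer scheme on this matrix, not the rank test itself:

```python
import numpy as np

# DC state estimation on a toy 3-bus network with the slack angle removed:
# the states are the voltage angles at buses 2 and 3. Each row is the
# gradient of one line-flow measurement (theta_i - theta_j) with respect
# to the states, assuming unit line reactances (a made-up system).
H_full = np.array([
    [1.0, 0.0],    # flow on line 1-2 depends only on theta2
    [0.0, 1.0],    # flow on line 1-3 depends only on theta3
    [1.0, -1.0],   # flow on line 2-3 depends on theta2 - theta3
])

def observable(H, n_states):
    """A network is observable iff the measurement Jacobian has full
    column rank over the state variables."""
    return np.linalg.matrix_rank(H) == n_states

full = observable(H_full, 2)           # all three measurements: rank 2
reduced = observable(H_full[2:], 2)    # line 2-3 alone: rank 1, unobservable
```

    Losing the first two measurements leaves only the relative angle theta2 - theta3 observable — the two buses form one observable island, which is the situation the island-identification step of the method resolves.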

  1. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    • An integrated approach to estimating storage reliability is proposed. • A non-parametric measure to estimate the number of failures and the reliability at each testing time is presented. • An E-Bayesian method to estimate the failure probability is introduced. • Possible initial failures in storage are considered. • The non-parametric estimates of failure numbers can be used in the parametric models.

  2. Theoretical model estimation of guest diffusion in Metal-Organic Frameworks (MOFs)

    KAUST Repository

    Zheng, Bin

    2015-08-11

    Characterizing molecular diffusion in nanoporous matrices is critical to understanding the novel chemical and physical properties of metal-organic frameworks (MOFs). In this paper, we developed a theoretical model to compute the diffusion rate of guest molecules in a zeolitic imidazolate framework-8 (ZIF-8) quickly and accurately. The ideal-gas or equilibrium-solution diffusion model was modified to account for the periodic medium by introducing the probability of guests passing through the framework gate. The only input to our model is the energy barrier for guests passing through the MOF's gate. Molecular dynamics (MD) methods were employed to gather the guest density profile, which was then used to deduce the energy barrier values. This produced reliable results from a simulation time of 5 picoseconds, much shorter than that required by pure MD methods (on the millisecond scale). We also used density functional theory (DFT) methods to obtain the energy profile of guests passing through gates, as this does not require specification of a force field for the MOF degrees of freedom. In the DFT calculation we considered only one MOF gate at a time, which greatly reduced the computational cost. Based on the obtained energy barrier values, we computed the diffusion rates of alkanes and alcohols in ZIF-8 using our model, in good agreement with experimental results and with values calculated from the standard MD model. Our model obtains accurate diffusion rates for guests in MOFs at lower computational cost and shorter calculation time, and is thus especially attractive for high-throughput computational screening of the dynamic performance of guests in a framework.

  3. Adjusting Estimates of the Expected Value of Information for Implementation: Theoretical Framework and Practical Application.

    Science.gov (United States)

    Andronis, Lazaros; Barton, Pelham M

    2016-04-01

    Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. The objective of this work is to present a method of calculating the expected value of further research that accounts for the reality of improved implementation. The work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculation of the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In this case study, the population values for EVSI and IA-EVSI were £25 million and £8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the smaller the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
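    The adjustment described can be sketched numerically. All figures and the exponential-uptake assumption below are hypothetical, not taken from the case study; the point is only that crediting the research with the implementation gain, rather than assuming full uptake, shrinks the population value:

```python
import numpy as np

# Hypothetical inputs (not the paper's case-study values).
evsi_per_patient = 100.0       # per-patient EVSI, in pounds
patients_per_year = 10_000
horizon_years = 10
discount = 0.035               # annual discount rate

years = np.arange(horizon_years)
disc = 1.0 / (1.0 + discount) ** years

# Assumed implementation path: current uptake rho0 rises toward optimal
# uptake (1.0) at rate r per year once the new evidence reports.
rho0, r = 0.4, 0.5
uptake = 1.0 - (1.0 - rho0) * np.exp(-r * years)

# Traditional population EVSI: instant, full implementation every year.
pop_evsi = evsi_per_patient * patients_per_year * disc.sum()

# IA-EVSI: scale each year's value by the fraction of the achievable
# implementation gain actually realized, (uptake - rho0) / (1 - rho0).
frac = (uptake - rho0) / (1.0 - rho0)
pop_ia_evsi = evsi_per_patient * patients_per_year * (frac * disc).sum()

print(f"population EVSI:    {pop_evsi:,.0f}")
print(f"population IA-EVSI: {pop_ia_evsi:,.0f}")
```

Raising the uptake rate r moves IA-EVSI toward the unadjusted EVSI, matching the sensitivity reported in the abstract.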

  4. Theoretical estimation of adiabatic temperature rise from the heat flow data obtained from a reaction calorimeter

    International Nuclear Information System (INIS)

    Das, Parichay K.

    2012-01-01

    Highlights: ► This method for estimating ΔT_ad(t) against time in a semi-batch reactor is both pioneering and novel. ► It establishes a direct correspondence between the evolution of ΔT_ad(t) in the RC and C_A(t) in a semi-batch reactor. ► Through a unique reaction scheme, the independent effects of the heat of mixing and of reaction on ΔT_ad(t) are demonstrated quantitatively. ► This work will help to build a thermally safe corridor for a thermally hazardous reaction. ► The author believes this manuscript will open a new vista for further research in adiabatic calorimetry. - Abstract: A novel method for estimating the transient profile of the adiabatic temperature rise has been developed from heat flow data for exothermic chemical reactions conducted in a reaction calorimeter (RC). It is also demonstrated mathematically that there exists a direct qualitative equivalence between the temporal evolution of the adiabatic temperature rise and the concentration of the limiting reactant for an exothermic chemical reaction carried out in semi-batch mode. The proposed procedure shows that the adiabatic temperature rise will always be less than that of the same reaction executed in batch mode, thereby affording a thermally safe corridor. Moreover, a unique reaction scheme has been designed to establish quantitatively the independent heat effects of dissolution and reaction. It is hoped that the record of the transient adiabatic temperature rise prepared by the proposed method may provide ample scope for further research.

  5. Unsteady force estimation using a Lagrangian drift-volume approach

    Science.gov (United States)

    McPhaden, Cameron J.; Rival, David E.

    2018-04-01

    A novel Lagrangian force estimation technique for unsteady fluid flows has been developed, using the concept of a Darwinian drift volume to measure unsteady forces on accelerating bodies. The construct of added mass in viscous flows, calculated from a series of drift volumes, is used to calculate the reaction force on an accelerating circular flat plate, containing highly-separated, vortical flow. The net displacement of fluid contained within the drift volumes is, through Darwin's drift-volume added-mass proposition, equal to the added mass of the plate and provides the reaction force of the fluid on the body. The resultant unsteady force estimates from the proposed technique are shown to align with the measured drag force associated with a rapid acceleration. The critical aspects of understanding unsteady flows, relating to peak and time-resolved forces, often lie within the acceleration phase of the motions, which are well-captured by the drift-volume approach. Therefore, this Lagrangian added-mass estimation technique opens the door to fluid-dynamic analyses in areas that, until now, were inaccessible by conventional means.

  6. Estimation of stature from hand impression: a nonconventional approach.

    Science.gov (United States)

    Ahemad, Nasir; Purkait, Ruma

    2011-05-01

    Stature is used in constructing a biological profile that assists with the identification of an individual. So far, little attention has been paid to the fact that stature can be estimated from hand impressions left at a crime scene. The present study, based on practical observations, adopted a new methodology of measuring hand length from the depressed area between the hypothenar and thenar regions on the proximal surface of the palm. Stature and bilateral hand impressions were obtained from 503 men of central India. Seventeen dimensions of the hand were measured on the impression. The derived linear regression equations showed that hand length, followed by palm length, is the best estimator of stature. When the practical utility of the suggested method was tested on latent prints of 137 subjects, the difference between known stature and stature estimated from the latent prints was statistically insignificant. The suggested approach points to a strong possibility of its use in crime scene investigation, provided that validation studies in real-life scenarios are performed. © 2011 American Academy of Forensic Sciences.

  7. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to estimation approaches using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...

  8. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    Science.gov (United States)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated, and thus more challenging for SWE estimation, than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American, where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approximately 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approximately 79% of the EnBS value. Sensitivity tests show relative insensitivity of both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity of the EnBS to the mean prior precipitation input, especially where significant prior biases exist.
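    A minimal sketch of the particle batch smoother idea, with an invented linear depletion curve linking SWE to fSCA (not the paper's model): each prior replicate receives a single weight from the joint likelihood of all fSCA retrievals, and the posterior SWE is the weighted ensemble mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble of peak SWE (m) from an (assumed biased) model.
n = 500
swe_prior = rng.lognormal(mean=np.log(0.4), sigma=0.5, size=n)

def fsca_from_swe(swe, thresh):
    # Toy depletion curve: fSCA saturates once SWE exceeds a threshold.
    return np.clip(swe / thresh, 0.0, 1.0)

thresholds = np.array([0.2, 0.4, 0.6])   # one per satellite overpass date
obs_fsca = np.array([1.0, 0.9, 0.55])    # synthetic fSCA retrievals
sigma_obs = 0.1                          # fSCA measurement error

# Batch smoother: one weight per replicate from the joint likelihood of
# *all* observations at once (no sequential resampling).
pred = np.stack([fsca_from_swe(swe_prior, th) for th in thresholds], axis=1)
loglik = -0.5 * np.sum(((obs_fsca - pred) / sigma_obs) ** 2, axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()

posterior_mean = np.sum(w * swe_prior)
print(f"prior mean SWE {swe_prior.mean():.3f} m -> posterior {posterior_mean:.3f} m")
```

With these synthetic retrievals the posterior is pulled below the biased prior mean, which is the bias-reduction behavior the abstract reports.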

  9. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    Science.gov (United States)

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation from body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. An explicit formula for the TOD is then given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure requires knowledge of neither the temperature at death nor the body mass. In fact, the method allows the temperature at death to be estimated once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
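    This is not the paper's least-squares procedure, but a brute-force illustration of the same inverse problem: fit a standard form of the Marshall-Hoare double-exponential cooling curve to a few readings by grid search over the two cooling rates and the time-since-death offset. Unlike the paper's method, the ambient and at-death temperatures are assumed known here for simplicity:

```python
import numpy as np

def marshall_hoare(t, T_amb, T0, k, p):
    # A standard Marshall-Hoare double-exponential cooling curve,
    # t in hours since death, cooling rates k < p.
    return T_amb + (T0 - T_amb) * (p * np.exp(-k * t) - k * np.exp(-p * t)) / (p - k)

# Synthetic readings: ambient 20 C, 37 C at death, true rates k=0.08, p=0.40,
# first reading taken D_true = 4 h after death, then hourly.
T_amb, T0 = 20.0, 37.0
D_true, k_true, p_true = 4.0, 0.08, 0.40
s = np.array([0.0, 1.0, 2.0, 3.0])          # hours since first reading
obs = marshall_hoare(D_true + s, T_amb, T0, k_true, p_true)

# Coarse grid search over the two cooling rates and the TOD offset D.
best = (np.inf, None)
for k in np.arange(0.02, 0.21, 0.01):
    for p in np.arange(0.10, 1.01, 0.05):
        if abs(p - k) < 1e-9:
            continue                         # avoid the k == p singularity
        for D in np.arange(0.5, 10.01, 0.1):
            sse = np.sum((obs - marshall_hoare(D + s, T_amb, T0, k, p)) ** 2)
            if sse < best[0]:
                best = (sse, (k, p, D))

k_est, p_est, D_est = best[1]
print(f"estimated time since death at first reading: {D_est:.1f} h")
```

The paper replaces this exhaustive search with a proper nonlinear least-squares solver plus a data-driven initial guess for the rates, which is far more efficient.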

  10. Resource Allocation for Multicell Device-to-Device Communications in Cellular Network: A Game Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Jun Huang

    2015-08-01

    Device-to-Device (D2D) communication has recently emerged as a promising technology for improving the capacity and coverage of cellular systems. To successfully implement D2D communications underlaying a cellular network, resource allocation for D2D links plays a critical role. While most prior resource allocation mechanisms for D2D communications have focused on interference within a single-cell system, this paper investigates the resource allocation problem for a multicell cellular network in which a D2D link reuses the available spectrum resources of multiple cells. A repeated-game theoretic approach is proposed to address the problem. In this game, the base stations (BSs) act as players that compete to supply resources to the D2D link, and the utility of each player is formulated as the revenue collected from both cellular and D2D users using its resources. Extensive simulations are conducted to verify the proposed approach, and the results show that it can considerably enhance system performance in terms of sum rate and sum rate gain.

  11. Molecular acidity: An accurate description with information-theoretic approach in density functional reactivity theory.

    Science.gov (United States)

    Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin

    2018-01-15

    Molecular acidity is one of the important physicochemical properties of a molecular system, yet its accurate calculation and prediction remain an unresolved problem in the literature. In this work, we propose to make use of quantities from the information-theoretic (IT) approach in density functional reactivity theory to provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acid series have been thoroughly examined: singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict the experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and applicable to other systems as well. © 2017 Wiley Periodicals, Inc.
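    As a toy illustration of the simplest density-dependent quantity listed (not the molecular electron densities used in the paper), the Shannon entropy S = -∫ ρ(x) ln ρ(x) dx of a normalized one-dimensional model density can be computed on a grid and checked against the Gaussian closed form:

```python
import numpy as np

# Normalized 1-D Gaussian as a stand-in density; its differential Shannon
# entropy has the closed form 0.5 * ln(2*pi*e*sigma^2).
sigma = 1.3
x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

S_num = -np.sum(rho * np.log(rho)) * dx          # S = -integral of rho ln rho
S_exact = 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)
print(f"numerical {S_num:.6f} vs analytic {S_exact:.6f}")
```

In the paper such quantities are evaluated from three-dimensional molecular densities and then correlated with experimental pKa values.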

  12. Game Theoretic Approach for Systematic Feature Selection; Application in False Alarm Detection in Intensive Care Units

    Directory of Open Access Journals (Sweden)

    Fatemeh Afghah

    2018-03-01

    Intensive Care Units (ICUs) are equipped with many sophisticated sensors and monitoring devices to provide the highest quality of care for critically ill patients. However, these devices can generate false alarms that reduce the standard of care and desensitize caregivers to alarms. Therefore, reducing the number of false alarms is of great importance. Many approaches, such as signal processing, machine learning, and the design of more accurate sensors, have been developed for this purpose. However, the significant intrinsic correlation among the features extracted from different sensors has been mostly overlooked. A majority of current data mining techniques fail to capture such correlation among the signals collected from different sensors, which limits their alarm recognition capabilities. Here, we propose a novel information-theoretic predictive modeling technique based on coalition game theory to enhance the accuracy of false alarm detection in ICUs by accounting for the synergistic power of signal attributes in the feature selection stage. This approach brings together techniques from information theory and game theory to account for inter-feature mutual information in determining the predictors most correlated with false alarms, by calculating the Banzhaf power of each feature. The numerical results show that the proposed method can enhance classification accuracy and improve the area under the ROC (receiver operating characteristic) curve compared to other feature selection techniques, when integrated in classifiers such as Bayes-Net that consider inter-feature dependencies.
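    The Banzhaf-power computation at the heart of this feature selection scheme can be sketched with a toy characteristic function (invented here; the paper derives coalition values from inter-feature mutual information with respect to the alarm label): a feature's power is its average marginal contribution over all coalitions of the remaining features:

```python
from itertools import combinations

# Toy characteristic function v(S): "predictive value" of a feature coalition.
# Stand-alone values per feature, plus one synergistic pair (features 0 and 2).
solo = {0: 0.30, 1: 0.25, 2: 0.20, 3: 0.05}

def v(S):
    base = sum(solo[i] for i in S)
    if 0 in S and 2 in S:
        base += 0.15               # synergy bonus for the correlated pair
    return min(base, 1.0)          # predictive value capped at 1

features = list(solo)

# Banzhaf power of feature i: mean marginal contribution v(S + {i}) - v(S)
# over all 2^(n-1) coalitions S drawn from the other features.
banzhaf = {}
for i in features:
    others = [f for f in features if f != i]
    total, count = 0.0, 0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            total += v(set(S) | {i}) - v(set(S))
            count += 1
    banzhaf[i] = total / count

print(banzhaf)
```

Feature 0 ranks above feature 1 despite similar stand-alone values because half of the coalitions unlock its synergy with feature 2, which is exactly the effect per-feature filters miss.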

  13. A Theoretical Approach to Understanding Population Dynamics with Seasonal Developmental Durations

    Science.gov (United States)

    Lou, Yijun; Zhao, Xiao-Qiang

    2017-04-01

    There is a growing body of biological investigation into the impacts of seasonally changing environmental conditions on population dynamics in various research fields, such as single-population growth and disease transmission. On the other hand, understanding population dynamics subject to seasonally changing weather conditions plays a fundamental role in predicting trends in population patterns and disease transmission risks under climate change scenarios. With the host-macroparasite interaction as a motivating example, we propose a synthesized approach for investigating population dynamics subject to seasonal environmental variations from a theoretical point of view, involving model development, basic reproduction ratio formulation and computation, and rigorous mathematical analysis. The resulting model with periodic delay presents a novel term related to the rate of change of the developmental duration, bringing new challenges to the dynamics analysis. By investigating a periodic semiflow on a suitably chosen phase space, global dynamics of threshold type are established: all solutions either go to zero when the basic reproduction ratio is less than one, or stabilize at a positive periodic state when the ratio is greater than one. The synthesized approach developed here is applicable to broader contexts of investigating biological systems with seasonal developmental durations.

  14. Developing Theoretical-Methodological Approaches to the Assessment of the Export Potential of Ukrainian Enterprises

    Directory of Open Access Journals (Sweden)

    Matyushenko Igor Yu.

    2016-02-01

    The article aims to study the existing theoretical and methodological approaches to the analysis and assessment of export potential. The views of scholars on the content of the concept of «export potential» are considered, and the authors' own definition of this economic category is suggested. The main types of analytical assessment procedures are classified, and several methodical approaches to determining the level of export potential are analyzed. The export potential of a hypothetical enterprise is calculated using the selected assessment methodologies. The urgency of improving and refining the existing methods to enable more detailed and quantitative analysis is substantiated. It is suggested that a prognostic assessment of the export potential of enterprises be implemented by combining the results of several methodologies into an aggregate indicator of export potential efficiency. A prognostic model of the dynamics of the export potential of a hypothetical enterprise is built, and the value of the aggregate indicator is calculated on the basis of the three selected valuation methodologies.

  15. The strong coupling constant: its theoretical derivation from a geometric approach to hadron structure

    International Nuclear Information System (INIS)

    Recami, E.; Tonin-Zanchin, V.

    1991-01-01

    For more than a decade, a bi-scale, unified approach to strong and gravitational interactions has been proposed that uses the geometrical methods of general relativity and has yielded results similar to those of strong gravity theory. In this note we fix our attention on hadron structure and show that the strong interaction strength α_s, ordinarily called the (perturbative) squared coupling constant, can also be evaluated within our theory, and is found to decrease (increase) as the distance r decreases (increases). This yields both the confinement of the hadron constituents for large values of r and their asymptotic freedom for small values of r inside the hadron, in qualitative agreement with the experimental evidence. In other words, our approach leads us, on purely theoretical grounds, to a dependence of α_s on r which had previously been found only on phenomenological and heuristic grounds. We expect the above agreement to be quantitative as well, on the basis of a few checks performed in this paper and of our further work on calculating meson mass spectra. (author)

  16. Cartography, new technologies and geographic education: theoretical approaches to research the field

    Science.gov (United States)

    Seneme do Canto, Tânia

    2018-05-01

    In order to understand the roles that digital mapping can play in cartographic and geographic education, this paper discusses the theoretical and methodological approach used in ongoing research on the education of geography teachers. To develop the study, we draw on the work of Lankshear and Knobel (2013), whose notion of new literacies allows us to look at digital mapping practices from a sociocultural perspective. From it, we conclude that in order to understand the changes that digital cartography can foment in geography teaching, it is necessary to go beyond the substitution of media in the classroom and to explore what makes the new mapping practices different from others already consolidated in geography teaching. We therefore comment on some features of new forms of cartographic literacy that are in full development with digital technologies but are not determined solely by their use. The ideas of Kitchin and Dodge (2007) and Del Casino Junior and Hanna (2006) are also an important reference for the research. Methodologically, this approach helps us understand that in seeking to comprehend maps and their meanings, irrespective of the medium used, we are dealing with a process of literacy that is very particular and emergent, because it involves not only the characteristics of the map artifact and of the individual who produces or consumes it, but depends mainly on the diversity of interconnections being built between them (map and individual) and the world.

  17. The prediction of candidate genes for cervix related cancer through gene ontology and graph theoretical approach.

    Science.gov (United States)

    Hindumathi, V; Kranthi, T; Rao, S B; Manimaran, P

    2014-06-01

    With rapidly changing technology, the prediction of candidate genes has become an indispensable task in recent years, mainly in the field of biological research. Empirical methods for candidate gene prioritization that help explore the potential pathways between genetic determinants and complex diseases are highly cumbersome and labor intensive. In such a scenario, predicting potential targets for a disease state through in silico approaches is of great interest to researchers. The prodigious availability of protein interaction data coupled with gene annotation enables the accurate determination of disease-specific candidate genes. In our work we have prioritized cervix-related cancer candidate genes by employing the approach of Csaba Ortutay and his co-workers, identifying candidate genes through graph-theoretical centrality measures and gene ontology. Using human protein interaction data, cervical cancer gene sets, and ontological terms, we were able to predict 15 novel candidates for cervical carcinogenesis. The disease relevance of the anticipated candidate genes was corroborated through a literature survey. The availability of drugs for these candidates was also checked through the Therapeutic Target Database (TTD) and DrugMap Central (DMC), which affirms that they may serve as potential drug targets for cervical cancer.

  18. An effectiveness analysis of healthcare systems using a systems theoretic approach

    Directory of Open Access Journals (Sweden)

    Inder Kerry

    2009-10-01

    surveyors is developed that provides a systematic search for improving the impact of accreditation on quality of care and hence on the accreditation/performance correlation. Conclusion: There is clear value in developing a theoretical systems approach to achieving quality in health care. The introduction of the systematic surveyor-based search for improvements creates an adaptive-control system to optimize health care quality. It is hoped that these outcomes will stimulate further research in the development of strategic planning using a systems theoretic approach for the improvement of quality in health care.

  19. An effectiveness analysis of healthcare systems using a systems theoretic approach.

    Science.gov (United States)

    Chuang, Sheuwen; Inder, Kerry

    2009-10-24

    improving the impact of accreditation on quality of care and hence on the accreditation/performance correlation. There is clear value in developing a theoretical systems approach to achieving quality in health care. The introduction of the systematic surveyor-based search for improvements creates an adaptive-control system to optimize health care quality. It is hoped that these outcomes will stimulate further research in the development of strategic planning using a systems theoretic approach for the improvement of quality in health care.

  20. Beyond the Cognitive and the Virtue Approaches to Moral Education: Some Theoretical Foundations for an Integrated Account of Moral Education.

    Science.gov (United States)

    Roh, Young-Ran

    2000-01-01

    Explores theoretical foundation for integrated approach to moral education; discusses rational choice and moral action within human reflective structure; investigates moral values required for integrative approach to moral education; discusses content of moral motivation, including role of emotion and reason. (Contains 15 references.) (PKP)

  1. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    Science.gov (United States)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated gearbox test. In this application, the new state space model shows a better fit than the state space model with linear and Gaussian assumptions.
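    The Monte Carlo estimation described can be illustrated with a bootstrap particle filter on a toy version of the problem (invented dynamics and observation model, not the paper's): a latent crack depth grows nonlinearly and irreversibly, while inspections return only a noisy indirect indicator:

```python
import numpy as np

rng = np.random.default_rng(1)

T, n = 40, 2000

# Simulate a "true" latent crack depth and an indirect indicator, assumed
# here to be proportional to the square of the depth plus noise.
x_true = 1.0
obs = []
for _ in range(T):
    x_true = x_true + 0.02 * x_true**1.3 + abs(rng.normal(0.0, 0.01))
    obs.append(x_true**2 + rng.normal(0.0, 0.3))

# Bootstrap particle filter: propagate each particle with the same
# nonlinear, irreversible dynamics, weight by the indicator likelihood,
# then resample.
particles = np.full(n, 1.0)
for y in obs:
    particles = (particles + 0.02 * particles**1.3
                 + np.abs(rng.normal(0.0, 0.01, size=n)))
    w = np.exp(-0.5 * ((y - particles**2) / 0.3) ** 2)
    particles = rng.choice(particles, size=n, p=w / w.sum())

estimate = particles.mean()
print(f"true depth {x_true:.3f}, filtered estimate {estimate:.3f}")
```

Remaining useful life could then be estimated by propagating the surviving particles forward until they cross a failure threshold, in the spirit of the paper's Monte Carlo algorithms.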

  2. A theoretical framework for Ångström equation. Its virtues and liabilities in solar energy estimation

    International Nuclear Information System (INIS)

    Stefu, Nicoleta; Paulescu, Marius; Blaga, Robert; Calinoiu, Delia; Pop, Nicolina; Boata, Remus; Paulescu, Eugenia

    2016-01-01

    Highlights: • A self-consistent derivation of the Ångström equation is carried out. • The theoretical assessment of its performance is well supported by the measured data. • The variability in cloud transmittance is a major source of uncertainty for the estimates. • The degradation in time and space of the calibration of the empirical equations is assessed. - Abstract: The relation between solar irradiation and sunshine duration has been investigated since the very beginning of solar radiation measurements. Many studies have been devoted to this topic, aiming to capture the complex influence of clouds on solar irradiation in equations. This study focuses on the linear relationship between the clear-sky index and the relative sunshine proposed in the pioneering work of Ångström. A full semi-empirical derivation of the equation, highlighting its virtues and liabilities, is presented. Specific Ångström-type equations for beam and diffuse solar irradiation were derived separately. The sum of the two components recovers the traditional form of the Ångström equation. The physical meaning of the Ångström parameter, as the average of the cloud transmittance, emerges naturally. The theoretical results on the performance of the Ångström equation are well supported by tests against measured data. Using long-term records of global solar irradiation and sunshine duration from thirteen European radiometric stations, the influence of the Ångström constraint (slope equals one minus intercept) on the accuracy of the estimates is analyzed. Another focus is the assessment of the degradation of the equation's calibration. The temporal variability in cloud transmittance (both long-term trend and fluctuations) is a major source of uncertainty for Ångström equation estimates.
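    The linear Ångström relation between clear-sky index h = H/H0 and relative sunshine s = S/S0 can be calibrated by ordinary least squares. The sketch below uses synthetic data with invented coefficients, and also shows the one-parameter fit under the constraint discussed in the abstract (slope equals one minus intercept):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly records: relative sunshine s = S/S0 and clear-sky index
# h = H/H0, generated from an Angstrom-type law h = a + b*s plus noise.
a_true, b_true = 0.25, 0.50
s = rng.uniform(0.2, 0.9, size=120)
h = a_true + b_true * s + rng.normal(0.0, 0.02, size=120)

# Unconstrained ordinary least squares for the two Angstrom coefficients.
b, a = np.polyfit(s, h, 1)
print(f"a = {a:.3f}, b = {b:.3f}")

# Constrained variant (b = 1 - a): then h - s = a * (1 - s), a
# one-parameter least-squares problem.
a_c = np.sum((h - s) * (1.0 - s)) / np.sum((1.0 - s) ** 2)
print(f"constrained: a = {a_c:.3f}, b = {1.0 - a_c:.3f}")
```

Since the synthetic coefficients deliberately violate the constraint, the two fits differ, which is the kind of accuracy impact the abstract analyzes with the thirteen-station data set.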

  3. Nuclear energy policy analysis under uncertainties : applications of new utility theoretic approaches

    International Nuclear Information System (INIS)

    Ra, Ki Yong

    1992-02-01

    For the purpose of analyzing nuclear energy policy under uncertainty, new utility theoretic approaches were applied. The main discoveries of the new utility theories are that, firstly, consequences can affect the perceived probabilities; secondly, utilities are not fixed but can change; and finally, utilities and probabilities should therefore be combined dependently to determine the overall worth of a risky option. These conclusions were applied to develop a modified expected utility model and to establish a probabilistic nuclear safety criterion. The modified expected utility model was developed in order to resolve the inconsistencies between the expected utility model and actual decision behavior. Based on information theory and Bayesian inference, the modified probabilities were obtained as the stated probabilities times substitutional factors. The model theoretically predicts that extreme-value outcomes are perceived as more likely to occur than medium-value outcomes. This prediction is consistent with the first finding of the new utility theories, that consequences can alter the perceived probabilities. With this theoretical prediction, the decision behaviors of buying lottery tickets, paying for insurance, and nuclear catastrophic risk aversion can be well explained. Through numerical application, it is shown that the developed model can explain the common consequence effect, the common ratio effect, and the reflection effect. The probabilistic nuclear safety criterion for core melt frequency was established as follows: Firstly, the distribution of the public's safety goal (DPSG) was proposed for representing the public's group preference under risk. Secondly, a new probabilistic safety criterion (PSC) was established, in which the DPSG is used as a benchmark for evaluating the results of probabilistic safety assessment. Thirdly, a log-normal distribution was proposed as the appropriate DPSG for core melt frequency using the

  4. PREFACE: The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches

    Science.gov (United States)

    Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro

    2005-01-01

    The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai, from 16 to 21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002, and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop lasting international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both theoretical interest and practical requirements. In particular, in Asian countries one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized, with Professor Li Tat-tsien (Fudan University) as chairperson, in order to provide a forum for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, the second conference welcomed a total of 92 participants and arranged talks ranging from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems. Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in

  5. An information-theoretic approach to assess practical identifiability of parametric dynamical systems.

    Science.gov (United States)

    Pant, Sanjay; Lombardi, Damiano

    2015-10-01

    A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information-theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes measurement noise into account and is not restricted to any particular noise structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
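    The abstract does not give the estimator's formulas, but a standard non-parametric k-nearest-neighbour entropy estimator of the kind it refers to is the Kozachenko-Leonenko estimator; the sketch below, checked against a Gaussian with known entropy, is an assumption about the general technique, not the authors' exact implementation.

```python
import math
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(samples, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    x = np.asarray(samples, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    n, d = x.shape
    # distance of each sample to its k-th nearest neighbour (self excluded)
    dist, _ = cKDTree(x).query(x, k=k + 1)
    r = np.maximum(dist[:, -1], 1e-12)          # guard against zero distances
    log_vd = (d / 2) * math.log(math.pi) - gammaln(d / 2 + 1)  # log volume of unit d-ball
    return float(digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r)))

# Sanity check against the analytic entropy of N(0, 1): 0.5*ln(2*pi*e) ~ 1.4189.
rng = np.random.default_rng(0)
est = kl_entropy(rng.standard_normal(5000), k=3)
```

Expected information gain about the parameters is then a difference of such entropies (prior entropy minus expected posterior entropy), which is how identifiability is scored in the abstract's framework.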

  6. THEORETICAL APPROACHES TO THE DEFINITION OF MOTIVATION OF PROFESSIONAL ACTIVITY OF PUBLIC SERVANTS

    Directory of Open Access Journals (Sweden)

    E. V. Vashalomidze

    2016-01-01

    Full Text Available The relevance of the chosen topic is due to the presence of problems in developing the performance motivation of civil servants, including their motivation for continuous professional development, as one of the main directions of development of the civil service in general, approved by the relevant Presidential Decree for 2016–2018. The first part of this article provides a brief analytical overview and an assessment of content- and process-based theoretical and methodological approaches to solving the problems of motivating the personnel of socio-economic systems. The second part of the article, on the basis of this research, proposes motivating factors developed from the approaches set out in the first part. The purpose / goal. The aim of the article is to provide methodological assistance to academic institutions involved in solving the scientific and practical problems of motivating civil servants toward continuous professional development, in accordance with the Presidential Decree of 11 August 2016 № 408. Methodology. The methodological basis of this article comprises: a comprehensive analysis of the normative legal provision of the state regulation of the Russian Federation; a systematic approach and historical analysis of the theory and methodology of solving problems of staff motivation; the method of expert evaluations; and the scientific method of analogies. Conclusions / relevance. The practical significance of the article lies in the prompt delivery of scientific and methodological assistance for the implementation of the Presidential Decree of 11 August number 403 "On the main directions of development of the state civil service of the Russian Federation in 2016–2018", with regard to the establishment of mechanisms to motivate civil servants toward continuous professional development.

  7. Theoretical approach to study the light particles induced production routes of 22Na

    International Nuclear Information System (INIS)

    Eslami, M.; Kakavand, T.; Mirzaii, M.

    2015-01-01

    Highlights: • Excitation functions of 22Na via thirty-three different reactions. • Various theoretical frameworks, along with adjustments, are employed in the calculations. • Results are given for the energy range from threshold up to 100 MeV. • Results are compared with each other and with the corresponding experimental data. - Abstract: To create a roadmap for the industrial-scale production of sodium-22, various production routes of this radioisotope involving light charged-particle-induced reactions, at bombarding energies from threshold to a maximum of 100 MeV, have been calculated. The excitation functions are calculated using various nuclear models. Pre-equilibrium reaction calculations have been made in the framework of the hybrid and geometry-dependent hybrid models using the ALICE/ASH code, and in the framework of the exciton model using the TALYS-1.4 code. To calculate the compound-nucleus evaporation process, both Weisskopf–Ewing and Hauser–Feshbach theories have been employed. The cross sections have also been estimated separately with five different level-density models over the whole projectile energy range. A comparison between the calculations of the codes on the one hand and experimental data on the other is presented and discussed

  8. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the theoretical approach

    Energy Technology Data Exchange (ETDEWEB)

    Staub, Isabelle; Fredriksson, Anders; Outters, Nils [Golder Associates AB, Uppsala (Sweden)

    2002-05-01

    For the purpose of studying the possibilities of a Deep Repository for spent fuel, the Swedish Nuclear Fuel and Waste Management Company (SKB) is currently planning for Site Investigations. Data collected from these Site Investigations are interpreted and analysed to achieve the full Site Description, which is built up of models from all the disciplines that are considered of importance for the Site Description. One of these models is the Rock Mechanical Descriptive Model, which would be developed for any site in hard crystalline rock, and is a combination and evaluation of the characterisation of the rock mass by means of empirical relationships and a theoretical approach based on numerical modelling. The present report describes the theoretical approach. The characterisation of the mechanical properties of the rock mass, viewed as a unit consisting of intact rock and fractures, is achieved by numerical simulations with the following input parameters: initial stresses, fracture geometry, and distributions of rock mechanical properties, such as deformation and strength parameters, for the intact rock and for the fractures. The numerical modelling was performed with the two-dimensional code UDEC, and the rock block models were generated from 2D trace sections extracted from the 3D Discrete Fracture Network (DFN) model. Assumptions and uncertainties related to the set-up of the model are considered. The numerical model was set up to simulate a plane-strain loading test. Different boundary conditions were applied to the model to simulate stress conditions (I) in the undisturbed rock mass and (II) in the proximity of a tunnel. In order to assess the reliability of the model, sensitivity analyses have been conducted on some rock block models to define the dependency of mechanical properties on in situ stresses, the influence of boundary conditions, the rock material and joint constitutive models used to simulate the behaviour of intact rock and fractures, domain size, and anisotropy. To

  9. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the theoretical approach

    International Nuclear Information System (INIS)

    Staub, Isabelle; Fredriksson, Anders; Outters, Nils

    2002-05-01

    For the purpose of studying the possibilities of a Deep Repository for spent fuel, the Swedish Nuclear Fuel and Waste Management Company (SKB) is currently planning for Site Investigations. Data collected from these Site Investigations are interpreted and analysed to achieve the full Site Description, which is built up of models from all the disciplines that are considered of importance for the Site Description. One of these models is the Rock Mechanical Descriptive Model, which would be developed for any site in hard crystalline rock, and is a combination and evaluation of the characterisation of the rock mass by means of empirical relationships and a theoretical approach based on numerical modelling. The present report describes the theoretical approach. The characterisation of the mechanical properties of the rock mass, viewed as a unit consisting of intact rock and fractures, is achieved by numerical simulations with the following input parameters: initial stresses, fracture geometry, and distributions of rock mechanical properties, such as deformation and strength parameters, for the intact rock and for the fractures. The numerical modelling was performed with the two-dimensional code UDEC, and the rock block models were generated from 2D trace sections extracted from the 3D Discrete Fracture Network (DFN) model. Assumptions and uncertainties related to the set-up of the model are considered. The numerical model was set up to simulate a plane-strain loading test. Different boundary conditions were applied to the model to simulate stress conditions (I) in the undisturbed rock mass and (II) in the proximity of a tunnel. In order to assess the reliability of the model, sensitivity analyses have been conducted on some rock block models to define the dependency of mechanical properties on in situ stresses, the influence of boundary conditions, the rock material and joint constitutive models used to simulate the behaviour of intact rock and fractures, domain size, and anisotropy. To

  10. Theoretical estimation of absorbed dose to organs in radioimmunotherapy using radionuclides with multiple unstable daughters

    International Nuclear Information System (INIS)

    Hamacher, K.A.; Sgouros, G.

    2001-01-01

    The toxicity and clinical utility of long-lived alpha emitters such as Ac-225 and Ra-223 will depend upon the fate of the alpha-particle-emitting unstable intermediates generated after decay of the conjugated parent. For example, decay of Ac-225 to a stable element yields four alpha particles and seven radionuclides. Each of these progeny has its own free-state biodistribution and characteristic half-life. Therefore, their inclusion for a more accurate prediction of absorbed dose and potential toxicity requires a formalism that takes these factors into consideration as well. To facilitate the incorporation of such intermediates into the dose calculation, a previously developed methodology (model 1) has been extended. Two new models (models 2 and 3) for allocation of daughter products are introduced and compared with the previously developed model. Model 1 restricts the transport to a function that yields either the place of origin or the place(s) of biodistribution, depending on the half-life of the parent radionuclide. Model 2 includes the transit time within the bloodstream, and model 3 incorporates additional binding at or within the tumor. This means that model 2 also allows for radionuclide decay and further daughter production while moving from one location to the next, and that model 3 relaxes the constraint that the residence time within the tumor is based solely on the half-life of the parent. The models are used to estimate normal-organ absorbed doses for the following parent radionuclides: Ac-225, Pb-212, At-211, Ra-223, and Bi-213. Model simulations are for a 0.1 g rapidly accessible tumor and a 10 g solid tumor. Additionally, the effects of varying radiolabeled carrier molecule purity and amount of carrier molecules, as well as tumor-cell antigen saturation, are examined. The results indicate that there is a distinct advantage in using parent radionuclides such as Ac-225 or Ra-223, each having a half-life of more than 10 days and yielding four alpha
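    The free-state kinetics of such a serial decay chain are governed by the Bateman equations; the sketch below solves them for the first members of the Ac-225 chain. The half-lives used (Ac-225 ~9.9 d, Fr-221 ~4.8 min, At-217 ~32 ms) are approximate textbook values, and the closed-form solution is a generic illustration, not the paper's transport models.

```python
import math

def bateman(n0, lambdas, t):
    """Atoms of each member of a serial decay chain at time t, starting
    from n0 atoms of the parent only (classic Bateman solution; assumes
    all decay constants are distinct)."""
    members = []
    for n in range(1, len(lambdas) + 1):
        lam = lambdas[:n]
        coeff = n0 * math.prod(lam[:-1])   # product of preceding decay constants
        s = sum(
            math.exp(-lam[i] * t)
            / math.prod(lam[j] - lam[i] for j in range(n) if j != i)
            for i in range(n)
        )
        members.append(coeff * s)
    return members

# Approximate half-lives in seconds: Ac-225, Fr-221, At-217 (illustrative).
half_lives = [9.92 * 86400, 4.8 * 60, 0.0323]
lam = [math.log(2) / t for t in half_lives]
atoms = bateman(1.0e6, lam, t=9.92 * 86400)    # after one parent half-life
```

After one parent half-life the short-lived daughters are in transient equilibrium, with roughly (lambda_parent/lambda_daughter) times the parent's atom count, so their dose contribution follows wherever the free daughters distribute.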

  11. Estimating plant root water uptake using a neural network approach

    DEFF Research Database (Denmark)

    Qiao, D M; Shi, H B; Pang, H B

    2010-01-01

    Water uptake by plant roots is an important process in the hydrological cycle, not only for plant growth but also for the role it plays in shaping the microbial community and bringing physical and biochemical changes to soils. The ability of roots to extract water is determined by combined soil and plant characteristics, and how to model it has been of interest for many years. Most macroscopic models for water uptake operate at the soil-profile scale under the assumption that the uptake rate depends on root density and soil moisture. Whilst proved appropriate, these models need spatio-temporal root ... but has not yet been addressed. This paper presents and tests such an approach. The method is based on a neural network model, estimating the water uptake using different types of data that are easy to measure in the field. Sunflower grown in a sandy loam subjected to water stress and salinity was taken ...
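    The record is truncated, so the network architecture and inputs are not specified; the sketch below trains a small one-hidden-layer network on synthetic data in the same spirit (root density and soil moisture as easy-to-measure inputs, a smooth nonlinear uptake relation as target). The data and the toy relation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs: column 0 = root density, column 1 = soil moisture,
# both rescaled to [0, 1]; the target uptake relation is an assumed toy form.
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = (X[:, 0] * np.sqrt(X[:, 1]))[:, None]

# One-hidden-layer network trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = float(np.mean((pred0 - y) ** 2))

for _ in range(3000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)        # d(MSE)/d(prediction)
    gW2 = h.T @ g; gb2 = g.sum(0)
    dh = (g @ W2.T) * (1.0 - h ** 2)     # backpropagate through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = float(np.mean((pred - y) ** 2))
```

The training loss drops well below the variance of the target, showing how such a network can approximate an uptake relation from field-measurable covariates without explicit root-density maps.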

  12. THEORETICAL AND METHODICAL APPROACHES TO THE FORMATION AND EVALUATION OF THE QUALITY OF TOURIST SERVICES

    Directory of Open Access Journals (Sweden)

    Nataliya Vasylykha

    2017-12-01

    Full Text Available The study whose results are described in this article is devoted to analysing and substantiating approaches to the assessment and assurance of the quality of tourism services, which forms their competitiveness, namely quality factors and indicators. After all, the integration and globalization of world society drive the development of tourism as a catalyst of these global processes, and world practice has proved that tourism can be an effective way to solve many socio-economic problems. The subject of the study is the peculiarities of assessing the quality of tourist services. Methodology. The methodological basis of the work is a system of general and special scientific methods; in particular, system-analytical and dialectical methods are used for the theoretical generalization of the investigated material, and the structural-logical method for systematizing the factors and indicators of the quality of tourist services. The purpose of the article is a theoretical justification of approaches to the quality of tourist services and the optimization of their quality assessment. In the research, approaches to the interpretation of the concept of quality are presented and analysed, the features of services in general and of tourism in particular are considered, and it is suggested to group and classify the factors and indicators of their quality. The interpretation of the notion of quality is ambiguous, both in Ukrainian and in foreign literary sources, and depends on the point of view taken on this notion. In our opinion, the most thorough definition characterizes the quality of products and services as a complex feature that determines their suitability to the needs of the consumer. Taking into account the specificity of the term "service", the peculiarities determining the approaches to their evaluation are studied; such a service can be considered a product dominated by intangible elements and also

  13. On the electric dipole moments of small sodium clusters from different theoretical approaches

    International Nuclear Information System (INIS)

    Aguado, Andrés; Largo, Antonio; Vega, Andrés; Balbás, Luis Carlos

    2012-01-01

    Graphical abstract: The dipole moments and polarizabilities of a few isomers of sodium clusters of selected sizes (n = 13, 14, 16) are calculated using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Highlights: ► Dipole moment and polarizability of sodium clusters from DFT and ab initio methods. ► New van der Waals self-consistent implementation of non-local dispersion interactions. ► New starting isomeric geometries from extensive search of global minimum structures. ► Good agreement with recent experiments at cryogenic temperatures. - Abstract: The dipole moments of Na clusters of selected sizes (n = 13, 14, 16), obtained recently through an extensive unbiased search of the global minimum structures, are calculated using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Both non-local pseudopotentials and all-electron implementations are employed and compared in order to assess the possible contribution of the core electrons to the electric dipole moments. Our new geometries possess significantly smaller electric dipole moments than previous density functional results, mostly when combined with the van der Waals exchange–correlation functional. However, although the agreement with experiment clearly improves upon previous calculations, the theoretical dipole moments are still about one order of magnitude larger than the experimental values, suggesting that the correct global minimum structures have not been

  14. Improving the interpersonal competences of head nurses through Peplau's theoretical active learning approach.

    Science.gov (United States)

    Suhariyanto; Hariyati, Rr Tutik Sri; Ungsianik, Titin

    2018-02-01

    Effective interpersonal skills are essential for head nurses in governing and managing their work units; therefore, an active learning strategy could be the key to enhancing the interpersonal competences of head nurses. This study aimed to investigate the effects of Peplau's theoretical approach of active learning on the improvement of head nurses' interpersonal skills. The study used a pre-experimental design with one group taking pretests and posttests, without a control group. A total sample of 25 head nurses from inpatient units of a well-known private hospital in Jakarta was involved in the study. Data were analyzed using the paired t-test. The results showed a significant increase in head nurses' knowledge following the training to strengthen their interpersonal roles (P=.003). The results also revealed significant increases in the head nurses' skills in playing the roles of leader (P=.006), guardian (P=.014), and teacher/speaker (P=.015). Nonetheless, the results showed no significant increases in the head nurses' skills in playing the roles of counselor (P=.092) and stranger (P=.182). Training in strengthening the interpersonal roles of head nurses significantly increased the head nurses' knowledge and skills. The results of the study suggested the continuation of active learning strategies to improve the interpersonal abilities of head nurses. Furthermore, these strategies could be used to build the abilities of head nurses in other managerial fields. Copyright © 2018 Elsevier España, S.L.U. All rights reserved.
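    The study's inference is a paired (pre/post) t-test on a single group; a minimal sketch with SciPy is below. The scores are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post interpersonal-skill scores for ten head nurses.
pre  = np.array([62, 70, 58, 66, 71, 60, 64, 69, 57, 63], dtype=float)
post = np.array([68, 74, 63, 70, 78, 61, 70, 75, 60, 66], dtype=float)

# Paired t-test: tests whether the mean within-subject change is zero.
t_stat, p_value = stats.ttest_rel(post, pre)
```

A significant p-value (as with P=.003 in the abstract) indicates the mean within-subject change is unlikely under the no-effect hypothesis; without a control group, it cannot rule out confounds such as practice effects.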

  15. Does Macaulay Duration Provide The Most Cost-Effective Immunization Method – A Theoretical Approach

    Directory of Open Access Journals (Sweden)

    Zaremba Leszek

    2017-02-01

    Full Text Available In the following, we offer a theoretical approach that attempts to explain (Comments 1-3) why and when the Macaulay duration concept happens to be a good approximation of a bond's price sensitivity. We are concerned with the basic immunization problem with a single liability to be discharged at a future time q. Our idea is to divide the class K of all shifts a(t) of a term structure of interest rates s(t) into many classes and then to find a sufficient and necessary condition that a given bond portfolio, dependent on a class of shifts, must satisfy to secure immunization at time q against all shifts a(t) from that class. For this purpose, we introduce the notions of dedicated duration and dedicated convexity. For each class of shifts, we show how to choose from a bond market under consideration a portfolio with maximal dedicated convexity among all immunizing portfolios. We demonstrate that this portfolio yields the maximal unanticipated rate of return and appears to be uniquely determined as a barbell strategy (a portfolio built of two zero-coupon bonds with maximal and minimal dedicated durations, respectively). Finally, an open problem addressed to researchers performing empirical studies is formulated.
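    For a flat yield curve with annual compounding, Macaulay duration and ordinary convexity follow directly from discounted cash flows; the sketch below uses a hypothetical 3-year 5% annual-coupon bond. The paper's dedicated duration and dedicated convexity are class-of-shift-dependent generalizations not reproduced here.

```python
def bond_stats(cashflows, times, y):
    """Price, Macaulay duration, and convexity of a bond under a flat
    annually compounded yield y; cashflows[i] is paid at times[i] years."""
    pv = [cf / (1.0 + y) ** t for cf, t in zip(cashflows, times)]
    price = sum(pv)
    duration = sum(t * v for v, t in zip(pv, times)) / price
    convexity = sum(t * (t + 1) * v for v, t in zip(pv, times)) / (price * (1.0 + y) ** 2)
    return price, duration, convexity

# Hypothetical bond: 3-year, 5% annual coupon, face value 100, yield 5%.
price, dur, conv = bond_stats([5.0, 5.0, 105.0], [1, 2, 3], 0.05)
```

At a yield equal to the coupon rate the bond prices at par, and a small parallel shift dy moves the price by roughly -dur/(1+y) * price * dy, the first-order sensitivity the Macaulay concept approximates; a barbell of two zero-coupon bonds can match a target duration while maximizing convexity.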

  16. Game Theoretical Approaches for Transport-Aware Channel Selection in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Chen Shih-Ho

    2010-01-01

    Full Text Available Effectively sharing channels among secondary users (SUs) is one of the greatest challenges in cognitive radio networks (CRNs). In the past, many studies have proposed channel selection schemes at the physical or the MAC layer that allow SUs to respond swiftly to the spectrum states. However, these may not enhance performance, owing to the slow response of the transport-layer flow control mechanism. This paper presents a cross-layer design framework called the Transport Aware Channel Selection (TACS) scheme to optimize the transport throughput based on the states, such as RTT and congestion window size, of the TCP flow control mechanism. We formulate the TACS problem via two different game-theoretic approaches, the Selfish Spectrum Sharing Game (SSSG) and the Cooperative Spectrum Sharing Game (CSSG), and present novel distributed heuristic algorithms to optimize TCP throughput. Computer simulations show that SSSG and CSSG can double the SUs' throughput relative to a current MAC-based scheme when primary users (PUs) use their channel infrequently, with a 12% to 100% throughput increase when PUs are more active. The simulation results also illustrate that CSSG performs up to 20% better than SSSG in terms of throughput.
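    The SSSG/CSSG formulations are not detailed in this record, but the convergence logic of a non-cooperative channel-selection game can be sketched with best-response dynamics in a simple congestion game (cost = load on the chosen channel), which is a potential game and therefore reaches a pure Nash equilibrium. The cost model is an illustrative assumption, not the TACS utility.

```python
def best_response_dynamics(n_players, n_channels, weight):
    """Repeatedly let each SU switch to its cheapest channel, where the
    cost of a channel is weight * (number of users that would share it).
    A congestion game is a potential game, so strict-improvement
    best-response updates terminate in a pure Nash equilibrium."""
    choice = [0] * n_players              # all SUs start on channel 0
    changed = True
    while changed:
        changed = False
        for p in range(n_players):
            # load on each channel as seen by player p (excluding itself)
            loads = [sum(1 for q in range(n_players) if q != p and choice[q] == ch)
                     for ch in range(n_channels)]
            cur_cost = weight[choice[p]] * (loads[choice[p]] + 1)
            best = min(range(n_channels), key=lambda ch: weight[ch] * (loads[ch] + 1))
            if weight[best] * (loads[best] + 1) < cur_cost:
                choice[p] = best
                changed = True
    return choice

eq = best_response_dynamics(n_players=6, n_channels=3, weight=[1.0, 1.0, 1.0])
counts = [eq.count(ch) for ch in range(3)]
```

With equal channel weights the equilibrium balances the load; a transport-aware variant would replace the weights with per-channel TCP-throughput estimates built from RTT and congestion-window state.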

  17. Theoretical simulations on the antioxidant mechanism of naturally occurring flavonoid: A DFT approach

    International Nuclear Information System (INIS)

    Praveena, R.; Sadasivam, K.

    2016-01-01

    Synthetic antioxidants such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are found to be toxic; hence, non-carcinogenic naturally occurring radical scavengers, especially flavonoids, have gained considerable importance in the past two decades. In the present investigation, the radical scavenging activity of C-glycosyl flavonoids is evaluated using a theoretical approach, which could broaden their scope in therapeutic applications. Gas- and solvent-phase studies of the structural and molecular characteristics of the C-glycosyl flavonoid isovitexin are investigated through the hydrogen atom transfer (HAT), electron transfer-proton transfer (ET-PT), and sequential proton loss electron transfer (SPLET) mechanisms by density functional theory (DFT) using hybrid parameters. The computed values of the adiabatic ionization potential, electron affinity, hardness, softness, electronegativity, and electrophilicity index indicate that isovitexin possesses good radical scavenging activity. The behavior of the different -OH groups in polyphenolic compounds is assessed by considering the electronic effects of the neighbouring groups and the overall geometry of the molecule, which in turn helps in analyzing the antioxidant capacity of the polyphenolic molecule. The studies indicate that H-atom abstraction from the 4'-OH site is preferred during the radical scavenging process. From Mulliken spin density analysis and FMOs, the B-ring is found to be the more delocalized center and capable of electron donation. Comparison of the antioxidant activity of vitexin and isovitexin leads to the conclusion that isovitexin acts as the better radical scavenger. This is evidence of the importance of the position of the glucose unit in the flavonoid.
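    The global descriptors named in the abstract follow from the adiabatic ionization potential (IP) and electron affinity (EA) via the standard conceptual-DFT working formulas; the numerical IP/EA below are illustrative placeholders, not the computed values for isovitexin.

```python
def reactivity_descriptors(ip, ea):
    """Conceptual-DFT global reactivity descriptors (all inputs in eV):
    electronegativity chi = (IP + EA)/2, hardness eta = (IP - EA)/2,
    softness S = 1/(2*eta), electrophilicity omega = chi**2 / (2*eta)."""
    chi = (ip + ea) / 2.0
    eta = (ip - ea) / 2.0
    return {"chi": chi, "eta": eta, "S": 1.0 / (2.0 * eta),
            "omega": chi ** 2 / (2.0 * eta)}

# Illustrative values (eV) for a generic flavonoid-like molecule:
d = reactivity_descriptors(ip=7.2, ea=1.8)
```

Read together, a low IP and low hardness favor electron donation, which is why these descriptors are used as proxies for radical-scavenging ability in studies like this one.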

  18. A Hybrid Chaotic and Number Theoretic Approach for Securing DICOM Images

    Directory of Open Access Journals (Sweden)

    Jeyamala Chandrasekaran

    2017-01-01

    Full Text Available The advancements in telecommunication and networking technologies have led to the increased popularity and widespread usage of telemedicine. Telemedicine involves the storage and exchange of large volumes of medical records for remote diagnosis and improved health care services. Images in medical records are characterized by huge volume, high redundancy, and strong correlation among adjacent pixels. This research work proposes a novel idea of integrating a number-theoretic approach with the Henon map for secure and efficient encryption. Modular exponentiation of the primitive roots of the chosen prime over its residue set is employed in the generation of a two-dimensional array of keys. The key matrix is permuted and chaotically controlled by the Henon map to decide the encryption keys for every pixel of the DICOM image. The proposed system is highly secure because of the randomness introduced by the modular-exponentiation key generation and the application of Henon maps for the permutation of keys. Experiments have been conducted to analyze key space, key sensitivity, avalanche effect, correlation distribution, entropy, and histograms. The corresponding results confirm the strength of the proposed design against statistical and differential cryptanalysis. The computational requirements for encryption/decryption have been reduced significantly owing to the reduced number of computations in the process of encryption/decryption.
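    A minimal sketch of the two ingredients is given below: a byte keystream quantized from Henon-map iterates, combined by XOR with keys from modular exponentiation of a primitive root. The parameters (the classic a=1.4, b=0.3, the toy prime 257, the seed values) are illustrative; the paper's actual key-matrix permutation and per-pixel scheme are more elaborate.

```python
def henon_keystream(n_bytes, x=0.1, y=0.3, a=1.4, b=0.3):
    """Byte stream quantized from a chaotic Henon-map orbit."""
    for _ in range(100):                       # discard the transient
        x, y = 1.0 - a * x * x + y, b * x
    out = bytearray()
    for _ in range(n_bytes):
        x, y = 1.0 - a * x * x + y, b * x
        out.append(int(abs(x) * 1e6) % 256)    # quantize the state to a byte
    return bytes(out)

def modexp_keys(prime, count, g=3):
    """Successive powers of a primitive root modulo a prime
    (3 is a primitive root of the toy prime 257)."""
    return [pow(g, k, prime) for k in range(1, count + 1)]

plaintext = b"DICOM pixel data"
ks = [s ^ (m % 256) for s, m in
      zip(henon_keystream(len(plaintext)), modexp_keys(257, len(plaintext)))]
cipher = bytes(p ^ k for p, k in zip(plaintext, ks))
recovered = bytes(c ^ k for c, k in zip(cipher, ks))
```

XOR makes decryption identical to encryption with the same keystream; a real medical-image cipher additionally needs key management, diffusion across pixels, and integrity protection.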

  19. Power and Law in Enlightened Absolutism – Carl Gottlieb Svarez’ Theoretical and Practical Approach

    Directory of Open Access Journals (Sweden)

    Milan Kuhli

    2013-01-01

    Full Text Available The term Enlightened Absolutism reflects a certain tension between its two components. This tension is in a way a continuation of the dichotomy between power on one hand and law on the other. The present paper shall provide an analysis of these two concepts from the perspective of Carl Gottlieb Svarez, who, in his position as a high-ranking Prussian civil servant and legal reformist, had unparalleled influence on the legislative history of the Prussian states towards the end of the 18th century. Working side-by-side with Johann Heinrich Casimir von Carmer, who held the post of Prussian minister of justice from 1779 to 1798, Svarez was able to make use of his talent for reforming and legislating. From 1780 to 1794 he was primarily responsible for the elaboration of the codification of the Prussian private law – the »Allgemeines Landrecht für die Preußischen Staaten« in 1794. In the present paper, Svarez’ approach to the relation between law and power shall be analysed on two different levels. Firstly, on a theoretical level, the reformist’s thoughts and reflections as laid down in his numerous works, papers and memorandums, shall be discussed. Secondly, on a practical level, the question of the extent to which he implemented his ideas in Prussian legal reality shall be explored.

  20. Strategic exploration of battery waste management: A game-theoretic approach.

    Science.gov (United States)

    Kaushal, Rajendra Kumar; Nema, Arvind K; Chaudhary, Jyoti

    2015-07-01

    Electronic waste, or e-waste, is the fastest growing stream of solid waste today. It contains both toxic substances and valuable resources. The present study uses a non-cooperative game-theoretic approach for efficient management of e-waste, particularly batteries, which contribute a major portion of any e-waste stream, and further analyses the economic consequences of recycling these obsolete, discarded batteries. Results suggest that the recycler would prefer to collect the obsolete batteries directly from the consumer rather than from the manufacturer only if the incentive returned to the consumer is less than 33.92% of the price of the battery, the recycling fee is less than 6.46% of the price of the battery, and the price of the recycled material is more than 31.08% of the price of the battery. The manufacturer's preferred choice of levying a green tax on the consumer can be fruitful for the battery recycling chain. © The Author(s) 2015.

  1. The game as strategy for approach to sexuality with adolescents: theoretical-methodological reflections

    Directory of Open Access Journals (Sweden)

    Vânia de Souza

    Full Text Available ABSTRACT Objective: To describe the Papo Reto [Straight Talk] game and reflect on its theoretical-methodological basis. Method: Analytical study of the process of elaboration of the Papo Reto online game, aimed at adolescents aged 15-18 years, with access to the Game between 2014 and 2015. Results: The interactions of 60 adolescents from Belo Horizonte and São Paulo exemplified the Game's potential to favor the approach to sexuality with adolescents through simulation of reality, invention, and interaction. Based on those potentialities, four thinking categories were discussed: the game as a pedagogic device; the game as a simulation of realities; the game as a device for inventive learning; and the game as empowering interaction. Conclusion: By permitting adolescents to take risks in new ways, the Game allows them to become creative and active in the production of meanings, in the creation of their discourses, and in their ways of thinking, feeling, and acting in the field of sexuality.

  2. Bovine serum albumin adsorption onto functionalized polystyrene lattices: A theoretical modeling approach and error analysis

    Science.gov (United States)

    Beragoui, Manel; Aguir, Chadlia; Khalfaoui, Mohamed; Enciso, Eduardo; Torralvo, Maria José; Duclaux, Laurent; Reinert, Laurence; Vayer, Marylène; Ben Lamine, Abdelmottaleb

    2015-03-01

    The present work involves the study of bovine serum albumin adsorption onto five functionalized polystyrene lattices. The adsorption measurements have been carried out using a quartz crystal microbalance. Poly(styrene-co-itaconic acid) was found to be an effective adsorbent for bovine serum albumin molecule adsorption. The experimental isotherm data were analyzed using theoretical models based on a statistical physics approach, namely monolayer, double layer with two successive energy levels, finite multilayer, and modified Brunauer-Emmett-Teller. The equilibrium data were then analyzed using five different non-linear error analysis methods, and it was found that the finite multilayer model best describes the protein adsorption data. Surface characteristics, i.e., surface charge density and number density of surface carboxyl groups, were used to investigate their effect on the adsorption capacity. The combination of the results obtained from the number of adsorbed layers, the number of adsorbed molecules per site, and the thickness of the adsorbed bovine serum albumin layer allows us to predict that the adsorption of this protein molecule can also be distinguished by monolayer or multilayer adsorption with end-on, side-on, and overlap conformations. The magnitudes of the calculated adsorption energy indicate that bovine serum albumin molecules are physisorbed onto the adsorbent lattices.

  3. A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange

    KAUST Repository

    Douik, Ahmed S.

    2014-05-11

    In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.

  4. Intraplate seismicity in Canada: a graph theoretic approach to data analysis and interpretation

    Directory of Open Access Journals (Sweden)

    K. Vasudevan

    2010-10-01

    Full Text Available Intraplate seismicity occurs in central and northern Canada, but its underlying origin and dynamics remain poorly understood. Here, we apply a graph theoretic approach to characterize the statistical structure of the spatiotemporal clustering exhibited by intraplate seismicity, a direct consequence of the underlying nonlinear dynamics. Using a recently proposed definition of "recurrences" based on record-breaking processes (Davidsen et al., 2006, 2008), we have constructed directed graphs using catalogue data for three selected regions (Region 1: 45°−48° N/74°−80° W; Region 2: 51°−55° N/77°−83° W; and Region 3: 56°−70° N/65°−95° W), with attributes drawn from the location, origin time and magnitude of the events. Based on comparisons with a null model derived from a Poisson distribution or Monte Carlo shuffling of the catalogue data, our results provide strong evidence in support of spatiotemporal correlations of seismicity in all three regions considered. Similar evidence for spatiotemporal clustering has been documented using seismicity catalogues for southern California, suggesting possible similarities in the underlying earthquake dynamics of both regions despite huge differences in the variability of seismic activity.
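    The record-breaking "recurrence" construction used to build the directed graphs can be sketched as follows: for each event, later events are scanned in time order, and an event becomes a recurrence (a directed edge) whenever it lies closer to the reference event than every event between them. This is a toy planar-distance version for illustration; the study works with geographic catalogue coordinates, and the field layout below is an assumption.

```python
# Toy sketch of record-breaking recurrences (after Davidsen et al.):
# event j is a recurrence of event i if no event between them in time
# lies closer to i than j does. Euclidean distance, invented coordinates.
import math

def recurrence_edges(events):
    """events: list of (t, x, y) in time order. Returns directed edges (i, j)."""
    edges = []
    for i, (_, xi, yi) in enumerate(events):
        best = math.inf  # current record distance from event i
        for j in range(i + 1, len(events)):
            _, xj, yj = events[j]
            d = math.hypot(xj - xi, yj - yi)
            if d < best:  # new record: j is a recurrence of i
                edges.append((i, j))
                best = d
    return edges

events = [(0, 0.0, 0.0), (1, 5.0, 0.0), (2, 2.0, 0.0), (3, 3.0, 0.0)]
print(recurrence_edges(events))  # → [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

    The statistics of the resulting edge set (in-degrees, edge lengths, waiting times) are then compared against the shuffled null model described in the abstract.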

  5. Theoretical simulations on the antioxidant mechanism of naturally occurring flavonoid: A DFT approach

    Energy Technology Data Exchange (ETDEWEB)

    Praveena, R. [Department of Chemistry, Bannari Amman Institute of Technology, Sathyamangalam, Erode, Tamil Nadu (India); Sadasivam, K. [Department of Physics, Bannari Amman Institute of Technology, Sathyamangalam, Erode, Tamil Nadu (India)

    2016-05-06

    Synthetic antioxidants such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are found to be toxic; hence non-carcinogenic, naturally occurring radical scavengers, especially flavonoids, have gained considerable importance in the past two decades. In the present investigation, the radical scavenging activity of C-glycosyl flavonoids is evaluated using a theoretical approach which could broaden their scope in therapeutic applications. Gas- and solvent-phase studies of the structural and molecular characteristics of the C-glycosyl flavonoid isovitexin are carried out through the hydrogen atom transfer (HAT), electron transfer–proton transfer (ET–PT) and sequential proton loss electron transfer (SPLET) mechanisms by density functional theory (DFT) using hybrid functionals. The computed values of the adiabatic ionization potential, electron affinity, hardness, softness, electronegativity and electrophilicity index indicate that isovitexin possesses good radical scavenging activity. The behavior of the different –OH groups in polyphenolic compounds is assessed by considering the electronic effects of the neighbouring groups and the overall geometry of the molecule, which in turn helps in analyzing the antioxidant capacity of the polyphenolic molecule. The studies indicate that H-atom abstraction from the 4'–OH site is preferred during the radical scavenging process. From Mulliken spin density analysis and FMOs, the B-ring is found to be the more delocalized center and capable of electron donation. Comparison of the antioxidant activity of vitexin and isovitexin leads to the conclusion that isovitexin acts as the better radical scavenger. This evidences the importance of the position of the glucose unit in the flavonoid.

  6. Perspectives on Cybersecurity Information Sharing among Multiple Stakeholders Using a Decision-Theoretic Approach.

    Science.gov (United States)

    He, Meilin; Devine, Laura; Zhuang, Jun

    2018-02-01

    The government, private sectors, and other users of the Internet are increasingly faced with the risk of cyber incidents. Damage to computer systems and theft of sensitive data caused by cyber attacks have the potential to result in lasting harm to entities under attack, or to society as a whole. The effects of cyber attacks are not always obvious, and detecting them is not a simple proposition. As the U.S. federal government believes that information sharing on cybersecurity issues among organizations is essential to safety, security, and resilience, the importance of trusted information exchange has been emphasized to support public and private decision making by encouraging the creation of the Information Sharing and Analysis Center (ISAC). Through a decision-theoretic approach, this article provides new perspectives on ISACs and on the advent of the new Information Sharing and Analysis Organizations (ISAOs), which are intended to provide similar benefits to organizations that cannot fit easily into the ISAC structure. To help understand the processes of information sharing against cyber threats, this article illustrates 15 representative information sharing structures between ISACs, government, and other participating entities, and provides discussions of the strategic interactions between the different stakeholders. This article also identifies the costs of information sharing and information security borne by the different parties in this public-private partnership both before and after cyber attacks, as well as the two main benefits. This article provides perspectives on the mechanism of information sharing and some detailed cost-benefit analysis. © 2017 Society for Risk Analysis.

  7. Theoretical approach to description of some corrosion product transport processes in PWRs primary circuit

    International Nuclear Information System (INIS)

    Zmitko, M.

    1990-10-01

    The behavior and mass transport of corrosion products in the primary circuits of PWR-type reactors are described assuming that the products occur in ionic form, in colloidal form (about 0.01-0.6 μm in size) and in particulate form. The transport of the soluble form is treated as a diffusion process. For the colloidal form, allowance is made for its Van der Waals attraction and repulsion interaction with the surfaces. For particles and their agglomerates, the hydrodynamic effects of the flowing liquid on agglomerate breakdown and re-formation of the particle suspension are taken into account. Efforts were made to employ theoretical relations rather than particular experimental data, so that the conclusions are applicable to different facilities. It is believed that this comprehensive approach to the problem can contribute to gaining insight into the role of the individual factors and processes involved, particularly as regards colloidal particles, whose effect on the formation of radiation fields is not yet fully understood. (author). 3 figs., 10 refs

  8. Power Transmission Scheduling for Generators in a Deregulated Environment Based on a Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-12-01

    Full Text Available In a deregulated power market environment, in order to lower their energy price and guarantee the stability of the power network, appropriate transmission lines have to be considered for electricity generators to sell their energy to the end users. This paper proposes a game-theoretic power transmission scheduling scheme for multiple generators to lower their wheeling costs. Based on the embedded cost method, a wheeling cost model consisting of congestion cost, cost of losses and cost of transmission capacity is presented. By assuming each generator behaves in a selfish and rational way, the competition among the multiple generators is formulated as a non-cooperative game, where the players are the generators and the strategies are their daily schedules of power transmission. We prove that there exists at least one pure-strategy Nash equilibrium of the formulated power transmission game. Moreover, a distributed algorithm is provided to realize the optimization in terms of minimizing the wheeling cost. Finally, simulations were performed and discussed to verify the feasibility and effectiveness of the proposed non-cooperative game approach for generators in a deregulated environment.
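    The distributed search for a pure-strategy Nash equilibrium in this kind of game can be illustrated with a toy best-response iteration over two shared transmission lines, where sharing a line doubles its cost (a stand-in for congestion). The two-line network and the cost numbers below are invented for illustration and do not reproduce the paper's wheeling-cost model.

```python
# Toy best-response dynamics for a two-player congestion-style game, as a
# hedged sketch of how a distributed algorithm can reach a pure-strategy
# Nash equilibrium in a potential game. All numbers are invented.

BASE = {"A": 1.0, "B": 1.5}  # base cost of each transmission line

def cost(my_line, other_line):
    """Sharing a line doubles its cost (toy congestion effect)."""
    c = BASE[my_line]
    return 2 * c if my_line == other_line else c

def best_response(other_line):
    """Pick the cheapest line given the other player's choice."""
    return min(BASE, key=lambda line: cost(line, other_line))

# Iterate best responses until no player wants to deviate.
s1, s2 = "A", "A"
for _ in range(10):
    new_s1 = best_response(s2)
    new_s2 = best_response(new_s1)
    if (new_s1, new_s2) == (s1, s2):
        break
    s1, s2 = new_s1, new_s2

print(s1, s2)  # the players settle on different lines
```

    At the fixed point, neither player can lower its cost by switching lines, which is exactly the pure-strategy Nash equilibrium condition the abstract refers to; in potential games such best-response iterations are guaranteed to terminate.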

  9. A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange

    KAUST Repository

    Douik, Ahmed S.; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim; Sorour, Sameh; Tembine, Hamidou

    2014-01-01

    In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.

  10. THE EASTERN PARTNERSHIP AS PART OF THE EU FOREIGN POLICY: A REVIEW OF THEORETICAL APPROACHES

    Directory of Open Access Journals (Sweden)

    Andrei SCRINIC

    2014-10-01

    Full Text Available The Eastern Partnership (2009), as a component part of the European Neighbourhood Policy, is a tool that aims at the economic integration and political cooperation of the countries included in this project through the signing of association and free trade agreements with the European Union (EU). The recent events in Ukraine have revealed the possibility for these countries to become EU member states depending on the progress made, which is confirmed by many European experts. However, there are big differences among the Eastern Partnership countries on their way to EU integration, against the background of strong pressure from Russia aimed at suppressing any pro-European manifestations in these countries. Despite the sharpening of geopolitical challenges, the EU continues to use the traditional ways of enlarging and deepening cooperation processes with the Eastern Neighbourhood. This paper reviews the theoretical approaches through which the EU, as a normative power, exerts major influence on the Eastern Partnership (EaP) countries by extending neofunctional practices, intergovernmental cooperation and the constructivist model. With a view to reaching its soft power objectives, the EU aims at transforming and strengthening its position in the context of amplified economic and political-ideological problems at the regional level.

  11. Combination of real options and game-theoretic approach in investment analysis

    Science.gov (United States)

    Arasteh, Abdollah

    2016-09-01

    Investments in technology create a large amount of capital investment by major companies. Assessing such investment projects is critical to the efficient assignment of resources. Viewing investment projects as real options, this paper develops a method for assessing technology investment decisions in the presence of uncertainty and competition. It combines game-theoretic models of strategic market interactions with a real options approach. Several key characteristics underlie the model. First, our study shows how investment strategies depend on competitive interactions. Under the force of competition, firms hurry to exercise their options early. The resulting "hurry equilibrium" destroys the option value of waiting and leads to aggressive investment behavior. Second, we derive optimal investment policies and critical investment thresholds. This suggests that integration will be unavoidable in some information product markets. The model yields new intuitions into the forces that shape market behavior as observed in the information technology industry. It can be used to specify optimal investment policies for technology innovations and adoptions, multistage R&D, and investment projects in information technology.

  12. Force-induced bone growth and adaptation: A system theoretical approach to understanding bone mechanotransduction

    International Nuclear Information System (INIS)

    Maldonado, Solvey; Findeisen, Rolf

    2010-01-01

    The modeling, analysis, and design of treatment therapies for bone disorders based on the paradigm of force-induced bone growth and adaptation is a challenging task. Mathematical models provide, in comparison to clinical, medical and biological approaches, a structured alternative framework to understand the concurrent effects of the multiple factors involved in bone remodeling. To date, there are few mathematical models describing these complex interactions, and the resulting models are difficult to analyze, due to the strong nonlinearities in the equations, the wide range of variability of the states, and the uncertainties in parameters. In this work, we focus on analyzing the effects of changes in model structure and of parameter/input variations on the overall steady-state behavior using systems-theoretic methods. Based on a briefly reviewed existing model that describes force-induced bone adaptation, the main objective of this work is to analyze the stationary behavior and to identify plausible treatment targets for remodeling-related bone disorders. Identifying plausible targets can help in the development of optimal treatments combining both physical activity and drug medication. Such treatments help to improve/maintain/restore bone strength, which deteriorates under bone disorder conditions such as estrogen deficiency.

  13. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
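    A minimal sketch of the regularization idea: a fixed diagonal loading of the covariance matrix in a real-valued two-sensor Capon beamformer. The paper's RLS method chooses its regularization from the data without prior information; the fixed `gamma` below is only an illustrative stand-in for that data-driven choice, and the 2x2 real-valued setting is a simplification of the complex-valued array problem.

```python
# Hedged sketch of a diagonally loaded (regularized) Capon beamformer for a
# two-sensor array, solved with an explicit 2x2 inverse:
#   w = (R + g I)^-1 a / (a^T (R + g I)^-1 a)
# The covariance R, steering vector a, and gamma are invented toy values.

def capon_weights_2x2(R, a, gamma=0.01):
    """Regularized Capon weights (real-valued, 2 sensors)."""
    r11, r12 = R[0][0] + gamma, R[0][1]
    r21, r22 = R[1][0], R[1][1] + gamma
    det = r11 * r22 - r12 * r21
    # v = (R + g I)^-1 a via the closed-form 2x2 inverse
    v = [(r22 * a[0] - r12 * a[1]) / det,
         (-r21 * a[0] + r11 * a[1]) / det]
    denom = a[0] * v[0] + a[1] * v[1]
    return [v[0] / denom, v[1] / denom]

R = [[2.0, 0.5], [0.5, 1.0]]  # toy sample covariance
a = [1.0, 1.0]                # toy steering vector
w = capon_weights_2x2(R, a)
print(w[0] * a[0] + w[1] * a[1])  # distortionless constraint: ~1.0
```

    The loading term `gamma * I` keeps the solve well conditioned when `R` is nearly singular, while the normalization enforces the Capon distortionless constraint `w·a = 1` toward the assumed steering direction.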

  14. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models: analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detector, truncated data, and dual-source CT.
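    The beam-scatter-kernel superposition approach named above can be sketched in one dimension: the scatter estimate in a projection is modelled as the primary profile convolved with a scatter kernel. The kernel values below are invented for illustration; real kernels are two-dimensional and, as the review stresses, object- and position-dependent.

```python
# 1-D toy of beam-scatter-kernel superposition: scatter = primary (*) kernel.
# Kernel and primary profile are invented; real implementations are 2-D.

def scatter_estimate(primary, kernel):
    """Discrete convolution of a primary profile with a symmetric kernel."""
    n, m = len(primary), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:  # zero-padded boundary handling
                s += primary[idx] * k
        out.append(s)
    return out

primary = [0.0, 0.0, 1.0, 0.0, 0.0]  # a single bright detector pixel
kernel = [0.1, 0.2, 0.4, 0.2, 0.1]   # invented smoothing kernel
print(scatter_estimate(primary, kernel))  # the kernel spread around pixel 2
```

    Spatially variant or deformed kernels, as discussed in the review, would replace the single fixed `kernel` with one that depends on the pixel position and the imaged object.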

  15. A new approach to Ozone Depletion Potential (ODP) estimation

    Science.gov (United States)

    Portmann, R. W.; Daniel, J. S.; Yu, P.

    2017-12-01

    The Ozone Depletion Potential (ODP) is given by the time-integrated global ozone loss of an ozone depleting substance (ODS) relative to a reference ODS (usually CFC-11). The ODP is used by the Montreal Protocol (and subsequent amendments) to inform policy decisions on the production of ODSs. Since the early 1990s, ODPs have usually been estimated using an approximate formalism that utilizes the lifetime and the fractional release factor of the ODS. This has the advantage that it can utilize measured concentrations of the ODSs to estimate their fractional release factors. However, there is a strong correlation between the stratospheric lifetimes and fractional release factors of ODSs, and this can introduce uncertainties into ODP calculations when the terms are estimated independently. Instead, we show that the ODP is proportional to the average global ozone loss per equivalent chlorine molecule released in the stratosphere by the ODS loss process (which we call the Γ factor) and, importantly, this ratio varies only over a relatively small range (≈0.3-1.5) for ODSs with stratospheric lifetimes of 20 to more than 1,000 years. The Γ factor varies smoothly with stratospheric lifetime for ODSs with loss processes dominated by photolysis and is larger for long-lived species, while stratospheric OH loss processes produce relatively small Γs that are nearly independent of stratospheric lifetime. The fractional release approach does not accurately capture these relationships. We propose a new formulation that takes advantage of this smooth variation by parameterizing the Γ factor using ozone changes computed using the chemical climate model CESM-WACCM and the NOCAR two-dimensional model. We show that while the absolute Γ's vary between the WACCM and NOCAR models, much of the difference is removed for the Γ/ΓCFC-11 ratio that is used in the ODP formula. This parameterized method simplifies the computation of ODPs while providing enhanced accuracy compared to the
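    One plausible reading of the Γ-ratio formulation can be sketched numerically. The abstract states that the Γ/ΓCFC-11 ratio enters the ODP formula in place of the fractional-release ratio; the code below assumes it combines with the usual lifetime, molar-mass, and halogen-count terms of the semi-empirical ODP. That combined form, the reference values, and the function name are assumptions for illustration, not the paper's exact parameterization.

```python
# Hedged sketch: semi-empirical ODP with the fractional-release ratio
# replaced by a Gamma ratio. Defaults approximate CFC-11 (3 Cl atoms,
# lifetime ~52 yr, molar mass 137.37 g/mol); the formula's exact shape in
# the paper may differ.

def odp(gamma, n_halogen, lifetime, molar_mass,
        gamma_ref=1.0, lifetime_ref=52.0, molar_mass_ref=137.37):
    """Relative ODP of a species against the CFC-11-like reference."""
    return ((gamma / gamma_ref) * (n_halogen / 3.0)
            * (lifetime / lifetime_ref) * (molar_mass_ref / molar_mass))

# The reference species against itself must give exactly 1 by construction.
print(odp(gamma=1.0, n_halogen=3, lifetime=52.0, molar_mass=137.37))
```

    The practical point of the abstract survives in this sketch: once Γ is parameterized smoothly in stratospheric lifetime, only the Γ ratio matters, so model-to-model differences in absolute Γ largely cancel.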

  16. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: in the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
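    The role of the reduction factor can be sketched with a single unconstrained objective: the infiltration-parameter error is penalized heavily in early generations, and the penalty weight is then damped generation by generation so that the later search over unit hydrograph (UH) ordinates does not disturb the infiltration fit already found. The error terms, weights, and reduction value below are invented stand-ins for the paper's formulation.

```python
# Hedged sketch of the modified penalty-parameter objective: a damped
# penalty focuses early GA generations on the infiltration parameters,
# then fades so later generations can refine the UH ordinates.
# All constants are illustrative, not the paper's.

def penalized_objective(uh_error, infil_error, generation,
                        penalty0=1000.0, reduction=0.9):
    """Objective = UH fit error + generation-damped penalty on infiltration error."""
    penalty = penalty0 * reduction ** generation
    return uh_error + penalty * infil_error

early = penalized_objective(uh_error=5.0, infil_error=0.2, generation=0)
late = penalized_objective(uh_error=5.0, infil_error=0.2, generation=50)
print(early, late)  # the penalty term dominates early and fades later
```

    A GA minimizing this objective is thus steered first toward good infiltration parameters and afterwards toward good UH ordinates, which is the single-stage behavior the abstract describes.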

  17. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical: it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large and medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.
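    The decomposition behind the two assumptions reduces to simple arithmetic. Writing the urban and rural background concentrations with and without the city's emissions, the true city impact equals the urban increment plus two correction terms, one per violated assumption. The concentrations below are invented round numbers chosen to make the bookkeeping visible; the variable names are illustrative.

```python
# Arithmetic sketch of the incremental-approach decomposition:
#   impact    = C_urb - C_urb0          (what we want)
#   increment = C_urb - C_rur           (what is measured)
# and impact = increment + (city effect on rural site) + (background gap).
# All concentration values are invented (think ug/m3).

C_urb, C_rur = 18.0, 10.0   # with city emissions
C_urb0, C_rur0 = 8.0, 9.0   # with city emissions set to zero (model-only)

increment = C_urb - C_rur                # urban increment: 8.0
impact = C_urb - C_urb0                  # true city impact: 10.0
city_effect_on_rural = C_rur - C_rur0    # nonzero breaks assumption 1
background_gap = C_rur0 - C_urb0         # nonzero breaks assumption 2

print(impact, increment + city_effect_on_rural + background_gap)  # both 10.0
```

    With both correction terms positive, the increment (8.0) underestimates the impact (10.0), which is exactly the PM2.5 behavior reported for many large and medium-sized cities.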

  18. Theoretical Antecedents of Standing at Work: An Experience Sampling Approach Using the Theory of Planned Behavior.

    Science.gov (United States)

    Meyer, M Renée Umstattd; Wu, Cindy; Walsh, Shana M

    2016-01-01

    Time spent sitting has been associated with an increased risk of diabetes, cancer, obesity, and mental health impairments. However, 75% of Americans spend most of their days sitting, with work-sitting accounting for 63% of total daily sitting time. Little research examining theory-based antecedents of standing or sitting has been conducted. This lack of solid groundwork makes it difficult to design effective intervention strategies to decrease sitting behaviors. Using the Theory of Planned Behavior (TPB) as our theoretical lens to better understand factors related with beneficial standing behaviors already being practiced, we examined relationships between TPB constructs and time spent standing at work among "positive deviants" (those successful in behavior change). Experience sampling methodology (ESM), 4 times a day (midmorning, before lunch, afternoon, and before leaving work) for 5 consecutive workdays (Monday to Friday), was used to assess employees' standing time. TPB scales assessing attitude (α = 0.81-0.84), norms (α = 0.83), perceived behavioral control (α = 0.77), and intention (α = 0.78) were developed using recommended methods and collected once on the Friday before the ESM surveys started. ESM data are hierarchically nested, therefore we tested our hypotheses using multilevel structural equation modeling with Mplus. Hourly full-time university employees (n = 50; 70.6% female, 84.3% white, mean age = 44 (SD = 11), 88.2% in full-time staff positions) with sedentary occupation types (time at desk while working ≥6 hours/day) participated. A total of 871 daily surveys were completed. Only perceived behavioral control (β = 0.45, p < 0.05) was related with work-standing at the event-level (model fit: just fit); mediation through intention was not supported. This is the first study to examine theoretical antecedents of real-time work-standing in a naturalistic field setting among positive deviants. These relationships should be further examined, and behavioral intervention

  19. Theoretical Antecedents of Standing at Work: An Experience Sampling Approach Using the Theory of Planned Behavior

    Directory of Open Access Journals (Sweden)

    M. Renée Umstattd Meyer

    2016-09-01

    Full Text Available Time spent sitting has been associated with an increased risk of diabetes, cancer, obesity, and mental health impairments. However, 75% of Americans spend most of their days sitting, with work-sitting accounting for 63% of total daily sitting time. Little research examining theory-based antecedents of standing or sitting has been conducted. This lack of solid groundwork makes it difficult to design effective intervention strategies to decrease sitting behaviors. Using the Theory of Planned Behavior (TPB) as our theoretical lens to better understand factors related with beneficial standing behaviors already being practiced, we examined relationships between TPB constructs and time spent standing at work among “positive deviants” (those successful in behavior change). Experience sampling methodology (ESM), 4 times a day (midmorning, before lunch, afternoon, and before leaving work) for 5 consecutive workdays (Monday to Friday), was used to assess employees’ standing time. TPB scales assessing attitude (α = 0.81–0.84), norms (α = 0.83), perceived behavioral control (α = 0.77), and intention (α = 0.78) were developed using recommended methods and collected once on the Friday before the ESM surveys started. ESM data are hierarchically nested, therefore we tested our hypotheses using multilevel structural equation modeling with Mplus. Hourly full-time university employees (n = 50; 70.6% female, 84.3% white, mean age = 44 (SD = 11), 88.2% in full-time staff positions) with sedentary occupation types (time at desk while working ≥6 hours/day) participated. A total of 871 daily surveys were completed. Only perceived behavioral control (β = 0.45, p < 0.05) was related with work-standing at the event-level (model fit: just fit); mediation through intention was not supported. This is the first study to examine theoretical antecedents of real-time work-standing in a naturalistic field setting among positive deviants. These relationships should be further

  20. On the electric dipole moments of small sodium clusters from different theoretical approaches

    Energy Technology Data Exchange (ETDEWEB)

    Aguado, Andres, E-mail: aguado@metodos.fam.cie.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain); Largo, Antonio, E-mail: alargo@qf.uva.es [Departamento de Quimica Fisica y Quimica Inorganica, Universidad de Valladolid (Spain); Vega, Andres, E-mail: vega@fta.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain); Balbas, Luis Carlos, E-mail: balbas@fta.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain)

    2012-05-03

    Graphical abstract: The dipole moments and polarizabilities of a few isomers of sodium clusters of selected sizes (n = 13, 14, 16) are calculated using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Highlights: • Dipole moment and polarizability of sodium clusters from DFT and ab initio methods. • New van der Waals self-consistent implementation of non-local dispersion interactions. • New starting isomeric geometries from extensive search of global minimum structures. • Good agreement with recent experiments at cryogenic temperatures. - Abstract: The dipole moments of Na_n clusters in the size range 10 < n < 20, recently measured at very low temperature (20 K), are much smaller than predicted by standard density functional methods. On the other hand, the calculated static dipole polarizabilities in that range of sizes deviate non-systematically from the measured ones, depending on the employed first-principles approach. In this work we calculate the dipole moments and polarizabilities of a few isomers of Na_n clusters of selected sizes (n = 13, 14, 16), obtained recently through an extensive unbiased search of the global minimum structures, using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Both non-local pseudopotentials and all-electron implementations are employed and compared in order to assess the possible

  1. A game-theoretic framework for estimating a health purchaser's willingness-to-pay for health and for expansion.

    Science.gov (United States)

    Yaesoubi, Reza; Roberts, Stephen D

    2010-12-01

    A health purchaser's willingness-to-pay (WTP) for health is defined as the amount of money the health purchaser (e.g. a health-maximizing public agency or a profit-maximizing health insurer) is willing to spend for an additional unit of health. In this paper, we propose a game-theoretic framework for estimating a health purchaser's WTP for health in markets where the health purchaser offers a menu of medical interventions, and each individual in the population selects the intervention that maximizes her prospect. We discuss how the WTP for health can be employed to determine medical guidelines, and to price new medical technologies, such that the health purchaser is willing to implement them. The framework further introduces a measure for WTP for expansion, defined as the amount of money the health purchaser is willing to pay per person in the population served by the health provider to increase the consumption level of the intervention by one percent without changing the intervention price. This measure can be employed to find how much to invest in expanding a medical program through opening new facilities, advertising, etc. Applying the proposed framework to colorectal cancer screening tests, we estimate the WTP for health and the WTP for expansion of colorectal cancer screening tests for the 2005 US population.

  2. An estimation of crude oil import demand in Turkey: Evidence from time-varying parameters approach

    International Nuclear Information System (INIS)

    Ozturk, Ilhan; Arisoy, Ibrahim

    2016-01-01

    The aim of this study is to model crude oil import demand and estimate the price and income elasticities of imported crude oil in Turkey based on a time-varying parameters (TVP) approach, with the aim of obtaining accurate and more robust estimates of price and income elasticities. This study employs annual time series data of domestic oil consumption, real GDP, and oil price for the period 1966–2012. The empirical results indicate that both the income and price elasticities are in line with the theoretical expectations. However, the income elasticity is statistically significant while the price elasticity is statistically insignificant. The relatively high value of income elasticity (1.182) from this study suggests that crude oil import in Turkey is more responsive to changes in income level. This result indicates that imported crude oil is a normal good and rising income levels will foster higher consumption of oil-based equipment, vehicles, and services by economic agents. The estimated income elasticity of 1.182 suggests that imported crude oil consumption grows at a higher rate than income. This in turn reduces oil intensity over time. Therefore, crude oil import during the estimation period is substantially driven by income. - Highlights: • We estimated the price and income elasticities of imported crude oil in Turkey. • Income elasticity is statistically significant and it is 1.182. • The price elasticity is statistically insignificant. • Crude oil import in Turkey is more responsive to changes in income level. • Crude oil import during the estimation period is substantially driven by income.
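The record above reports elasticities from a time-varying parameters model. As a much simpler illustration of the underlying idea, a constant-elasticity log-log import demand equation can be fit by OLS, where the slope coefficients are directly the income and price elasticities. The sketch below uses synthetic data with made-up elasticities (1.2 and -0.1), not the paper's Turkish data or its TVP estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 47  # annual observations, roughly matching a 1966-2012 sample

# Synthetic data generated with known elasticities (income 1.2, price -0.1)
log_gdp = np.linspace(10.0, 12.0, n) + rng.normal(0.0, 0.05, n)
log_price = rng.normal(3.0, 0.3, n)
log_imports = 0.5 + 1.2 * log_gdp - 0.1 * log_price + rng.normal(0.0, 0.1, n)

# OLS regression of log imports on log income and log price;
# in a log-log specification the slopes are the elasticities
X = np.column_stack([np.ones(n), log_gdp, log_price])
beta, *_ = np.linalg.lstsq(X, log_imports, rcond=None)
income_elasticity, price_elasticity = beta[1], beta[2]
print(income_elasticity, price_elasticity)
```

A TVP approach would instead let these coefficients drift over time (e.g. via a Kalman filter); the constant-coefficient regression is only the baseline case.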

  3. Model-free approach to the estimation of radiation hazards. I. Theory

    International Nuclear Information System (INIS)

    Zaider, M.; Brenner, D.J.

    1986-01-01

    The experience of the Japanese atomic bomb survivors constitutes to date the major data base for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses have been performed and published concerning this experience, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem; namely, the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards

  4. Theoretical Antecedents of Standing at Work: An Experience Sampling Approach Using the Theory of Planned Behavior

    Science.gov (United States)

    Meyer, M. Renée Umstattd; Wu, Cindy; Walsh, Shana M.

    2016-01-01

    Time spent sitting has been associated with an increased risk of diabetes, cancer, obesity, and mental health impairments. However, 75% of Americans spend most of their days sitting, with work-sitting accounting for 63% of total daily sitting time. Little research examining theory-based antecedents of standing or sitting has been conducted. This lack of solid groundwork makes it difficult to design effective intervention strategies to decrease sitting behaviors. Using the Theory of Planned Behavior (TPB) as our theoretical lens to better understand factors related to beneficial standing behaviors already being practiced, we examined relationships between TPB constructs and time spent standing at work among “positive deviants” (those successful in behavior change). Experience sampling methodology (ESM), 4 times a day (midmorning, before lunch, afternoon, and before leaving work) for 5 consecutive workdays (Monday to Friday), was used to assess employees' standing time. TPB scales assessing attitude (α = 0.81–0.84), norms (α = 0.83), perceived behavioral control (α = 0.77), and intention (α = 0.78) were developed using recommended methods and collected once on the Friday before the ESM surveys started. ESM data are hierarchically nested; we therefore tested our hypotheses using multilevel structural equation modeling with Mplus. Hourly full-time university employees (n = 50; 70.6% female, 84.3% white, mean age = 44 (SD = 11), 88.2% in full-time staff positions) with sedentary occupation types (time at desk while working ≥6 hours/day) participated. A total of 871 daily surveys were completed. Only perceived behavioral control (β = 0.45) was significantly related to standing time. Findings support using a positive deviance approach to enhance perceived behavioral control, in addition to implementing environmental changes like installing standing desks. PMID:29546189

  5. Theoretical Approaches in the Context of Spatial Planning Decisions and the Relation with Urban Sustainability

    Science.gov (United States)

    Kumlu, Kadriye Burcu Yavuz; Tüdeş, Şule

    2017-10-01

    The sustainability agenda has retained its importance since production took its capitalist form and urban populations began to rise. Growing numbers of both goods and people have degraded the systems that constitute urban areas, which can broadly be classified as social, environmental, physical, and economic. Urban areas still struggle to protect these systems under the pressure of population demand, so studies of sustainability are essential to the continuity of urban systems. This paper therefore examines studies on the effects of physical decisions taken in the spatial planning process on urban sustainability. The physical decisions considered are limited to land use, density, and design. Land use decisions are examined in the context of mixed land use; density decisions are analyzed in terms of population density and floor area ratio (FAR); and design decisions are examined by linking them with neighborhood design criteria. Urban sustainability is restricted here to its social and environmental dimensions. Briefly stated, the paper reviews studies in the sustainability literature on how land use, density, and design decisions taken in the spatial planning process affect social and environmental sustainability. After compiling and analyzing those studies, a theoretical approach is proposed to determine social and environmental sustainability in the context of land use, density, and design decisions taken in the spatial planning process.

  6. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  7. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-01

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed

  8. Exploring SiSn as a performance enhancing semiconductor: A theoretical and experimental approach

    KAUST Repository

    Hussain, Aftab M.; Singh, Nirpendra; Fahad, Hossain M.; Rader, Kelly; Schwingenschlögl, Udo; Hussain, Muhammad Mustafa

    2014-01-01

    We present a novel semiconducting alloy, silicon-tin (SiSn), as channel material for complementary metal oxide semiconductor (CMOS) circuit applications. The material has been studied theoretically using first principles analysis as well

  9. THE DETECTION RATE OF EARLY UV EMISSION FROM SUPERNOVAE: A DEDICATED GALEX/PTF SURVEY AND CALIBRATED THEORETICAL ESTIMATES

    Energy Technology Data Exchange (ETDEWEB)

    Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer [Department of Particle Physics and Astrophysics, Faculty of Physics, The Weizmann Institute of Science, Rehovot 76100 (Israel); Kulkarni, Shrinivas R.; Kasliwal, Mansi M. [Cahill Center for Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Ben-Ami, Sagi [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Ctr. for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Chelouche, Doron; Rafter, Stephen [Physics Department, Faculty of Natural Sciences, University of Haifa, 31905 Haifa (Israel); Behar, Ehud; Laor, Ari [Physics Department, Technion Israel Institute of Technology, 32000 Haifa (Israel); Poznanski, Dovi; Nakar, Ehud; Maoz, Dan [School of Physics and Astronomy, Tel Aviv University, 69978 Tel Aviv (Israel); Trakhtenbrot, Benny [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27 Zurich 8093 (Switzerland); Neill, James D.; Barlow, Thomas A.; Martin, Christofer D., E-mail: noam.ganot@gmail.com [California Institute of Technology, 1200 East California Boulevard, MC 278-17, Pasadena, CA 91125 (United States); Collaboration: ULTRASAT Science Team; WTTH consortium; GALEX Science Team; Palomar Transient Factory; and others

    2016-03-20

    The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R_⊙, explosion energies of 10^51 erg, and ejecta masses of 10 M_⊙. Exploding blue supergiants and Wolf–Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (∼0.5 SN per deg^2), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.

  10. Aperture Array Photonic Metamaterials: Theoretical approaches, numerical techniques and a novel application

    Science.gov (United States)

    Lansey, Eli

    Optical or photonic metamaterials that operate in the infrared and visible frequency regimes show tremendous promise for solving problems in renewable energy, infrared imaging, and telecommunications. However, many of the theoretical and simulation techniques used at lower frequencies are not applicable to this higher-frequency regime. Furthermore, technological and financial limitations of photonic metamaterial fabrication increase the importance of reliable theoretical models and computational techniques for predicting the optical response of photonic metamaterials. This thesis focuses on aperture array metamaterials: rectangular, circular, or other shaped cavities or holes embedded in, or penetrating through, a metal film. The research in the first portion of this dissertation reflects our interest in developing a fundamental, theoretical understanding of the behavior of light's interaction with these aperture arrays, specifically regarding enhanced optical transmission. We develop an approximate boundary condition for metals at optical frequencies, and a comprehensive, analytical explanation of the physics underlying this effect. These theoretical analyses are augmented by computational techniques in the second portion of this thesis, used both for verification of the theoretical work and for solving more complicated structures. Finally, the last portion of this thesis discusses the results from designing, fabricating, and characterizing a light-splitting metamaterial.

  11. Nuclear waste repository characterization: a spatial estimation/identification approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1981-03-01

    This paper considers the application of spatial estimation techniques to a groundwater aquifer and geological borehole data. It investigates the adequacy of these techniques to reliably develop contour maps from various data sets. The practice of spatial estimation is discussed and the estimator is then applied to a groundwater aquifer system and a deep geological formation. It is shown that the various statistical models must first be identified from the data and evaluated before reasonable results can be expected
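The record above does not spell out the spatial estimator it uses. As a rough stand-in for such contour-mapping estimators, an inverse-distance-weighted (IDW) interpolator, a simpler cousin of the kriging-style techniques typically meant by "spatial estimation", can be sketched as follows; the borehole coordinates and values are hypothetical:

```python
import numpy as np

def idw_interpolate(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at query points: a weighted
    average of observations, with weights 1/d**power."""
    # Pairwise distances between query points and observations
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at data points
    w = 1.0 / d ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

# Hypothetical borehole observations (e.g. hydraulic head, in meters)
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 12.0, 11.0, 13.0])
grid = np.array([[0.5, 0.5]])
print(idw_interpolate(xy, z, grid))  # estimate at the grid centre
```

Unlike kriging, IDW requires no statistical model identification step, which is precisely the step the abstract stresses as necessary before reliable contour maps can be expected.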

  12. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

    is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its...... finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions as well as settings were z is completely or only partially known....

  13. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...
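For context on the estimators compared above, the baseline realized variance is simply the sum of squared intraday returns. The sketch below illustrates that baseline on noise-free synthetic returns; it does not reproduce the paper's OLS-based framework or its microstructure-noise corrections:

```python
import numpy as np

rng = np.random.default_rng(1)
true_var = 0.04  # daily integrated variance of the synthetic price process
m = 390          # number of one-minute returns in a trading day

# Noise-free synthetic intraday returns with the chosen integrated variance
returns = rng.normal(0.0, np.sqrt(true_var / m), m)

# Realized variance: sum of squared intraday returns
realized_variance = np.sum(returns ** 2)
print(realized_variance)
```

With market microstructure noise, this plain estimator becomes biased as sampling frequency grows, which is what motivates the kernel, two-scales, and regression-based refinements cited in the abstract.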

  14. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    Science.gov (United States)

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_e > 3000 electrons.

  15. Field theoretical approach to proton-nucleus reactions. I - One step inelastic scattering

    International Nuclear Information System (INIS)

    Eiras, A.; Kodama, T.; Nemes, M.C.

    1988-01-01

    In this work we obtain a closed-form expression for the double differential cross section of one-step proton-nucleus reactions within a field theoretical framework. Energy and momentum conservation as well as nuclear structure effects are consistently taken into account within the field theoretical eikonal approximation. In our formulation the kinematics of such reactions is dominated not by the free nucleon-nucleon cross section but by a new factor which we call the relativistic differential cross section in a Born approximation. (author) [pt

  16. A short course in quantum information theory. An approach from theoretical physics

    International Nuclear Information System (INIS)

    Diosi, L.

    2007-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. (orig.)

  17. SNP based heritability estimation using a Bayesian approach

    DEFF Research Database (Denmark)

    Krag, Kristian; Janss, Luc; Mahdi Shariati, Mohammad

    2013-01-01

    . Differences in family structure were in general not found to influence the estimation of the heritability. For the sample sizes used in this study, a 10-fold increase of SNP density did not improve precision estimates compared with set-ups with a less dense distribution of SNPs. The methods used in this study...

  18. A New Theoretical Approach to Postsecondary Student Disability: Disability-Diversity (Dis)Connect Model

    Science.gov (United States)

    Aquino, Katherine C.

    2016-01-01

    Disability is often viewed as an obstacle to postsecondary inclusion, but not a characteristic of student diversity. Additionally, current theoretical frameworks isolate disability from other student diversity characteristics. In response, a new conceptual framework, the Disability-Diversity (Dis)Connect Model (DDDM), was created to address…

  19. Studies of the tautomeric equilibrium of 1,3-thiazolidine-2-thione: Theoretical and experimental approaches

    Energy Technology Data Exchange (ETDEWEB)

    Abbehausen, Camilla; Paiva, Raphael E.F. de [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil); Formiga, Andre L.B., E-mail: formiga@iqm.unicamp.br [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil); Corbi, Pedro P. [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil)

    2012-10-26

    Highlights: • Tautomeric equilibrium in solution. • Spectroscopic and theoretical studies. • UV-Vis theoretical and experimental spectra. • ¹H NMR theoretical and experimental spectra. -- Abstract: The tautomeric equilibrium of the thione/thiol forms of 1,3-thiazolidine-2-thione was studied by nuclear magnetic resonance, infrared, and ultraviolet-visible spectroscopies. Density functional theory was used to support the experimental data and indicates the predominance of the thione tautomer in the solid state, in agreement with previously reported crystallographic data. In solution, the tautomeric equilibrium was evaluated using ¹H NMR at different temperatures in four deuterated solvents: acetonitrile, dimethylsulfoxide, chloroform, and methanol. The equilibrium constants, K = (thiol)/(thione), and Gibbs free energies were obtained by integration of N-bonded hydrogen signals at each temperature for each solvent, excluding methanol. The endothermic tautomerization is entropy-driven, and the combined effect of solvent and temperature can be used to achieve almost 50% thiol concentration in solution. The nature of the electronic transitions was investigated theoretically, and the bands were assigned using time-dependent DFT, as was the influence of solvent on the energy of the most important bands of the spectra.

  20. A theoretical perspective of the nature of hydrogen-bond types - the atoms in molecules approach

    Czech Academy of Sciences Publication Activity Database

    Pandiyan, B. V.; Kolandaivel, P.; Deepa, Palanisamy

    2014-01-01

    Roč. 112, č. 12 (2014), s. 1609-1623 ISSN 0026-8976 Institutional support: RVO:61388963 Keywords : hydrogen bond * proton affinity * deprotanation enthalpy * atoms in molecules * chemical shift Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.720, year: 2014

  1. A Dialectical Approach to Theoretical Integration in Developmental-Contextual Identity Research

    Science.gov (United States)

    Seaman, Jayson; Sharp, Erin Hiley; Coppens, Andrew D.

    2017-01-01

    Future advances in identity research will depend on integration across major theoretical traditions. Developmental-contextualism has established essential criteria to guide this effort, including specifying the context of identity development, its timing over the life course, and its content. This article assesses 4 major traditions of identity…

  2. Between altruism and narcissism: An action theoretical approach of personal homepages devoted to existential meaning

    NARCIS (Netherlands)

    Hijmans, E.J.S.; Selm, M. van

    2002-01-01

    This article aims to examine existential meaning constructions from an action theoretical perspective in a specific Internet environment: the personal homepage. Personal homepages are on-line multi-media documents addressing the question Who am I? Authors of personal homepages provide information on

  3. A Holistic Theoretical Approach to Intellectual Disability: Going beyond the Four Current Perspectives

    Science.gov (United States)

    Schalock, Robert L.; Luckasson, Ruth; Tassé, Marc J.; Verdugo, Miguel Angel

    2018-01-01

    This article describes a holistic theoretical framework that can be used to explain intellectual disability (ID) and organize relevant information into a usable roadmap to guide understanding and application. Developing the framework involved analyzing the four current perspectives on ID and synthesizing this information into a holistic…

  4. A decision-theoretic approach to collaboration: Principal description methods and efficient heuristic approximations

    NARCIS (Netherlands)

    Oliehoek, F.A.; Visser, A.; Babuška, R.; Groen, F.C.A

    2010-01-01

    This chapter gives an overview of the state of the art in decision-theoretic models to describe cooperation between multiple agents in a dynamic environment. Making (near-) optimal decisions in such settings gets harder when the number of agents grows or the uncertainty about the environment

  5. On the information-theoretic approach to Gödel's incompleteness theorem

    OpenAIRE

    D'Abramo, Germano

    2002-01-01

    In this paper we briefly review and analyze three published proofs of Chaitin's theorem, the celebrated information-theoretic version of Gödel's incompleteness theorem. Then, we discuss our main perplexity concerning a key step common to all these demonstrations.

  6. Estimating economic losses from earthquakes using an empirical approach

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
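The scaling strategy described above can be caricatured as: economic exposure equals exposed population times per-capita GDP times a country-specific correction factor, and loss equals exposure times an intensity-dependent loss ratio. The sketch below uses purely illustrative numbers, not PAGER's calibrated factors or loss functions:

```python
def economic_loss(population_exposed, gdp_per_capita, alpha, loss_ratio):
    """Hypothetical PAGER-style calculation: scale GDP-based exposure by a
    country-specific factor alpha, then apply a loss ratio that in the real
    model would depend on shaking intensity."""
    exposure = population_exposed * gdp_per_capita * alpha
    return exposure * loss_ratio

# Illustrative inputs (all values made up for the example)
loss = economic_loss(population_exposed=1_000_000,
                     gdp_per_capita=5_000.0,
                     alpha=2.5,        # exposure-to-GDP correction factor
                     loss_ratio=0.01)  # fraction of exposure lost
print(loss)  # 125000000.0, in the same currency units as GDP
```

In the actual methodology, both alpha and the loss ratio are calibrated per country against Munich Re's historical loss catalog, and population exposure is aggregated over ShakeMap intensity bins.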

  7. Blood velocity estimation using ultrasound and spectral iterative adaptive approaches

    DEFF Research Database (Denmark)

    Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2011-01-01

    This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30…

  8. A new approach for estimation of component failure rate

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Kljenak, I.

    1999-01-01

    In the paper, a formal method for component failure rate estimation is described, proposed for components for which no specific numerical data necessary for probabilistic estimation exist. The framework of the method is the Bayesian updating procedure. A prior distribution is selected from a generic database, whereas the likelihood distribution is assessed from specific data on the component state using principles of fuzzy logic theory. With the proposed method, the component failure rate estimate is based on a much larger quantity of information than with the classical methods presently used. (author)
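The Bayesian updating step can be illustrated with the standard conjugate gamma-Poisson update for a failure rate, where a gamma prior (from a generic database) is combined with observed failures over an exposure time. This is only a sketch of the updating framework; the paper's fuzzy-logic likelihood assessment is not reproduced, and all numbers are hypothetical:

```python
# Gamma prior on the failure rate lambda (per hour), gamma(shape, rate):
# hypothetical values standing in for a generic-database prior.
prior_shape, prior_rate = 0.5, 1000.0

# Component-specific evidence: failures observed over operating hours.
failures, hours = 2, 15000.0

# Conjugate update: with a Poisson likelihood, the posterior is again gamma
# with shape + failures and rate + exposure time.
post_shape = prior_shape + failures
post_rate = prior_rate + hours

posterior_mean = post_shape / post_rate  # updated failure-rate estimate per hour
print(posterior_mean)
```

The posterior mean pulls the generic prior toward the component-specific evidence, which is the sense in which the estimate "is based on a much larger quantity of information" than either source alone.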

  9. Theoretical study of molecular vibrations in electron momentum spectroscopy experiments on furan: An analytical versus a molecular dynamical approach

    International Nuclear Information System (INIS)

    Morini, Filippo; Deleuze, Michael S.; Watanabe, Noboru; Takahashi, Masahiko

    2015-01-01

    The influence of thermally induced nuclear dynamics (molecular vibrations) in the initial electronic ground state on the valence orbital momentum profiles of furan has been theoretically investigated using two different approaches. The first of these approaches employs the principles of Born-Oppenheimer molecular dynamics, whereas the so-called harmonic analytical quantum mechanical approach resorts to an analytical decomposition of contributions arising from quantized harmonic vibrational eigenstates. In spite of their intrinsic differences, the two approaches enable consistent insights into the electron momentum distributions inferred from new measurements employing electron momentum spectroscopy and an electron impact energy of 1.2 keV. Both approaches point out in particular an appreciable influence of a few specific molecular vibrations of A₁ symmetry on the 9a₁ momentum profile, which can be unravelled from considerations on the symmetry characteristics of orbitals and their energy spacing

  10. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and alternatives

    Science.gov (United States)

    William L. Thompson

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled...
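One building block behind removal-based abundance estimates is the classic two-pass removal estimator, N̂ = c₁²/(c₁ − c₂), where c₁ and c₂ are the catches on successive removal passes. The sketch below illustrates that estimator with made-up catch numbers; it is not Hankin and Reeves' full two-stage design, which combines such removal estimates with snorkel counts:

```python
def removal_estimate(c1, c2):
    """Two-pass removal estimator of abundance: assumes equal capture
    probability on both passes and a closed population, so the decline
    from c1 to c2 reveals the capture probability."""
    if c1 <= c2:
        raise ValueError("estimator requires c1 > c2")
    return c1 ** 2 / (c1 - c2)

# Hypothetical catches: 60 fish on the first pass, 20 on the second
print(removal_estimate(60, 20))  # -> 90.0 estimated fish
```

The abstract's first key assumption, that removal estimates equal the true numbers of fish, fails exactly when the equal-capture-probability assumption in this estimator is violated.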

  11. An approach to estimate total dissolved solids in groundwater using

    African Journals Online (AJOL)

    resistivities of the aquifer delineated were subsequently used to estimate TDS in groundwater which was correlated with those ... the concentrations of these chemical constituents in the ..... TDS determined by water analysis varied between 17.

  12. Quantum molecular dynamics approach to estimate spallation yield ...

    Indian Academy of Sciences (India)

    Consequently, the need for reliable data to design and construct spallation neutron sources has prompted ... A major disadvantage of the QMD code .... have estimated the average neutron multiplicities per primary reaction and kinetic energy.

  13. THEORETICAL AND METHODOLOGICAL APPROACHES TO THE STUDY OF THE IMPACT OF INFORMATION TECHNOLOGY ON SOCIAL CONNECTIONS AMONG YOUTH

    Directory of Open Access Journals (Sweden)

    Sofia Alexandrovna Zverkova

    2015-11-01

    Full Text Available The urgency is due to the virtualization of communication in modern society, especially among young people, which affects social relations and social support services. The authors stress the need for a more in-depth study of the network virtualization of society's social relations, given the ambiguous consequences of this phenomenon among the youth. Purpose: to analyze classic and contemporary theoretical and methodological approaches to the study of social ties and social support in terms of technological progress. Results: the article presents a sociological analysis of theoretical and methodological approaches to the study of problems of interaction and social support among youth through strong and weak social ties in cyberspace and in the real world. Practical implications: the analysis opens up the possibility of examining social relations in various fields of sociology, such as the sociology of youth and the sociology of communications.

  14. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-21

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. Two cases are considered: 1) generic and 2) sparse channels. The algorithms estimate the impulse response for each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.

  15. The current status of theoretically based approaches to the prediction of the critical heat flux in flow boiling

    International Nuclear Information System (INIS)

    Weisman, J.

    1991-01-01

    This paper reports on the phenomena governing the critical heat flux in flow boiling in ducts, which vary with the flow pattern. Separate models are needed for dryout in annular flow, wall overheating in plug or slug flow, and formation of a vapor blanket in dispersed flow. The major theories and their current status are described for the annular and dispersed regimes. The need for development of the theoretical approach in the plug and slug flow regime is indicated.

  16. Theoretical analysis of two ACO approaches for the traveling salesman problem

    DEFF Research Database (Denmark)

    Kötzing, Timo; Neumann, Frank; Röglin, Heiko

    2012-01-01

    Bioinspired algorithms, such as evolutionary algorithms and ant colony optimization, are widely used for different combinatorial optimization problems. These algorithms rely heavily on the use of randomness and are hard to understand from a theoretical point of view. This paper contributes to the theoretical analysis of ant colony optimization and studies this type of algorithm on one of the most prominent combinatorial optimization problems, namely the traveling salesperson problem (TSP). We present a new construction graph and show that it has a stronger local property than one commonly used for constructing solutions of the TSP. The rigorous runtime analysis for two ant colony optimization algorithms, based on these two construction procedures, shows that they lead to good approximation in expected polynomial time on random instances. Furthermore, we point out in which situations our algorithms get...

  17. Exploring SiSn as a performance enhancing semiconductor: A theoretical and experimental approach

    KAUST Repository

    Hussain, Aftab M.

    2014-12-14

    We present a novel semiconducting alloy, silicon-tin (SiSn), as channel material for complementary metal oxide semiconductor (CMOS) circuit applications. The material has been studied theoretically using first principles analysis as well as experimentally by fabricating MOSFETs. Our study suggests that the alloy offers interesting possibilities in the realm of silicon band gap tuning. We have explored diffusion of tin (Sn) into the industry's most widely used substrate, silicon (100), as it is the most cost effective, scalable and CMOS compatible way of obtaining SiSn. Our theoretical model predicts a higher mobility for p-channel SiSn MOSFETs, due to a lower effective mass of the holes, which has been experimentally validated using the fabricated MOSFETs. We report an increase of 13.6% in the average field effect hole mobility for SiSn devices compared to silicon control devices.

  18. A novel game theoretic approach for modeling competitive information diffusion in social networks with heterogeneous nodes

    Science.gov (United States)

    Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz

    2017-01-01

    Influence maximization deals with identification of the most influential nodes in a social network given an influence model. In this paper, a game theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved using Nash Equilibrium in a real-world dataset. It is shown that none of the well-known strategies is stable and at least one player has an incentive to deviate from the proposed strategy. Moreover, deviation from the Nash equilibrium strategy by either player leads to a reduced payoff for that player. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures, has an effect on the determination of the influential node in the network.

  19. A theoretical approach to dual practice regulations in the health sector.

    Science.gov (United States)

    González, Paula; Macho-Stadler, Inés

    2013-01-01

    Internationally, there is wide cross-country heterogeneity in government responses to dual practice in the health sector. This paper provides a uniform theoretical framework to analyze and compare some of the most common regulations. We focus on three interventions: banning dual practice, offering rewarding contracts to public physicians, and limiting dual practice (including both limits to private earnings of dual providers and limits to involvement in private activities). An ancillary objective of the paper is to investigate whether regulations that are optimal for developed countries are adequate for developing countries as well. Our results offer theoretical support for the desirability of different regulations in different economic environments. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. New approaches to the estimation of erosion-corrosion

    International Nuclear Information System (INIS)

    Bakirov, Murat; Ereemin, Alexandr; Levchuck, Vasiliy; Chubarov, Sergey

    2006-09-01

    Erosion-corrosion in a double-phase flow involves a moving deaerated liquid film in direct contact with the metal, acting as a barrier between the metal and the main steam-drop flow. Local processes of mass transfer, the corrosion properties, and the water-chemical parameters of this film define the intensity of erosion-corrosion and the features of its behavior. Erosion-corrosion of metal in a double-phase flow is determined by the gas dynamics of double-phase flows, water chemistry, thermodynamics, materials science, etc. The goal of the work: development of the theoretical and methodological basis of physical, chemical and mathematical models, as well as the finding of technical solutions and methods of diagnostics, forecasting and control of erosion-corrosion processes. This will allow increased reliability and safety of operation of the power equipment of the secondary circuit in NPPs with WWER through monitoring of erosion-corrosion wear of pipelines. The authors conclude by stressing that the described design-experimental approach to solving the FAC problem will enable the following works: - elaboration and certification of the procedure of design-experimental substantiation of zones, aims and periodicity of the NPP elements' operational inspection; - development and certification of a new Regulatory Document of stress calculation for definition of the minimum acceptable wall thickness levels considering real wear shape, FAC rates and inaccuracy of devices for wall thickness measurements; - improving the current Regulatory Documents and correcting the Typical programs of operational inspection (optimization of zones, aims and periodicity of the inspection); - elaboration of recommendations for operational lifetime prolongation of the WWER secondary circuit elements by means of increasing the erosion-corrosion resistance of new equipment and of equipment exceeding the design lifetime; - improving safe and uninterrupted work of the power unit due to prediction of the most damaged

  1. Black hole state counting in loop quantum gravity: a number-theoretical approach.

    Science.gov (United States)

    Agulló, Iván; Barbero G, J Fernando; Díaz-Polo, Jacobo; Fernández-Borja, Enrique; Villaseñor, Eduardo J S

    2008-05-30

    We give an efficient method, combining number-theoretic and combinatorial ideas, to exactly compute black hole entropy in the framework of loop quantum gravity. Along the way we provide a complete characterization of the relevant sector of the spectrum of the area operator, including degeneracies, and explicitly determine the number of solutions to the projection constraint. We use a computer implementation of the proposed algorithm to confirm and extend previous results on the detailed structure of the black hole degeneracy spectrum.

  2. What happens in recessions? A value-theoretic approach to Liquidity Preference

    OpenAIRE

    Freeman, Alan

    1998-01-01

    This paper develops the paper entitled 'Time, the Value of Money and the Quantification of Value' which was presented at the conference of the Middle East Technical University in September 1998. It presents the case for a value-theoretic treatment of liquidity preference in axiomatic form, based on a temporal analysis. It discusses why temporal analysis is universally excluded from economic discourse. It argues that economic thought is divided not by the schism between classical and marg...

  3. Towards a comprehensive theory for He II: II. A temperature-dependent field-theoretic approach

    International Nuclear Information System (INIS)

    Chela-Flores, J.; Ghassib, H.B.

    1982-09-01

    New experimental aspects of He II are used as a guide towards a comprehensive theory in which non-zero temperature U(1) and SU(2) gauge fields are incorporated into a gauge hierarchy of effective Lagrangians. We conjecture that an SU(n) gauge-theoretic description of the superfluidity of ⁴He may be obtained in the limit n → ∞. We indicate, however, how experiments may be understood in the zeroth, first and second order of the hierarchy. (author)

  4. Defining and measuring blood donor altruism: a theoretical approach from biology, economics and psychology.

    Science.gov (United States)

    Evans, R; Ferguson, E

    2014-02-01

    While blood donation is traditionally described as a behaviour motivated by pure altruism, the assessment of altruism in the blood donation literature has not been theoretically informed. Drawing on theories of altruism from psychology, economics and evolutionary biology, it is argued that a theoretically derived psychometric assessment of altruism is needed. Such a measure is developed in this study that can be used to help inform both our understanding of the altruistic motives of blood donors and recruitment intervention strategies. A cross-sectional survey (N = 414), with a 1-month behavioural follow-up (time 2, N = 77), was designed to assess theoretically derived constructs from psychological, economic and evolutionary biological theories of altruism. Theory of planned behaviour (TPB) variables and co-operation were also assessed at time 1 and a measure of behavioural co-operation at time 2. Five theoretical dimensions (impure altruism, kinship, self-regarding motives, reluctant altruism and egalitarian warm glow) of altruism were identified through factor analyses. These five altruistic motives differentiated blood donors from non-donors (donors scored higher on impure altruism and reluctant altruism), showed incremental validity over TPB constructs to predict donor intention and predicted future co-operative behaviour. These findings show that altruism in the context of blood donation is multifaceted and complex and does not reflect pure altruism. This has implications for recruitment campaigns that focus solely on pure altruism. © 2013 The Authors. Vox Sanguinis published by John Wiley & Sons Ltd. on behalf of International Society of Blood Transfusion.

  5. THEORETICAL APPROACHES TO ASSESS THE IMPACT OF ADVERTISING ON CONSUMERS AND MARKET COMPETITION

    Directory of Open Access Journals (Sweden)

    Maryna SOBOLIEVA

    2016-07-01

    Full Text Available In the article we examine theoretical perspectives on the impact of advertising on consumer behavior, entry barriers in the industry, the structure of the industry, the competitive behavior of firms and market power; systematize the structure of the research of advertising impact on consumer behavior; and analyze the main results of empirical studies of the effects of advertising on the competitive relationship in the market.

  7. Something new: a new approach to correcting theoretical emitted intensities for absorption effects

    International Nuclear Information System (INIS)

    Willis, J.P.; Lachance, G.R.

    2002-01-01

    Full text: For monochromatic incident radiation of wavelength λ, absorption only (no enhancement), and ignoring such effects as the absorption edge jump ratio, the fluorescence yield, and the probability that a Kα line will be emitted instead of a Kβ line, a simplified view of the theoretical emitted intensity of a characteristic line of element i from a layer in a specimen is given by a familiar equation which involves mass absorption coefficients. While this equation allows for the calculation of the theoretical emitted intensity, it is cumbersome to use when trying to explain X-ray excitation in a step-wise manner. It is therefore proposed that the mass attenuation coefficients (μiλ in the numerator, and the sum μ'sλ + μ''sλi in the denominator of this equation) be replaced by the product of two coefficients correcting for absorption, namely aN and aO. The advantages of using the proposed equation in the stepwise calculation of theoretical intensities (in a similar manner to Monte Carlo calculations) will be discussed. Copyright (2002) Australian X-ray Analytical Association Inc

  8. Neutrosophic Game Theoretic Approach to Indo-Pak Conflict over Jammu-Kashmir

    Directory of Open Access Journals (Sweden)

    Surapati Pramanik

    2014-03-01

    Full Text Available The study deals with the enduring conflict between India and Pakistan over Jammu and Kashmir since 1947. The ongoing conflict is analyzed as an enduring rivalry, characterized by three major wars (1947-48, 1965, 1971), low intensity military conflict (Siachen), a mini war at Kargil (1999), internal insurgency, and cross border terrorism. We examine the progress and the status of the dispute, as well as the dynamics of the India-Pakistan relationship, by considering the influence of the USA and China in crisis dynamics. We discuss the possible solutions offered by various study groups and persons. Most of the studies were done in a crisp environment. Pramanik and Roy (S. Pramanik and T.K. Roy, Game theoretic model to the Jammu-Kashmir conflict between India and Pakistan, International Journal of Mathematical Archive (IJMA), 4(8) (2013), 162-170) studied a game theoretic model of the Jammu and Kashmir conflict in a crisp environment. In the present study we have extended the game theoretic model of the Jammu and Kashmir conflict to a neutrosophic environment. We have explored the possibilities and developed arguments for an application of the principles of neutrosophic game theory to properly understand the Jammu and Kashmir conflict in terms of the goals and strategy of either side. A standard 2×2 zero-sum game theoretic model is used to identify an optimal solution.
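
    The standard 2×2 zero-sum model mentioned at the end of the abstract has a well-known closed-form solution. The sketch below applies the generic textbook formulas (saddle-point check first, then the mixed-strategy equations); it illustrates only the underlying crisp game, not the authors' neutrosophic extension:

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Solve a 2x2 zero-sum game with row-player payoff matrix [[a, b], [c, d]].

    Returns (p, q, v): row player's probability of playing row 1,
    column player's probability of playing column 1, and the game value.
    """
    # Check for a saddle point (pure-strategy equilibrium) first.
    row_minima = [min(a, b), min(c, d)]
    col_maxima = [max(a, c), max(b, d)]
    if max(row_minima) == min(col_maxima):
        v = max(row_minima)
        p = 1.0 if row_minima[0] >= row_minima[1] else 0.0
        q = 1.0 if col_maxima[0] <= col_maxima[1] else 0.0
        return p, q, v
    # Otherwise use the standard mixed-strategy formulas
    # derived from each player's indifference condition.
    denom = a - b - c + d
    p = (d - c) / denom
    q = (d - b) / denom
    v = (a * d - b * c) / denom
    return p, q, v

# Matching pennies: payoffs [[1, -1], [-1, 1]].
p, q, v = solve_2x2_zero_sum(1, -1, -1, 1)
```

    For matching pennies, which has no saddle point, both players mix 50/50 and the game value is zero.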

  9. δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions

    Directory of Open Access Journals (Sweden)

    Hengrong Ju

    2014-01-01

    Full Text Available The decision-theoretic rough set is a quite useful rough set obtained by introducing decision cost into probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation; such a relation may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, which is based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decreasing, two different algorithms are designed to compute reducts, respectively. The comparisons between these two algorithms show the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion can generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, and it follows that the uncertainty which comes from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
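
    In a decision-theoretic rough set, the universe is partitioned by comparing the conditional probability of the target concept in each equivalence class against a pair of thresholds (α, β). The sketch below illustrates Yao's classical thresholded positive/boundary/negative regions on a toy data set; it is a generic illustration, not the δ-cut variant proposed in the paper:

```python
from collections import defaultdict

def dtrs_regions(universe, attribute, target, alpha, beta):
    """Probabilistic rough-set regions with thresholds (alpha, beta).

    universe: iterable of objects; attribute: function mapping an object to
    its equivalence-class key; target: set of objects in the concept X.
    POS = classes with P(X | [x]) >= alpha, NEG = classes with P <= beta,
    BND = everything in between.
    """
    classes = defaultdict(list)
    for x in universe:
        classes[attribute(x)].append(x)
    pos, bnd, neg = [], [], []
    for members in classes.values():
        p = sum(1 for m in members if m in target) / len(members)
        if p >= alpha:
            pos.extend(members)
        elif p <= beta:
            neg.extend(members)
        else:
            bnd.extend(members)
    return set(pos), set(bnd), set(neg)

# Toy example: objects grouped by parity, target concept X = {0, 2, 4, 5}.
universe = range(6)
pos, bnd, neg = dtrs_regions(universe, lambda x: x % 2, {0, 2, 4, 5}, 0.8, 0.3)
```

    Here the even class lies entirely inside X (probability 1, positive region), while the odd class has probability 1/3, which falls between the thresholds and so lands in the boundary region.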

  10. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach by applying the Necessary Condition Analysis (NCA) technique to derive maturity stages and stage boundary conditions. The ontology is to view stages (boundaries) in maturity models as a collection of necessary conditions. Using social media maturity data, we demonstrate the strength of our approach and evaluate some of the arguments presented by previous conceptually focused social media maturity models.

  11. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them into a unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers recursive estimation of dynamic mixtures that is free of iterative processes and as close to analytical solutions as possible. In addition, these methods can be used online and simultaneously perform learning, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. The codes are implemented in an open source platform for engineering computations and serve to illustrate the theory and demonstrate the work of the included algorithms.

  12. A collaborative approach for estimating terrestrial wildlife abundance

    Science.gov (United States)

    Ransom, Jason I.; Kaczensky, Petra; Lubow, Bruce C.; Ganbaatar, Oyunsaikhan; Altansukh, Nanjid

    2012-01-01

    Accurately estimating abundance of wildlife is critical for establishing effective conservation and management strategies. Aerial methodologies for estimating abundance are common in developed countries, but they are often impractical for remote areas of developing countries where many of the world's endangered and threatened fauna exist. The alternative terrestrial methodologies can be constrained by limitations on access, technology, and human resources, and have rarely been comprehensively conducted for large terrestrial mammals at landscape scales. We attempted to overcome these problems by incorporating local peoples into a simultaneous point count of Asiatic wild ass (Equus hemionus) and goitered gazelle (Gazella subgutturosa) across the Great Gobi B Strictly Protected Area, Mongolia. Paired observers collected abundance and covariate metrics at 50 observation points and we estimated population sizes using distance sampling theory, but also assessed individual observer error to examine potential bias introduced by the large number of minimally trained observers. We estimated that 5671 (95% CI = 3611-8907) wild asses and 5909 (95% CI = 3762-9279) gazelles inhabited the 11,027 km² study area at the time of our survey and found that the methodology developed was robust at absorbing the logistical challenges and wide range of observer abilities. This initiative serves as a functional model for estimating terrestrial wildlife abundance while integrating local people into scientific and conservation projects. This, in turn, creates vested interest in conservation by the people who are most influential in, and most affected by, the outcomes.
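
    In its simplest form, point-count distance sampling of the kind used in the study reduces to dividing detections by an effective detection area per point. The sketch below assumes a half-normal detection function with a known scale σ (in practice σ is fitted to the observed detection distances) and entirely hypothetical numbers:

```python
import math

def point_count_density(n_detections, n_points, sigma, w):
    """Density estimate from point counts under a half-normal detection
    function g(r) = exp(-r^2 / (2 sigma^2)), truncated at radius w.

    Effective detection area per point:
        nu = 2*pi*sigma^2 * (1 - exp(-w^2 / (2 sigma^2)))
    Density D = total detections / (points surveyed * nu).
    """
    nu = 2 * math.pi * sigma**2 * (1 - math.exp(-w**2 / (2 * sigma**2)))
    return n_detections / (n_points * nu)

# Hypothetical survey: 120 detections at 50 points, sigma = 300 m, w = 1000 m.
density_per_m2 = point_count_density(120, 50, 300.0, 1000.0)
density_per_km2 = density_per_m2 * 1e6
```

    A study-area total would then follow by multiplying the density by the surveyed area, with confidence intervals derived from the fitted detection function and encounter-rate variance.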

  13. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  14. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    Science.gov (United States)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by treating the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map, using a numerical one-step time discretization obtained from simulations. At each time step, the speed of the induction machine is selected so as to satisfy the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time step, the execution time of the algorithm is completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
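
    The core idea, selecting at each time step the candidate speed whose tabulated one-step prediction best matches the measured output, can be sketched as follows. The scalar plant model and speed grid below are illustrative stand-ins, not the induction machine model of the paper:

```python
import numpy as np

# Off-line step: tabulate one-step output predictions over a grid of
# candidate rotor speeds. In the paper this map comes from simulations of
# the full nonlinear machine model; here we use a hypothetical scalar
# model y_next = y + dt * (speed - y) purely for illustration.
dt = 0.01
speed_grid = np.linspace(0.0, 100.0, 1001)   # candidate speeds (rad/s)

def predict_output(y, speeds):
    """One-step prediction map of the stand-in plant model."""
    return y + dt * (speeds - y)

def estimate_speed(y_prev, y_meas):
    """Pick the grid speed whose one-step prediction best fits the measurement."""
    preds = predict_output(y_prev, speed_grid)
    return speed_grid[np.argmin(np.abs(preds - y_meas))]

# If the measurement is generated by the same model with true speed 42.0,
# the estimator should return a grid point at (or next to) 42.0.
y_prev = 10.0
y_meas = y_prev + dt * (42.0 - y_prev)
est = estimate_speed(y_prev, y_meas)
```

    Because the search is over a fixed, finite grid, the per-step cost is bounded, which is what makes the approach attractive for real-time embedding.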

  15. An Iterative Adaptive Approach for Blood Velocity Estimation Using Ultrasound

    DEFF Research Database (Denmark)

    Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2010-01-01

    This paper proposes a novel iterative data-adaptive spectral estimation technique for blood velocity estimation using medical ultrasound scanners. The technique makes no assumption on the sampling pattern of the slow-time or the fast-time samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the technique is shown, using both simplified and more realistic Field II simulations, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30% of the transmissions, thereby allowing for the examination of two separate vessel regions while retaining an adequate updating rate of the B-mode images. In addition, the proposed method also allows for more flexible transmission patterns, as well as exhibits fewer spectral artifacts as compared to earlier techniques.

  16. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena) ... identification. Application of the proposed method can be found in many real world systems.

  17. ANALYSIS OF THEORETICAL AND METHODOLOGICAL APPROACHES TO DESIGN OF ELECTRONIC TEXTBOOKS FOR STUDENTS OF HIGHER AGRICULTURAL EDUCATIONAL INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Olena Yu. Balalaieva

    2017-06-01

    Full Text Available The article deals with theoretical and methodological approaches to the design of an electronic textbook, in particular the systems, competence, activity, personality-oriented and technological approaches, which in combination reflect the general trends in the formation of a new educational paradigm, whose distinctive features lie in constructing a heuristic searching model of the learning process, focusing on developmental teaching, knowledge integration, development of skills for independent information search and processing, and technification of the learning process. The approach in this study is used in a broad sense as a synthesis of the basic ideas, views and principles that determine the overall research strategy. The main provisions of modern approaches to design are not antagonistic; they should be applied in combination, taking into account the advantages of each of them and leveling shortcomings, to develop an optimal concept of the electronic textbook. The model of electronic textbook design and the components of the methodology for its use based on these approaches are described.

  18. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  19. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  20. A Semantics-Based Approach to Construction Cost Estimating

    Science.gov (United States)

    Niknam, Mehrdad

    2015-01-01

    A construction project requires collaboration of different organizations such as owner, designer, contractor, and resource suppliers. These organizations need to exchange information to improve their teamwork. Understanding the information created in other organizations requires specialized human resources. Construction cost estimating is one of…

  1. Estimating raptor nesting success: old and new approaches

    Science.gov (United States)

    Brown, Jessi L.; Steenhof, Karen; Kochert, Michael N.; Bond, Laura

    2013-01-01

    Studies of nesting success can be valuable in assessing the status of raptor populations, but differing monitoring protocols can present unique challenges when comparing populations of different species across time or geographic areas. We used large datasets from long-term studies of 3 raptor species to compare estimates of apparent nest success (ANS, the ratio of successful to total number of nesting attempts), Mayfield nesting success, and the logistic-exposure model of nest survival. Golden eagles (Aquila chrysaetos), prairie falcons (Falco mexicanus), and American kestrels (F. sparverius) differ in their breeding biology and the methods often used to monitor their reproduction. Mayfield and logistic-exposure models generated similar estimates of nesting success with similar levels of precision. Apparent nest success overestimated nesting success and was particularly sensitive to inclusion of nesting attempts discovered late in the nesting season. Thus, the ANS estimator is inappropriate when exact point estimates are required, especially when most raptor pairs cannot be located before or soon after laying eggs. However, ANS may be sufficient to assess long-term trends of species in which nesting attempts are highly detectable.
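
    The Mayfield estimator compared above computes a daily survival rate (DSR) from failures per exposure-day and raises it to the length of the nesting period, which avoids the bias of apparent nest success when nests are found at different stages. A minimal sketch with hypothetical numbers:

```python
def mayfield_success(failures, exposure_days, nesting_period_days):
    """Mayfield estimator of nesting success.

    failures: number of nests that failed while under observation;
    exposure_days: total nest-days over which nests were observed;
    nesting_period_days: length of the full nesting period.
    """
    dsr = 1.0 - failures / exposure_days      # daily survival rate
    return dsr ** nesting_period_days         # survival over the whole period

# Hypothetical data: 12 failures over 800 exposure-days, 45-day nesting period.
success = mayfield_success(12, 800.0, 45)
```

    Apparent nest success (successful/total attempts) would typically exceed this figure, because nests that fail before discovery never enter the sample.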

  2. A Balanced Approach to Adaptive Probability Density Estimation

    Directory of Open Access Journals (Sweden)

    Julio A. Kovacs

    2017-04-01

    Full Text Available Our development of a Fast (Mutual Information) Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
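
    Adaptive density estimation of the general kind described can be sketched by giving each sample its own bandwidth, here set by the distance to its k-th nearest neighbour so that smoothing tracks local sample density. This is a generic illustration of the idea, not the authors' BADE algorithm:

```python
import numpy as np

def adaptive_kde(samples, query_points, k=10):
    """Gaussian KDE with a per-sample bandwidth equal to the distance to the
    k-th nearest neighbour, so smoothing adapts to local sample density."""
    samples = np.asarray(samples, dtype=float)
    # Pairwise distances; column 0 after sorting is the self-distance (zero),
    # so column k is the distance to the k-th nearest other sample.
    d = np.abs(samples[:, None] - samples[None, :])
    h = np.sort(d, axis=1)[:, k]
    h = np.maximum(h, 1e-12)                  # guard against duplicate samples
    q = np.asarray(query_points, dtype=float)
    z = (q[:, None] - samples[None, :]) / h[None, :]
    kernels = np.exp(-0.5 * z**2) / (h[None, :] * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=500)
xs = np.linspace(-3, 3, 7)
dens = adaptive_kde(samples, xs, k=15)
```

    On standard-normal data the estimate peaks near zero and shrinks in the tails, where the widened per-sample bandwidths keep the curve smooth despite sparse samples.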

  3. Fracture mechanics approach to estimate rail wear limits

    Science.gov (United States)

    2009-10-01

    This paper describes a systematic methodology to estimate allowable limits for rail head wear in terms of vertical head-height loss, gage-face side wear, and/or the combination of the two. This methodology is based on the principles of engineering fr...

  4. Approaching the theoretical capacitance of graphene through copper foam integrated three-dimensional graphene networks

    DEFF Research Database (Denmark)

    Dey, Ramendra Sundar; Hjuler, Hans Aage; Chi, Qijin

    2015-01-01

    We report a facile and low-cost approach for the preparation of all-in-one supercapacitor electrodes using copper foam (CuF) integrated three-dimensional (3D) reduced graphene oxide (rGO) networks. The binder-free 3DrGO@CuF electrodes are capable of delivering high specific capacitance approaching...

  5. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    Science.gov (United States)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered as an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at a high yaw/pitch angle when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  6. On the design of a hierarchical SS7 network: A graph theoretical approach

    Science.gov (United States)

    Krauss, Lutz; Rufa, Gerhard

    1994-04-01

    This contribution is concerned with the design of Signaling System No. 7 networks based on graph theoretical methods. A hierarchical network topology is derived by combining the advantage of the hierarchical network structure with the realization of node disjoint routes between nodes of the network. By using specific features of this topology, we develop an algorithm to construct circle-free routing data and to assure bidirectionality also in case of failure situations. The methods described are based on the requirements that the network topology, as well as the routing data, may be easily changed.

  7. A theoretical approach to sputtering due to molecular ion bombardment, 1

    International Nuclear Information System (INIS)

    Karashima, Shosuke; Ootoshi, Tsukuru; Kamiyama, Masahide; Kim, Pil-Hyon; Namba, Susumu.

    1981-01-01

    A shock wave model is proposed to explain theoretically the non-linear effects in sputtering phenomena under molecular ion bombardment. In this theory the sputtering processes are separated into two parts: one due to linear effects and the other due to non-linear effects. The treatment of the linear part is based on the statistical model of Schwarz and Helms concerning a broad range of atomic collision cascades. The non-linear part is treated by the shock wave model due to overlapping cascades, and useful equations to calculate the sputtering yields and the dynamical quantities in the system are derived. (author)

  8. Nonlocal approach to the analysis of the stress distribution in granular systems. I. Theoretical framework

    Science.gov (United States)

    Kenkre, V. M.; Scott, J. E.; Pease, E. A.; Hurd, A. J.

    1998-05-01

    A theoretical framework for the analysis of the stress distribution in granular materials is presented. It makes use of a transformation of the vertical spatial coordinate into a formal time variable and the subsequent study of a generally non-Markoffian, i.e., memory-possessing (nonlocal) propagation equation. Previous treatments are obtained as particular cases corresponding to, respectively, wavelike and diffusive limits of the general evolution. Calculations are presented for stress propagation in bounded and unbounded media. They can be used to obtain desired features such as a prescribed stress distribution within the compact.
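The "memory-possessing (nonlocal) propagation equation" mentioned in the abstract can be sketched as follows; the authors' exact kernel is not reproduced here, so treat this as an illustrative form only. With the depth coordinate z playing the role of a formal time variable, a nonlocal evolution for the stress distribution σ(x, z) reads:

```latex
% Illustrative memory (nonlocal) propagation equation; z is the vertical
% spatial coordinate acting as a formal time variable.
\frac{\partial \sigma(x,z)}{\partial z}
  = \int_0^z \phi(z - z')\, \frac{\partial^2 \sigma(x,z')}{\partial x^2}\, dz'
% Limiting cases recover the earlier treatments cited in the abstract:
%   \phi(z) = \delta(z)   -> diffusive (parabolic) stress propagation
%   \phi(z) = \text{const} -> wavelike (hyperbolic) stress propagation
```

The two limits of the memory kernel φ correspond to the "wavelike and diffusive limits of the general evolution" that the abstract says previous treatments reduce to.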

  9. Special course on modern theoretical and experimental approaches to turbulent flow structure and its modelling

    Energy Technology Data Exchange (ETDEWEB)

    1987-08-01

    The large eddy concept in turbulent modeling and techniques for direct simulation are discussed. A review of turbulence modeling is presented along with physical and numerical aspects and applications. A closure model for turbulent flows is presented and routes to chaos by quasi-periodicity are discussed. Theoretical aspects of transition to turbulence by space/time intermittency are covered. The application to interpretation of experimental results of fractal dimensions and connection of spatial temporal chaos are reviewed. Simulation of hydrodynamic flow by using cellular automata is discussed.

  10. Wavelength shift in a whispering gallery microdisk due to bacterial sensing: A theoretical approach

    Directory of Open Access Journals (Sweden)

    Hala Ghali

    2017-04-01

    Full Text Available Whispering gallery mode microcavities have recently been studied as a means to achieve real-time label-free detection of biological targets such as virus particles, specific DNA sequences, or proteins. Binding of a biomolecule to the surface of a microresonator will increase its path length, leading to a shift in the resonance frequency according to the reactive sensing principle. In this paper, we develop a theoretical expression that will link the reactive shift to the bacteria and microdisk parameters and help quantify the number of bacteria that bind to the surface of a 200 μm-diameter silica microdisk. Keywords: Optical microdisk, Wavelength shift, Bacterial sensing

  11. Towards a comprehensive theory for He II: A temperature-dependent field-theoretic approach

    International Nuclear Information System (INIS)

    Ghassib, H.B.; Chela-Flores, J.

    1983-07-01

    New experimental aspects of He II, as well as recent developments in particle physics, are invoked to construct the rudiments of a comprehensive theory in which temperature-dependent U(1) and SU(2) gauge fields are incorporated into a hierarchy of effective Lagrangians. It is conjectured that an SU(n) gauge-theoretic description of superfluidity may be obtained in the limit n → ∞. However, it is outlined how experiments can be understood in the zeroth, first and second orders of the hierarchy. (author)

  12. The stochastic system approach for estimating dynamic treatment effects.

    Science.gov (United States)

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

    The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of highly active antiretroviral therapy (HAART) on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows incorporating biological knowledge naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  13. Determination of pKa and the corresponding structures of quinclorac using combined experimental and theoretical approaches

    Science.gov (United States)

    Song, Dean; Sun, Huiqing; Jiang, Xiaohua; Kong, Fanyu; Qiang, Zhimin; Zhang, Aiqian; Liu, Huijuan; Qu, Jiuhui

    2018-01-01

    As an emerging environmental contaminant, the herbicide quinclorac has attracted much attention in recent years. However, a very fundamental issue, the acid dissociation of quinclorac, has yet to be studied in detail. Herein, the pKa value and the corresponding structures of quinclorac were systematically investigated using combined experimental and theoretical approaches. The experimental pKa of quinclorac was determined by the spectrophotometric method to be 2.65 at 25 °C with an ionic strength of 0.05 M, and was corrected to 2.56 at zero ionic strength. The molecular structures of quinclorac were then located by employing DFT calculations. Anionic quinclorac was directly located with the carboxylic group perpendicular to the aromatic ring, while neutral quinclorac was found to exist as equivalent twin structures. The result was further confirmed by analyzing the UV/Vis and MS-MS2 spectra from both experimental and theoretical viewpoints. By employing the QSPR approach, the theoretical pKa of QCR was determined to be 2.50, which is in excellent agreement with the experimental result obtained herein. The protonation of QCR at the carboxylic group instead of the quinoline structure was attributed to the weak electronegative property of the nitrogen atom induced by the electron-withdrawing groups. It is anticipated that this work could not only help in gaining a deep insight into the acid dissociation of quinclorac but also offer key information on its reactions and interactions with other species.

  14. Theoretical estimation of 64Cu production with neutrons emitted during 18F production with a 30 MeV medical cyclotron

    International Nuclear Information System (INIS)

    Auditore, Lucrezia; Amato, Ernesto; Baldari, Sergio

    2017-01-01

    Purpose: This work presents the theoretical estimation of a combined production of 18F and 64Cu isotopes for PET applications. 64Cu production is induced in a secondary target by neutrons emitted during a routine 18F production with a 30 MeV cyclotron: protons are used to produce 18F by means of the 18O(p,n)18F reaction on a [18O]-H2O target (primary target), and the emitted neutrons are used to produce 64Cu by means of the 64Zn(n,p)64Cu reaction on an enriched zinc target (secondary target). Methods: Monte Carlo simulations were carried out using the Monte Carlo N-Particle eXtended (MCNPX) code to evaluate the flux and energy spectra of neutrons produced in the primary (Be+[18O]-H2O) target by protons and the attenuation of the neutron flux in the secondary target. The 64Cu yield was estimated using an analytical approach based on both the TENDL-2015 data library and experimental data selected from the EXFOR database. Results: Theoretical evaluations indicate that about 3.8 MBq/μA of 64Cu can be obtained as a secondary, 'side' production with a 30 MeV cyclotron, for 2 h of irradiation of a properly designed zinc target. Irradiating for 2 h with a proton current of 120 μA, a yield of about 457 MBq is expected. Moreover, the most relevant contaminants are 63,65Zn, which can be chemically separated from 64Cu, in contrast to proton irradiation of an enriched 64Ni target, which yields 64Cu mixed with other copper isotopes as contaminants. Conclusions: The theoretical study discussed in this paper evaluates the potential of the combined production of 18F and 64Cu for medical purposes, irradiating a properly designed target with 30 MeV protons. Interesting yields of 64Cu are obtainable, and the estimation of contaminants in the irradiated zinc target is discussed. - Highlights: • 64Cu production with secondary neutrons from 18F production with protons was investigated. • Neutron reactions induced in enriched 64Zn
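The two quoted yields are consistent with simple scaling in beam current, which can be checked directly. The decay step below uses the standard 12.7 h half-life of 64Cu and a hypothetical 4 h processing delay; neither number is taken from the abstract.

```python
# Back-of-envelope cross-check of the reported 64Cu figures. The
# per-microamp value is the abstract's yield for a 2 h irradiation.
yield_per_uA_MBq = 3.8     # MBq per uA of proton current (2 h irradiation)
current_uA = 120.0         # proton beam current from the abstract

activity_MBq = yield_per_uA_MBq * current_uA   # ~456 MBq vs reported ~457 MBq

# Decay after a hypothetical 4 h delay (e.g. target processing), using the
# standard 12.7 h half-life of 64Cu (an assumption, not from the abstract).
half_life_h = 12.7
delay_h = 4.0
remaining_MBq = activity_MBq * 0.5 ** (delay_h / half_life_h)
print(activity_MBq, remaining_MBq)
```

The linear-scaling check (3.8 MBq/μA × 120 μA ≈ 456 MBq) matches the quoted ~457 MBq within rounding.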

  15. An approach to correlate experimental and theoretical thermal conductivity of MWNT/PMMA polymer composites

    International Nuclear Information System (INIS)

    Verma, M; Patidar, D; Sharma, K B; Saxena, N S

    2015-01-01

    In this paper an effort is made to correlate the temperature-dependent effective thermal conductivity measured experimentally with theoretical results obtained from different models. MWNT/PMMA polymer nanocomposites were prepared by the solution casting method, with different wt% of MWNT (0, 0.05, 0.1, 0.2, 0.3, 0.5, 1, 5, 10 wt%) dispersed in the PMMA matrix. The effective thermal conductivity from 30 °C to 110 °C was measured with a Hot Disk Thermal Constant Analyser, based on the transient plane source technique. The experimental study reveals that the effective thermal conductivity increases with increasing concentration of MWNT in PMMA and increases exponentially at high temperatures for the high (5, 10) wt% samples. This behavior of the effective thermal conductivity is explained in terms of the interactions between polymer–MWNT and MWNT–MWNT. These results were found to be in agreement with theoretical models such as the series, parallel and Lewis–Nielsen models and an empirical formula. The discrepancy found in the Lewis–Nielsen model at high temperature for high wt% of MWNT in PMMA is due to some change in the values of the parameters incorporated in the model. (paper)
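The series and parallel models named in the abstract are simple mixing rules that bound the effective conductivity of a two-phase composite. A sketch with assumed property values (not the paper's data):

```python
# Classical series/parallel bounds for the effective thermal conductivity
# of a two-phase composite. All values below are illustrative assumptions.
k_m = 0.20     # W/(m K), assumed PMMA matrix conductivity
k_f = 3000.0   # W/(m K), assumed axial MWNT conductivity
phi = 0.01     # assumed filler volume fraction

# Series model (phases stacked across the heat flow): lower bound.
k_series = 1.0 / ((1 - phi) / k_m + phi / k_f)
# Parallel model (phases aligned with the heat flow): upper bound.
k_parallel = (1 - phi) * k_m + phi * k_f
print(k_series, k_parallel)
```

The huge gap between the two bounds is why intermediate models such as Lewis–Nielsen, which add particle-shape and packing parameters, are needed to fit measured composite data.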

  16. Theoretical approaches of online social network interventions and implications for behavioral change: a systematic review.

    Science.gov (United States)

    Arguel, Amaël; Perez-Concha, Oscar; Li, Simon Y W; Lau, Annie Y S

    2018-02-01

    The aim of this review was to identify general theoretical frameworks used in online social network interventions for behavioral change. To address this research question, a PRISMA-compliant systematic review (PROSPERO registration number CRD42014007555) was conducted using 3 electronic databases (PsycINFO, PubMed, and Embase). Four reviewers screened 1788 abstracts, and 15 studies were selected according to the eligibility criteria. Randomized controlled trials and controlled studies were assessed using the Cochrane Collaboration's "risk-of-bias" tool, and findings were summarized through narrative synthesis. Five eligible articles used social cognitive theory as a framework to develop interventions targeting behavioral change. Other theoretical frameworks were related to the dynamics of social networks, intention models, and community engagement theories. Only one of the studies selected in the review mentioned a well-known theory from the field of health psychology. The conclusions were that guidelines are lacking in the design of online social network interventions for behavioral change. Existing theories and models from health psychology that are traditionally used for in situ behavioral change should be considered when designing online social network interventions in a health care setting. © 2016 John Wiley & Sons, Ltd.

  17. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective

    Science.gov (United States)

    Young, Kimberly S.; Brand, Matthias

    2017-01-01

    Although it is not yet officially recognized as a diagnosable clinical entity, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model. PMID:29104555

  18. A novel multi-stage direct contact membrane distillation module: Design, experimental and theoretical approaches

    KAUST Repository

    Lee, Jung Gil

    2016-10-24

    An economic desalination system with a small scale and footprint for remote areas, which have a limited and inadequate water supply, insufficient water treatment and low infrastructure, is strongly demanded in the desalination markets. Here, a direct contact membrane distillation (DCMD) process has the simplest configuration and potentially the highest permeate flux among all of the possible MD processes. This process can also be easily instituted in a multi-stage manner for enhanced compactness, productivity, versatility and cost-effectiveness. In this study, an innovative, multi-stage DCMD module under countercurrent-flow configuration is first designed and then investigated both theoretically and experimentally to identify its feasibility and operability for desalination applications. Model predictions and measured data for mean permeate flux are compared and shown to be in good agreement. The effect of the number of module stages on the mean permeate flux, performance ratio and daily water production of the MDCMD system has been theoretically identified at inlet feed and permeate flow rates of 1.5 l/min and inlet feed and permeate temperatures of 70 °C and 25 °C, respectively. The daily water production of a three-stage DCMD module with a membrane area of 0.01 m2 at each stage is found to be 21.5 kg.
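The quoted daily production can be turned into an implied mean permeate flux, a common sanity check for MD modules. The conversion below assumes continuous 24 h operation and otherwise uses only numbers from the abstract.

```python
# Implied mean permeate flux of the three-stage DCMD module, from the
# abstract's 21.5 kg/day production and 0.01 m2 membrane area per stage.
daily_production_kg = 21.5
stages = 3
area_per_stage_m2 = 0.01
hours = 24.0  # assumes round-the-clock operation (an assumption)

total_area_m2 = stages * area_per_stage_m2
implied_flux = daily_production_kg / (total_area_m2 * hours)  # kg/(m^2 h)
print(f"implied mean permeate flux ≈ {implied_flux:.1f} kg/(m^2 h)")
```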

  19. A novel multi-stage direct contact membrane distillation module: Design, experimental and theoretical approaches.

    Science.gov (United States)

    Lee, Jung-Gil; Kim, Woo-Seung; Choi, June-Seok; Ghaffour, Noreddine; Kim, Young-Deuk

    2016-12-15

    An economic desalination system with a small scale and footprint for remote areas, which have a limited and inadequate water supply, insufficient water treatment and low infrastructure, is strongly demanded in the desalination markets. Here, a direct contact membrane distillation (DCMD) process has the simplest configuration and potentially the highest permeate flux among all of the possible MD processes. This process can also be easily instituted in a multi-stage manner for enhanced compactness, productivity, versatility and cost-effectiveness. In this study, an innovative, multi-stage DCMD module under countercurrent-flow configuration is first designed and then investigated both theoretically and experimentally to identify its feasibility and operability for desalination applications. Model predictions and measured data for mean permeate flux are compared and shown to be in good agreement. The effect of the number of module stages on the mean permeate flux, performance ratio and daily water production of the MDCMD system has been theoretically identified at inlet feed and permeate flow rates of 1.5 l/min and inlet feed and permeate temperatures of 70 °C and 25 °C, respectively. The daily water production of a three-stage DCMD module with a membrane area of 0.01 m2 at each stage is found to be 21.5 kg. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Choosing Appropriate Theories for Understanding Hospital Reporting of Adverse Drug Events, a Theoretical Domains Framework Approach.

    Science.gov (United States)

    Shalviri, Gloria; Yazdizadeh, Bahareh; Mirbaha, Fariba; Gholami, Kheirollah; Majdzadeh, Reza

    2018-01-01

    Adverse drug events (ADEs) may cause serious injuries, including death. Spontaneous reporting of ADEs plays a major role in their detection and prevention; however, underreporting always exists. Although several interventions have been utilized to solve this problem, they are mainly based on experience, and the rationale for choosing them has no theoretical base. The vast variety of behavioural theories makes it difficult to choose an appropriate theory. The Theoretical Domains Framework (TDF) is suggested as a solution. The objective of this study was to select the best theory for evaluating ADE reporting in hospitals based on the TDF. We carried out three focus group discussions with hospital pharmacists and nurses, based on TDF questions. The analysis was performed through five steps: coding the discussion transcripts, extracting beliefs, selecting relevant domains, matching related constructs to the extracted beliefs, and determining the appropriate theories in each domain. The theory with the highest number of matched domains and constructs was selected as the theory of choice. A total of six domains were identified as relevant to ADE reporting: "Knowledge", "Skills", "Beliefs about consequences", "Motivation and goals", "Environmental context and resources" and "Social influences". We found the theory of planned behavior to be the most comprehensive theory for studying factors influencing ADE reporting in hospitals, since it was a relevant theory in five out of six relevant domains and the common theory in 55 out of 75 identified beliefs. In conclusion, we suggest the theory of planned behavior for further studies on designing appropriate interventions to increase ADE reporting in hospitals.

  1. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective.

    Science.gov (United States)

    Young, Kimberly S; Brand, Matthias

    2017-01-01

    Although it is not yet officially recognized as a diagnosable clinical entity, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model.

  2. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective

    Directory of Open Access Journals (Sweden)

    Kimberly S. Young

    2017-10-01

    Full Text Available Although it is not yet officially recognized as a diagnosable clinical entity, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model.

  3. New aspects of the antioxidant properties of phenolic acids: a combined theoretical and experimental approach.

    Science.gov (United States)

    Anouar, E; Kosinová, P; Kozlowski, D; Mokrini, R; Duroux, J L; Trouillas, P

    2009-09-21

    Ferulic acid is widely distributed in the leaves and seeds of cereals as well as in coffee, apples, artichokes, peanuts, oranges and pineapples. Like numerous other natural polyphenols, it exhibits antioxidant properties. It is known to act as a free radical scavenger by H-atom transfer from the phenolic OH group. In the present joint experimental and theoretical study, we examined a new mechanism to explain such activities: ferulic acid can indeed act by radical addition on the alpha,beta-double bond. On the basis of the identification of metabolites formed in an oxidative radiolytic solution and on DFT calculations, we studied the thermodynamic and kinetic aspects of this reaction. Addition and HAT reactions were treated as competitive reactions. The possibility of dimer formation was also investigated from a theoretical point of view; the high barriers we obtained help explain why we did not observe those compounds as major radiolytic products. The DPPH free radical scavenging capacity of ferulic acid and of the oxidative products was measured and is discussed on the basis of DFT calculations (BDEs and spin densities).

  4. Electronic properties of Fe charge transfer complexes – A combined experimental and theoretical approach

    International Nuclear Information System (INIS)

    Ferreira, Hendrik; Eschwege, Karel G. von; Conradie, Jeanet

    2016-01-01

    Highlights: • Experimental and computational study of Fe(II)-phen, -bpy and -tpy complexes. • Close correlations between experimental redox and spectral data and computational data. • Computational methods fast-track DSSC research. - Abstract: Dye-sensitized solar cell technology holds huge potential in renewable electricity generation of the future. Due to demand urgency, ways need to be explored to reduce research time and cost. Against this background, quantum computational chemistry is illustrated to be a reliable tool at the onset of studies in this field, simulating charge transfer, spectral (solar energy absorbed) and electrochemical (ease by which electrons may be liberated) tuning of related photo-responsive dyes. Comparative experimental and theoretical DFT studies were done under similar conditions, involving an extended series of electrochemically altered phenanthroline, bipyridyl and terpyridyl complexes of Fe(II). Fe(II/III) oxidation waves vary from 0.363 V for tris(3,6-dimethoxybipyridyl)Fe(II) to 0.894 V (versus Fc/Fc+) for the 5-nitrophenanthroline complex. Theoretical DFT-computed ionization potentials in the bipyridyl sub-series achieved an almost 100% linear correlation with experimental electrochemical oxidation potentials, while the phenanthroline sub-series gave R2 = 0.95. Apart from the terpyridyl complex, which accorded an almost perfect match, in general TDDFT oscillators were computed at slightly lower energies than what was observed experimentally, while molecular HOMO and LUMO renderings reveal desired complexes with directional charge transfer propensities.

  5. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods: The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results: The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0, 0.073), StatXact (0, 0.064), SAS Score method (0, 0.057), R PropCI (0, 0.062), and bootstrap (0, 0.048). Similar trends were observed for other sample sizes. Conclusions: When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
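The binomial step described in the Methods can be reproduced directly, and the bootstrap itself can be sketched in a few lines. The code below is a simplified stand-in for the "bootLR" procedure, not the package itself; the resampling scheme and percentile choice are assumptions.

```python
import random

n_disease, n_nondisease = 100, 100   # sample composition from the abstract
specificity = 0.60

# Lowest population sensitivity whose median sample sensitivity is 100%:
# the boundary p satisfies P(all n_disease test positive) = p**n_disease = 0.5,
# i.e. p = 0.5**(1/n_disease).
p_low = 0.5 ** (1 / n_disease)       # ~0.9931, matching the abstract's 99.31%

# Simplified bootstrap: resample sensitivity and specificity from binomials
# at this population sensitivity, then take an upper percentile of the
# negative LR = (1 - sens) / spec.
random.seed(1)
neg_lrs = []
for _ in range(5000):
    tp = sum(random.random() < p_low for _ in range(n_disease))
    tn = sum(random.random() < specificity for _ in range(n_nondisease))
    sens = tp / n_disease
    spec = max(tn / n_nondisease, 1e-9)  # guard against division by zero
    neg_lrs.append((1 - sens) / spec)
neg_lrs.sort()
upper95 = neg_lrs[int(0.975 * len(neg_lrs))]
print(p_low, upper95)
```

The upper percentile should land in the same neighborhood as the abstract's bootstrap interval (0, 0.048), though the real package is more careful about locating the population sensitivity and propagating both error sources.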

  6. A Structural VAR Approach to Estimating Budget Balance Targets

    OpenAIRE

    Robert A Buckle; Kunhong Kim; Julie Tam

    2001-01-01

    The Fiscal Responsibility Act 1994 states that, as a principle of responsible fiscal management, a New Zealand government should ensure total Crown debt is at a prudent level by ensuring total operating expenses do not exceed total operating revenues. In this paper a structural VAR model is estimated to evaluate the impact on the government's cash operating surplus (or budget balance) of four independent disturbances: supply, fiscal, real private demand, and nominal disturbances. Based on the...

  7. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...
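In this setting the model-based area estimate is essentially the sum of per-pixel forest probabilities times the pixel area. A sketch with synthetic probabilities standing in for the fitted logistic model's predictions; the window size and probability values are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for predicted P(forest) from a fitted logistic regression model
probs = rng.beta(2, 3, size=(200, 200))   # hypothetical 200 x 200 pixel window
pixel_area_ha = 0.09                      # one 30 m Landsat TM pixel = 0.09 ha

# Model-based area estimate: expected number of forest pixels times pixel area
forest_area_ha = probs.sum() * pixel_area_ha
print(round(forest_area_ha, 1))
```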

  8. A Novel Rules Based Approach for Estimating Software Birthmark

    Science.gov (United States)

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

  9. A service and value based approach to estimating environmental flows

    DEFF Research Database (Denmark)

    Korsgaard, Louise; Jensen, R.A.; Jønch-Clausen, Torkil

    2008-01-01

Environmental flows are not only a matter of sustaining ecosystems but also a matter of supporting humankind/livelihoods. One reason for the marginalisation of environmental flows is the lack of operational methods to demonstrate the inherently multi-disciplinary link between environmental flows, ecosystem services and economic value. This paper aims at filling that gap by presenting a new environmental flows assessment approach that explicitly links environmental flows to (socio-)economic values by focusing on ecosystem services. This Service Provision Index (SPI) approach is a novel contribution to the existing field of environmental flows assessment...

  10. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    Directory of Open Access Journals (Sweden)

    Jean-Louis Dornstetter

    2002-12-01

    Full Text Available This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  11. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    OpenAIRE

    Jean-Louis Dornstetter; Daniel Krob; Jean-Yves Thibon; Ekaterina A. Vassilieva

    2002-01-01

    This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  12. A Game-theoretical Approach for Distributed Cooperative Control of Autonomous Underwater Vehicles

    KAUST Repository

    Lu, Yimeng

    2018-01-01

    design and learning process of the algorithm are modified to fit specific constraints of underwater exploration/monitoring tasks. The revised approach can take the real scenario of underwater monitoring applications such as the effect of sea current

  13. The combined theoretical and experimental approach to arrive at optimum parameters in friction stir welding

    Science.gov (United States)

    Jagadeesha, C. B.

    2017-12-01

Although friction stir welding was invented long ago (1991) by TWI England, until now no method, procedure or approach had been developed that quickly yields the optimum or exact parameters for a good or sound weld. An approach has been developed in which an equation is derived that gives an approximate rpm; by setting a range of ±50 or ±100 rpm around this approximate value, and setting the welding speed to 50 or 60 mm/min, one can conduct FSW experiments that converge quickly on the optimum parameters, i.e. the rpm and welding speed that yield a sound weld. This approach can be used effectively to obtain sound welds for all similar and dissimilar combinations of materials such as steel, Al, Mg, Ti, etc.

  14. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    Science.gov (United States)

    2015-04-29

In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a...

  15. Theoretical basis, application, reliability, and sample size estimates of a Meridian Energy Analysis Device for Traditional Chinese Medicine Research

    Directory of Open Access Journals (Sweden)

    Ming-Yen Tsai

Full Text Available OBJECTIVES: The Meridian Energy Analysis Device is currently a popular tool in the scientific research of meridian electrophysiology. In this field, it is generally believed that measuring the electrical conductivity of meridians provides information about the balance of bioenergy or Qi-blood in the body. METHODS AND RESULTS: This communication draws on original articles in the PubMed database from 1956 to 2014 and on the author's clinical experience. We provide clinical examples of Meridian Energy Analysis Device application, especially in the field of traditional Chinese medicine, discuss the reliability of the measurements, and put the values obtained into context by considering items of considerable variability and by estimating sample size. CONCLUSION: The Meridian Energy Analysis Device is making a valuable contribution to the diagnosis of Qi-blood dysfunction. It can be assessed from short-term and long-term meridian bioenergy recordings. It is one of the few methods that allow outpatient traditional Chinese medicine diagnosis, monitoring of progress, assessment of therapeutic effect and evaluation of patient prognosis. The holistic approaches underlying the practice of traditional Chinese medicine and new trends in modern medicine toward the use of objective instruments require in-depth knowledge of the mechanisms of meridian energy, and the Meridian Energy Analysis Device can feasibly be used for understanding and interpreting traditional Chinese medicine theory, especially in view of its expansion in Western countries.

  16. Theoretical and experimental estimation of the lead equivalent for some materials used in finishing of diagnostic x-ray rooms in Syria

    International Nuclear Information System (INIS)

    Shwekani, R.; Suman, H.; Takeyeddin, M.; Suleiman, J.

    2003-11-01

This work aimed at estimating the lead equivalent values of finishing materials frequently used in Syria, namely ceramic and marble. In the past, many studies estimated the lead equivalent values of the different types of bricks widely used in Syria; this work can therefore be considered a follow-up, making it possible to estimate the structural shielding of diagnostic X-ray rooms and to perform shielding calculations accurately so as to reduce unnecessary added shields. The work was done in two ways: theoretically, using the MCNP computer code, and experimentally, in the secondary standard laboratory. The theoretical work focused on generalizing the scope of the results to cover the real variations in the structure of the finishing materials and in the X-ray machines; accordingly, the different sources of error were quantified using the methodology of sensitivity analysis. The experimental measurements were performed to verify that their results fall within the error range produced by the theoretical study. The obtained results showed a strong correlation between theoretical and experimental data. (author)

  17. A tensor approach to the estimation of hydraulic conductivities in ...

    African Journals Online (AJOL)

    Based on the field measurements of the physical properties of fractured rocks, the anisotropic properties of hydraulic conductivity (HC) of the fractured rock aquifer can be assessed and presented using a tensor approach called hydraulic conductivity tensor. Three types of HC values, namely point value, axial value and flow ...

  18. Locating sensors for detecting source-to-target patterns of special nuclear material smuggling: a spatial information theoretic approach.

    Science.gov (United States)

    Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong

    2010-01-01

    In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
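The Kalman-filtering idea in this record can be illustrated with a minimal scalar sketch: each sensor reading sharpens the estimate of an SNM flow quantity, and the information gain is the entropy reduction of the Gaussian estimate. The numbers and the one-dimensional state are illustrative assumptions, not the paper's network-wide formulation.

```python
import math

def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: prior estimate x with
    variance P, measurement z with noise variance R."""
    K = P / (P + R)           # Kalman gain
    x_new = x + K * (z - x)   # corrected estimate
    P_new = (1 - K) * P       # reduced uncertainty
    return x_new, P_new

def info_gain(P_prior, P_post):
    """Entropy reduction (bits) for a Gaussian: 0.5 * log2(P_prior / P_post)."""
    return 0.5 * math.log2(P_prior / P_post)

x, P = 0.0, 4.0              # vague prior on the flow quantity
for z in [1.2, 0.8, 1.1]:    # simulated sensor readings, noise variance R = 1.0
    x, P = kalman_update(x, P, z, 1.0)
```

Sensor placement then amounts to choosing which measurements to buy: each candidate location is scored by the expected `info_gain` it contributes to the network-wide estimate.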

  19. Locating Sensors for Detecting Source-to-Target Patterns of Special Nuclear Material Smuggling: A Spatial Information Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Xuesong Zhou

    2010-08-01

Full Text Available In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.

  20. Equivalence among three alternative approaches to estimating live tree carbon stocks in the eastern United States

    Science.gov (United States)

    Coeli M. Hoover; James E. Smith

    2017-01-01

    Assessments of forest carbon are available via multiple alternate tools or applications and are in use to address various regulatory and reporting requirements. The various approaches to making such estimates may or may not be entirely comparable. Knowing how the estimates produced by some commonly used approaches vary across forest types and regions allows users of...

  1. Three-dimensionality of space and the quantum bit: an information-theoretic approach

    International Nuclear Information System (INIS)

    Müller, Markus P; Masanes, Lluís

    2013-01-01

    It is sometimes pointed out as a curiosity that the state space of quantum two-level systems, i.e. the qubit, and actual physical space are both three-dimensional and Euclidean. In this paper, we suggest an information-theoretic analysis of this relationship, by proving a particular mathematical result: suppose that physics takes place in d spatial dimensions, and that some events happen probabilistically (not assuming quantum theory in any way). Furthermore, suppose there are systems that carry ‘minimal amounts of direction information’, interacting via some continuous reversible time evolution. We prove that this uniquely determines spatial dimension d = 3 and quantum theory on two qubits (including entanglement and unitary time evolution), and that it allows observers to infer local spatial geometry from probability measurements. (paper)

  2. THE NECESSITY OF APPROACHING THE ENTERPRISE PERFORMANCE CONCEPT THROUGH A THEORETICAL FUNDAMENTAL SYSTEM

    Directory of Open Access Journals (Sweden)

    DEAC VERONICA

    2017-10-01

Full Text Available The purpose of this paper is to justify the necessity of building a theoretical-fundamental system to define and delimit the integrated notions applicable to the concept of enterprise performance. As fundamental research, the present paper argues that both the literature in this field and the applied environment require a clearer segregation and greater specificity of the concept of "enterprise performance": on the one hand it is not unanimously defined, and on the other it is a widely used key concept that ultimately has to be measured in order to be useful. The paper should also be of use to scholars working in the field of firm performance who wish to understand this concept and to develop future research on enterprise performance measurement.

  3. Role of word-of-mouth for programs of voluntary vaccination: A game-theoretic approach.

    Science.gov (United States)

    Bhattacharyya, Samit; Bauch, Chris T; Breban, Romulus

    2015-11-01

    We propose a model describing the synergetic feedback between word-of-mouth (WoM) and epidemic dynamics controlled by voluntary vaccination. The key feature consists in combining a game-theoretic model for the spread of WoM and a compartmental model describing VSIR disease dynamics in the presence of a program of voluntary vaccination. We evaluate and compare two scenarios for determinants of behavior, depending on what WoM disseminates: (1) vaccine advertising, which may occur whether or not an epidemic is ongoing and (2) epidemic status, notably disease prevalence. Understanding the synergy between the two strategies could be particularly important for designing voluntary vaccination campaigns. We find that, in the initial phase of an epidemic, vaccination uptake is determined more by vaccine advertising than the epidemic status. As the epidemic progresses, epidemic status becomes increasingly important for vaccination uptake, considerably accelerating vaccination uptake toward a stable vaccination coverage. Copyright © 2015 Elsevier Inc. All rights reserved.
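A minimal discrete-time sketch of a VSIR model with behavior-dependent uptake is given below. The specific uptake rule (advertising pressure plus a prevalence term standing in for word-of-mouth about epidemic status) and all parameter values are illustrative assumptions, not the authors' fitted model.

```python
def vsir_step(V, S, I, R, beta=0.3, gamma=0.1, ads=0.02, k=0.5, dt=1.0):
    """One Euler step of a VSIR sketch with proportions V + S + I + R = 1.
    The per-capita vaccination rate of susceptibles rises with advertising
    (ads) and with prevalence I (word-of-mouth about epidemic status)."""
    uptake = ads + k * I          # behavior-dependent vaccination rate
    new_inf = beta * S * I        # new infections this step
    dV = uptake * S
    dS = -new_inf - uptake * S
    dI = new_inf - gamma * I
    dR = gamma * I
    return V + dt * dV, S + dt * dS, I + dt * dI, R + dt * dR

V, S, I, R = 0.0, 0.99, 0.01, 0.0
for _ in range(100):
    V, S, I, R = vsir_step(V, S, I, R)
```

Early on, `I` is small, so uptake is dominated by the `ads` term; as the epidemic grows, the `k * I` term takes over, which is exactly the qualitative shift the record describes.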

  4. On Innovation: A Theoretical Approach on the Challenges of Utilities Marketing

    Directory of Open Access Journals (Sweden)

    Florina PÎNZARU

    2012-06-01

Full Text Available One market that was until recently closed and completely regulated is now undergoing liberalization and deregulation: the utilities market (water, sewage, gas, electricity, waste collection). The deregulation of a market is usually followed by the emergence of conditions for competition and, inevitably, by the appearance of specific marketing strategies. This paper investigates the specifics of utilities marketing as it develops now, a burgeoning domain that nonetheless has a rather discreet presence in theoretical studies of the field. Exploratory research analysing the products, promotional offers and communication of this market's players shows effervescent practice, but also the continuous innovation necessary in a market where consumers are unfamiliar with being persuaded by commercial means.

  5. A theoretical approach to the restoration of azulejos by re-firing

    Directory of Open Access Journals (Sweden)

    João Manuel Mimoso

    2016-01-01

Full Text Available LNEC found, as a by-product of another research project, that in at least some cases glazed ceramic tiles (azulejos) can be restored by re-firing. The re-firing of façade glazed tiles as a viable alternative to their outright dumping should, in principle, raise no doubts. However, the mere idea of restoring in the kiln brings forth methodological arguments unparalleled in other restoration techniques. The present communication discusses the re-firing of azulejos in the light of theoretical restoration principles, aiming to demonstrate that it cannot be discarded out of hand without considering its advantages and possible applications individually. However, and although no damaging consequences were identified, the eventual application of this method to azulejos that are not considered industrial products still requires complementary studies of its long-term risks.

  6. Field theory approaches to new media practices: An introduction and some theoretical considerations

    Directory of Open Access Journals (Sweden)

    Ida Willig

    2015-05-01

Full Text Available In this article introducing the theme of the special issue, we argue that studies of new media practices might benefit especially from Pierre Bourdieu's research on cultural production. We introduce some of the literature that deals with the use of digital media and that has taken steps to develop field theory in this context. Secondly, we present the four thematic articles in this issue and the articles outside the theme, which include two translations of classic texts within communication and media research. The introduction concludes by encouraging media scholars to embark on more studies within a field theory framework: the comprehensive theoretical work and the ideas of a reflexive sociology are able to trigger good questions, more than they claim to offer a complete and self-sufficient sociology of media, new media included.

  7. Formation of Virtual Organizations in Grids: A Game-Theoretic Approach

    Science.gov (United States)

    Carroll, Thomas E.; Grosu, Daniel

    The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.

  8. Structural modeling and analysis of an effluent treatment process for electroplating--a graph theoretic approach.

    Science.gov (United States)

    Kumar, Abhishek; Clement, Shibu; Agrawal, V P

    2010-07-15

An attempt is made to address a few ecological and environmental issues by developing different structural models of an effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. A hierarchical tree and a block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage retrieval and commercial off-the-shelf purchases of different subsystems. This is achieved by developing a graph theoretic model, matrix models and a variable permanent function model. Analysis is carried out by permanent function, hierarchical tree and block diagram methods. Storage and retrieval is done using matrix models. The methodology is illustrated with the help of an example. Benefits to the electroplaters/end user are identified. 2010 Elsevier B.V. All rights reserved.
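The permanent function at the core of such graph-theoretic system models is like a determinant with no sign alternation, so every subsystem interaction contributes positively. The brute-force sketch below illustrates the generic matrix permanent only; the paper's variable permanent function over its specific subsystem matrix is not reproduced here.

```python
import math
from itertools import permutations

def permanent(M):
    """Permanent of a square matrix: sum over all permutations of products
    of entries, with no sign factor. Brute force is O(n! * n), which is
    fine for the small subsystem matrices of a system-structure model."""
    n = len(M)
    return sum(
        math.prod(M[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# Toy 2x2 interaction matrix: per(M) = 1*4 + 2*3 = 10.
M = [[1, 2], [3, 4]]
value = permanent(M)
```

Because no terms cancel, the permanent aggregates every chain of subsystem interactions, which is why it is used as a single characteristic index for comparing alternative system structures.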

  9. A New Theoretical Approach to Single-Molecule Fluorescence Optical Studies of RNA Dynamics

    International Nuclear Information System (INIS)

    Zhao Xinghai; Shan Guangcun; Bao Shuying

    2011-01-01

Single-molecule fluorescence spectroscopy in condensed phases has many important chemical and biological applications. Single-molecule fluorescence measurements contain information about conformational dynamics on a vast range of time scales. Based on the data-analysis methodology proposed by X. Sunney Xie, the theoretical study here focuses on single-molecule studies of a single RNA, with interconversions among different conformational states, to which a single FRET pair is attached. We obtain analytical expressions for fluorescence lifetime correlation functions that relate changes in fluorescence lifetime to the distance-dependent FRET mechanism within the context of the Smoluchowski diffusion model. The present work establishes a useful guideline for single-molecule studies of biomolecules to reveal the complicated folding dynamics of single RNA molecules at the nanometer scale.

  10. Matched pairs approach to set theoretic solutions of the Yang-Baxter equation

    International Nuclear Information System (INIS)

    Gateva-Ivanova, T.; Majid, S.

    2005-08-01

We study set-theoretic solutions (X, r) of the Yang-Baxter equations on a set X in terms of the induced left and right actions of X on itself. We give a characterization of involutive square-free solutions in terms of cyclicity conditions. We characterise general solutions in terms of an induced matched pair of unital semigroups S(X, r) and construct (S, r_S) from the matched pair. Finally, we study extensions of solutions in terms of matched pairs of their associated semigroups. We also prove several general results about matched pairs of unital semigroups of the required type, including iterated products S ⋈ S ⋈ S underlying the proof that r_S is a solution, and extensions (S ⋈ T, r_{S ⋈ T}). Examples include a general 'double' construction (S ⋈ S, r_{S ⋈ S}) and some concrete extensions, their actions and graphs based on small sets. (author)

  11. Theoretical Approach to Synergistic Interaction of Ionizing Radiation with Other Factors

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Petinb, Vladislav G.

    2005-01-01

Living objects, including humans, are never exposed to merely one harmful agent. Many physical, chemical, biological and social factors may simultaneously exert a deleterious influence on man and the environment. Risk assessment is generally performed under the simplest assumption that the factor under consideration acts largely independently of the others. However, combined exposure to two harmful agents can produce a greater effect than would be expected from adding the separate exposures to the individual agents. Hence there is a possibility that, at least at high exposures, the combined effect of ionizing radiation and other environmental factors results in a greater overall risk. The problem is less clear at low intensity, and there is no possibility of testing all conceivable combinations of agents. For further insight into the mode of synergistic interaction, we discuss a common feature of how synergistic interaction manifests itself and a theoretical model to describe, optimize and predict synergistic effects.

  12. A game theoretical approach for QoS provisioning in heterogeneous networks

    Directory of Open Access Journals (Sweden)

    A.S.M. Zadid Shifat

    2015-09-01

Full Text Available With the proliferation of mobile phone users, interference management has become a major concern in recent years. To cope with this problem while ensuring better Quality of Service (QoS), the femtocell plays an important role in heterogeneous networks (HetNets) thanks to some of its noteworthy characteristics. In this paper, we propose a game theoretic algorithm combined with dynamic channel allocation and a hybrid access mechanism with a self-organizing power control scheme. To resolve the prioritized-access issue, the concept of primary and secondary users is applied. The existence of a pure-strategy Nash equilibrium (NE) has been investigated, and we conclude that our proposed scheme can be adopted to increase both capacity and operator revenue while keeping the price optimal for consumers.

  13. Transmedial Worlds: Conceptual Review and Theoretical Approaches on the Art of Worldmaking

    Directory of Open Access Journals (Sweden)

    Nieves Rosendo Sánchez

    2016-01-01

Full Text Available The concept of transmedia storytelling introduced by Henry Jenkins (2003, 2006) has been widely developed in the academic field, becoming popular and being redefined over time. In the definition of transmedia storytelling we can observe the focus on the concept of world, which from a theoretical and critical perspective has not had the development that the relationship between narrative and media has had. This work gathers and analyzes a selection of references from the main authors that have defined the nature, limits and transferability of so-called transmedial worlds, proposing conclusions on the need to observe the phenomenon of transmedia narratives, its founding elements and the processes related to them, with the aim of elaborating a catalogue of criteria for the analysis of this key element of transmedia storytelling.

  14. Insights into Glycol Ether-Alkanol Mixtures from a Combined Experimental and Theoretical Approach.

    Science.gov (United States)

    Alcalde, Rafael; Gutiérrez, Alberto; Atilhan, Mert; Trenzado, José Luis; Aparicio, Santiago

    2017-06-08

The binary liquid mixtures of glycol ethers (glymes) + 1-alkanol were characterized from the microscopic and macroscopic viewpoints through a combined experimental and theoretical study. Structuring, dynamics, and intermolecular forces were determined using density functional theory and classical molecular dynamics methods. The macroscopic behavior was studied through the measurement of relevant physicochemical properties and Raman IR studies. The changes in intermolecular forces with mixture composition and temperature, and the effects of the types of glymes as well as 1-alkanols, were considered. Hydrogen bonding in the mixed fluids, and its changes upon mixing and with mixture composition, had a large effect on the fluids' structure and determined most of the fluids' properties, together with the presence of hydrophobic domains from long 1-alkanols.

  15. Theoretical and methodological approaches to determining the effectiveness of medical services

    Directory of Open Access Journals (Sweden)

    Yu. Yu. Shvets

    2016-01-01

Full Text Available The article analyzes the results of theoretical and methodological studies on the effectiveness of the health care system. It formulates the concept and components of the efficiency of medical services as an economic category of health care operations. In particular, the efficiency of medical services is a characteristic of effect that shows the feasibility of the use of material, financial and human resources in a given event, method or intervention. The results showed that for medical services one can determine medical, social and economic efficiency, between which there is a relationship and interdependence. The article analyzes the fundamentals and methodological bases of the social, medical and economic efficiency of medical services. It is determined that the priorities in the health system are social and medical effectiveness, given the socially important tasks that the whole health system of the country is directed to perform. Given the ambiguous understanding of the relationship between the efficiency of a medical service and its quality, and based on an analysis of theoretical developments, the common and distinctive features of these characteristics of a health service are identified in order to determine their impact on the quality and affordability of health care services for the population. In particular, the efficiency of medical services is considered in close conjunction with their quality; quality and efficiency are interrelated characteristics. However, the results showed that an effective health care service, particularly in terms of economic efficiency, is not always of high quality, and a high-quality medical service cannot always be called economically efficient.

  16. Impaired cerebral blood flow networks in temporal lobe epilepsy with hippocampal sclerosis: A graph theoretical approach.

    Science.gov (United States)

    Sone, Daichi; Matsuda, Hiroshi; Ota, Miho; Maikusa, Norihide; Kimura, Yukio; Sumida, Kaoru; Yokoyama, Kota; Imabayashi, Etsuko; Watanabe, Masako; Watanabe, Yutaka; Okazaki, Mitsutoshi; Sato, Noriko

    2016-09-01

    Graph theory is an emerging method to investigate brain networks. Altered cerebral blood flow (CBF) has frequently been reported in temporal lobe epilepsy (TLE), but graph theoretical findings of CBF are poorly understood. Here, we explored graph theoretical networks of CBF in TLE using arterial spin labeling imaging. We recruited patients with TLE and unilateral hippocampal sclerosis (HS) (19 patients with left TLE, and 21 with right TLE) and 20 gender- and age-matched healthy control subjects. We obtained all participants' CBF maps using pseudo-continuous arterial spin labeling and analyzed them using the Graph Analysis Toolbox (GAT) software program. As a result, compared to the controls, the patients with left TLE showed a significantly low clustering coefficient (p=0.024), local efficiency (p=0.001), global efficiency (p=0.010), and high transitivity (p=0.015), whereas the patients with right TLE showed significantly high assortativity (p=0.046) and transitivity (p=0.011). The group with right TLE also had high characteristic path length values (p=0.085), low global efficiency (p=0.078), and low resilience to targeted attack (p=0.101) at a trend level. Lower normalized clustering coefficient (p=0.081) in the left TLE and higher normalized characteristic path length (p=0.089) in the right TLE were found also at a trend level. Both the patients with left and right TLE showed significantly decreased clustering in similar areas, i.e., the cingulate gyri, precuneus, and occipital lobe. Our findings revealed differing left-right network metrics in which an inefficient CBF network in left TLE and vulnerability to irritation in right TLE are suggested. The left-right common finding of regional decreased clustering might reflect impaired default-mode networks in TLE. Copyright © 2016 Elsevier Inc. All rights reserved.
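The clustering coefficient reported throughout this record can be computed directly from an adjacency structure. The sketch below is a generic pure-Python illustration of the metric (in practice tools such as the Graph Analysis Toolbox used in the study, or NetworkX, would operate on thresholded CBF correlation matrices); the toy graph is an assumption for demonstration.

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v: the fraction of pairs of
    v's neighbours that are themselves connected. adj maps each node to
    the set of its neighbours in an undirected graph."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy network: a triangle (nodes 0, 1, 2) plus a pendant node 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
mean_cc = sum(clustering_coefficient(adj, v) for v in adj) / len(adj)
```

A lower mean clustering coefficient in a patient group, as found here for left TLE, indicates fewer locally interconnected neighbourhoods and hence a less efficient local network.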

  17. Demirjian approach of dental age estimation: Abridged for operator ease.

    Science.gov (United States)

    Jain, Vanshika; Kapoor, Priyanka; Miglani, Ragini

    2016-01-01

Present times have seen an alarming increase in the incidence of crimes by juveniles and of mass disasters, which highlights the importance of individual age estimation. Of the numerous techniques employed for age assessment, dental age estimation (DAE) and its correlation with chronological age (CA) have been of great significance in the recent past. The Demirjian system, considered the gold standard in DAE, is a simple and convenient method, although referring to multiple tables makes it cumbersome and less eco-friendly due to excessive paper load. The present study aimed to develop a comprehensive chart (DAEcc) combining all Demirjian tables and developmental stages of teeth, and to test the ease with which 50 undergraduate dental students could perform DAE using this chart. The study was performed in two stages. The first stage was the formulation of the comprehensive chart (DAEcc), which includes pictorial representation of the calcification stages, the Federation Dentaire Internationale notation of the teeth, and the corresponding scores for each stage, with a concluding column at the end to enter the total score. The second stage assessed the ease of DAE using the DAEcc: fifty 2nd-year BDS students were asked to trace the calcification stages of the seven permanent left mandibular teeth on a panorex, identify the correct stage, assign the corresponding score, and calculate the total score for subsequent dental age assessment. The results showed that the average time taken by the students to trace the seven mandibular teeth was 5 min, and the assessment of dental age took a further 7 min; the total time taken for DAE was approximately 12 min, making the procedure less time consuming. Hence, this study proposes the use of the DAEcc for age estimation because of the ease of comprehending and executing the Demirjian system.

  18. A service and value based approach to estimating environmental flows

    DEFF Research Database (Denmark)

    Korsgaard, Louise; Jensen, R.A.; Jønch-Clausen, Torkil

    2008-01-01

An important challenge of Integrated Water Resources Management (IWRM) is to balance water allocation between different users and uses. While economically and/or politically powerful users have relatively well developed methods for quantifying and justifying their water needs, this is not the case for the environment... The SPI approach is a pragmatic and transparent tool for incorporating ecosystems and environmental flows into the evaluation of water allocation scenarios, negotiations of trade-offs and decision-making in IWRM.

  19. Estimating radionuclide air concentrations near buildings: a screening approach

    International Nuclear Information System (INIS)

    Miller, C.W.; Yildiran, M.

    1984-01-01

For some facilities that routinely release small amounts of radionuclides to the atmosphere, such as hospitals, research laboratories, contaminated-clothing laundries, and others, it is necessary to estimate the dose to persons very near the buildings from which the releases occur. Such facilities need simple screening procedures which provide reasonable assurance that, as long as the calculated dose is less than some fraction of a relevant dose limit, no individual will receive a dose in excess of that limit. Screening procedures have been proposed for persons living within hundreds of meters to a few kilometers from a source of radioactive effluent. This paper examines a screening technique for estimating long-term average radionuclide air concentrations within approximately 100 m of the building from which the release occurs. The technique is based on a modified Gaussian plume model (HB model) which considers the influence of the tallest building within 100 m and is independent of atmospheric stability and downwind distance. 4 references, 2 tables

  20. A representation-theoretic approach to the calculation of evolutionary distance in bacteria

    Science.gov (United States)

    Sumner, Jeremy G.; Jarvis, Peter D.; Francis, Andrew R.

    2017-08-01

    In the context of bacteria and models of their evolution under genome rearrangement, we explore a novel application of group representation theory to the inference of evolutionary history. Our contribution is to show, in a very general maximum likelihood setting, how to use elementary matrix algebra to sidestep intractable combinatorial computations and convert the problem into one of eigenvalue estimation amenable to standard numerical approximation techniques.
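The record says the likelihood computation reduces to eigenvalue estimation "amenable to standard numerical approximation techniques." One such standard technique is power iteration, sketched generically below; the matrix is an arbitrary illustrative example, not a rearrangement-model matrix from the paper.

```python
def power_iteration(A, iters=200):
    """Estimate the dominant eigenvalue and eigenvector of a square matrix
    A (list of row lists) by repeated multiplication and renormalization.
    Suitable when the dominant eigenvalue is positive and well separated,
    as with the nonnegative matrices typical of likelihood models."""
    n = len(A)
    v = [1.0 / n] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)   # infinity-norm renormalization
        v = [x / norm for x in w]
        lam = norm                      # converges to the dominant eigenvalue
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 and 1; dominant eigenvector (1, 1)
lam, v = power_iteration(A)
```

The appeal of this reduction is exactly what the record highlights: iterating a matrix product is cheap linear algebra, whereas enumerating genome rearrangement histories directly is combinatorially intractable.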