WorldWideScience

Sample records for approximate reasoning methods

  1. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides an excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
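
    As an illustration of the spectral-matching step described above, the sketch below treats library spectra and an unknown spectrum as fuzzy sets on a common grid and ranks candidates by a min/max (intersection-over-union) similarity. The data, names and the particular similarity measure are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuzzy_similarity(mu_a, mu_b):
    """Similarity of two fuzzy sets given as membership vectors on the same
    grid: |A intersect B| / |A union B|, with min/max as the set operations."""
    inter = np.minimum(mu_a, mu_b).sum()
    union = np.maximum(mu_a, mu_b).sum()
    return inter / union if union > 0 else 0.0

# Toy "spectral library": membership values in [0, 1] on a common grid.
rng = np.random.default_rng(0)
grid = np.linspace(400.0, 4000.0, 200)            # hypothetical wavenumber grid
library = {name: rng.random(grid.size) for name in ("ref_A", "ref_B", "ref_C")}
unknown = np.clip(0.9 * library["ref_B"] + 0.1 * rng.random(grid.size), 0.0, 1.0)

ranking = sorted(library, key=lambda n: fuzzy_similarity(unknown, library[n]),
                 reverse=True)
print("best match:", ranking[0])                  # expected: ref_B
```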

  2. Approximate reasoning in decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M M; Sanchez, E

    1982-01-01

    The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.

  3. An Approximate Reasoning-Based Method for Screening High-Level-Waste Tanks for Flammable Gas

    International Nuclear Information System (INIS)

    Eisenhawer, Stephen W.; Bott, Terry F.; Smith, Ronald E.

    2000-01-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
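
    To make the linguistic-variable idea concrete, here is a generic Mamdani-style fragment, not the Hanford screening model itself: two hypothetical pieces of evidence are mapped to fuzzy memberships by trapezoidal functions, and one forward-chaining rule combines them with min as the AND operator. All variable names and breakpoints are illustrative assumptions.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, ramps to 1 on [a, b],
    stays 1 on [b, c], ramps back to 0 on [c, d]."""
    return float(np.clip(min((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0))

# Hypothetical linguistic variables for two pieces of screening evidence.
def retained_gas_is_high(gas_fraction):
    return trapezoid(gas_fraction, 0.05, 0.10, 1.00, 1.01)

def waste_depth_is_large(depth_m):
    return trapezoid(depth_m, 2.0, 4.0, 20.0, 20.1)

# One forward-chaining rule:
#   IF retained gas is HIGH AND waste depth is LARGE THEN concern is HIGH.
def concern_is_high(gas_fraction, depth_m):
    return min(retained_gas_is_high(gas_fraction), waste_depth_is_large(depth_m))

print(concern_is_high(0.08, 3.0))   # partial membership in both antecedents
print(concern_is_high(0.15, 6.0))   # full membership in both -> 1.0
```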

  4. An approximate reasoning-based method for screening high-level-waste tanks for flammable gas

    International Nuclear Information System (INIS)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    2000-01-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.

  5. An approximate reasoning-based method for screening high-level-waste tanks for flammable gas

    Energy Technology Data Exchange (ETDEWEB)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    2000-06-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.

  6. An approximate-reasoning-based method for screening high-level waste tanks for flammable gas

    International Nuclear Information System (INIS)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    1998-01-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at Hanford have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. AR models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. The authors performed a pilot study to investigate the utility of AR for flammable gas screening. They found that the effort to implement such a model was acceptable and that computational requirements were reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.

  7. An approximate-reasoning-based method for screening flammable gas tanks

    International Nuclear Information System (INIS)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    1998-03-01

    High-level waste (HLW) produces flammable gases as a result of radiolysis and thermal decomposition of organics. Under certain conditions, these gases can accumulate within the waste for extended periods and then be released quickly into the dome space of the storage tank. As part of the effort to reduce the safety concerns associated with flammable gas in HLW tanks at Hanford, a flammable gas watch list (FGWL) has been established. Inclusion on the FGWL is based on criteria intended to measure the risk associated with the presence of flammable gas. It is important that all high-risk tanks be identified with high confidence so that they may be controlled. Conversely, to minimize operational complexity, the number of tanks on the watch list should be reduced as near to the true number of flammable gas risk tanks as the current state of knowledge will support. This report presents an alternative to existing approaches for FGWL screening based on the theory of approximate reasoning (AR) (Zadeh 1976). The AR-based model emulates the inference process used by an expert when asked to make an evaluation. The FGWL model described here was exercised by performing two evaluations. (1) A complete tank evaluation where the entire algorithm is used. This was done for two tanks, U-106 and AW-104. U-106 is a single shell tank with large sludge and saltcake layers. AW-104 is a double shell tank with over one million gallons of supernate. Both of these tanks had failed the screening performed by Hodgson et al. (2) Partial evaluations using a submodule for the predictor likelihood for all of the tanks on the FGWL that had been flagged previously by Whitney (1995).

  8. Intelligent control-II: review of fuzzy systems and theory of approximate reasoning

    International Nuclear Information System (INIS)

    Nagrial, M.H.

    2004-01-01

    Fuzzy systems are knowledge-based or rule-based systems. The heart of a fuzzy system's knowledge base consists of the so-called fuzzy IF-THEN rules. This paper reviews various aspects of fuzzy IF-THEN rules. The theory of approximate reasoning, which provides a powerful framework for reasoning with imprecise and uncertain information, is also reviewed. Additional properties of fuzzy systems are also discussed. (author)

  9. Evaluation of high-level waste pretreatment processes with an approximate reasoning model

    International Nuclear Information System (INIS)

    Bott, T.F.; Eisenhawer, S.W.; Agnew, S.F.

    1999-01-01

    The development of an approximate-reasoning (AR)-based model to analyze pretreatment options for high-level waste is presented. AR methods are used to emulate the processes used by experts in arriving at a judgment. In this paper, the authors first consider two specific issues in applying AR to the analysis of pretreatment options. They examine how to combine quantitative and qualitative evidence to infer the acceptability of a process result using the example of cesium content in low-level waste. They then demonstrate the use of simple physical models to structure expert elicitation and to produce inferences consistent with a problem involving waste particle size effects.

  10. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  11. Uncertainty and approximate reasoning in waste pretreatment planning

    International Nuclear Information System (INIS)

    Agnew, S.F.; Eisenhawer, S.W.; Bott, T.F.

    1998-01-01

    Waste pretreatment process planning within the DOE complex must consider many different outcomes in order to perform the tradeoffs necessary to accomplish this important national mission. One of the difficulties encountered by many who assess these tradeoffs is that the complexity of this problem taxes the abilities of any single person or small group of individuals. For example, uncertainties in waste composition as well as process efficiency are well known yet incompletely considered in the search for optimum solutions. This paper describes a tool, the pretreatment Process Analysis Tool (PAT), for evaluating tank waste pretreatment options at the Hanford, Oak Ridge, Idaho National Engineering and Environmental Laboratory, and Savannah River sites. The PAT propagates uncertainty in both tank waste composition and process partitioning into a set of ten outcomes. These outcomes are, for example, total cost, Cs-137 in iLAW, iHLW MT, and so on. Tradeoffs among outcomes are evaluated or scored by means of an approximate reasoning module that uses linguistic bases to evaluate tradeoffs for each process based on user valuations of outcomes.

  12. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
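
    The sketch below is only a loose illustration of the idea in this patent record, not the patented method: a least-squares reconstruction in which, every few conjugate-gradient iterations, a fresh random subset of the rays (rows of the system matrix) is drawn, and both the approximate error and the line-search minimum along the CG direction are computed from that subset alone. All sizes and the restart schedule are assumptions.

```python
import numpy as np

def subset_cg(A, b, n_iter=60, frac=0.2, refresh=10, seed=0):
    """Least-squares CG in which the error and the exact line-search minimum
    along the conjugate direction are evaluated on a random subset of rays.
    The CG recursion is restarted whenever a new subset is drawn."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    g_prev = p = None
    for it in range(n_iter):
        if it % refresh == 0:                       # draw a new subset of rays
            rows = rng.choice(m, size=max(n, int(frac * m)), replace=False)
            As, bs = A[rows], b[rows]
            g_prev = p = None                       # restart CG on the new objective
        r = As @ x - bs                             # approximate error (subset only)
        g = As.T @ r                                # gradient of 0.5*||As x - bs||^2
        if p is None:
            p = -g
        else:                                       # Fletcher-Reeves direction update
            p = -g + (g @ g) / (g_prev @ g_prev) * p
        Ap = As @ p
        x = x + (-(g @ p) / (Ap @ Ap)) * p          # minimum along p for the subset
        g_prev = g
    return x

# Toy "tomography" problem: 800 rays, 50 unknowns.
rng = np.random.default_rng(1)
A = rng.standard_normal((800, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(800)
print(np.linalg.norm(subset_cg(A, b) - x_true))
```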

  13. Information processing systems, reasoning modules, and reasoning system design methods

    Science.gov (United States)

    Hohimer, Ryan E; Greitzer, Frank L; Hampton, Shawn D

    2014-03-04

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
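
    A minimal sketch, not taken from the patent, of the architecture the claim describes: working memory holds a semantic graph of abstractions typed by an ontology classification, and each reasoning module processes only the abstractions whose classification it was built for. Class and type names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Abstraction:
    """A node of the semantic graph: an individual plus its ontology type."""
    individual: str
    classification: str            # e.g. "Person", "Event"

@dataclass
class WorkingMemory:
    graph: List[Abstraction] = field(default_factory=list)

@dataclass
class ReasoningModule:
    classification: str            # the ontology type this module handles
    rule: Callable[[Abstraction], None]

    def process(self, memory: WorkingMemory) -> None:
        for abstraction in memory.graph:
            if abstraction.classification == self.classification:
                self.rule(abstraction)

memory = WorkingMemory([Abstraction("alice", "Person"),
                        Abstraction("badge_swipe_0417", "Event")])
modules = [
    ReasoningModule("Person", lambda a: print("person module saw", a.individual)),
    ReasoningModule("Event",  lambda a: print("event module saw", a.individual)),
]
for module in modules:             # each module sees only its own classification type
    module.process(memory)
```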

  14. Information processing systems, reasoning modules, and reasoning system design methods

    Energy Technology Data Exchange (ETDEWEB)

    Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.

    2016-08-23

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.

  15. Information processing systems, reasoning modules, and reasoning system design methods

    Energy Technology Data Exchange (ETDEWEB)

    Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.

    2015-08-18

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.

  16. Scientific Facts and Methods in Public Reason

    DEFF Research Database (Denmark)

    Jønch-Clausen, Karin; Kappel, Klemens

    2016-01-01

    Should scientific facts and methods have an epistemically privileged status in public reason? In Rawls’s public reason account he asserts what we will label the Scientific Standard Stricture: citizens engaged in public reason must be guided by non-controversial scientific methods, and public reason must be in line with non-controversial scientific conclusions. The Scientific Standard Stricture is meant to fulfill important tasks such as enabling the determinateness and publicity of the public reason framework. However, Rawls leaves us without elucidation with regard to when science...

  17. Approximate analytical methods for solving ordinary differential equations

    CERN Document Server

    Radhika, TSL; Rani, T Raja

    2015-01-01

    Approximate Analytical Methods for Solving Ordinary Differential Equations (ODEs) is the first book to present all of the available approximate methods for solving ODEs, eliminating the need to wade through multiple books and articles. It covers both well-established techniques and recently developed procedures, including the classical series solution method, diverse perturbation methods, pioneering asymptotic methods, and the latest homotopy methods. The book is suitable not only for mathematicians and engineers but also for biologists, physicists, and economists. It gives a complete description...

  18. A simple approximation method for dilute Ising systems

    International Nuclear Information System (INIS)

    Saber, M.

    1996-10-01

    We describe a simple approximate method to analyze dilute Ising systems. The method takes into consideration the fluctuations of the effective field, and is based on a probability distribution of random variables which correctly accounts for all the single site kinematic relations. It is shown that the simplest approximation gives satisfactory results when compared with other methods. (author). 12 refs, 2 tabs

  19. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
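
    As a toy version of the proposed test, and not the models used in the paper, the sketch below compares a deterministic approximation of a stochastic logistic growth process against the original model's own run-to-run variability, judging the approximation error relative to the stochastic standard deviation.

```python
import numpy as np

def stochastic_logistic(n0, r, K, steps, rng):
    """Individual-level toy model: Poisson-distributed births each step."""
    n = n0
    for _ in range(steps):
        n += rng.poisson(r * n * max(0.0, 1.0 - n / K))
    return n

def deterministic_logistic(n0, r, K, steps):
    """Coarse (mean-field) approximation of the same process."""
    n = float(n0)
    for _ in range(steps):
        n += r * n * max(0.0, 1.0 - n / K)
    return n

rng = np.random.default_rng(0)
runs = np.array([stochastic_logistic(10, 0.2, 500, 30, rng) for _ in range(200)])
approx = deterministic_logistic(10, 0.2, 500, 30)

# Validation criterion in the spirit of the paper: the approximation is deemed
# acceptable if its error is small against the original model's own spread.
error, spread = abs(approx - runs.mean()), runs.std()
print(f"error = {error:.1f}, stochastic sd = {spread:.1f}, ratio = {error / spread:.2f}")
```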

  20. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...

  1. Nonlinear ordinary differential equations analytical approximation and numerical methods

    CERN Document Server

    Hermann, Martin

    2016-01-01

    The book discusses the solutions to nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been largely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strong nonlinear ODEs. There are two chapters devoted to solving nonlinear ODEs using numerical methods, as in practice high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximate methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-depending ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...

  2. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
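
    For readers unfamiliar with the two simulation styles named above, here is a minimal comparison on a single decay reaction (not the multi-level estimator itself, whose correction terms are built from coupled pairs of such paths): Gillespie's direct method is exact, while tau-leaping trades bias for speed. Rates and the leap size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
c, x0, T = 0.5, 100, 4.0                 # decay rate, initial count, final time

def ssa_direct(x0, c, T):
    """Gillespie's direct method for the decay reaction X -> 0 (exact)."""
    x, t = x0, 0.0
    while x > 0:
        t += rng.exponential(1.0 / (c * x))   # waiting time to the next event
        if t > T:
            break
        x -= 1
    return x

def tau_leap(x0, c, T, tau):
    """Tau-leaping: fire a Poisson number of reactions per step (biased)."""
    x, t = x0, 0.0
    while t < T and x > 0:
        x = max(x - rng.poisson(c * x * tau), 0)
        t += tau
    return x

exact = np.mean([ssa_direct(x0, c, T) for _ in range(2000)])
leap = np.mean([tau_leap(x0, c, T, tau=0.5) for _ in range(2000)])
print(f"E[X(T)]: SSA {exact:.2f}, tau-leap {leap:.2f}, analytic {x0 * np.exp(-c * T):.2f}")
```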

  3. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  4. Multiuser detection and channel estimation: Exact and approximate methods

    DEFF Research Database (Denmark)

    Fabricius, Thomas

    2003-01-01

    ...of the Junction Tree Algorithm, which is a generalisation of Pearl's Belief Propagation, the BCJR, sum product, min/max sum, and Viterbi's algorithm. Although efficient algorithms, they have an inherent exponential complexity in the number of users when applied to CDMA multiuser detection. For this reason we propose here to use accurate approximations borrowed from statistical mechanics and machine learning. These give us various algorithms that all can be formulated in a subtractive interference cancellation formalism. The suggested algorithms can effectively be seen as bias corrections to standard subtractive interference cancellation with a hyperbolic tangent tentative decision device, in statistical mechanics and machine learning called the naive mean field approach. The differences between the proposed algorithms lie in how the bias is estimated/approximated. We propose approaches based on a second...

  5. 26 CFR 1.412(c)(3)-1 - Reasonable funding methods.

    Science.gov (United States)

    2010-04-01

    26 CFR 1.412(c)(3)-1, Reasonable funding methods. (a) Introduction—(1) In general. This section prescribes rules for determining whether or not, in the case of an ongoing plan, a funding method is reasonable for purposes of section 412...

  6. Efficient solution of parabolic equations by Krylov approximation methods

    Science.gov (United States)

    Gallopoulos, E.; Saad, Y.

    1990-01-01

    Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
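
    A bare-bones sketch of the projection step described above, under some simplifying assumptions (dense matrices, a fixed Krylov dimension, and a dense matrix exponential standing in for the paper's rational approximation of the small exponential):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, t, m=30):
    """Approximate exp(t*A) @ v by Arnoldi projection onto an m-dimensional
    Krylov subspace; only the small Hessenberg matrix is exponentiated."""
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]                      # the only operation with the big matrix
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

# Method of lines for the 1D heat equation u_t = u_xx on (0, 1), u = 0 at the ends.
n = 200
L = ((np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2)
u0 = np.sin(np.pi * np.linspace(0.0, 1.0, n + 2)[1:-1])
u = krylov_expm(L, u0, t=0.01)
print(np.max(np.abs(u - expm(0.01 * L) @ u0)))   # compare with the full exponential
```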

  7. Mean-field approximation for spacing distribution functions in classical systems

    Science.gov (United States)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.

  8. Improvement of Tone's method with two-term rational approximation

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Endo, Tomohiro; Chiba, Go

    2011-01-01

    An improvement of Tone's method, which is a resonance calculation method based on the equivalence theory, is proposed. In order to increase calculation accuracy, the two-term rational approximation is incorporated for the representation of neutron flux. Furthermore, some theoretical aspects of Tone's method, i.e., its inherent approximation and choice of adequate multigroup cross section for collision probability estimation, are also discussed. The validity of improved Tone's method is confirmed through a verification calculation in an irregular lattice geometry, which represents part of an LWR fuel assembly. The calculation result clarifies the validity of the present method. (author)

  9. Major Accidents (Gray Swans) Likelihood Modeling Using Accident Precursors and Approximate Reasoning.

    Science.gov (United States)

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2015-07-01

    Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well-established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of major accidents' relevant data since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.

  10. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
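
    As a concrete example of the sampling idea, and not the authors' code, the snippet below estimates the well function W(u) = E1(u) with one-dimensional Latin hypercube (stratified) sampling after the change of variable t = u + s with s ~ Exp(1), and checks it against scipy.special.exp1:

```python
import numpy as np
from scipy.special import exp1   # reference value of the well function W(u) = E1(u)

def well_function_lhs(u, n=10_000, seed=0):
    """Estimate W(u) = integral_u^inf e^(-t)/t dt by sampling.
    With t = u + s and s ~ Exp(1), W(u) = E[ e^(-u) / (u + s) ].
    In one dimension, Latin hypercube sampling reduces to stratified uniforms."""
    rng = np.random.default_rng(seed)
    strata = (np.arange(n) + rng.random(n)) / n      # one point per stratum
    s = -np.log1p(-strata)                           # inverse CDF of Exp(1)
    return np.exp(-u) * np.mean(1.0 / (u + s))

for u in (0.1, 1.0, 5.0):
    print(u, well_function_lhs(u), exp1(u))
```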

  11. Approximate solution methods in engineering mechanics

    International Nuclear Information System (INIS)

    Boresi, A.P.; Cong, K.P.

    1991-01-01

    This is a short book of 147 pages including references and sometimes bibliographies at the end of each chapter, and subject and author indices at the end of the book. The text includes an introduction of 3 pages, 29 pages explaining approximate analysis, 41 pages on finite differences, 36 pages on finite elements, and 17 pages on specialized methods

  12. Tau method approximation of the Hubbell rectangular source integral

    International Nuclear Information System (INIS)

    Kalla, S.L.; Khajah, H.G.

    2000-01-01

    The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral I(a,b) = ∫₀ᵇ arctan(a/√(1+x²)) / √(1+x²) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows
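
    The snippet below is a plain Chebyshev interpolation-and-integration stand-in for the integral defined above, not the Tau method of the paper; it is included only to make the target quantity concrete. The degree and the quadrature check are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.integrate import quad

def hubbell(a, b, deg=20):
    """I(a, b) = integral_0^b arctan(a/sqrt(1+x^2)) / sqrt(1+x^2) dx, computed
    by fitting a Chebyshev series to the integrand on [0, b] and integrating
    the series (a plain Chebyshev stand-in, not the Tau method)."""
    def f(x):
        s = np.sqrt(1.0 + x * x)
        return np.arctan(a / s) / s
    nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # Chebyshev points
    x = 0.5 * b * (nodes + 1.0)                                     # mapped to [0, b]
    series = C.Chebyshev.fit(x, f(x), deg, domain=[0.0, b])
    antiderivative = series.integ()
    return antiderivative(b) - antiderivative(0.0)

print(hubbell(1.0, 1.0))
check, _ = quad(lambda x: np.arctan(1.0 / np.sqrt(1 + x * x)) / np.sqrt(1 + x * x), 0.0, 1.0)
print(check)                                                        # quadrature check
```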

  13. Approximate Method for Solving the Linear Fuzzy Delay Differential Equations

    Directory of Open Access Journals (Sweden)

    S. Narayanamoorthy

    2015-01-01

    We propose an algorithm of the approximate method to solve linear fuzzy delay differential equations using the Adomian decomposition method. The detailed algorithm of the approach is provided. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To show the proper features of the proposed method, a numerical example is illustrated.

  14. Evaluation of the successive approximations method for acoustic streaming numerical simulations.

    Science.gov (United States)

    Catarino, S O; Minas, G; Miranda, J M

    2016-05-01

    This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximation method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving the acoustic streaming problems, since it affects the global flow. By adequately calculating the initial condition for first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.

  15. MODERN METHODS OF REASONABLE PRODUCT SUPPLY

    Directory of Open Access Journals (Sweden)

    Anna Kulik

    2016-11-01

    The objective of the thesis is to study modern methods of product supply in order to determine optimal ways to rationalize them. The use of reasonable practices, taking into account external and internal factors under the specific conditions of moving products from the supplier to the buyer, makes the process of product supply economically viable: it keeps product transportation costs low, ensures fast movement of products and their safety, and ultimately reduces the costs of product disposal. Methodology. The study is based on theoretical methods for examining this problem. The system analysis method and simulation of ways to improve were also used in the study. Results. Addressing these issues, the concept, forms and stages of organizing the product supply process depending on the type of product have been studied, as well as product supply management methods based on the logistics concept of “demand response”. Practical significance. Optimization of the principles and methods of product supply, and of the factors affecting its organization, will in practice contribute to the development of reasonable product delivery systems featuring the economic efficiency of advanced product supply technologies. Value/originality. The analyzed methods of product supply management based on the logistics concept of “demand response” can ensure maximum reduction of the response time to changes in demand by rapid stocktaking at those points of the market where demand is expected to increase, which will reduce the costs of bringing the product to the consumer.

  16. The generalized approximation method and nonlinear heat transfer equations

    Directory of Open Access Journals (Sweden)

    Rahmat Khan

    2009-01-01

    A generalized approximation technique for the solution of a one-dimensional steady-state heat transfer problem in a slab made of a material with temperature-dependent thermal conductivity is developed. The results obtained by the generalized approximation method (GAM) are compared with those obtained via the homotopy perturbation method (HPM). For this problem, the results obtained by the GAM are more accurate than those of the HPM. Moreover, the GAM generates a sequence of solutions of linear problems that converges monotonically and rapidly to a solution of the original nonlinear problem. Each approximate solution is obtained as the solution of a linear problem. We present numerical simulations to illustrate and confirm the theoretical results.
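
    To illustrate the kind of iteration the abstract describes (a sequence of linear problems converging to the nonlinear solution), here is a generic successive-linearisation finite-difference sketch for a slab with temperature-dependent conductivity. It is a stand-in under assumed data, not the paper's GAM.

```python
import numpy as np

def solve_slab(k_of_T, T_left, T_right, n=101, iters=30):
    """Successive linearisation for d/dx( k(T) dT/dx ) = 0 on [0, 1]:
    freeze the conductivity at the previous iterate, solve the resulting
    linear finite-difference problem, and repeat."""
    x = np.linspace(0.0, 1.0, n)
    T = np.linspace(T_left, T_right, n)              # initial guess
    for _ in range(iters):
        k_half = k_of_T(0.5 * (T[:-1] + T[1:]))      # conductivity at midpoints
        A = np.zeros((n, n))
        rhs = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        rhs[0], rhs[-1] = T_left, T_right
        for i in range(1, n - 1):                    # conservative flux balance
            A[i, i - 1] = k_half[i - 1]
            A[i, i + 1] = k_half[i]
            A[i, i] = -(k_half[i - 1] + k_half[i])
        T = np.linalg.solve(A, rhs)
    return x, T

# Slab with conductivity k(T) = 1 + 0.5 T, ends held at T = 1 and T = 0.
x, T = solve_slab(lambda T: 1.0 + 0.5 * T, T_left=1.0, T_right=0.0)
print(np.round(T[::20], 4))
```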

  17. Approximate Methods for the Generation of Dark Matter Halo Catalogs in the Age of Precision Cosmology

    Directory of Open Access Journals (Sweden)

    Pierluigi Monaco

    2016-10-01

    Precision cosmology has recently triggered new attention on the topic of approximate methods for the clustering of matter on large scales, whose foundations date back to the period from the late 1960s to early 1990s. Indeed, although the prospect of reaching sub-percent accuracy in the measurement of clustering poses a challenge even to full N-body simulations, an accurate estimation of the covariance matrix of clustering statistics, not to mention the sampling of parameter space, requires the use of a large number (hundreds in the most favourable cases) of simulated (mock) galaxy catalogs. The combination of a few N-body simulations with a large number of realizations performed with approximate methods gives the most promising approach to solve these problems with a reasonable amount of resources. In this paper I review this topic, starting from the foundations of the methods, then going through the pioneering efforts of the 1990s, and finally presenting the latest extensions and a few codes that are now being used in present-generation surveys and thoroughly tested to assess their performance in the context of future surveys.

  18. Improved stochastic approximation methods for discretized parabolic partial differential equations

    Science.gov (United States)

    Guiaş, Flavius

    2016-12-01

    We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes which is used for approximating solutions of ordinary differential equations. This scheme is suited especially for spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination. As a consequence, the order of convergence increases. We illustrate the features of the improved method on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).

  19. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    International Nuclear Information System (INIS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2017-01-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)
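
    As one concrete instance of the approximations surveyed (the chemical Langevin equation), the sketch below integrates the CLE for a simple birth-death network with Euler-Maruyama and compares the stationary mean and variance with the known value k/gamma. Rates and step sizes are arbitrary choices, not taken from the review.

```python
import numpy as np

def cle_birth_death(k, gamma, x0, T, dt, rng):
    """Euler-Maruyama integration of the chemical Langevin equation for the
    birth-death network  0 -> X (rate k),  X -> 0 (rate gamma * x)."""
    x = float(x0)
    for _ in range(int(T / dt)):
        a_birth, a_death = k, gamma * max(x, 0.0)     # propensities
        dW = rng.normal(0.0, np.sqrt(dt), size=2)
        x += (a_birth - a_death) * dt + np.sqrt(a_birth) * dW[0] - np.sqrt(a_death) * dW[1]
    return x

k, gamma = 20.0, 0.1
rng = np.random.default_rng(0)
samples = [cle_birth_death(k, gamma, x0=0.0, T=50.0, dt=0.01, rng=rng) for _ in range(200)]
# For this network the exact stationary distribution is Poisson(k/gamma),
# so both mean and variance should be close to k/gamma = 200.
print(np.mean(samples), np.var(samples))
```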

  20. Approximate solution fuzzy pantograph equation by using homotopy perturbation method

    Science.gov (United States)

    Jameel, A. F.; Saaban, A.; Ahadkulov, H.; Alipiah, F. M.

    2017-09-01

    In this paper, the Homotopy Perturbation Method (HPM) is modified and formulated to find the approximate solution of fuzzy delay differential equations (FDDEs) involving a fuzzy pantograph equation. The solution that can be obtained by using HPM is in the form of an infinite series that converges to the actual solution of the FDDE, and this is one of the benefits of this method. In addition, it can be used for solving high order fuzzy delay differential equations directly without reduction to a first order system. Moreover, the accuracy of HPM can be detected without needing the exact solution. The HPM is studied for fuzzy initial value problems involving the pantograph equation. Using the properties of fuzzy set theory, we reformulate the standard approximate method of HPM and obtain the approximate solutions. The effectiveness of the proposed method is demonstrated for a third order fuzzy pantograph equation.

  1. Space-angle approximations in the variational nodal method

    International Nuclear Information System (INIS)

    Lewis, E. E.; Palmiotti, G.; Taiwo, T.

    1999-01-01

    The variational nodal method is formulated such that the angular and spatial approximations may be examined separately. Spherical harmonic, simplified spherical harmonic, and discrete ordinate approximations are coupled to the primal hybrid finite element treatment of the spatial variables. Within this framework, two classes of spatial trial functions are presented: (1) orthogonal polynomials for the treatment of homogeneous nodes and (2) bilinear finite subelement trial functions for the treatment of fuel assembly sized nodes in which fuel-pin cell cross sections are represented explicitly. Polynomial and subelement trial functions are applied to benchmark water-reactor problems containing MOX fuel using spherical harmonic and simplified spherical harmonic approximations. The resulting accuracy and computing costs are compared

  2. A working-set framework for sequential convex approximation methods

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2008-01-01

    We present an active-set algorithmic framework intended as an extension to existing implementations of sequential convex approximation methods for solving nonlinear inequality constrained programs. The framework is independent of the choice of approximations and the stabilization technique used to guarantee global convergence of the method. The algorithm works directly on the nonlinear constraints in the convex sub-problems and solves a sequence of relaxations of the current sub-problem. The algorithm terminates with the optimal solution to the sub-problem after solving a finite number of relaxations.

  3. The Cₙ method for approximation of the Boltzmann equation

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Kavenoky, A. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]

    1968-01-15

    In a new method of approximation of the Boltzmann equation, one starts from a particular form of the equation which involves only the angular flux at the boundary of the considered medium and where the space variable does not appear explicitly. By expanding the angular flux of neutrons leaking from the medium in orthogonal polynomials and making no assumption about the angular flux within the medium, very good approximations are obtained for several classical plane-geometry problems: the albedo and the transmission factor of slabs, the extrapolation length of the Milne problem, and the spectrum of neutrons reflected by a semi-infinite slowing-down medium. The method can be extended to other geometries. (authors)

  4. An approximation to the interference term using Frobenius Method

    Energy Technology Data Exchange (ETDEWEB)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mail: aquilino@lmp.ufrj.br]

    2007-07-01

    An analytical approximation of the interference term χ(x,ξ) is proposed. The approximation is based on the differential equation for χ(x,ξ), using the Frobenius method and the variation of parameters. The analytical expression of χ(x,ξ), obtained in terms of elementary functions, is very simple and precise. In this work the approximations are applied to the Doppler broadening functions and to the interference term in determining the neutron cross sections. Results were validated for the resonances of the U-238 isotope for different energies and temperature ranges. (author)

  5. An approximation to the interference term using Frobenius Method

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da

    2007-01-01

    An analytical approximation of the interference term χ(x,ξ) is proposed. The approximation is based on the differential equation for χ(x,ξ), using the Frobenius method and the variation of parameters. The analytical expression of χ(x,ξ), obtained in terms of elementary functions, is very simple and precise. In this work the approximations are applied to the Doppler broadening functions and to the interference term in determining the neutron cross sections. Results were validated for the resonances of the U-238 isotope for different energies and temperature ranges. (author)

  6. A cluster approximation for the transfer-matrix method

    International Nuclear Information System (INIS)

    Surda, A.

    1990-08-01

    A cluster approximation for the transfer-matrix method is formulated. The calculation of the partition function of lattice models is transformed into a nonlinear mapping problem. The method yields the free energy, correlation functions and the phase diagrams for a large class of lattice models. The high accuracy of the method is exemplified by the calculation of the critical temperature of the Ising model. (author). 14 refs, 2 figs, 1 tab

  7. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  8. Application of plausible reasoning to AI-based control systems

    Science.gov (United States)

    Berenji, Hamid; Lum, Henry, Jr.

    1987-01-01

    Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.

  9. Approximation methods for the partition functions of anharmonic systems

    International Nuclear Information System (INIS)

    Lew, P.; Ishida, T.

    1979-07-01

    The analytical approximations for the classical, quantum mechanical and reduced partition functions of the diatomic molecule oscillating internally under the influence of the Morse potential have been derived and their convergences have been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with the numerical exact calculations.

  10. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the...

  11. A method based on the Jacobi tau approximation for solving multi-term time-space fractional partial differential equations

    Science.gov (United States)

    Bhrawy, A. H.; Zaky, M. A.

    2015-01-01

    In this paper, we propose and analyze an efficient operational formulation of the spectral tau method for multi-term time-space fractional differential equations with Dirichlet boundary conditions. The shifted Jacobi operational matrices of the Riemann-Liouville fractional integral and of the left-sided and right-sided Caputo fractional derivatives are presented. By using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problems. Furthermore, the error is estimated and the proposed method has reasonable convergence rates in the spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm if we suitably choose the corresponding special cases of the Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.

  12. Finite rank separable approximation for Skyrme interactions: spin-isospin excitations

    International Nuclear Information System (INIS)

    Severyukhin, A.P.; Voronov, V.V.; Borzov, I.N.; Nguyen Van Giai

    2012-01-01

    A finite rank separable approximation for the quasiparticle random phase approximation with the Skyrme interactions is applied to the case of charge-exchange nuclear modes. The coupling between one- and two-phonon terms in the wave functions is taken into account. It has been shown that the approximation reproduces reasonably well the full charge-exchange RPA results for the spin-dipole resonances in 132Sn. As an illustration of the method, the phonon-phonon coupling effect on the β-decay half-life of 78Ni is considered.

  13. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary.

  14. An Approximate Method for the Acoustic Attenuating VTI Eikonal Equation

    KAUST Repository

    Hao, Q.

    2017-05-26

    We present an approximate method to solve the acoustic eikonal equation for attenuating transversely isotropic media with a vertical symmetry axis (VTI). A perturbation method is used to derive the perturbation formula for complex-valued traveltimes. The application of Shanks transform further enhances the accuracy of approximation. We derive both analytical and numerical solutions to the acoustic eikonal equation. The analytic solution is valid for homogeneous VTI media with moderate anellipticity and strong attenuation and attenuation-anisotropy. The numerical solution is applicable for inhomogeneous attenuating VTI media.

  15. An Approximate Method for the Acoustic Attenuating VTI Eikonal Equation

    KAUST Repository

    Hao, Q.; Alkhalifah, Tariq Ali

    2017-01-01

    We present an approximate method to solve the acoustic eikonal equation for attenuating transversely isotropic media with a vertical symmetry axis (VTI). A perturbation method is used to derive the perturbation formula for complex-valued traveltimes. The application of Shanks transform further enhances the accuracy of approximation. We derive both analytical and numerical solutions to the acoustic eikonal equation. The analytic solution is valid for homogeneous VTI media with moderate anellipticity and strong attenuation and attenuation-anisotropy. The numerical solution is applicable for inhomogeneous attenuating VTI media.

  16. An explicit approximate solution to the Duffing-harmonic oscillator by a cubication method

    International Nuclear Information System (INIS)

    Belendez, A.; Mendez, D.I.; Fernandez, E.; Marini, S.; Pascual, I.

    2009-01-01

    The nonlinear oscillations of a Duffing-harmonic oscillator are investigated by an approximate method based on the 'cubication' of the initial nonlinear differential equation. In this cubication method the restoring force is expanded in Chebyshev polynomials and the original nonlinear differential equation is approximated by a Duffing equation in which the coefficients for the linear and cubic terms depend on the initial amplitude, A. The replacement of the original nonlinear equation by an approximate Duffing equation allows us to obtain explicit approximate formulas for the frequency and the solution as a function of the complete elliptic integral of the first kind and the Jacobi elliptic function, respectively. These explicit formulas are valid for all values of the initial amplitude and we conclude that this cubication method works very well for the whole range of initial amplitudes. Excellent agreement of the approximate frequencies and periodic solutions with the exact ones is demonstrated and discussed, and the relative error for the approximate frequency is as low as 0.071%. Unlike other approximate methods applied to this oscillator, which are not capable of reproducing exactly the behaviour of the approximate frequency when A tends to zero, the cubication method used in this Letter predicts exactly the behaviour of the approximate frequency not only when A tends to infinity, but also when A tends to zero. Finally, a closed-form expression for the approximate frequency is obtained in terms of elementary functions. To do this, the relationship between the complete elliptic integral of the first kind and the arithmetic-geometric mean, as well as Legendre's formula to approximately obtain this mean, are used.
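    As a concrete illustration of the cubication idea described above, the following minimal Python sketch fits an amplitude-dependent cubic to the standard Duffing-harmonic restoring force f(x) = x^3/(1+x^2) via a Chebyshev expansion and compares the resulting elliptic-integral period with a reference period obtained by direct numerical integration. It is a sketch of the general technique, not the authors' derivation; the function names and the chosen amplitudes are illustrative.

      import numpy as np
      from numpy.polynomial import chebyshev as C
      from scipy.integrate import solve_ivp
      from scipy.special import ellipk

      def cubicate(A, deg=9):
          # Chebyshev cubication of f(x) = x^3/(1 + x^2) on [-A, A]:
          # f(x) ~ alpha(A)*x + beta(A)*x^3, keeping only the T1 and T3 terms.
          c = C.chebinterpolate(lambda y: (A * y) ** 3 / (1.0 + (A * y) ** 2), deg)
          alpha = (c[1] - 3.0 * c[3]) / A        # T1(y) = y, T3(y) = 4y^3 - 3y, y = x/A
          beta = 4.0 * c[3] / A ** 3
          return alpha, beta

      def cubication_period(A):
          # Exact period of the equivalent Duffing equation x'' + a*x + b*x^3 = 0:
          # x(t) = A*cn(w*t, m) with w^2 = a + b*A^2 and m = b*A^2 / (2*w^2).
          a, b = cubicate(A)
          w2 = a + b * A ** 2
          m = b * A ** 2 / (2.0 * w2)
          return 4.0 * ellipk(m) / np.sqrt(w2)

      def reference_period(A):
          # Quarter period = time for x'' + x^3/(1 + x^2) = 0, started at rest from
          # x = A, to first reach x = 0 (located with an ODE integration event).
          rhs = lambda t, y: [y[1], -y[0] ** 3 / (1.0 + y[0] ** 2)]
          cross = lambda t, y: y[0]
          cross.terminal, cross.direction = True, -1
          sol = solve_ivp(rhs, (0.0, 1.0e4), [A, 0.0], events=cross,
                          rtol=1e-10, atol=1e-12)
          return 4.0 * sol.t_events[0][0]

      for A in (0.1, 1.0, 10.0):
          Tc, Tr = cubication_period(A), reference_period(A)
          print(f"A = {A:5.1f}  T_cubication = {Tc:11.6f}  "
                f"T_reference = {Tr:11.6f}  rel. error = {abs(Tc - Tr) / Tr:.2e}")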

  17. Metacognition and reasoning

    Science.gov (United States)

    Fletcher, Logan; Carruthers, Peter

    2012-01-01

    This article considers the cognitive architecture of human meta-reasoning: that is, metacognition concerning one's own reasoning and decision-making. The view we defend is that meta-reasoning is a cobbled-together skill comprising diverse self-management strategies acquired through individual and cultural learning. These approximate the monitoring-and-control functions of a postulated adaptive system for metacognition by recruiting mechanisms that were designed for quite other purposes. PMID:22492753

  18. On quasiclassical approximation in the inverse scattering method

    International Nuclear Information System (INIS)

    Geogdzhaev, V.V.

    1985-01-01

    Using the quasiclassical limits of the Korteweg-de Vries equation and of the nonlinear Schroedinger equation as examples, the quasiclassical limiting variant of the inverse scattering problem method is presented. In the quasiclassical approximation, the inverse scattering problem for the Schroedinger equation is reduced to the classical inverse scattering problem.

  19. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  20. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
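    The simplest instance of such a rank-structured approximation is the truncated SVD of a two-variable function sampled on a tensor grid; hierarchical tensor formats apply this kind of low-rank factorization recursively to groups of variables. The short sketch below, with an arbitrarily chosen smooth test function, only illustrates how quickly the error of the best rank-r approximation can decay.

      import numpy as np

      # Sample a smooth two-variable test function on a tensor grid.
      x = np.linspace(0.0, 1.0, 200)
      y = np.linspace(0.0, 1.0, 200)
      F = np.exp(-np.add.outer(x ** 2, y ** 2)) * np.sin(np.add.outer(x, 2.0 * y))

      # Truncated SVD gives the best rank-r approximation in the Frobenius norm.
      U, s, Vt = np.linalg.svd(F, full_matrices=False)
      for r in (1, 3, 5, 10):
          F_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
          rel_err = np.linalg.norm(F - F_r) / np.linalg.norm(F)
          print(f"rank {r:2d}: relative Frobenius error = {rel_err:.2e}")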

  1. Network Forensics Method Based on Evidence Graph and Vulnerability Reasoning

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2016-11-01

    Full Text Available As the Internet becomes larger in scale, more complex in structure and more diversified in traffic, the number of crimes that utilize computer technologies is also increasing at a phenomenal rate. To react to the increasing number of computer crimes, the field of computer and network forensics has emerged. The general purpose of network forensics is to find malicious users or activities by gathering and dissecting firm evidence about computer crimes, e.g., hacking. However, due to the large volume of Internet traffic, not all the traffic captured and analyzed is valuable for investigation or confirmation. After analyzing some existing network forensics methods to identify common shortcomings, we propose in this paper a new network forensics method that uses a combination of network vulnerability and network evidence graphs. In our proposed method, we use vulnerability evidence and a reasoning algorithm to reconstruct attack scenarios and then backtrack the network packets to find the original evidence. Our proposed method can reconstruct attack scenarios effectively and then identify multi-staged attacks through evidential reasoning. Results of experiments show that the evidence graph constructed using our method is more complete and credible while possessing reasoning capability.
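    The following toy sketch conveys the evidence-graph idea in general terms only; it is not the authors' algorithm, and the hosts, edges and evidence scores are entirely hypothetical. Candidate multi-stage attack scenarios are enumerated as paths from a suspected entry point to a target asset and ranked by combined evidence, after which the packets behind the strongest path would be retrieved.

      import networkx as nx

      # Hypothetical evidence graph: nodes are hosts, directed edges are suspicious
      # flows reconstructed from captured traffic, weighted by an evidence score in (0, 1].
      G = nx.DiGraph()
      for src, dst, score in [
          ("attacker", "web-srv", 0.9),   # exploit attempt against a known vulnerability
          ("web-srv", "app-srv", 0.7),    # lateral movement
          ("app-srv", "db-srv", 0.8),     # access to the target data store
          ("attacker", "mail-srv", 0.2),  # weak, probably unrelated traffic
      ]:
          G.add_edge(src, dst, score=score)

      def strength(path):
          # Combined evidence for a multi-stage scenario (product of edge scores).
          s = 1.0
          for u, v in zip(path, path[1:]):
              s *= G[u][v]["score"]
          return s

      # Enumerate candidate attack scenarios from the suspected entry point to the
      # asset of interest and rank them; the strongest path is then backtracked to
      # collect the supporting packet-level evidence.
      scenarios = sorted(nx.all_simple_paths(G, "attacker", "db-srv"),
                         key=strength, reverse=True)
      for path in scenarios:
          print(" -> ".join(path), f"(evidence {strength(path):.2f})")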

  2. A Case-Based Reasoning Method with Rank Aggregation

    Science.gov (United States)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper addresses a new CBR framework built on the basic principle of rank aggregation. First, ranking methods are put forward in each attribute subspace of the cases: the ordering relation between cases is obtained on each attribute, which yields a ranking matrix. Second, the similar-case retrieval process over the ranking matrix is transformed into a rank aggregation optimization problem, which uses the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of the RA-CBR algorithm is higher than that of Euclidean-distance CBR and Mahalanobis-distance CBR, so we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
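    The sketch below illustrates the retrieval idea in miniature and is not the published RA-CBR implementation: each attribute ranks the stored cases by closeness to the query, the per-attribute rankings are aggregated with a brute-force Kemeny rule (practical only for a handful of cases because of its factorial cost), and the top-ranked case is retrieved. The case base, query and attribute values are made up for illustration.

      import itertools
      import numpy as np

      def attribute_rankings(cases, query):
          # One ranking of case indices per attribute, ordered by |case - query| on that attribute.
          n_cases, n_attrs = cases.shape
          return [[int(i) for i in np.argsort(np.abs(cases[:, j] - query[j]))]
                  for j in range(n_attrs)]

      def kendall_tau_distance(r1, r2):
          # Number of pairwise disagreements between two rankings of the same items.
          pos1 = {item: i for i, item in enumerate(r1)}
          pos2 = {item: i for i, item in enumerate(r2)}
          return sum(1 for a, b in itertools.combinations(pos1, 2)
                     if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

      def kemeny_aggregate(rankings):
          # Brute-force Kemeny-optimal ranking: minimizes the total Kendall-tau distance.
          items = rankings[0]
          best, best_cost = None, float("inf")
          for perm in itertools.permutations(items):
              cost = sum(kendall_tau_distance(list(perm), r) for r in rankings)
              if cost < best_cost:
                  best, best_cost = list(perm), cost
          return best

      # Toy case base (rows = cases, columns = numeric attributes) and a query.
      cases = np.array([[1.0, 5.0, 3.0],
                        [2.0, 3.0, 8.0],
                        [9.0, 4.0, 2.5],
                        [1.5, 4.5, 3.5]])
      query = np.array([1.2, 4.8, 3.2])

      aggregated = kemeny_aggregate(attribute_rankings(cases, query))
      print("aggregated ranking of cases:", aggregated)
      print("retrieved (most similar) case:", cases[aggregated[0]])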

  3. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
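    The closed-form lognormal approximation itself is not reproduced here; the short sketch below only illustrates the two sampling baselines discussed in the abstract on a hypothetical two-cut-set fault tree, TOP = (A AND B) OR C, with lognormally distributed basic-event probabilities. The rare-event expression for the top event and all distribution parameters are assumptions for illustration; the Wilks bound uses the classical result that the maximum of 59 random samples bounds the 95th percentile with at least 95% confidence.

      import numpy as np

      rng = np.random.default_rng(0)

      def sample_top(n):
          # Top-event probability of a hypothetical fault tree TOP = (A AND B) OR C,
          # with lognormal basic-event probabilities (rare-event approximation
          # P(TOP) ~ pA*pB + pC, ignoring the higher-order intersection term).
          pA = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
          pB = rng.lognormal(mean=np.log(2e-3), sigma=0.7, size=n)
          pC = rng.lognormal(mean=np.log(5e-6), sigma=1.0, size=n)
          return pA * pB + pC

      # Full Monte Carlo estimate of the top-event uncertainty distribution.
      top = sample_top(100_000)
      print("mean            :", top.mean())
      print("median          :", np.median(top))
      print("95th percentile :", np.percentile(top, 95))

      # Wilks one-sided 95/95 bound: with 59 samples, the sample maximum is an upper
      # bound on the 95th percentile with at least 95% confidence.
      print("Wilks 95/95 bound (59 samples):", sample_top(59).max())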

  4. Study of the scientific reasoning methods: Identifying the salient reasoning characteristics exhibited by engineers and scientists in an R&D environment

    Science.gov (United States)

    Kuhn, William F.

    At the core of what it means to be a scientist or engineer is the ability to think rationally using scientific reasoning methods. Yet, if asked, scientists and engineers are typically hard pressed to explain what that means. Some may argue that the meaning of scientific reasoning methods is a topic for philosophers and psychologists, but this study holds, and sets out to demonstrate, that the answers lie with the scientists and engineers themselves, for who knows the workings of the scientific reasoning thought process better than they do? This study provides evidence toward the following aims: (a) determine the fundamental characteristics of the cognitive reasoning methods exhibited by engineers/scientists working on R&D projects, (b) sample the engineer/scientist community to determine their views as to the importance, frequency, and ranking of each of these characteristics in benefiting their R&D projects, and (c) make concluding remarks regarding any identified competency gaps in the exhibited or expected cognitive reasoning methods of engineers/scientists working on R&D projects. These aims are driven by three research questions. First, what are the salient characteristics of cognitive reasoning methods exhibited by engineers/scientists in an R&D environment? Second, what do engineers/scientists consider to be the frequency and importance of the salient cognitive reasoning method characteristics? Third, to what extent, if at all, do patent holders and technical fellows differ with regard to their perceptions of the importance and frequency of the salient cognitive reasoning characteristics of engineers/scientists? The methodology and empirical approach utilized are described: (a) a literature search, (b) a Delphi technique composed of seven highly distinguished engineers/scientists, (c) a survey instrument directed to distinguished Technical Fellows, and (d) data collection and analysis. The results provided by the Delphi team answered the first research question. The collaborative effort validated

  5. Enhanced Multistage Homotopy Perturbation Method: Approximate Solutions of Nonlinear Dynamic Systems

    Directory of Open Access Journals (Sweden)

    Daniel Olvera

    2014-01-01

    Full Text Available We introduce a new approach called the enhanced multistage homotopy perturbation method (EMHPM that is based on the homotopy perturbation method (HPM and the usage of time subintervals to find the approximate solution of differential equations with strong nonlinearities. We also study the convergence of our proposed EMHPM approach based on the value of the control parameter h by following the homotopy analysis method (HAM. At the end of the paper, we compare the derived EMHPM approximate solutions of some nonlinear physical systems with their corresponding numerical integration solutions obtained by using the classical fourth order Runge-Kutta method via the amplitude-time response curves.

  6. Quantal density functional theory II. Approximation methods and applications

    International Nuclear Information System (INIS)

    Sahni, Viraht

    2010-01-01

    This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximation methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)

  7. Parallel iterative solvers and preconditioners using approximate hierarchical methods

    Energy Technology Data Exchange (ETDEWEB)

    Grama, A.; Kumar, V.; Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    In this paper, we report results on the performance, convergence, and accuracy of a parallel GMRES solver for Boundary Element Methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders of magnitude in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on a truncated Green's function. Experimental results on a 256 processor Cray T3D are presented.

  8. Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Benzi, M. [Universita di Bologna (Italy); Tuma, M. [Inst. of Computer Sciences, Prague (Czech Republic)

    1996-12-31

    A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
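    The factorized sparse approximate inverse of the paper is not reconstructed here; the minimal sketch below only illustrates the same usage pattern with the crudest possible sparse approximate inverse, M = diag(A)^(-1), supplied to SciPy's GMRES through a LinearOperator. The test matrix and its parameters are arbitrary; a factorized approximate inverse would give a far better preconditioner.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      rng = np.random.default_rng(1)

      # Nonsymmetric, diagonally dominant sparse test matrix with a widely varying diagonal.
      n = 500
      diag = rng.uniform(1.0, 100.0, n)
      A = sp.diags([-1.3 * np.ones(n - 1), diag, -0.7 * np.ones(n - 1)],
                   [-1, 0, 1], format="csr")
      b = np.ones(n)

      def gmres_iterations(M=None):
          count = [0]
          callback = lambda pr_norm: count.__setitem__(0, count[0] + 1)
          x, info = spla.gmres(A, b, M=M, restart=50, callback=callback)
          return count[0], np.linalg.norm(A @ x - b)

      # Crudest sparse approximate inverse, M = diag(A)^(-1), applied as a LinearOperator.
      M_approx_inv = spla.LinearOperator((n, n), matvec=lambda r: r / diag)

      print("unpreconditioned GMRES      (iterations, residual):", gmres_iterations())
      print("approximate-inverse precond (iterations, residual):", gmres_iterations(M_approx_inv))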

  9. Development of Probabilistic and Possebilistic Approaches to Approximate Reasoning and Its Applications

    Science.gov (United States)

    1989-10-31

    ... AI (circumscription, non-monotonic reasoning, and default reasoning), our approach is based on fuzzy logic and, more specifically, on the theory of

  10. Variational, projection methods and Pade approximants in scattering theory

    International Nuclear Information System (INIS)

    Turchetti, G.

    1980-12-01

    Several aspects of scattering theory are discussed in a perturbative scheme, in which the Pade approximant method plays an important role. Soliton solutions are also discussed in this same scheme. (L.C.) [pt]

  11. Reasons, methods used and decision-making for pregnancy ...

    African Journals Online (AJOL)

    Objective: To explore the methods, reasons and decision-making process for termination of pregnancy among adolescents and older women, in Mulago hospital, Kampala, Uganda. Design: Comparative study. Subjects: Nine hundred and forty two women seeking postabortion care, of which 333 had induced abortion (of ...

  12. On approximate reasoning and minimal models for the development of robust outdoor vehicle navigation schemes

    International Nuclear Information System (INIS)

    Pin, F.G.

    1993-01-01

    Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecision and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper will first review the basic concepts of this approach and will discuss its pragmatic feasibility when embodied in a behaviorist framework. The second principle which is proposed deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.

  13. On approximate reasoning and minimal models for the development of robust outdoor vehicle navigation schemes

    Energy Technology Data Exchange (ETDEWEB)

    Pin, F.G.

    1993-11-01

    Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecision and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper will first review the basic concepts of this approach and will discuss its pragmatic feasibility when embodied in a behaviorist framework. The second principle which is proposed deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.
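    To make the second principle concrete, the following minimal and entirely generic fuzzy-inference sketch (not the navigation system described in the paper) shows how an imprecise range reading can drive a steering decision: triangular membership functions fuzzify the reading, two hypothetical rules map "obstacle near/far" to a steering correction, and centroid defuzzification produces the crisp command. All membership functions, rules and numbers are illustrative assumptions.

      import numpy as np

      def tri(x, a, b, c):
          # Triangular membership function with support [a, c] and peak at b.
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def steering_command(sonar_m):
          # Fuzzify the imprecise range reading (memberships of NEAR and FAR).
          mu_near = tri(sonar_m, 0.0, 0.5, 2.0)
          mu_far = tri(sonar_m, 1.0, 3.0, 6.0)

          # Output universe: steering correction in degrees, with LARGE and SMALL turn sets.
          turn = np.linspace(0.0, 45.0, 451)
          large = tri(turn, 20.0, 35.0, 45.0)
          small = tri(turn, 0.0, 5.0, 15.0)

          # Mamdani-style inference for two hypothetical rules:
          #   IF obstacle is NEAR THEN turn is LARGE;  IF obstacle is FAR THEN turn is SMALL.
          aggregated = np.maximum(np.minimum(mu_near, large), np.minimum(mu_far, small))

          # Centroid defuzzification gives the crisp steering command.
          if aggregated.sum() == 0.0:
              return 0.0
          return float((turn * aggregated).sum() / aggregated.sum())

      for d in (0.4, 1.5, 4.0):
          print(f"sonar reading {d:.1f} m -> steer {steering_command(d):.1f} deg")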

  14. Epistemological Development and Judgments and Reasoning about Teaching Methods

    Science.gov (United States)

    Spence, Sarah; Helwig, Charles C.

    2013-01-01

    Children's, adolescents', and adults' (N = 96; 7-8, 10-11, and 13-14-year-olds and university students) epistemological development and its relation to judgments and reasoning about teaching methods was examined. The domain (scientific or moral), nature of the topic (controversial or noncontroversial), and teaching method (direct instruction by…

  15. Adomian Decomposition Method for Transient Neutron Transport with Pomraning-Eddington Approximation

    International Nuclear Information System (INIS)

    Hendi, A.A.; Abulwafa, E.E.

    2008-01-01

    The time-dependent neutron transport problem is approximated using the Pomraning-Eddington approximation. This is a two-flux approximation that expands the angular intensity in terms of the energy density and the net flux. This approximation converts the integro-differential Boltzmann equation into two first-order differential equations. The Adomian decomposition method, which is used to solve linear or nonlinear differential equations, is applied to solve the resultant two differential equations to find the neutron energy density and net flux, which can be used to calculate the neutron angular intensity through the Pomraning-Eddington approximation.

  16. Introduction to methods of approximation in physics and astronomy

    CERN Document Server

    van Putten, Maurice H P M

    2017-01-01

    This textbook provides students with a solid introduction to the techniques of approximation commonly used in data analysis across physics and astronomy. The choice of methods included is based on their usefulness and educational value, their applicability to a broad range of problems and their utility in highlighting key mathematical concepts. Modern astronomy reveals an evolving universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data-analysis. The book is organized to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal dete...

  17. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic; Nouy, Anthony

    2017-01-01

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.

  18. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic

    2017-06-30

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.

  19. An approximate method to calculate ionization of LTE and non-LTE plasma

    International Nuclear Information System (INIS)

    Zhang Jun; Gu Peijun

    1987-01-01

    When matter, especially a high-Z element, is heated to high temperature, it will be ionized many times. The degree of ionization has a strong effect on many plasma properties, so an approximate method to calculate the mean ionization degree is needed for solving many practical problems. An analytical expression which is convenient for approximate numerical calculation is given by fitting it to the scaling law and numerical results of the ionization potential of the Thomas-Fermi statistical model. In the LTE case, the ionization degree of Au calculated by using the approximate method is in agreement with that of the average ion model. By extending the approximate method to the non-LTE case, the ionization degree of Au is similarly calculated according to the Corona model and the Collision-Radiation model (C-R). The results of the Corona model agree with the published data quite well, while the results of C-R approach those of the Corona model as the density is reduced and approach those of LTE as the density is increased. Finally, all the approximately calculated results for the ionization degree of Au, and the comparison between them, are given in figures and tables.

  20. Optimization in engineering sciences approximate and metaheuristic methods

    CERN Document Server

    Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader

    2014-01-01

    The purpose of this book is to present the main metaheuristics and approximate and stochastic methods for optimization of complex systems in Engineering Sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), which is funded by the EU's FP7 Research Potential program and has been developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references) this book allows the reader to explore various methods o

  1. Approximation of the Doppler broadening function by Frobenius method

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C.

    2005-01-01

    An analytical approximation of the Doppler broadening function ψ(x,ξ) is proposed. This approximation is based on the solution of the differential equation for ψ(x,ξ) using the Frobenius method and the variation of parameters. The analytical form derived for ψ(x,ξ) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances, mainly for the calculation of multigroup parameters and self-protection factors of the resonances, the latter being used to correct microscopic cross-section measurements made by the activation technique. (author)
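    The analytical Frobenius-based form from the paper is not reproduced here. As a point of reference, the sketch below evaluates the Doppler broadening function directly by numerical quadrature, assuming the common Bethe-Placzek definition ψ(x,ξ) = (ξ/(2√π)) ∫ exp(-ξ²(x-y)²/4) / (1+y²) dy; an analytical approximation such as the one proposed would be validated against values of this kind. The sample points are arbitrary.

      import numpy as np
      from scipy.integrate import quad

      def psi(x, xi):
          # Doppler broadening function by direct quadrature (reference values).
          f = lambda y: np.exp(-0.25 * xi ** 2 * (x - y) ** 2) / (1.0 + y ** 2)
          # Break the range at the Lorentzian peak (y = 0) and the Gaussian peak (y = x)
          # so that the adaptive quadrature resolves both narrow features.
          pts = sorted({0.0, float(x)})
          limits = [-np.inf] + pts + [np.inf]
          total = sum(quad(f, a, b)[0] for a, b in zip(limits, limits[1:]))
          return xi / (2.0 * np.sqrt(np.pi)) * total

      # Limiting check: as xi grows (no Doppler broadening), psi(x, xi) -> 1/(1 + x^2),
      # the natural Breit-Wigner line shape.
      for xi in (0.05, 0.5, 5.0, 50.0):
          print(f"xi = {xi:5.2f}:  psi(0, xi) = {psi(0.0, xi):.6f}   "
                f"psi(10, xi) = {psi(10.0, xi):.6e}")
      print("natural line shape at x = 10:", 1.0 / (1.0 + 10.0 ** 2))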

  2. Analytical Evaluation of Beam Deformation Problem Using Approximate Methods

    DEFF Research Database (Denmark)

    Barari, Amin; Kimiaeifar, A.; Domairry, G.

    2010-01-01

    The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of the governing differential equations used in engineering problems are simplified, and this process produces noise in the obtained answers. This paper deals with the solution of the second-order differential equation governing beam deformation using four analytical approximate methods, namely the Perturbation method, the Homotopy Perturbation Method (HPM), the Homotopy Analysis Method (HAM) and the Variational Iteration Method (VIM). The comparisons of the results reveal that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations.

  3. Evaluation of Fresnel's corrections to the eikonal approximation by the separabilization method

    International Nuclear Information System (INIS)

    Musakhanov, M.M.; Zubarev, A.L.

    1975-01-01

    A method of separabilization of the potential over approximate solutions of the Schroedinger equation, leading to Schwinger's variational principle for the scattering amplitude, is suggested. The results are applied to the calculation of the Fresnel corrections to the Glauber approximation.

  4. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  5. An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2014-01-01

    Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem in this paper is similar to that of the proximal bundle method, but the difference is that we choose an approximate subgradient and function value to construct an approximate cutting-plane model for solving the above-mentioned problem. An important advantage of the approximate cutting-plane model for the objective function is that it is more stable than the cutting-plane model. In addition, the approximate proximal bundle method algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.

  6. An approximate methods approach to probabilistic structural analysis

    Science.gov (United States)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.

  7. Adaptive ACMS: A robust localized Approximated Component Mode Synthesis Method

    OpenAIRE

    Madureira, Alexandre L.; Sarkis, Marcus

    2017-01-01

    We consider finite element methods of multiscale type to approximate solutions for two-dimensional symmetric elliptic partial differential equations with heterogeneous $L^\infty$ coefficients. The methods are of Galerkin type and follow the Variational Multiscale and Localized Orthogonal Decomposition (LOD) approaches in the sense that they decouple spaces into multiscale and fine subspaces. In a first method, the multiscale basis functions are obtained by mapping coarse basis functions, based...

  8. Comparison of Two-Block Decomposition Method and Chebyshev Rational Approximation Method for Depletion Calculation

    International Nuclear Information System (INIS)

    Lee, Yoon Hee; Cho, Nam Zin

    2016-01-01

    The code gives inaccurate results for some nuclides in the evaluation of source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It showed good accuracy in the detailed burnup chain calculation if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in depletion calculations in terms of accuracy and computing time. In the two-block decomposition method, according to the magnitude of the effective decay constant, the system of Bateman equations is decomposed into short- and long-lived blocks. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.

  9. Comparison of Two-Block Decomposition Method and Chebyshev Rational Approximation Method for Depletion Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hee; Cho, Nam Zin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The code gives inaccurate results for some nuclides in the evaluation of source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It showed good accuracy in the detailed burnup chain calculation if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in depletion calculations in terms of accuracy and computing time. In the two-block decomposition method, according to the magnitude of the effective decay constant, the system of Bateman equations is decomposed into short- and long-lived blocks. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.
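    Neither the TBD decomposition nor the published CRAM coefficients are reproduced here. The sketch below only sets up a small Bateman (burnup) matrix for a hypothetical three-nuclide decay chain and evaluates N(t) = exp(At) N(0) with a dense matrix exponential; this is exactly the quantity that CRAM replaces with a fixed-order rational approximation of exp(z) on the negative real axis, so that only a few sparse linear solves are needed per time step. The decay constants and densities are made up.

      import numpy as np
      from scipy.linalg import expm

      # Hypothetical three-nuclide chain N1 -> N2 -> N3 (stable); decay constants in 1/s.
      lam1, lam2 = 1.0e-2, 3.0e-4
      A = np.array([[-lam1,   0.0, 0.0],
                    [ lam1, -lam2, 0.0],
                    [ 0.0,   lam2, 0.0]])
      N0 = np.array([1.0e20, 0.0, 0.0])          # initial atom densities

      # Reference solution of the Bateman equations dN/dt = A N:  N(t) = expm(A t) N0.
      for t in (1.0e2, 1.0e3, 1.0e4):
          N = expm(A * t) @ N0
          print(f"t = {t:8.1e} s   N = {N}")

      # Schematically, CRAM replaces expm(A t) above by r(A t), where
      # r(z) = a0 + sum_k a_k / (z - theta_k) is a fixed-order rational approximation
      # of exp(z) for z <= 0, so each term costs only one sparse linear solve.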

  10. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  11. An approximate method for lateral stability analysis of wall-frame ...

    Indian Academy of Sciences (India)

    Initially the stability differential equation of this equivalent sandwich beam is ... buckling loads of coupled shear-wall structures using continuous medium ... In this study, an approximate method based on continuum system model and transfer.

  12. Low rank approximation method for efficient Green's function calculation of dissipative quantum transport

    Science.gov (United States)

    Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann

    2013-06-01

    In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximate algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a speed-up factor as high as 150 is observed). A non-recursive solution of the inelastic NEGF transport equations for a 1000 nm long resistor on standard hardware illustrates nicely the capability of this new method.

  13. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    International Nuclear Information System (INIS)

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-01-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm

  14. Effect of flux discontinuity on spatial approximations for discrete ordinates methods

    International Nuclear Information System (INIS)

    Duo, J.I.; Azmy, Y.Y.

    2005-01-01

    This work presents advances on error analysis of the spatial approximation of the discrete ordinates method for solving the neutron transport equation. Error norms for different non-collided flux problems over a two dimensional pure absorber medium are evaluated using three numerical methods. The problems are characterized by the incoming flux boundary conditions to obtain solutions with different level of differentiability. The three methods considered are the Diamond Difference (DD) method, the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic type (AHOT-C). The last two methods are employed in constant, linear and quadratic orders of spatial approximation. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, then the L_1, L_2, and L_∞ error norms are calculated. The results of this study demonstrate that the level of differentiability of the exact solution profoundly affects the rate of convergence of the numerical methods' solutions. Furthermore, in the case of discontinuous exact flux the methods fail to converge in the maximum error norm, or in the pointwise sense, in accordance with previous local error analysis. (authors)

  15. Laplace transform homotopy perturbation method for the approximation of variational problems.

    Science.gov (United States)

    Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R

    2016-01-01

    This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As a case study we will solve four ordinary differential equations, and we will show that the proposed solutions have good accuracy; in one case we will even obtain an exact solution. In the sequel, we will see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.

  16. Rational function approximation method for discrete ordinates problems in slab geometry

    International Nuclear Information System (INIS)

    Leal, Andre Luiz do C.; Barros, Ricardo C.

    2009-01-01

    In this work we use rational function approaches to obtain the transfer functions that appear in the spectral Green's function (SGF) auxiliary equations for one-speed isotropic scattering SN equations in one-dimensional Cartesian geometry. For this task we use the computation of Pade approximants and compare the results with those of the standard SGF method applied to deep penetration problems in homogeneous domains. This work is a preliminary investigation of a new proposal for handling the leakage terms that appear in the two transverse-integrated one-dimensional SN equations in the exponential SGF method (SGF-ExpN). Numerical results are presented to illustrate the accuracy of the rational function approximation. (author)

  17. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  18. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    Science.gov (United States)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  19. Approximate Analytic Solutions for the Two-Phase Stefan Problem Using the Adomian Decomposition Method

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Qin

    2014-01-01

    Full Text Available An Adomian decomposition method (ADM is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation for elimination of the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into other equations and boundary conditions of the model generates some function identities with the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example of the solution shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions for the Stefan and the inverse Stefan problems.
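    The Stefan-problem solution itself is not reconstructed here. The minimal symbolic sketch below applies the same Adomian decomposition machinery to a simple nonlinear test problem, u' = 1 + u^2 with u(0) = 0 (exact solution tan t), and shows how the Adomian polynomials of the nonlinearity generate successive corrections whose partial sums reproduce the Taylor series of tan t. The test equation and the truncation order are chosen purely for illustration.

      import sympy as sp

      t, s, lam = sp.symbols("t s lam")

      def adomian_polynomial(N, u_terms, n):
          # n-th Adomian polynomial A_n of the nonlinearity N(u) for the series u = sum_k u_k.
          series = sum(lam ** k * uk for k, uk in enumerate(u_terms))
          return sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n)

      # Test problem: u' = 1 + u^2, u(0) = 0  <=>  u(t) = t + Integral_0^t u(s)^2 ds.
      N = lambda u: u ** 2
      u = [t]                                       # u0 comes from the source term and the IC
      for n in range(5):
          A_n = sp.expand(adomian_polynomial(N, u, n))
          u.append(sp.integrate(A_n.subs(t, s), (s, 0, t)))   # u_{n+1} = Integral_0^t A_n ds

      print("ADM partial sum:", sp.expand(sum(u)))
      print("tan(t) series  :", sp.series(sp.tan(t), t, 0, 12).removeO())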

  20. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements, along with a modernization based on the expansion of the main structures of the core algorithm with respect to a parameter, is suggested. The idea of the method is based on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that special problem is as follows: from the end point of the phase trajectory, a path is sought that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function is equal to zero; otherwise the value of the Bellman function is greater than zero. For this special problem the Bellman equation is considered, a support approximation is selected, and the Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation gives nothing, because the Bellman function and its expansion coefficients are zero. A special trick is therefore used: an additional variable is introduced, which characterizes the degree of deviation of the system from the initial state, thus yielding an expanded original chain. For the new variable a nonzero initial condition is selected; the resulting trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation to be carried out. As a result of these procedures, algorithms of successive improvement are designed. Relaxation conditions for the algorithms and the corresponding necessary conditions of optimality are also obtained.

  1. Generation method of educational materials using qualitative reasoning

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Yamada, Shigeo; Fujisawa, Noriyoshi.

    1992-01-01

    Central Research Institute of Electric Power Industry has developed a nuclear power plant educational system in which educational materials for several events are included. The system effectively teaches operators by tailoring the event explanations to their knowledge levels of understanding. The preparation of the educational materials, however, is laborious and this becomes one of the problems in the practical use of the system. Discussed in the present paper is a basic explanation generation method using qualitative reasoning. This has been developed to solve the problem. Qualitative equations describing a recirculation pumps trip were transformed into production rules. These were stored in the knowledge base of an event explanation generation system together with explanation sentences. When an operator selects a certain variable's time-interval in which he wants to know the reasons for a variable change, the inference engine searches for the rule which satisfies both the qualitative value and qualitative differential value concerned with this time-interval. Then the event explanation generation section provides explanations by combining the explanation sentences attached to the rules. This paper demonstrates that it is possible to apply qualitative reasoning to such complex reactor systems, and also that explanations can be generated using the simulation results from a transient analysis code. (author)
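    The rules and sentences below are hypothetical stand-ins, not the knowledge base of the system described above; the sketch only illustrates the matching step in which the inference engine looks up the rule whose qualitative value and qualitative derivative agree with the operator's selected time interval and returns the attached explanation.

      # Hypothetical production rules: (variable, qualitative value, qualitative derivative)
      # -> explanation sentence. In the real system these rules are derived from qualitative
      # equations describing the plant transient (e.g., a recirculation pump trip).
      RULES = [
          {"var": "core_flow", "value": "low", "trend": "decreasing",
           "why": "Core flow is falling because the recirculation pumps have tripped."},
          {"var": "void_fraction", "value": "high", "trend": "increasing",
           "why": "Void fraction rises as reduced flow lets more steam form in the core."},
          {"var": "reactor_power", "value": "medium", "trend": "decreasing",
           "why": "Power decreases because the growing voids add negative reactivity."},
      ]

      def explain(variable, value, trend):
          # Forward-chaining lookup: find the rule matching the operator's selection
          # (variable, qualitative value, qualitative derivative) and return its explanation.
          for rule in RULES:
              if (rule["var"], rule["value"], rule["trend"]) == (variable, value, trend):
                  return rule["why"]
          return "No explanation is available for this time interval."

      # The operator selects a time interval in which reactor power is medium and falling.
      print(explain("reactor_power", "medium", "decreasing"))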

  2. Content-related interactions and methods of reasoning within self-initiated organic chemistry study groups

    Science.gov (United States)

    Christian, Karen Jeanne

    2011-12-01

    Students often use study groups to prepare for class or exams; yet to date, we know very little about how these groups actually function. This study looked at the ways in which undergraduate organic chemistry students prepared for exams through self-initiated study groups. We sought to characterize the methods of social regulation, levels of content processing, and types of reasoning processes used by students within their groups. Our analysis showed that groups engaged in predominantly three types of interactions when discussing chemistry content: co-construction, teaching, and tutoring. Although each group engaged in each of these types of interactions at some point, their prevalence varied between groups and group members. Our analysis suggests that the types of interactions that were most common depended on the relative content knowledge of the group members as well as on the difficulty of the tasks in which they were engaged. Additionally, we were interested in characterizing the reasoning methods used by students within their study groups. We found that students used a combination of three content-relevant methods of reasoning: model-based reasoning, case-based reasoning, or rule-based reasoning, in conjunction with one chemically-irrelevant method of reasoning: symbol-based reasoning. The most common way for groups to reason was to use rules, whereas the least common way was for students to work from a model. In general, student reasoning correlated strongly to the subject matter to which students were paying attention, and was only weakly related to student interactions. Overall, results from this study may help instructors to construct appropriate tasks to guide what and how students study outside of the classroom. We found that students had a decidedly strategic approach in their study groups, relying heavily on material provided by their instructors, and using the reasoning strategies that resulted in the lowest levels of content processing. We suggest

  3. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J; Shin, H S; Song, T Y; Park, W S [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    Our previous numerical results in computing point kinetics equations show the possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by retaining corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from the point kinetics equations. Exploiting this approximation, we found that the first order approximation works for estimating variations in the time to reach peak power, because of their linear dependence on the sensitivity parameter, and that there are errors in estimating the peak power in the first order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  4. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Shin, H. S.; Song, T. Y.; Park, W. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    Our previous numerical results in computing point kinetics equations show the possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by retaining corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from the point kinetics equations. Exploiting this approximation, we found that the first order approximation works for estimating variations in the time to reach peak power, because of their linear dependence on the sensitivity parameter, and that there are errors in estimating the peak power in the first order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  5. Linear source approximation scheme for method of characteristics

    International Nuclear Information System (INIS)

    Tang Chuntao

    2011-01-01

    The method of characteristics (MOC) for solving the neutron transport equation on unstructured meshes has become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with the flat source approximation, the so-called step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme, together with a corresponding modification for negative source distributions, is proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)

  6. Higher accuracy analytical approximations to a nonlinear oscillator with discontinuity by He's homotopy perturbation method

    International Nuclear Information System (INIS)

    Belendez, A.; Hernandez, A.; Belendez, T.; Neipp, C.; Marquez, A.

    2008-01-01

    He's homotopy perturbation method is used to calculate higher-order approximate periodic solutions of a nonlinear oscillator with discontinuity for which the elastic force term is proportional to sgn(x). We find He's homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. Only one iteration leads to high accuracy of the solutions with a maximal relative error for the approximate period of less than 1.56% for all values of oscillation amplitude, while this relative error is 0.30% for the second iteration and as low as 0.057% when the third-order approximation is considered. Comparison of the result obtained using this method with those obtained by different harmonic balance methods reveals that He's homotopy perturbation method is very effective and convenient
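
    As a worked check of the first-iteration accuracy quoted above, the short derivation below assumes the standard antisymmetric constant-force oscillator x'' + sgn(x) = 0 with amplitude A (the usual normalization for this problem; the paper's exact setup may differ) and compares the exact period with the leading single-harmonic approximation.

```latex
% Oscillator with elastic force proportional to sgn(x), assumed normalized as
\[
  \ddot{x} + \operatorname{sgn}(x) = 0, \qquad x(0) = A,\ \dot{x}(0) = 0 .
\]
% Exact period: the motion from x = A to x = 0 is uniformly decelerated, so a
% quarter period equals sqrt(2A):
\[
  T_{\mathrm{ex}} = 4\sqrt{2A}, \qquad
  \omega_{\mathrm{ex}} = \frac{2\pi}{T_{\mathrm{ex}}} = \frac{\pi}{2\sqrt{2A}} .
\]
% Leading approximation: with x_0(t) = A cos(omega t), the first Fourier component
% of sgn(x_0) is (4/pi) cos(omega t), giving
\[
  \omega_1 = \sqrt{\frac{4}{\pi A}}, \qquad
  \frac{T_1 - T_{\mathrm{ex}}}{T_{\mathrm{ex}}}
  = \frac{\omega_{\mathrm{ex}}}{\omega_1} - 1
  = \frac{\pi^{3/2}}{4\sqrt{2}} - 1 \approx -0.0156 ,
\]
% i.e. a relative period error of about 1.56% for every amplitude A, consistent
% with the first-iteration figure quoted in the abstract.
```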

  7. Comparative analysis of approximations used in the methods of Faddeev equations and hyperspherical harmonics

    International Nuclear Information System (INIS)

    Mukhtarova, M.I.

    1988-01-01

    A comparative analysis was conducted of the approximations used in the Faddeev equation and hyperspherical harmonics (MHH) methods. The differences between the solutions of these methods, related to the introduction of an approximation retaining a sufficient set of partial states in the three-nucleon problem, are shown. The MHH method is preferred. It is shown that the advantage of the MHH method can be manifested clearly when studying new classes of interactions: three-particle, Δ-isobar, nonlocal and other interactions

  8. Development of approximate shielding calculation method for high energy cosmic radiation on LEO satellites

    International Nuclear Information System (INIS)

    Sin, M. W.; Kim, M. H.

    2002-01-01

    To calculate the total dose effect on semiconductor devices in a satellite efficiently over the period of a space mission, two approximate calculation models for cosmic radiation shielding were proposed: a sectoring method and a chord-length distribution method. When an approximate method was applied in this study, the complex structure of the satellite was described as multiple 1-dimensional slabs, structural materials were converted to a reference material (aluminum), and a pre-calculated dose-depth conversion function was introduced to simplify the calculation process. Verification calculations were performed for the orbit and structural geometry of KITSAT-1 and compared with detailed 3-dimensional calculation results and experimental values. The calculation results from the approximate methods were conservative, with acceptable error. However, the results of the satellite mission simulation underestimated the total dose rate compared with the experimental values
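
    The sectoring model described in this record lends itself to a compact illustration: dose contributions from solid-angle sectors, each reduced to an aluminum-equivalent 1-D slab stack and evaluated with a dose-depth curve. The sketch below follows that recipe with entirely hypothetical layer data and a placeholder dose-depth function; the real KITSAT-1 geometry and mission curves are not reproduced.

```python
# Sketch of a sectoring-method dose estimate: each sector's wall stack is converted
# to an aluminum-equivalent thickness and weighted by its solid-angle fraction.
import math

RHO_AL = 2.70  # g/cm^3

def aluminum_equivalent(thickness_cm, density):
    """Convert a structural layer to an aluminum-equivalent thickness (areal-density basis)."""
    return thickness_cm * density / RHO_AL

def dose_depth(t_al_cm):
    """Placeholder dose-depth conversion function D(t): total mission dose [rad]
    behind t cm of aluminum (not real orbit data)."""
    return 1.0e4 * math.exp(-3.0 * t_al_cm) + 50.0

def sectoring_dose(sectors):
    """sectors: list of (solid_angle_sr, [(thickness_cm, density_g_cm3), ...])
    seen from the dose point; each sector is treated as a 1-D slab stack."""
    total = 0.0
    for omega, layers in sectors:
        t_al = sum(aluminum_equivalent(t, rho) for t, rho in layers)
        total += (omega / (4.0 * math.pi)) * dose_depth(t_al)
    return total

# Toy example: six equal sectors with different (hypothetical) wall stacks.
w = 4.0 * math.pi / 6.0
sectors = [
    (w, [(0.2, 2.70)]),                # thin aluminum wall
    (w, [(0.2, 2.70), (0.1, 8.90)]),   # wall plus a copper electronics box
    (w, [(0.5, 2.70)]),
    (w, [(0.3, 2.70), (0.2, 1.20)]),   # wall plus a plastic cover
    (w, [(0.2, 2.70)]),
    (w, [(1.0, 2.70)]),                # thick aluminum side
]
print(f"Estimated mission dose at the component: {sectoring_dose(sectors):.1f} rad")
```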

  9. Development of approximate shielding calculation method for high energy cosmic radiation on LEO satellites

    Energy Technology Data Exchange (ETDEWEB)

    Sin, M. W.; Kim, M. H. [Kyunghee Univ., Yongin (Korea, Republic of)

    2002-10-01

    To calculate the total dose effect on semiconductor devices in a satellite efficiently over the period of a space mission, two approximate calculation models for cosmic radiation shielding were proposed: a sectoring method and a chord-length distribution method. When an approximate method was applied in this study, the complex structure of the satellite was described as multiple 1-dimensional slabs, structural materials were converted to a reference material (aluminum), and a pre-calculated dose-depth conversion function was introduced to simplify the calculation process. Verification calculations were performed for the orbit and structural geometry of KITSAT-1 and compared with detailed 3-dimensional calculation results and experimental values. The calculation results from the approximate methods were conservative, with acceptable error. However, the results of the satellite mission simulation underestimated the total dose rate compared with the experimental values.

  10. [Reason for dormancy of Cuscuta chinensis seed and solving method].

    Science.gov (United States)

    Wang, Xuemin; He, Jiaqing; Cai, Jing; Dong, Zhenguo

    2010-02-01

    To study the reason for the deep dormancy of aged Cuscuta chinensis seeds and to find a method to overcome it. Separate and combined treatments were applied in orthogonally designed experiments. The aged seeds absorbed water well; the water and ethanol extracts of the seeds showed an inhibitory effect on the germination capacity of the seeds. The main reason for the deep dormancy of aged C. chinensis seeds is the inhibitors present in the seed. There are two methods to solve the problem: the seeds are immersed in 98% H2SO4 for 2 min followed by treatment with 500 mg·L(-1) GA3 for 60 min, or in 100 mg·L(-1) NaOH for 20 min followed by treatment with 500 mg·L(-1) GA3 for 120 min.

  11. A local adaptive method for the numerical approximation in seismic wave modelling

    Directory of Open Access Journals (Sweden)

    Galuzzi Bruno G.

    2017-12-01

    Full Text Available We propose a new numerical approach for the solution of the 2D acoustic wave equation to model the predicted data in the field of active-source seismic inverse problems. This method consists in using an explicit finite difference technique with an adaptive order of approximation of the spatial derivatives that takes into account the local velocity at the grid nodes. Testing our method to simulate the recorded seismograms in a marine seismic acquisition, we found that the low computational time and the low approximation error of the proposed approach make it suitable in the context of seismic inversion problems.
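
    To make the adaptive-order idea concrete, the sketch below applies node-dependent central-difference stencils to a 1-D wavefield snapshot (the paper treats the 2-D acoustic equation; the velocity model, the order-selection rule based on points per wavelength, and all constants here are illustrative assumptions).

```python
# 1-D illustration of an explicit finite-difference spatial derivative whose order
# of accuracy is chosen node by node from the local velocity (illustrative only).
import numpy as np

# Standard central-difference coefficients for d^2/dx^2, by order of accuracy.
STENCILS = {
    2: np.array([1.0, -2.0, 1.0]),
    4: np.array([-1/12, 4/3, -5/2, 4/3, -1/12]),
    6: np.array([1/90, -3/20, 3/2, -49/18, 3/2, -3/20, 1/90]),
}

def choose_order(v, dx, f_max):
    """Hypothetical rule: fewer grid points per wavelength (slower material)
    -> use a higher-order stencil to limit numerical dispersion."""
    ppw = v / (f_max * dx)
    if ppw >= 16:
        return 2
    if ppw >= 8:
        return 4
    return 6

def adaptive_d2dx2(u, v, dx, f_max):
    """Second spatial derivative of u with a node-dependent stencil order."""
    d2u = np.zeros_like(u)
    for i in range(len(u)):
        c = STENCILS[choose_order(v[i], dx, f_max)]
        half = len(c) // 2
        if i - half < 0 or i + half + 1 > len(u):
            c, half = STENCILS[2], 1              # shortest stencil near the edges
            if i - 1 < 0 or i + 2 > len(u):
                continue                          # skip the outermost nodes
        d2u[i] = np.dot(c, u[i - half:i + half + 1]) / dx**2
    return d2u

x = np.linspace(0.0, 1000.0, 201)               # m, dx = 5 m
u = np.exp(-((x - 500.0) / 50.0) ** 2)          # wavefield snapshot
v = np.where(x < 500.0, 1500.0, 3000.0)         # two-layer velocity model, m/s
print(adaptive_d2dx2(u, v, dx=5.0, f_max=30.0)[95:105])
```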

  12. Comparison of approximate methods for multiple scattering in high-energy collisions. II

    International Nuclear Information System (INIS)

    Nolan, A.M.; Tobocman, W.; Werby, M.F.

    1976-01-01

    The scattering in one dimension of a particle by a target of N like particles in a bound state has been studied. The exact result for the transmission probability has been compared with the predictions of the Glauber theory, the Watson optical potential model, and the adiabatic (or fixed scatterer) approximation. Among the approximate methods, the optical potential model is second best. The Watson method is found to work better when the kinematics suggested by Foldy and Walecka are used rather than those suggested by Watson, that is to say, when the two-body input is evaluated with the nucleon-nucleon reduced mass

  13. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles of this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, in the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory at the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  14. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming; Cheng, Yichen; Song, Qifan; Park, Jincheol; Yang, Ping

    2013-01-01

    large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate

  15. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
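
    The compression step described above can be sketched in a few lines: a randomized SVD gives a low-rank basis for the dictionary, and signal matching is then done in the compressed space. The dictionary below is a synthetic stand-in (simple damped oscillations rather than Bloch-simulated fingerprints), and the polynomial-fitting refinement from the paper is omitted.

```python
# Sketch: compress an MRF-style dictionary with a randomized SVD, then match a
# noisy signal in the compressed space (synthetic data, illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(D, rank, n_oversample=10):
    """Halko-style randomized range finder plus SVD of the small projected matrix."""
    m, n = D.shape
    omega = rng.standard_normal((n, rank + n_oversample))
    Y = D @ omega                    # sample the range of D
    Q, _ = np.linalg.qr(Y)           # orthonormal basis of the sampled range
    B = Q.T @ D                      # small (rank + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Synthetic "dictionary": each column is a time course for one (T1, T2) pair;
# damped oscillations stand in for Bloch-simulated fingerprints.
t = np.linspace(0.0, 5.0, 1000)[:, None]                      # time points
T1 = rng.uniform(0.3, 3.0, 5000)
T2 = rng.uniform(0.02, 0.3, 5000)
D = np.exp(-t / T1) * np.cos(2 * np.pi * t / (10 * T2))       # 1000 x 5000

U, s, Vt = randomized_svd(D, rank=20)
D_c = U.T @ D                        # compressed dictionary, 20 x 5000

# Match a noisy measured signal by inner products in the compressed space.
true_idx = 1234
signal = D[:, true_idx] + 0.05 * rng.standard_normal(len(t))
scores = D_c.T @ (U.T @ signal) / np.linalg.norm(D_c, axis=0)
print("matched entry:", int(np.argmax(scores)), "   true entry:", true_idx)
```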

  16. An approximation method for nonlinear integral equations of Hammerstein type

    International Nuclear Information System (INIS)

    Chidume, C.E.; Moore, C.

    1989-05-01

    The solution of a nonlinear integral equation of Hammerstein type in Hilbert spaces is approximated by means of a fixed point iteration method. Explicit error estimates are given and, in some cases, convergence is shown to be at least as fast as a geometric progression. (author). 25 refs
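
    For concreteness, the generic form of a Hammerstein-type equation and the corresponding fixed point (Picard) iteration are written out below; the notation is a plain assumption, since the record states its Hilbert-space setting only abstractly.

```latex
% Hammerstein-type integral equation, written abstractly as u + KFu = h with
% (Fu)(y) = f(y, u(y)) and (Kv)(x) = \int_\Omega k(x,y) v(y) dy :
\[
  u(x) + \int_{\Omega} k(x,y)\, f\bigl(y, u(y)\bigr)\, dy = h(x) .
\]
% Fixed point iteration for u = h - KFu :
\[
  u_{n+1} = h - K F u_n , \qquad n = 0, 1, 2, \dots
\]
% If KF is a contraction with constant q < 1 on a suitable ball, the Banach
% fixed-point estimate gives geometric convergence,
\[
  \| u_n - u^{*} \| \;\le\; \frac{q^{\,n}}{1-q}\, \| u_1 - u_0 \| ,
\]
% which is the "at least as fast as a geometric progression" rate noted above.
```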

  17. An Approximate Redistributed Proximal Bundle Method with Inexact Data for Minimizing Nonsmooth Nonconvex Functions

    Directory of Open Access Journals (Sweden)

    Jie Shen

    2015-01-01

    Full Text Available We describe an extension of the redistributed technique from the classical proximal bundle method to the inexact situation for minimizing nonsmooth nonconvex functions. The cutting-plane model we construct is not an approximation to the whole nonconvex function, but to a local convexification of the approximate objective function, and this local convexification is modified dynamically in order to always yield nonnegative linearization errors. Since we employ only approximate function values and approximate subgradients, the theoretical convergence analysis shows that an approximate stationary point, or some double approximate stationary point, can be obtained under mild conditions.

  18. Higher order analytical approximate solutions to the nonlinear pendulum by He's homotopy method

    International Nuclear Information System (INIS)

    Belendez, A; Pascual, C; Alvarez, M L; Mendez, D I; Yebra, M S; Hernandez, A

    2009-01-01

    A modified He's homotopy perturbation method is used to calculate the periodic solutions of a nonlinear pendulum. The method has been modified by truncating the infinite series corresponding to the first-order approximate solution and substituting a finite number of terms in the second-order linear differential equation. As can be seen, the modified homotopy perturbation method works very well for high values of the initial amplitude. Excellent agreement of the analytical approximate period with the exact period has been demonstrated not only for small but also for large amplitudes A (the relative error is less than 1% for A < 152 deg.). Comparison of the result obtained using this method with the exact ones reveals that this modified method is very effective and convenient.

  19. Approximation by rational functions as processing method, analysis and transformation of neutron data

    International Nuclear Information System (INIS)

    Gaj, E.V.; Badikov, S.A.; Gusejnov, M.A.; Rabotnov, N.S.

    1988-01-01

    Possible applications of rational functions in the analysis of neutron cross sections, angular distributions and the generation of neutron constants are described. Results of investigations in this direction obtained since the preceding conference in Kiev are presented: the method of simultaneous treatment of several cross sections for one compound nucleus in the resonance range; the use of the Pade approximation for approximating the angular distributions of elastically scattered neutrons; the derivation of subgroup constants on the basis of a rational approximation of the functional dependence of the cross section on the dilution cross section; and the first experience in approximating functions of two variables

  20. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (the modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)

  1. Proportional Reasoning and the Visually Impaired

    Science.gov (United States)

    Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…

  2. Approximate solution of the transport equation by methods of Galerkin type

    International Nuclear Information System (INIS)

    Pitkaranta, J.

    1977-01-01

    Questions of the existence, uniqueness, and convergence of approximate solutions of transport equations by methods of the Galerkin type (where trial and weighting functions are the same) are discussed. The results presented do not exclude the infinite-dimensional case. Two strategies can be followed in the variational approximation of the transport operator: one proceeds from the original form of the transport equation, while the other is based on the partially symmetrized equation. Both principles are discussed in this paper. The transport equation is assumed in a discretized multigroup form

  3. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    Science.gov (United States)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
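
    The CHO statistic itself has a compact standard form; the sketch below evaluates it from a mean difference image, an image covariance and a small bank of frequency channels. The channel choice, covariance and "lesion" are toy placeholders; the record's actual contribution (theoretical approximations to the MAP mean and covariance) is simply represented here by user-supplied mu0, mu1 and K.

```python
# Standard channelized Hotelling observer detectability on toy image statistics.
import numpy as np

def dog_channels(n, n_channels=3, sigma0=2.0, ratio=1.67):
    """Difference-of-Gaussian frequency channels on an n x n image (a common,
    but here merely illustrative, channel choice)."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    r2 = x**2 + y**2
    chans = []
    for j in range(n_channels):
        s1, s2 = sigma0 * ratio**j, sigma0 * ratio**(j + 1)
        g = np.exp(-r2 / (2 * s2**2)) - np.exp(-r2 / (2 * s1**2))
        chans.append(g.ravel() / np.linalg.norm(g))
    return np.array(chans)                      # (n_channels, n*n)

def cho_snr2(mu0, mu1, K, T):
    """Squared CHO detectability: (T d)^T (T K T^T)^{-1} (T d), with d = mu1 - mu0."""
    d = (mu1 - mu0).ravel()
    v = T @ d
    S = T @ K @ T.T
    return float(v @ np.linalg.solve(S, v))

n = 32
mu0 = np.zeros((n, n))                                      # signal-absent mean
mu1 = mu0.copy()
mu1[n // 2 - 2:n // 2 + 2, n // 2 - 2:n // 2 + 2] += 0.5    # small central "lesion"
K = 0.1 * np.eye(n * n)                                     # toy pixel covariance
T = dog_channels(n)
print(f"CHO SNR = {np.sqrt(cho_snr2(mu0, mu1, K, T)):.2f}")
```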

  4. Calculating Resonance Positions and Widths Using the Siegert Approximation Method

    Science.gov (United States)

    Rapedius, Kevin

    2011-01-01

    Here, we present complex resonance states (or Siegert states) that describe the tunnelling decay of a trapped quantum particle from an intuitive point of view that naturally leads to the easily applicable Siegert approximation method. This can be used for analytical and numerical calculations of complex resonances of both the linear and nonlinear…

  5. Modified method of perturbed stationary states. II. Semiclassical and low-velocity quantal approximations

    International Nuclear Information System (INIS)

    Green, T.A.

    1978-10-01

    For one-electron heteropolar systems, the wave-theoretic Lagrangian of Paper I [2] is simplified in two distinct approximations. The first is semiclassical; the second is quantal, for velocities below those for which the semiclassical treatment is reliable. For each approximation, unitarity and detailed balancing are discussed. Then, the variational method as described by Demkov is used to determine the coupled equations for the radial functions and the Euler-Lagrange equations for the translational factors which are part of the theory. Specific semiclassical formulae for the translational factors are given in a many-state approximation. Low-velocity quantal formulae are obtained in a one-state approximation. The one-state results of both approximations agree with an earlier determination by Riley. 14 references

  6. Methods for solving reasoning problems in abstract argumentation – A survey

    Science.gov (United States)

    Charwat, Günther; Dvořák, Wolfgang; Gaggl, Sarah A.; Wallner, Johannes P.; Woltran, Stefan

    2015-01-01

    Within the last decade, abstract argumentation has emerged as a central field in Artificial Intelligence. Besides providing a core formalism for many advanced argumentation systems, abstract argumentation has also served to capture several non-monotonic logics and other AI related principles. Although the idea of abstract argumentation is appealingly simple, several reasoning problems in this formalism exhibit high computational complexity. This calls for advanced techniques when it comes to implementation issues, a challenge which has been recently faced from different angles. In this survey, we give an overview on different methods for solving reasoning problems in abstract argumentation and compare their particular features. Moreover, we highlight available state-of-the-art systems for abstract argumentation, which put these methods to practice. PMID:25737590

  7. 12 CFR 334.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... STATEMENTS OF GENERAL POLICY FAIR CREDIT REPORTING Affiliate Marketing § 334.25 Reasonable and simple methods... or processed at an Internet Web site, if the consumer agrees to the electronic delivery of... opt-out under the Act, and the affiliate marketing opt-out under the Act, by a single method, such as...

  8. Reasonable designing method for fillet welding leg length

    Energy Technology Data Exchange (ETDEWEB)

    Kiso, T; Michiyuki, T; Nagao, S; Yoshikawa, M; Miyazaki, S

    1976-12-01

    In VLCC and ULCC vessels, the scantlings of structural members, especially the thickness of the web plates, naturally increase. The present rules of the classification societies generally prescribe that the welding leg length should be based on the thickness of the web plate. The welding leg length between the web plate and skin plates such as the shell plate, deck plate, etc., or the face plate, therefore increases as the web plate thickness increases. We investigated a method, and its computer implementation, for deciding a reasonable welding leg length from the results of finite element structural analysis, without adhering to the above rule on welding leg length. As a result of applying this method to actual ships, with the classification societies' approval, the amount of welding decreased by about 10 to 15 percent compared with that required by the above rule. The soundness of the method has already been confirmed by the successful results of its application to several vessels in service.

  9. Relations between Inductive Reasoning and Deductive Reasoning

    Science.gov (United States)

    Heit, Evan; Rotello, Caren M.

    2010-01-01

    One of the most important open questions in reasoning research is how inductive reasoning and deductive reasoning are related. In an effort to address this question, we applied methods and concepts from memory research. We used 2 experiments to examine the effects of logical validity and premise-conclusion similarity on evaluation of arguments.…

  10. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour; Chacón-Rebollo, Tomas

    2015-01-01

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to non necessarily self-adjoint elliptic operators that have an associated base

  11. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  12. Higher-order approximate solutions to the relativistic and Duffing-harmonic oscillators by modified He's homotopy methods

    International Nuclear Information System (INIS)

    Belendez, A; Pascual, C; Fernandez, E; Neipp, C; Belendez, T

    2008-01-01

    A modified He's homotopy perturbation method is used to calculate higher-order analytical approximate solutions to the relativistic and Duffing-harmonic oscillators. The He's homotopy perturbation method is modified by truncating the infinite series corresponding to the first-order approximate solution before introducing this solution in the second-order linear differential equation, and so on. We find this modified homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. The approximate formulae obtained show excellent agreement with the exact solutions, and are valid for small as well as large amplitudes of oscillation, including the limiting cases of amplitude approaching zero and infinity. For the relativistic oscillator, only one iteration leads to high accuracy of the solutions with a maximal relative error for the approximate frequency of less than 1.6% for small and large values of oscillation amplitude, while this relative error is 0.65% for two iterations with two harmonics and as low as 0.18% when three harmonics are considered in the second approximation. For the Duffing-harmonic oscillator the relative error is as low as 0.078% when the second approximation is considered. Comparison of the result obtained using this method with those obtained by the harmonic balance methods reveals that the former is very effective and convenient

  13. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  14. APPROX, 1-D and 2-D Function Approximation by Polynomials, Splines, Finite Elements Method

    International Nuclear Information System (INIS)

    Tollander, Bengt

    1975-01-01

    1 - Nature of physical problem solved: Approximates one- and two-dimensional functions using different forms of the approximating function, such as polynomials, rational functions, splines and (or) the finite element method. Different kinds of transformations of the dependent and (or) the independent variables can easily be specified on data cards using a FORTRAN-like language. 2 - Method of solution: Approximations by polynomials, splines and (or) the finite element method are made in the L2 norm using the least-squares method, by which the answer is given directly. For rational functions in one dimension, the result, given in the L-infinity norm, is achieved by iteratively moving the zero points of the error curve. For rational functions in two dimensions, the norm is L2 and the result is achieved by iteratively changing the coefficients of the denominator and then solving for the coefficients of the numerator by the least-squares method. The transformation of the dependent and (or) independent variables is made by compiling the given transform data card(s) into an array of integers from which the transformation can be made
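
    The one-dimensional least-squares (L2) polynomial fit at the heart of the description above reduces to a Vandermonde least-squares problem; the short sketch below reproduces just that step in Python (the original program is FORTRAN and also covers rational, spline and finite-element fits, none of which are shown).

```python
# L2 polynomial approximation of a sampled function: solve the least-squares
# problem for the Vandermonde matrix (equivalent to the normal equations
# A^T A c = A^T y that give "the answer directly").
import numpy as np

def l2_poly_fit(x, y, degree):
    A = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ...
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

x = np.linspace(0.0, 1.0, 50)
y = np.exp(x)                                       # function to approximate
c = l2_poly_fit(x, y, degree=3)
fit = np.vander(x, 4, increasing=True) @ c
print("coefficients:", np.round(c, 4))
print("max abs error:", float(np.abs(fit - y).max()))
```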

  15. Approximate solution of generalized Ginzburg-Landau-Higgs system via homotopy perturbation method

    Energy Technology Data Exchange (ETDEWEB)

    Lu Juhong [School of Physics and Electromechanical Engineering, Shaoguan Univ., Guangdong (China); Dept. of Information Engineering, Coll. of Lishui Professional Tech., Zhejiang (China); Zheng Chunlong [School of Physics and Electromechanical Engineering, Shaoguan Univ., Guangdong (China); Shanghai Inst. of Applied Mathematics and Mechanics, Shanghai Univ., SH (China)

    2010-04-15

    Using the homotopy perturbation method, a class of nonlinear generalized Ginzburg-Landau-Higgs systems (GGLH) is considered. Firstly, by introducing a homotopic transformation, the nonlinear problem is changed into a system of linear equations. Secondly, by selecting a suitable initial approximation, the approximate solution with arbitrary degree accuracy to the generalized Ginzburg-Landau-Higgs system is derived. Finally, another type of homotopic transformation to the generalized Ginzburg-Landau-Higgs system reported in previous literature is briefly discussed. (orig.)

  16. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  17. The Pade approximate method for solving problems in plasma kinetic theory

    International Nuclear Information System (INIS)

    Jasperse, J.R.; Basu, B.

    1992-01-01

    The method of Pade approximants has been a powerful tool in solving for the time dependent propagator (Green function) in model quantum field theories. We have developed a modified Pade method which we feel has promise for solving linearized collisional and weakly nonlinear problems in plasma kinetic theory. In order to illustrate the general applicability of the method, in this paper we discuss Pade solutions for the linearized collisional propagator and the collisional dielectric function for a model collisional problem. (author) 3 refs., 2 tabs

  18. Arrival-time picking method based on approximate negentropy for microseismic data

    Science.gov (United States)

    Li, Yue; Ni, Zhuo; Tian, Yanan

    2018-05-01

    Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, and it directly affects the analysis results of post-processing. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival time picking under conditions of very low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the signal region is distinguished from the noise region and the first arrival time is picked accurately. To demonstrate the effectiveness of the AN method, we perform many experiments on a series of synthetic data with SNR from -1 dB to -12 dB and compare it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that these three methods achieve good picking performance when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, the application results for real three-component microseismic data also show that the new method is superior to the other two methods in accuracy and stability.
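
    A bare-bones version of this style of picker is sketched below. Because the record does not give the exact approximate-negentropy function or the weighted threshold, the sketch substitutes the classical log-cosh negentropy approximation J(y) ≈ (E[G(y)] − E[G(ν)])² and a simple min/max-based threshold; the trace, window length and threshold fraction are all illustrative.

```python
# Sliding-window negentropy-based first-arrival picker (illustrative stand-in for
# the AN method: log-cosh negentropy approximation plus a min/max threshold).
import numpy as np

rng = np.random.default_rng(0)
_GAUSS_REF = float(np.mean(np.log(np.cosh(rng.standard_normal(200_000)))))  # E[G(nu)]

def approx_negentropy(window):
    y = (window - window.mean()) / (window.std() + 1e-12)
    return (np.mean(np.log(np.cosh(y))) - _GAUSS_REF) ** 2

def pick_first_arrival(trace, win=60, threshold_frac=0.3):
    """Build the AN curve over sliding windows and pick the first window whose
    AN value exceeds min + threshold_frac * (max - min)."""
    an = np.array([approx_negentropy(trace[i:i + win])
                   for i in range(len(trace) - win)])
    thr = an.min() + threshold_frac * (an.max() - an.min())
    return int(np.argmax(an > thr)), an

# Synthetic trace: Gaussian noise plus a short damped wavelet starting at sample 500.
n, onset_true = 1500, 500
tw = np.arange(200) / 200.0
wavelet = np.sin(2 * np.pi * 8 * tw) * np.exp(-4 * tw)
trace = 0.5 * rng.standard_normal(n)
trace[onset_true:onset_true + len(wavelet)] += 3.0 * wavelet
onset_est, _ = pick_first_arrival(trace)
print("picked sample:", onset_est, "   true onset:", onset_true)
```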

  19. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting

  20. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    Science.gov (United States)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with state-space explosion that makes the exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by that of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as IPv4 zeroconf protocol and dining philosopher protocol modeled as Discrete Time Markov chains.
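
    The two-phase idea can be made concrete on a toy discrete-time Markov chain, as in the sketch below: repeatedly estimate the bounded-until probability by simulation while growing the bound, stop when the estimate stabilizes, and report that value for the unbounded property. The chain, the stopping rule and all constants are simplified stand-ins for the statistical procedure in the paper.

```python
# Toy statistical check of an unbounded-until property via bounded-until sampling.
import numpy as np

rng = np.random.default_rng(42)

# Toy DTMC: states 0 and 1 satisfy "a"; state 2 satisfies "b" (goal); state 3 is a trap.
P = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.30, 0.50, 0.15, 0.05],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])
A, B = {0, 1}, {2}

def sample_bounded_until(k, start=0):
    """One simulated path, checking the bounded-until property  a U^{<=k} b."""
    s = start
    for _ in range(k + 1):
        if s in B:
            return True
        if s not in A:
            return False
        s = rng.choice(4, p=P[s])
    return False

def estimate(k, n=20_000):
    return float(np.mean([sample_bounded_until(k) for _ in range(n)]))

# Phase 1: grow the bound until successive estimates agree within eps.
eps, k, prev = 0.01, 5, -1.0
while True:
    cur = estimate(k)
    if abs(cur - prev) < eps:
        break
    prev, k = cur, 2 * k
# Phase 2: the bounded estimate at k0 = k approximates the unbounded-until probability.
print(f"k0 = {k},  P(a U b) ~= {cur:.3f}")
```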

  1. Perturbation methods and closure approximations in nonlinear systems

    International Nuclear Information System (INIS)

    Dubin, D.H.E.

    1984-01-01

    In the first section of this thesis, Hamiltonian theories of guiding center and gyro-center motion are developed using modern symplectic methods and Lie transformations. Littlejohn's techniques, combined with the theory of resonant interaction and island overlap, are used to explore the problem of adiabatic invariance and onset of stochasticity. As an example, the breakdown of invariance due to resonance between drift motion and gyromotion in a tokamak is considered. A Hamiltonian is developed for motion in a straight magnetic field with electrostatic perturbations in the gyrokinetic ordering, from which nonlinear gyrokinetic equations are constructed which have the property of phase-space preservation, useful for computer simulation. Energy invariants are found and various limits of the equations are considered. In the second section, statistical closure theories are applied to simple dynamical systems. The logistic map is used as an example because of its universal properties and simple quadratic nonlinearity. The first closure considered is the direct interaction approximation of Kraichnan, which is found to fail when applied to the logistic map because it cannot approximate the bounded support of the map's equilibrium distribution. By imposing a periodicity constraint on a Langevin form of the DIA, a new stable closure is developed

  2. Design of A Cyclone Separator Using Approximation Method

    Science.gov (United States)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency. The collection efficiency in this study is predicted by performing CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency. Thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires a lot of calculation time, it is impossible to obtain the optimal solution by coupling it directly with a gradient-based optimization algorithm. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
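
    The surrogate-based step in this record (sample a few designs, fit a kriging metamodel, then search the cheap metamodel instead of the CFD model) can be sketched as below. The CFD response is replaced by a toy analytic function, the L18 orthogonal array by random sampling, and the model is a simplified Gaussian-correlation interpolator standing in for full kriging, so everything here is illustrative.

```python
# Sketch of a DOE + kriging-style metamodel + surrogate search loop (toy problem).
import numpy as np

rng = np.random.default_rng(7)

def cfd_stand_in(x):
    """Placeholder 'collection efficiency' in (0, 1] over six scaled design variables."""
    return float(np.exp(-np.sum((x - 0.6) ** 2)))

def corr(X1, X2, theta=8.0):
    """Gaussian correlation between two sets of design points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

# "DOE": 18 sample designs in the unit hypercube, each 'evaluated' once.
X = rng.random((18, 6))
y = np.array([cfd_stand_in(x) for x in X])

# Kriging-like predictor: interpolation weights from the regularized correlation system.
R = corr(X, X) + 1e-8 * np.eye(len(X))
w = np.linalg.solve(R, y)

def predict(X_new):
    return corr(X_new, X) @ w

# Search the cheap surrogate instead of running the expensive CFD model.
candidates = rng.random((20_000, 6))
scores = predict(candidates)
best = candidates[np.argmax(scores)]
print("surrogate optimum:", np.round(best, 2),
      "  predicted efficiency:", round(float(scores.max()), 3))
```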

  3. The approximate thermal-model-testing method for non-stationary temperature fields in central zones of fast reactor assemblies

    International Nuclear Information System (INIS)

    Mikhin, V.I.; Matukhin, N.M.

    2000-01-01

    An approach to generalizing the non-stationary heat exchange data for the central zones of nuclear reactor fuel assemblies, together with approximate thermal-model-testing criteria, is proposed. The fuel assemblies of fast and water-cooled reactors with different fuel compositions have been investigated. The cause of the non-stationary heat exchange is the time dependence of the fuel energy release. (author)

  4. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.

  5. Some approximate calculations in SU2 lattice mean field theory

    International Nuclear Information System (INIS)

    Hari Dass, N.D.; Lauwers, P.G.

    1981-12-01

    Approximate calculations are performed for small Wilson loops of SU 2 lattice gauge theory in mean field approximation. Reasonable agreement is found with Monte Carlo data. Ways of improving these calculations are discussed. (Auth.)

  6. Approximation of the unsteady Brinkman-Forchheimer equations by the pressure stabilization method

    KAUST Repository

    Louaked, Mohammed; Seloula, Nour; Trabelsi, Saber

    2017-01-01

    In this work, we propose and analyze the pressure stabilization method for the unsteady incompressible Brinkman-Forchheimer equations. We present a time discretization scheme which can be used with any consistent finite element space approximation. Second-order error estimate is proven. Some numerical results are also given.© 2017 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2017

  7. Approximation of the unsteady Brinkman-Forchheimer equations by the pressure stabilization method

    KAUST Repository

    Louaked, Mohammed

    2017-07-20

    In this work, we propose and analyze the pressure stabilization method for the unsteady incompressible Brinkman-Forchheimer equations. We present a time discretization scheme which can be used with any consistent finite element space approximation. Second-order error estimate is proven. Some numerical results are also given.© 2017 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2017

  8. Parabolic approximation method for fast magnetosonic wave propagation in tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Perkins, F.W.; Hwang, D.Q.

    1985-07-01

    Fast magnetosonic wave propagation in a cylindrical tokamak model is studied using a parabolic approximation method in which poloidal variations of the wave field are considered weak in comparison to the radial variations. Diffraction effects, which are ignored by ray tracing methods, are included self-consistently using the parabolic method since continuous representations for the wave electromagnetic fields are computed directly. Numerical results are presented which illustrate the cylindrical convergence of the launched waves into a diffraction-limited focal spot on the cyclotron absorption layer near the magnetic axis for a wide range of plasma confinement parameters

  9. Born approximation to a perturbative numerical method for the solution of the Schrodinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-05-01

    A perturbative numerical (PN) method is given for the solution of a regular one-dimensional Cauchy problem arising from the Schroedinger equation. The present method uses a step function approximation for the potential. Global, free of scaling difficulty, forward and backward PN algorithms are derived within first order perturbation theory (Born approximation). A rigorous analysis of the local truncation errors is performed. This shows that the order of accuracy of the method is equal to four. In between the mesh points, the global formula for the wavefunction is accurate within O(h 4 ), while that for the first order derivative is accurate within O(h 3 ). (author)

  10. Bayesian Reasoning in Data Analysis A Critical Introduction

    CERN Document Server

    D'Agostini, Giulio

    2003-01-01

    This book provides a multi-level introduction to Bayesian reasoning (as opposed to "conventional statistics") and its applications to data analysis. The basic ideas of this "new" approach to the quantification of uncertainty are presented using examples from research and everyday life. Applications covered include: parametric inference; combination of results; treatment of uncertainty due to systematic errors and background; comparison of hypotheses; unfolding of experimental distributions; upper/lower bounds in frontier-type measurements. Approximate methods for routine use are derived and ar

  11. An overview on polynomial approximation of NP-hard problems

    Directory of Open Access Journals (Sweden)

    Paschos Vangelis Th.

    2009-01-01

    Full Text Available The fact that a polynomial time algorithm is very unlikely to be devised for optimally solving NP-hard problems strongly motivates both researchers and practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find, not the best solution, but one solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial time approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.

  12. Rocksalt or cesium chloride: Investigating the relative stability of the cesium halide structures with random phase approximation based methods

    Science.gov (United States)

    Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.

    2018-03-01

    The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.

  13. An Intuitionistic Fuzzy Stochastic Decision-Making Method Based on Case-Based Reasoning and Prospect Theory

    Directory of Open Access Journals (Sweden)

    Peng Li

    2017-01-01

    Full Text Available According to the case-based reasoning method and prospect theory, this paper mainly focuses on finding a way to obtain decision-makers’ preferences and the criterion weights for stochastic multicriteria decision-making problems and classify alternatives. Firstly, we construct a new score function for an intuitionistic fuzzy number (IFN considering the decision-making environment. Then, we aggregate the decision-making information in different natural states according to the prospect theory and test decision-making matrices. A mathematical programming model based on a case-based reasoning method is presented to obtain the criterion weights. Moreover, in the original decision-making problem, we integrate all the intuitionistic fuzzy decision-making matrices into an expectation matrix using the expected utility theory and classify or rank the alternatives by the case-based reasoning method. Finally, two illustrative examples are provided to illustrate the implementation process and applicability of the developed method.

  14. A Reasoning Method of Cyber-Attack Attribution Based on Threat Intelligence

    OpenAIRE

    Li Qiang; Yang Ze-Ming; Liu Bao-Xu; Jiang Zheng-Wei

    2016-01-01

    With the increasing complexity of cyberspace security, cyber-attack attribution has become an important challenge for security protection systems. The difficulties of cyber-attack attribution are focused on the problems of handling huge amounts of data and of missing key data. In view of this situation, this paper presents a reasoning method for cyber-attack attribution based on threat intelligence. The method utilizes the intrusion kill chain model and Bayesian network to build attack chain a...

  15. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.
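
    The subsample-and-update loop described above can be imitated in a few dozen lines, as sketched below on synthetic data with an exponential covariance. Each iteration scores only a small random subsample, so only small covariance matrices are ever factorized; the finite-difference gradient, the gain sequence, the gradient clipping and all tuning constants are illustrative choices rather than the estimator analyzed in the article.

```python
# Resampling-style stochastic approximation for Gaussian covariance parameters:
# each iteration uses the log-likelihood of a small random subsample only.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic spatial data on [0,1]^2 with exponential covariance s2 * exp(-h / r).
n, s2_true, r_true = 2000, 2.0, 0.15
coords = rng.random((n, 2))

def cov(pts, s2, r):
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return s2 * np.exp(-h / r) + 1e-8 * np.eye(len(pts))

# Simulate the field once (for demo data only; this step is the expensive one).
z = np.linalg.cholesky(cov(coords, s2_true, r_true)) @ rng.standard_normal(n)

def subsample_loglik(theta, idx):
    s2, r = np.exp(theta)                      # work with log-parameters
    C = cov(coords[idx], s2, r)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + z[idx] @ np.linalg.solve(C, z[idx]))

theta = np.log(np.array([1.0, 0.30]))          # crude starting values
m, eps = 60, 1e-4
for t in range(1, 401):
    idx = rng.choice(n, size=m, replace=False) # small subsample each iteration
    grad = np.zeros(2)
    base = subsample_loglik(theta, idx)
    for j in range(2):                         # forward-difference gradient
        step = np.zeros(2)
        step[j] = eps
        grad[j] = (subsample_loglik(theta + step, idx) - base) / eps
    grad = np.clip(grad, -20.0, 20.0)          # crude safeguard against wild steps
    theta += (0.02 / t**0.6) * grad            # decreasing stochastic-approximation gains
print("estimated (sigma^2, range):", np.round(np.exp(theta), 3),
      "   true:", (s2_true, r_true))
```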

  16. Simple Methods to Approximate CPC Shape to Preserve Collection Efficiency

    Directory of Open Access Journals (Sweden)

    David Jafrancesco

    2012-01-01

    Full Text Available The compound parabolic concentrator (CPC) is the most efficient reflective geometry for collecting light to an exit port. However, to allow its actual use in solar plants or photovoltaic concentration systems, a tradeoff between system efficiency and cost reduction, the two key issues for sunlight exploitation, must be found. In this work, we analyze various methods of modelling an approximated CPC that is simpler and more cost-effective than the ideal one while preserving the system efficiency. The ease of manufacturing arises from the use of truncated conic surfaces only, which can be realized by cheap machining techniques. We compare different configurations on the basis of their collection efficiency, evaluated by means of nonsequential ray-tracing software. Moreover, because some configurations are beam dependent, and for a closer approximation of a real case, the input beam is simulated as nonsymmetric, with a nonconstant irradiance on the CPC internal surface.

  17. Approximate analytical solution of diffusion equation with fractional time derivative using optimal homotopy analysis method

    Directory of Open Access Journals (Sweden)

    S. Das

    2013-12-01

    Full Text Available In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence control parameters, which govern the faster convergence of the solution. The effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.

  18. 12 CFR 222.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... FEDERAL RESERVE SYSTEM FAIR CREDIT REPORTING (REGULATION V) Affiliate Marketing § 222.25 Reasonable and... electronically mailed or processed at an Internet Web site, if the consumer agrees to the electronic delivery of... opt-out under the Act, and the affiliate marketing opt-out under the Act, by a single method, such as...

  19. A method to reduce ambiguities of qualitative reasoning for conceptual design applications

    NARCIS (Netherlands)

    D'Amelio, V.; Chmarra, M.K.; Tomiyama, T.

    2013-01-01

    Qualitative reasoning can generate ambiguous behaviors due to the lack of quantitative information. Despite many different research results focusing on ambiguity reduction, fundamentally it is impossible to totally remove ambiguities with only qualitative methods and to guarantee the consistency

  20. Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients

    Directory of Open Access Journals (Sweden)

    Deming Yuan

    2014-01-01

    Full Text Available This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.

  1. Approximate Analytic and Numerical Solutions to Lane-Emden Equation via Fuzzy Modeling Method

    Directory of Open Access Journals (Sweden)

    De-Gang Wang

    2012-01-01

    A novel algorithm, called the variable weight fuzzy marginal linearization (VWFML) method, is proposed. This method can supply approximate analytic and numerical solutions to Lane-Emden equations, and it is easy to implement and extend for solving other nonlinear differential equations. Numerical examples are included to demonstrate the validity and applicability of the developed technique.

  2. Introduction to Methods of Approximation in Physics and Astronomy

    Science.gov (United States)

    van Putten, Maurice H. P. M.

    2017-04-01

    Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. In realizing the full discovery potential of these multimessenger approaches, the latter increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical-physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform and examples of numerical integration of ordinary differential equations and some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify
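
    As a minimal example of one of the basic techniques covered by such lecture notes, the sketch below applies Newton-Raphson root finding to a Kepler-type equation; the particular equation and tolerances are illustrative choices, not taken from the book.

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson root finding: iterate x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Example: a Kepler-like equation E - e*sin(E) = M with e = 0.5, M = 1
root = newton(lambda x: x - 0.5 * math.sin(x) - 1.0,
              lambda x: 1.0 - 0.5 * math.cos(x),
              x0=1.0)
print(root, root - 0.5 * math.sin(root) - 1.0)   # residual ~ 0
```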

  3. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
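
    A stripped-down illustration of the moment-matching idea (a symmetric beta-type Pearson density on [-1, 1] matched to the second moment, not the full extended Fokker-Planck machinery of the record above): for p(x) proportional to (1 - x^2)^lambda one has E[X^2] = 1/(2*lambda + 3), so lambda follows directly from an estimated second moment.

```python
import numpy as np
from scipy.special import beta as beta_fn

def fit_symmetric_beta(samples):
    """Match the second moment of a symmetric density on [-1, 1]
    with p(x) = (1 - x^2)**lam / B(1/2, lam + 1)."""
    m2 = np.mean(np.asarray(samples) ** 2)
    lam = (1.0 / m2 - 3.0) / 2.0
    norm = beta_fn(0.5, lam + 1.0)
    return lam, lambda x: (1.0 - x ** 2) ** lam / norm

# Sanity check: uniform samples on [-1, 1] have m2 = 1/3, giving lam ~ 0 (flat density)
rng = np.random.default_rng(1)
lam, pdf = fit_symmetric_beta(rng.uniform(-1, 1, 100_000))
print(lam, pdf(0.0))   # lam ~ 0, pdf(0) ~ 0.5
```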

  4. Calculation of Resonance Interaction Effects Using a Rational Approximation to the Symmetric Resonance Line Shape Function

    International Nuclear Information System (INIS)

    Haeggblom, H.

    1968-08-01

    The method of calculating the resonance interaction effect by series expansions has been studied. Starting from the assumption that the neutron flux in a homogeneous mixture is inversely proportional to the total cross section, the expression for the flux can be simplified by series expansions. Two types of expansions are investigated and it is shown that only one of them is generally applicable. It is also shown that this expansion gives sufficient accuracy if the approximate resonance line shape function is reasonably representative. An investigation is made of the approximation of the resonance shape function with a Gaussian function which in some cases has been used to calculate the interaction effect. It is shown that this approximation is not sufficiently accurate in all cases which can occur in practice. Then, a rational approximation is introduced which in the first order approximation gives the same order of accuracy as a practically exact shape function. The integrations can be made analytically in the complex plane and the method is therefore very fast compared to purely numerical integrations. The method can be applied both to statistically correlated and uncorrelated resonances

  5. Calculation of Resonance Interaction Effects Using a Rational Approximation to the Symmetric Resonance Line Shape Function

    Energy Technology Data Exchange (ETDEWEB)

    Haeggblom, H

    1968-08-15

    The method of calculating the resonance interaction effect by series expansions has been studied. Starting from the assumption that the neutron flux in a homogeneous mixture is inversely proportional to the total cross section, the expression for the flux can be simplified by series expansions. Two types of expansions are investigated and it is shown that only one of them is generally applicable. It is also shown that this expansion gives sufficient accuracy if the approximate resonance line shape function is reasonably representative. An investigation is made of the approximation of the resonance shape function with a Gaussian function which in some cases has been used to calculate the interaction effect. It is shown that this approximation is not sufficiently accurate in all cases which can occur in practice. Then, a rational approximation is introduced which in the first order approximation gives the same order of accuracy as a practically exact shape function. The integrations can be made analytically in the complex plane and the method is therefore very fast compared to purely numerical integrations. The method can be applied both to statistically correlated and uncorrelated resonances.

  6. 12 CFR 571.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... CREDIT REPORTING Affiliate Marketing § 571.25 Reasonable and simple methods of opting out. (a) In general... out, such as a form that can be electronically mailed or processed at an Internet Web site, if the... (15 U.S.C. 6801 et seq.), the affiliate sharing opt-out under the Act, and the affiliate marketing opt...

  7. 16 CFR 680.25 - Reasonable and simple methods of opting out.

    Science.gov (United States)

    2010-01-01

    ... AFFILIATE MARKETING § 680.25 Reasonable and simple methods of opting out. (a) In general. You must not use... a form that can be electronically mailed or processed at an Internet Web site, if the consumer..., 15 U.S.C. 6801 et seq., the affiliate sharing opt-out under the Act, and the affiliate marketing opt...

  8. Application of the probabilistic approximate analysis method to a turbopump blade analysis. [for Space Shuttle Main Engine

    Science.gov (United States)

    Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.

    1990-01-01

    An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.
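
    For orientation, the sketch below shows the brute-force Monte Carlo reference against which such approximate probabilistic analyses are typically checked, applied to a toy three-degree-of-freedom spring-mass eigenvalue problem rather than a turbopump blade; all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_eigenfrequency(k, m):
    """Lowest natural frequency of a 3-DOF fixed-free spring-mass chain,
    a stand-in for a blade finite element model (K v = w^2 m v, M = m*I)."""
    K = np.array([[2.0 * k, -k, 0.0],
                  [-k, 2.0 * k, -k],
                  [0.0, -k, k]])
    lam = np.linalg.eigvalsh(K) / m
    return np.sqrt(lam.min())

# Lognormal scatter of stiffness and mass around nominal values (illustrative)
samples = [first_eigenfrequency(k=1.0e4 * rng.lognormal(sigma=0.05),
                                m=1.0 * rng.lognormal(sigma=0.03))
           for _ in range(2000)]
print("mean = %.2f rad/s, std = %.2f rad/s" % (np.mean(samples), np.std(samples)))
```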

  9. Methods of Approximation Theory in Complex Analysis and Mathematical Physics

    CERN Document Server

    Saff, Edward

    1993-01-01

    The book incorporates research papers and surveys written by participants of an International Scientific Programme on Approximation Theory jointly supervised by the Institute for Constructive Mathematics of the University of South Florida at Tampa, USA and the Euler International Mathematical Institute at St. Petersburg, Russia. The aim of the Programme was to present new developments in Constructive Approximation Theory. The topics of the papers are: asymptotic behaviour of orthogonal polynomials, rational approximation of classical functions, quadrature formulas, theory of n-widths, nonlinear approximation in Hardy algebras, numerical results on best polynomial approximations, wavelet analysis. FROM THE CONTENTS: E.A. Rakhmanov: Strong asymptotics for orthogonal polynomials associated with exponential weights on R.- A.L. Levin, E.B. Saff: Exact Convergence Rates for Best Lp Rational Approximation to the Signum Function and for Optimal Quadrature in Hp.- H. Stahl: Uniform Rational Approximation of x .- M. Rahman, S.K. ...

  10. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
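
    The general recipe of a parametric (synthetic) likelihood approximation inside a Metropolis sampler can be sketched on a toy stochastic growth model; this stands in for, but is much simpler than, the FORMIND forest model, and every function, parameter, and summary statistic below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(r, n_years=50, n0=10.0, noise=0.2):
    """Toy stochastic growth model standing in for the forest simulator."""
    n, traj = n0, []
    for _ in range(n_years):
        n = max(n + r * n * (1 - n / 100.0) + noise * np.sqrt(n) * rng.standard_normal(), 0.0)
        traj.append(n)
    return np.array(traj)

def summaries(traj):
    """Summary statistics of a simulated trajectory."""
    return np.array([traj.mean(), traj.std(), traj[-10:].mean()])

def log_synthetic_likelihood(r, s_obs, n_rep=50):
    """Fit a Gaussian to simulated summaries and evaluate the observed ones."""
    S = np.array([summaries(simulate(r)) for _ in range(n_rep)])
    mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-6 * np.eye(3)
    diff = s_obs - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + np.log(np.linalg.det(cov)))

# "Observed" data generated with a known parameter, then recovered by Metropolis MCMC
s_obs = summaries(simulate(r=0.3))
r_cur, ll_cur, chain = 0.1, log_synthetic_likelihood(0.1, s_obs), []
for _ in range(2000):
    r_prop = r_cur + 0.05 * rng.standard_normal()
    if 0.0 < r_prop < 1.0:                      # flat prior on (0, 1)
        ll_prop = log_synthetic_likelihood(r_prop, s_obs)
        if np.log(rng.uniform()) < ll_prop - ll_cur:
            r_cur, ll_cur = r_prop, ll_prop
    chain.append(r_cur)
print("posterior mean r =", np.mean(chain[500:]))   # close to the true value 0.3
```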

  11. New finite volume methods for approximating partial differential equations on arbitrary meshes

    International Nuclear Information System (INIS)

    Hermeline, F.

    2008-12-01

    This dissertation presents some new methods of finite volume type for approximating partial differential equations on arbitrary meshes. The main idea lies in solving twice the problem to be dealt with. One addresses the elliptic equations with variable (anisotropic, antisymmetric, discontinuous) coefficients, the parabolic linear or non linear equations (heat equation, radiative diffusion, magnetic diffusion with Hall effect), the wave type equations (Maxwell, acoustics), the elasticity and Stokes' equations. Numerous numerical experiments show the good behaviour of this type of method. (author)

  12. Relations between inductive reasoning and deductive reasoning.

    Science.gov (United States)

    Heit, Evan; Rotello, Caren M

    2010-05-01

    One of the most important open questions in reasoning research is how inductive reasoning and deductive reasoning are related. In an effort to address this question, we applied methods and concepts from memory research. We used 2 experiments to examine the effects of logical validity and premise-conclusion similarity on evaluation of arguments. Experiment 1 showed 2 dissociations: For a common set of arguments, deduction judgments were more affected by validity, and induction judgments were more affected by similarity. Moreover, Experiment 2 showed that fast deduction judgments were like induction judgments - in terms of being more influenced by similarity and less influenced by validity, compared with slow deduction judgments. These novel results pose challenges for a 1-process account of reasoning and are interpreted in terms of a 2-process account of reasoning, which was implemented as a multidimensional signal detection model and applied to receiver operating characteristic data. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  13. Quenched Approximation to ΔS = 1 K Decay

    International Nuclear Information System (INIS)

    Christ, Norman H.

    2005-01-01

    The importance of explicit quark loops in the amplitudes contributing to ΔS = 1, K meson decays raises potential ambiguities when these amplitudes are evaluated in the quenched approximation. Using the factorization of these amplitudes into short- and long-distance parts provided by the standard low-energy effective weak Hamiltonian, we argue that the quenched approximation can be conventionally justified if it is applied to the long-distance portion of each amplitude. The result is a reasonably well-motivated definition of the quenched approximation that is close to that employed in the RBC and CP-PACS calculations of these quantities

  14. Approximation methods for the stability analysis of complete synchronization on duplex networks

    Science.gov (United States)

    Han, Wenchen; Yang, Junzhong

    2018-01-01

    Recently, the synchronization on multi-layer networks has drawn a lot of attention. In this work, we study the stability of the complete synchronization on duplex networks. We investigate effects of coupling function on the complete synchronization on duplex networks. We propose two approximation methods to deal with the stability of the complete synchronization on duplex networks. In the first method, we introduce a modified master stability function and, in the second method, we only take into consideration the contributions of a few most unstable transverse modes to the stability of the complete synchronization. We find that both methods work well for predicting the stability of the complete synchronization for small networks. For large networks, the second method still works pretty well.

  15. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William

    2011-02-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  16. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William; Hanke, Martin

    2011-01-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  17. An approximation method for diffusion based leaching models

    International Nuclear Information System (INIS)

    Shukla, B.S.; Dignam, M.J.

    1987-01-01

    In connection with the fixation of nuclear waste in a glassy matrix, equations have been derived for leaching models based on a uniform concentration gradient approximation, and hence a uniform flux, therefore requiring the use of only Fick's first law. In this paper we improve on the uniform flux approximation, developing and justifying the approach. The resulting set of equations is solved to a satisfactory approximation for a matrix dissolving at a constant rate in a finite volume of leachant to give analytical expressions for the time dependence of the thickness of the leached layer, the diffusional and dissolutional contribution to the flux, and the leachant composition. Families of curves are presented which cover the full range of all the physical parameters for this system. The same procedure can be readily extended to more complex systems. (author)

  18. A Method for Reasoning about other Agents' Beliefs from Observations

    OpenAIRE

    Nittka, Alexander; Booth, Richard

    2007-01-01

    Traditional work in belief revision deals with the question of what an agent should believe upon receiving new information. We will give an overview about what can be concluded about an agent based on an observation of its belief revision behaviour. The observation contains partial information about the revision inputs received by the agent and its beliefs upon receiving them. We will sketch a method for reasoning about past and future beliefs of the agent and predicting which inputs i...

  19. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  20. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  1. Peculiarities of cyclotron magnetic system calculation with the finite difference method using two-dimensional approximation

    International Nuclear Information System (INIS)

    Shtromberger, N.L.

    1989-01-01

    To design a cyclotron magnetic system, the legitimacy of applying two-dimensional approximations is discussed. In all the calculations the finite difference method is used, and the linearization method, followed by the conjugate gradient method, is used to solve the set of finite-difference equations. 3 refs.; 5 figs

  2. Reasoning methods in medical consultation systems: artificial intelligence approaches.

    Science.gov (United States)

    Shortliffe, E H

    1984-01-01

    It has been argued that the problem of medical diagnosis is fundamentally ill-structured, particularly during the early stages when the number of possible explanations for presenting complaints can be immense. This paper discusses the process of clinical hypothesis evocation, contrasts it with the structured decision making approaches used in traditional computer-based diagnostic systems, and briefly surveys the more open-ended reasoning methods that have been used in medical artificial intelligence (AI) programs. The additional complexity introduced when an advice system is designed to suggest management instead of (or in addition to) diagnosis is also emphasized. Example systems are discussed to illustrate the key concepts.

  3. Comparison of the methods for discrete approximation of the fractional-order operator

    Directory of Open Access Journals (Sweden)

    Zborovjan Martin

    2003-12-01

    In this paper we present some alternative types of discretization methods (discrete approximations) for the fractional-order (FO) differentiator and their application to the FO dynamical system described by an FO differential equation (FDE). Two effective methods - the Muir expansion of the Tustin operator and the continued fraction expansion (CFE) method with the Tustin operator and the Al-Alaoui operator - are compared with the analytical solution and with the numerical solution obtained by the power series expansion (PSE) method. Besides a detailed mathematical description, simulation results are also presented. From the Bode plots of the FO differentiator and the FDE, and from the solution in the time domain, we can see that the CFE is a more effective method compared to the PSE method, but there are some restrictions on the choice of the time step. The Muir expansion is almost unusable.
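
    A minimal sketch of one ingredient mentioned above, the power series expansion (PSE) of the backward-difference (Euler) generating function (1 - z^-1)^alpha, whose coefficients are the Grünwald-Letnikov weights; the sampling period, order, and test signal are arbitrary choices.

```python
import numpy as np

def gl_weights(alpha, n):
    """Coefficients of the power series expansion of (1 - z**-1)**alpha."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_derivative_pse(x, alpha, dt):
    """Discrete-time FO differentiator: y[n] = dt**(-alpha) * sum_k c[k] * x[n-k]."""
    c = gl_weights(alpha, len(x))
    return dt ** (-alpha) * np.convolve(x, c)[: len(x)]

dt = 0.01
t = np.arange(0.0, 2.0, dt)
y = fractional_derivative_pse(t, alpha=0.5, dt=dt)     # half-derivative of f(t) = t
exact = 2.0 * np.sqrt(t / np.pi)                       # D^{1/2} t = 2*sqrt(t/pi)
print(np.max(np.abs(y[20:] - exact[20:])))             # first-order accurate; shrinks with dt
```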

  4. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  5. A simple method to approximate liver size on cross-sectional images using living liver models

    International Nuclear Information System (INIS)

    Muggli, D.; Mueller, M.A.; Karlo, C.; Fornaro, J.; Marincek, B.; Frauenfelder, T.

    2009-01-01

    Aim: To assess whether a simple, diameter-based formula applicable to cross-sectional images can be used to calculate the total liver volume. Materials and methods: On 119 cross-sectional examinations (62 computed tomography and 57 magnetic resonance imaging) a simple, formula-based method to approximate the liver volume was evaluated. The total liver volume was approximated by measuring the largest craniocaudal (cc), ventrodorsal (vd), and coronal (cor) diameters by two readers and applying the equation: Vol_estimated = cc × vd × cor × 0.31. Inter-rater reliability, agreement, and correlation between the liver volume calculation and virtual liver volumetry were analysed. Results: No significant disagreement between the two readers was found. The formula correlated significantly with the volumetric data (r > 0.85, p < 0.0001). In 81% of cases the error of the approximated volume was <10% and in 92% of cases <15% compared to the volumetric data. Conclusion: Total liver volume can be accurately estimated on cross-sectional images using a simple, diameter-based equation.
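
    A direct transcription of the reported equation (assuming diameters measured in centimetres, which yields an approximate volume in millilitres):

```python
def estimated_liver_volume(cc, vd, cor):
    """Approximate total liver volume from the three largest diameters
    (craniocaudal, ventrodorsal, coronal): Vol = cc * vd * cor * 0.31."""
    return cc * vd * cor * 0.31

print(estimated_liver_volume(cc=16.0, vd=14.0, cor=20.0))  # ~1389 ml (hypothetical measurements)
```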

  6. Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

    KAUST Repository

    Yokota, Rio; Ibeid, Huda; Keyes, David E.

    2018-01-01

    There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is two-fold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey paper, to achieve the former objective. We categorize the recent advances in this field from the perspective of compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance between the different methods.

  7. Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

    KAUST Repository

    Yokota, Rio

    2018-01-03

    There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is two-fold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey paper, to achieve the former objective. We categorize the recent advances in this field from the perspective of compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance between the different methods.
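
    The sketch below is not the fast multipole method itself; it only illustrates the algebraic property that hierarchical low-rank methods exploit, namely that the interaction block between two well-separated point clusters is numerically low-rank, here revealed by a truncated SVD. The cluster geometry, kernel, and tolerance are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters of points in 3D
X = rng.uniform(0.0, 1.0, size=(400, 3))          # source cluster near the origin
Y = rng.uniform(0.0, 1.0, size=(400, 3)) + 5.0    # well-separated target cluster

# Dense interaction block for the Laplace kernel 1/|x - y|
D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
K = 1.0 / D

U, s, Vt = np.linalg.svd(K, full_matrices=False)
tol = 1e-8
rank = int(np.sum(s > tol * s[0]))
print("numerical rank:", rank, "out of", K.shape[0])     # far smaller than 400

# Compressed representation: store U[:, :rank] * s and Vt[:rank] instead of K
K_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print("relative error:", np.linalg.norm(K - K_lr) / np.linalg.norm(K))
```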

  8. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can

  9. METHODS OF THE APPROXIMATE ESTIMATIONS OF FATIGUE DURABILITY OF COMPOSITE AIRFRAME COMPONENT TYPICAL ELEMENTS

    Directory of Open Access Journals (Sweden)

    V. E. Strizhius

    2015-01-01

    Methods for the approximate estimation of the fatigue durability of typical elements of composite airframe components, which can be recommended for application at the preliminary design stage of an airplane, are developed and presented.

  10. The generalized Mayer theorem in the approximating hamiltonian method

    International Nuclear Information System (INIS)

    Bakulev, A.P.; Bogoliubov, N.N. Jr.; Kurbatov, A.M.

    1982-07-01

    With the help of the generalized Mayer theorem we obtain an improved inequality for the free energies of the model and approximating systems, where only "connected parts" over the approximating hamiltonian are taken into account. For a concrete system we discuss the problem of convergence of the corresponding series of "connected parts". (author)

  11. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are among the methodologies employed to take the uncertainties of a system into account at the design stage. To apply such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; furthermore, the results of the sensitivity analysis, which provides the search direction during the optimization process, should also be accurate. The aim of this study is to incorporate the function approximation moment method into the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The integral-form sensitivity analysis formula is an efficient formulation for evaluating sensitivity because no additional function evaluations are needed once the failure probability or statistical moments have been calculated

  12. An improved corrective smoothed particle method approximation for second‐order derivatives

    NARCIS (Netherlands)

    Korzilius, S.P.; Schilders, W.H.A.; Anthonissen, M.J.H.

    2013-01-01

    To solve (partial) differential equations it is necessary to have good numerical approximations. In SPH, most approximations suffer from the presence of boundaries. In this work a new approximation for the second-order derivative is derived and numerically compared with two other approximation

  13. S-curve networks and an approximate method for estimating degree distributions of complex networks

    Science.gov (United States)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research.
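
    A minimal sketch of fitting an S-curve (logistic) model to yearly totals and using it for a forecast; the data below are synthetic stand-ins, not the China IPv4 address statistics used in the record above.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """S-curve: growth toward a finite limit K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic yearly totals with noise (a stand-in for cumulative address counts),
# t measured in years since 2000.
rng = np.random.default_rng(0)
t = np.arange(0.0, 11.0)
data = logistic(t, K=3.3e8, r=0.6, t0=7.0) * rng.normal(1.0, 0.03, t.size)

popt, _ = curve_fit(logistic, t, data, p0=(2.0 * data.max(), 0.5, t.mean()))
K_hat, r_hat, t0_hat = popt
print("estimated limit K = %.3g, growth rate r = %.2f" % (K_hat, r_hat))
print("forecast five years ahead:", logistic(15.0, *popt))
```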

  14. S-curve networks and an approximate method for estimating degree distributions of complex networks

    International Nuclear Information System (INIS)

    Guo Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research. (general)

  15. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits to combine the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For such purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  17.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  18. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For the chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in the stochastic chemical kinetics, the CLE is seen as the approximation of the SSA, the limit averaging system can be treated as the approximation of the slow reactions. As an application, we examine the reduction of computation complexity for the gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of the weak convergence.
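
    For concreteness, the sketch below integrates the chemical Langevin equation for a single birth-death reaction with the Euler-Maruyama scheme; this is only the basic CLE ingredient that the reduction method builds on, not the two-time-scale averaging itself, and the rate constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def cle_birth_death(k1=20.0, k2=0.1, x0=0.0, dt=0.01, t_end=100.0):
    """Chemical Langevin equation for  0 -> X (rate k1),  X -> 0 (rate k2*X):
       dX = (k1 - k2*X) dt + sqrt(k1) dW1 - sqrt(k2*X) dW2  (Euler-Maruyama)."""
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        a1, a2 = k1, k2 * max(x[i], 0.0)          # propensities (kept non-negative)
        dw1, dw2 = np.sqrt(dt) * rng.standard_normal(2)
        x[i + 1] = x[i] + (a1 - a2) * dt + np.sqrt(a1) * dw1 - np.sqrt(a2) * dw2
        x[i + 1] = max(x[i + 1], 0.0)
    return x

traj = cle_birth_death()
print("stationary mean, expected ~ k1/k2 = 200:", traj[2000:].mean())
```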

  19. Effect of cosine current approximation in lattice cell calculations in cylindrical geometry

    International Nuclear Information System (INIS)

    Mohanakrishnan, P.

    1978-01-01

    It is found that one-dimensional cylindrical geometry reactor lattice cell calculations using cosine angular current approximation at spatial mesh interfaces give results surprisingly close to the results of accurate neutron transport calculations as well as experimental measurements. This is especially true for tight light water moderated lattices. Reasons for this close agreement are investigated here. By re-examining the effects of reflective and white cell boundary conditions in these calculations it is concluded that one major reason is the use of white boundary condition necessitated by the approximation of the two-dimensional reactor lattice cell by a one-dimensional one. (orig.) [de

  20. Hybridization of Sensing Methods of the Search Domain and Adaptive Weighted Sum in the Pareto Approximation Problem

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    We consider the relatively new and rapidly developing class of methods for solving a multi-objective optimization problem that are based on a preliminarily built finite-dimensional approximation of the Pareto set, and thereby of the Pareto front of this problem as well. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed in the paper by Ryu, Kim, and Wan (J.H. Ryu, S. Kim, H. Wan), is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses a quadratic approximation of the objective functions in the current sub-domain of the search space (the area of trust), based on the gradient and Hessian matrix of the objective functions. To build the quadratic meta objective functions, this work uses methods of experimental design theory, which involve calculating the values of these functions at the grid nodes covering the area of trust (a sensing method of the search domain). Two groups of sensing methods are under consideration: hypercube- and hyper-sphere-based methods. For each of these groups, a number of test multi-objective optimization tasks has been used to study the efficiency of the following grids: a "Latin Hypercube" grid; a grid that is uniformly random in each dimension; and a grid based on LPτ sequences.
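
    A minimal sketch of generating one of the sensing grids mentioned above, a "Latin Hypercube" design in the unit hypercube, which can then be mapped affinely onto the current area of trust; the dimensions and bounds are arbitrary.

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng=None):
    """Latin hypercube design in [0, 1]^n_dims: each dimension is split into
    n_points equal strata, and each stratum is hit exactly once."""
    rng = np.random.default_rng(rng)
    # One random point per stratum, independently permuted in every dimension
    strata = (np.arange(n_points)[:, None] + rng.uniform(size=(n_points, n_dims))) / n_points
    for d in range(n_dims):
        strata[:, d] = rng.permutation(strata[:, d])
    return strata

grid = latin_hypercube(10, 2, rng=0)
# Map onto a box-shaped area of trust [lo, hi] around the current point
lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
nodes = lo + grid * (hi - lo)
print(nodes)
```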

  1. Esophageal cancer prediction based on qualitative features using adaptive fuzzy reasoning method

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2015-04-01

    Esophageal cancer is one of the most common cancers worldwide and also the most common cause of cancer death. In this paper, we present an adaptive fuzzy reasoning algorithm for rule-based systems using fuzzy Petri nets (FPNs), where the fuzzy production rules are represented by FPNs. We developed an adaptive fuzzy Petri net (AFPN) reasoning algorithm as a prognostic system to predict the outcome for esophageal cancer based on the serum concentrations of C-reactive protein and albumin as a set of input variables. The system can perform fuzzy reasoning automatically to evaluate the degree of truth of the proposition representing the risk degree value, with a weight value to be optimally tuned based on the observed data. In addition, the implementation process for esophageal cancer prediction is fuzzily deduced by the AFPN algorithm. Performance of the composite model is evaluated through a set of experiments. Simulations and experimental results demonstrate the effectiveness and performance of the proposed algorithms. A comparison of the predictive performance of AFPN models with other methods and the analysis of the curve showed the same results, with an intuitive behavior of AFPN models.

  2. On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods

    KAUST Repository

    Beck, Joakim; Tempone, Raul; Nobile, Fabio; Tamellini, Lorenzo

    2012-01-01

    In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.

  3. On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods

    KAUST Repository

    Beck, Joakim

    2012-09-01

    In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.

  4. Shape theory categorical methods of approximation

    CERN Document Server

    Cordier, J M

    2008-01-01

    This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and

  5. Children's, Adolescents', and Adults' Judgments and Reasoning about Different Methods of Teaching Values

    Science.gov (United States)

    Helwig, Charles C.; Ryerson, Rachel; Prencipe, Angela

    2008-01-01

    This study investigated children's, adolescents', and young adults' judgments and reasoning about teaching two values (racial equality and patriotism) using methods that varied in provision for children's rational autonomy, active involvement, and choice. Ninety-six participants (7-8-, 10-11-, and 13-14-year-olds, and college students) evaluated…

  6. Approximate Solutions of Nonlinear Partial Differential Equations by Modified q-Homotopy Analysis Method

    Directory of Open Access Journals (Sweden)

    Shaheed N. Huseen

    2013-01-01

    A modified q-homotopy analysis method (mq-HAM) was proposed for solving nth-order nonlinear differential equations. This method improves the convergence of the series solution of the nHAM, which was proposed earlier (see Hassan and El-Tawil 2011, 2012). The proposed method provides an approximate solution by rewriting the nth-order nonlinear differential equation in the form of n first-order differential equations. The solution of these n differential equations is obtained as a power series solution. This scheme is tested on two nonlinear exactly solvable differential equations. The results demonstrate the reliability and efficiency of the algorithm developed.

  7. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  8. Approximate rational Jacobi elliptic function solutions of the fractional differential equations via the enhanced Adomian decomposition method

    International Nuclear Information System (INIS)

    Song Lina; Wang Weiguo

    2010-01-01

    In this Letter, an enhanced Adomian decomposition method which introduces the h-curve of the homotopy analysis method into the standard Adomian decomposition method is proposed. Some examples prove that this method can derive successfully approximate rational Jacobi elliptic function solutions of the fractional differential equations.

  9. Exact and approximate interior corner problem in neutron diffusion by integral transform methods

    International Nuclear Information System (INIS)

    Bareiss, E.H.; Chang, K.S.J.; Constatinescu, D.A.

    1976-09-01

    The mathematical solution of the neutron diffusion equation exhibits singularities in its derivatives at material corners. A mathematical treatment of the nature of these singularities and its impact on coarse network approximation methods in computational work is presented. The mathematical behavior is deduced from Green's functions, based on a generalized theory for two space dimensions, and the resulting systems of integral equations, as well as from the Kontorovich--Lebedev Transform. The effect on numerical calculations is demonstrated for finite difference and finite element methods for a two-region corner problem

  10. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can do; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  11. Efficient Method to Approximately Solve Retrial Systems with Impatience

    Directory of Open Access Journals (Sweden)

    Jose Manuel Gimenez-Guzman

    2012-01-01

    We present a novel technique to solve multiserver retrial systems with impatience. Unfortunately these systems do not have an exact analytic solution, so it is mandatory to resort to approximate techniques. This novel technique does not rely on the numerical solution of the steady-state Kolmogorov equations of the continuous-time Markov chain, as is common for this kind of system, but instead considers the system in its Markov decision process setting. This technique, known as value extrapolation, truncates the infinite state space and uses a polynomial extrapolation method to approach the states outside the truncated state space. A numerical evaluation is carried out to evaluate this technique and to compare its performance with previous techniques. The obtained results show that value extrapolation greatly outperforms the previous approaches that appeared in the literature, not only in terms of accuracy but also in terms of computational cost.

  12. Monitoring progression of clinical reasoning skills during health sciences education using the case method - a qualitative observational study.

    Science.gov (United States)

    Orban, Kristina; Ekelin, Maria; Edgren, Gudrun; Sandgren, Olof; Hovbrandt, Pia; Persson, Eva K

    2017-09-11

    Outcome- or competency-based education is well established in medical and health sciences education. Curricula are based on courses where students develop their competences and assessment is also usually course-based. Clinical reasoning is an important competence, and the aim of this study was to monitor and describe students' progression in professional clinical reasoning skills during health sciences education using observations of group discussions following the case method. In this qualitative study students from three different health education programmes were observed while discussing clinical cases in a modified Harvard case method session. A rubric with four dimensions - problem-solving process, disciplinary knowledge, character of discussion and communication - was used as an observational tool to identify clinical reasoning. A deductive content analysis was performed. The results revealed the students' transition over time from reasoning based strictly on theoretical knowledge to reasoning ability characterized by clinical considerations and experiences. Students who were approaching the end of their education immediately identified the most important problem and then focused on this in their discussion. Practice knowledge increased over time, which was seen as progression in the use of professional language, concepts, terms and the use of prior clinical experience. The character of the discussion evolved from theoretical considerations early in the education to clinical reasoning in later years. Communication within the groups was supportive and conducted with a professional tone. Our observations revealed progression in several aspects of students' clinical reasoning skills on a group level in their discussions of clinical cases. We suggest that the case method can be a useful tool in assessing quality in health sciences education.

  13. A method for the approximate solutions of the unsteady boundary layer equations

    International Nuclear Information System (INIS)

    Abdus Sattar, Md.

    1990-12-01

    The approximate integral method proposed by Bianchini et al. to solve the unsteady boundary layer equations is considered here with a simple modification to the scale function for the similarity variable, made by introducing a time-dependent length scale. The closed-form solutions thus obtained give satisfactory results for the velocity profile and the skin friction in a limiting case, in comparison with the results of past investigators. (author). 7 refs, 2 figs

  14. Born approximation to a perturbative numerical method for the solution of the Schroedinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-01-01

    A step function perturbative numerical method (SF-PN method) is developed for the solution of the Cauchy problem for the second-order linear differential equation in normal form. An important point stressed in the present paper, which seems to have been previously ignored in the literature devoted to the PN methods, is the close connection between the first-order perturbation theory of the PN approach and the well-known Born approximation and, in general, the connection between the various orders of the PN corrections and the Neumann series. (author)
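
    The connection drawn above between first-order perturbation theory and the Born approximation, and between higher-order corrections and the Neumann series, can be illustrated in a much simpler setting. The sketch below uses an invented discretized integral equation (not taken from the paper) and shows how successive Neumann terms refine a first-order, Born-like solution.

```python
import numpy as np

# Illustrative discretized integral equation x = b + K x with a small,
# smooth kernel (spectral radius < 1 so the Neumann series converges).
n = 200
s = np.linspace(0.0, 1.0, n)
h = 1.0 / n
K = 0.5 * np.exp(-np.abs(s[:, None] - s[None, :])) * h   # contraction
b = np.sin(np.pi * s)

x_exact = np.linalg.solve(np.eye(n) - K, b)

# Truncated Neumann series: x_N = b + K b + K^2 b + ... + K^N b.
# The first-order truncation (b + K b) plays the role of the Born
# approximation; higher orders refine it systematically.
term, x_approx = b.copy(), b.copy()
for order in range(1, 6):
    term = K @ term
    x_approx = x_approx + term
    err = np.max(np.abs(x_approx - x_exact))
    print(f"order {order}: max error = {err:.2e}")
```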

  15. Total-energy Assisted Tight-binding Method Based on Local Density Approximation of Density Functional Theory

    Science.gov (United States)

    Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki

    2018-06-01

    A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, with reference to the band structure and the total energy of the local density approximation of density functional theory. The parameters are adjusted computationally so that the results reproduce the band structure and the total energy, and an algorithm for determining the parameters is established. The set of determined parameters is applicable to a variety of crystalline compounds and to changes of lattice constants; in other words, it is transferable. Examples are demonstrated for Si crystals in several crystalline structures with varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.

  16. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
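
    As a minimal illustration of the greedy-type algorithms studied in this book, the sketch below runs one standard representative, orthogonal matching pursuit, on a synthetic random dictionary; the dictionary, sparsity level, and data are invented for the example and are not drawn from the book.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary columns."""
    residual, support = y.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit on the selected support and update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef, support

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 400))
D /= np.linalg.norm(D, axis=0)           # normalize dictionary columns
x_true = np.zeros(400)
x_true[rng.choice(400, 5, replace=False)] = rng.standard_normal(5)
y = D @ x_true

x_hat, support = omp(D, y, k=5)
print("recovered support:", sorted(support))
print("max coefficient error:", np.max(np.abs(x_hat - x_true)))
```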

  17. Approximate method for solving the velocity dependent transport equation in a slab lattice

    International Nuclear Information System (INIS)

    Ferrari, A.

    1966-01-01

    A method is described that is intended to provide an approximate solution of the transport equation in a medium simulating a water-moderated, plate-filled reactor core. This medium is constituted by a periodic array of water channels and absorbing plates, and the velocity-dependent transport equation in slab geometry is considered. The computation is performed in a water channel; the absorbing plates are accounted for by the boundary conditions. The scattering of neutrons in water is assumed isotropic, which allows the use of a double Pn approximation to deal with the angular dependence. This method is able to represent the discontinuity of the angular distribution at the channel boundary. The set of equations thus obtained depends only on x and v, with coefficients independent of x, which suggests trying solutions involving Legendre polynomials. This scheme leads to a set of equations depending only on v. To obtain an explicit solution, a thermalization model must then be chosen; using the secondary model of Cadilhac, a solution of this set is easy to obtain. The numerical computations were performed with a particular secondary model, the well-known model of Wigner and Wilkins. (author) [fr]

  18. On Nash-Equilibria of Approximation-Stable Games

    Science.gov (United States)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We show furthermore there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.
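
    The notion of an ɛ-approximate equilibrium used above can be made concrete with a small script that measures, for a candidate mixed-strategy profile of a bimatrix game such as rock-paper-scissors, how much either player could gain by deviating. The perturbed profile below is an arbitrary illustration, not taken from the paper.

```python
import numpy as np

def epsilon_of(profile, A, B):
    """Return the epsilon for which (p, q) is an epsilon-approximate
    Nash equilibrium of the bimatrix game with payoff matrices (A, B)."""
    p, q = profile
    gain_row = np.max(A @ q) - p @ A @ q       # best pure deviation, row player
    gain_col = np.max(B.T @ p) - p @ B @ q     # best pure deviation, column player
    return max(gain_row, gain_col, 0.0)

# Rock-paper-scissors (zero-sum); the uniform profile is the exact equilibrium.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
B = -A
uniform = np.full(3, 1.0 / 3.0)
perturbed = np.array([0.5, 0.3, 0.2])

print("epsilon(uniform, uniform)     =", epsilon_of((uniform, uniform), A, B))
print("epsilon(perturbed, uniform)   =", epsilon_of((perturbed, uniform), A, B))
print("epsilon(perturbed, perturbed) =", epsilon_of((perturbed, perturbed), A, B))
```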

  19. Reasons for using traditional methods and role of nurses in family planning.

    Science.gov (United States)

    Yurdakul, Mine; Vural, Gülsen

    2002-05-01

    The withdrawal method and other traditional methods of contraception are still used in Turkey. Ninety-eight percent of women in Turkey know about modern family planning methods and where to find contraceptives; in fact, only one in every three women uses an effective method. The aim of this descriptive and experimental study was to investigate the reasons for using traditional methods and the role of nurses in family planning. The women included in the sample were visited in their homes by nurses and given family planning education in four sessions. Overall, 53.3% of women were using an effective method. However, 54.3% of women living in the Sirintepe district and 41.6% of women living in the Yenikent district were still using the traditional methods they had used before. After the education sessions, the most widely used method was found to be the intrauterine device (22.8%) in Sirintepe and the condom (25%) in Yenikent. There was a significant difference in family planning methods between these two districts (p < 0.001).

  20. Generalized finite polynomial approximation (WINIMAX) to the reduced partition function of isotopic molecules

    International Nuclear Information System (INIS)

    Lee, M.W.; Bigeleisen, J.

    1978-01-01

    The MINIMAX finite polynomial approximation to an arbitrary function has been generalized to include a weighting function (WINIMAX). It is suggested that an exponential is a reasonable weighting function for the logarithm of the reduced partition function of a harmonic oscillator. Comparison of the error functions for finite orthogonal polynomial (FOP), MINIMAX, and WINIMAX expansions of the logarithm of the reduced vibrational partition function shows WINIMAX to be the best of the three approximations. A condensed table of WINIMAX coefficients is presented. The FOP, MINIMAX, and WINIMAX approximations are compared with exact calculations of the logarithm of the reduced partition function ratios for isotopic substitution in H2O, CH4, CH2O, C2H4, and C2H6 at 300 K. Both deuterium and heavy-atom isotope substitution are studied. Except for a third-order expansion involving deuterium substitution, the WINIMAX method is superior to FOP and MINIMAX. At the level of a second-order expansion, WINIMAX approximations to ln(s/s')f are good to 2.5% and 6.5% for deuterium and heavy-atom substitution, respectively.
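
    A hedged sketch of the idea behind WINIMAX (weighting the error before minimizing its maximum), posed here as a discrete weighted minimax polynomial fit solved by linear programming; the target function and the exponential weight are stand-ins, not the reduced partition function treated in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Weighted minimax (WINIMAX-style) polynomial fit on a grid via linear
# programming.  The target f and exponential weight w are illustrative.
x = np.linspace(0.0, 4.0, 200)
f = np.log1p(x**2) / (1.0 + x)        # arbitrary smooth target
w = np.exp(-0.5 * x)                  # exponential weighting function
deg = 3

V = np.vander(x, deg + 1, increasing=True)      # design matrix
ncoef = deg + 1

# Minimize t subject to  w_i * |f_i - p(x_i)| <= t  at every grid point.
c_obj = np.zeros(ncoef + 1); c_obj[-1] = 1.0
A_ub = np.vstack([
    np.hstack([ w[:, None] * V, -np.ones((x.size, 1))]),
    np.hstack([-w[:, None] * V, -np.ones((x.size, 1))]),
])
b_ub = np.concatenate([w * f, -w * f])
bounds = [(None, None)] * ncoef + [(0, None)]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
coef, t = res.x[:ncoef], res.x[-1]
print("weighted minimax error of LP fit:", t)

# Compare with an ordinary (unweighted) least-squares fit of the same degree.
ls_coef = np.polynomial.polynomial.polyfit(x, f, deg)
ls_err = np.max(w * np.abs(f - np.polynomial.polynomial.polyval(x, ls_coef)))
print("weighted error of least-squares fit:", ls_err)
```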

  1. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The purpose of this work is to determine an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work, we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method, which involved a considerable amount of numerical calculation. In the analytical method, some approximations are made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; these approximations, taking into account the case of the narrow resonances, are then substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared with those generated by the reference method and demonstrate good, precise results for the adjoint neutron flux in the narrow resonances. (author)

  2. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
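
    A minimal sketch of the second approach described above: once the arrival distribution has been approximated by a mixture of two exponentials, the standard GI/M/1 fixed-point relation yields the queuing characteristics. The hyperexponential parameters below are assumed for illustration rather than matched to data.

```python
# GI/M/1 queue whose interarrival time is a two-phase hyperexponential (H2)
# distribution, the kind of fit produced by three-moment matching.
p, lam1, lam2 = 0.4, 2.0, 0.8       # assumed H2 mixing prob. and phase rates
mu = 1.5                            # exponential service rate

def lst_interarrival(s):
    """Laplace-Stieltjes transform of the H2 interarrival time."""
    return p * lam1 / (lam1 + s) + (1 - p) * lam2 / (lam2 + s)

mean_interarrival = p / lam1 + (1 - p) / lam2
rho = 1.0 / (mean_interarrival * mu)
assert rho < 1.0, "queue must be stable"

# Classical GI/M/1 relation: sigma is the root in (0,1) of
#   sigma = A*(mu * (1 - sigma)),
# obtained here by fixed-point iteration.
sigma = 0.5
for _ in range(200):
    sigma = lst_interarrival(mu * (1.0 - sigma))

wq = sigma / (mu * (1.0 - sigma))          # mean waiting time in queue
lq_arrival = sigma / (1.0 - sigma)         # mean number in system at arrival
print(f"utilization rho         = {rho:.4f}")
print(f"sigma                   = {sigma:.4f}")
print(f"mean waiting time Wq    = {wq:.4f}")
print(f"mean number at arrivals = {lq_arrival:.4f}")
```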

  3. Bayesian reasoning in high-energy physics. Principles and applications

    International Nuclear Information System (INIS)

    D'Agostini, G.

    1999-01-01

    Bayesian statistics is based on the intuitive idea that probability quantifies the degree of belief in the occurrence of an event. The choice of name is due to the key role played by Bayes' theorem, as a logical tool to update probability in the light of new pieces of information. This approach is very close to the intuitive reasoning of experienced physicists, and it allows all kinds of uncertainties to be handled in a consistent way. Many cases of evaluation of measurement uncertainty are considered in detail in this report, including uncertainty arising from systematic errors, upper/lower limits and unfolding. Approximate methods, very useful in routine applications, are provided and several standard methods are recovered for cases in which the (often hidden) assumptions on which they are based hold. (orig.)

  4. Bayesian reasoning in high-energy physics. Principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    D'Agostini, G [Rome Univ. (Italy). Dipt. di Fisica; European Organization for Nuclear Research, Geneva (Switzerland)]

    1999-07-19

    Bayesian statistics is based on the intuitive idea that probability quantifies the degree of belief in the occurrence of an event. The choice of name is due to the key role played by Bayes' theorem, as a logical tool to update probability in the light of new pieces of information. This approach is very close to the intuitive reasoning of experienced physicists, and it allows all kinds of uncertainties to be handled in a consistent way. Many cases of evaluation of measurement uncertainty are considered in detail in this report, including uncertainty arising from systematic errors, upper/lower limits and unfolding. Approximate methods, very useful in routine applications, are provided and several standard methods are recovered for cases in which the (often hidden) assumptions on which they are based hold. (orig.)
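
    The update role of Bayes' theorem described in this report can be shown with a minimal numerical example; the prior beliefs and likelihoods below are invented purely for illustration.

```python
# Minimal Bayes-theorem update: posterior degrees of belief for two
# hypotheses after observing data, with invented numbers for illustration.
prior = {"H1": 0.5, "H2": 0.5}                 # beliefs before the data
likelihood = {"H1": 0.80, "H2": 0.20}          # P(observed data | hypothesis)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)   # {'H1': 0.8, 'H2': 0.2}

# A second, independent observation updates the belief again: the posterior
# of the first step becomes the prior of the next.
unnormalized = {h: posterior[h] * likelihood[h] for h in posterior}
norm = sum(unnormalized.values())
posterior2 = {h: v / norm for h, v in unnormalized.items()}
print(posterior2)  # H1 ~ 0.941, H2 ~ 0.059
```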

  5. Quantitative Algebraic Reasoning

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Panangaden, Prakash; Plotkin, Gordon

    2016-01-01

    We develop a quantitative analogue of equational reasoning which we call quantitative algebra. We define an equality relation indexed by rationals: a =ε b, which we think of as saying that “a is approximately equal to b up to an error of ε”. We have 4 interesting examples where we have a quantitative...... equational theory whose free algebras correspond to well-known structures. In each case we have finitary and continuous versions. The four cases are: Hausdorff metrics from quantitative semilattices; p-Wasserstein metrics (hence also the Kantorovich metric) from barycentric algebras and also from pointed...

  6. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  7. Evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity.

    Science.gov (United States)

    Du, Yuanwei; Guo, Yubin

    2015-01-01

    The intrinsic mechanism of multimorbidity is difficult to recognize, and prediction and diagnosis are accordingly difficult to carry out. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability table (CPT) because of the lack of clinical statistical data. Today, expert knowledge and experience are increasingly used in training Bayesian networks in order to help predict or diagnose diseases, but the CPTs in Bayesian networks are usually irrational or ineffective because they ignore realistic constraints, especially in multimorbidity. In order to solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and a recursive ER algorithm, based on which an evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity is presented step by step. A numerical multimorbidity example is used to demonstrate the method and prove its feasibility and applicability. The Bayesian network can be determined as long as the inference assessment is provided by each expert according to his or her knowledge or experience. Our method is more effective than existing methods at extracting expert inference data accurately and fusing it effectively for constructing CPTs in a Bayesian network of multimorbidity.

  8. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations

    Science.gov (United States)

    Alam Khan, Najeeb; Razzaq, Oyoon Abdul

    2016-03-01

    In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second-order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.

  9. Mobile Monitoring and Reasoning Methods to Prevent Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Diego López-de-Ipiña

    2013-05-01

    Full Text Available With the recent technological advances, it is possible to monitor vital signs using Bluetooth-enabled biometric mobile devices such as smartphones, tablets or electric wristbands. In this manuscript, we present a system to estimate the risk of cardiovascular diseases in Ambient Assisted Living environments. Cardiovascular disease risk is obtained from the monitoring of the blood pressure by means of mobile devices in combination with other clinical factors, and applying reasoning techniques based on the Systematic Coronary Risk Evaluation Project charts. We have developed an end-to-end software application for patients and physicians and a rule-based reasoning engine. We have also proposed a conceptual module to integrate recommendations to patients in their daily activities based on information proactively inferred through reasoning techniques and context-awareness. To evaluate the platform, we carried out usability experiments and performance benchmarks.
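
    A minimal sketch of a rule-based risk classification step in the spirit of the engine described above; the thresholds and risk categories are invented placeholders and are not the SCORE project chart values.

```python
# Minimal rule-based reasoning step.  Thresholds and categories are invented
# placeholders, NOT the Systematic Coronary Risk Evaluation chart values.
RULES = [
    (lambda p: p["systolic_bp"] >= 180, "high"),
    (lambda p: p["systolic_bp"] >= 160 and p["smoker"], "high"),
    (lambda p: p["systolic_bp"] >= 140 and p["age"] >= 60, "moderate"),
    (lambda p: p["systolic_bp"] >= 140, "moderate"),
]

def assess(patient):
    """Return the first matching risk category, defaulting to 'low'."""
    for condition, category in RULES:
        if condition(patient):
            return category
    return "low"

patients = [
    {"age": 45, "smoker": False, "systolic_bp": 128},
    {"age": 63, "smoker": True,  "systolic_bp": 142},
    {"age": 58, "smoker": True,  "systolic_bp": 171},
]
for p in patients:
    print(p, "->", assess(p))
```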

  10. A fast approximation method for reliability analysis of cold-standby systems

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Amari, Suprasad V.

    2012-01-01

    Analyzing reliability of large cold-standby systems has been a complicated and time-consuming task, especially for systems with components having non-exponential time-to-failure distributions. In this paper, an approximation model, which is based on the central limit theorem, is presented for the reliability analysis of binary cold-standby systems. The proposed model can estimate the reliability of large cold-standby systems with binary-state components having arbitrary time-to-failure distributions in an efficient and easy way. The accuracy and efficiency of the proposed method are illustrated using several different types of distributions for both 1-out-of-n and k-out-of-n cold-standby systems.
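
    A minimal sketch of the central-limit-theorem idea for a 1-out-of-n cold-standby system, whose lifetime is the sum of the component lifetimes; the Weibull component distribution, mission time, and Monte Carlo check are illustrative assumptions, not the paper's case studies.

```python
import numpy as np
from scipy import stats

# 1-out-of-n cold-standby system: the system fails only when all n spares
# have been consumed, so its lifetime is the SUM of component lifetimes.
# The CLT-based approximation replaces that sum by a normal variable.
n = 8
comp = stats.weibull_min(c=1.5, scale=1000.0)     # component time to failure
t_mission = 6000.0

m, v = comp.stats(moments="mv")
r_clt = 1.0 - stats.norm.cdf((t_mission - n * m) / np.sqrt(n * v))

# Monte Carlo check of the same quantity.
rng = np.random.default_rng(1)
samples = comp.rvs(size=(200_000, n), random_state=rng).sum(axis=1)
r_mc = np.mean(samples > t_mission)

print(f"CLT approximation : R({t_mission:.0f}) = {float(r_clt):.4f}")
print(f"Monte Carlo       : R({t_mission:.0f}) = {r_mc:.4f}")
```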

  11. Clinical Reasoning: Survey of Teaching Methods, Integration, and Assessment in Entry-Level Physical Therapist Academic Education.

    Science.gov (United States)

    Christensen, Nicole; Black, Lisa; Furze, Jennifer; Huhn, Karen; Vendrely, Ann; Wainwright, Susan

    2017-02-01

    Although clinical reasoning abilities are important learning outcomes of physical therapist entry-level education, best practice standards have not been established to guide clinical reasoning curricular design and learning assessment. This research explored how clinical reasoning is currently defined, taught, and assessed in physical therapist entry-level education programs. A descriptive, cross-sectional survey was administered to physical therapist program representatives. An electronic 24-question survey was distributed to the directors of 207 programs accredited by the Commission on Accreditation in Physical Therapy Education. Descriptive statistical analysis and qualitative content analysis were performed. Post hoc demographic and wave analyses revealed no evidence of nonresponse bias. A response rate of 46.4% (n=96) was achieved. All respondents reported that their programs incorporated clinical reasoning into their curricula. Only 25% of respondents reported a common definition of clinical reasoning in their programs. Most respondents (90.6%) reported that clinical reasoning was explicit in their curricula, and 94.8% indicated that multiple methods of curricular integration were used. Instructor-designed materials were most commonly used to teach clinical reasoning (83.3%). Assessment of clinical reasoning included practical examinations (99%), clinical coursework (94.8%), written examinations (87.5%), and written assignments (83.3%). Curricular integration of clinical reasoning-related self-reflection skills was reported by 91%. A large number of incomplete surveys affected the response rate, and the program directors to whom the survey was sent may not have consulted the faculty members who were most knowledgeable about clinical reasoning in their curricula. The survey construction limited some responses and application of the results. Although clinical reasoning was explicitly integrated into program curricula, it was not consistently defined, taught, or

  12. The generalized successive approximation and Padé Approximants method for solving an elasticity problem based on the elastic ground with variable coefficients

    Directory of Open Access Journals (Sweden)

    Mustafa Bayram

    2017-01-01

    Full Text Available In this study, we have applied a generalized successive numerical technique to solve an elasticity problem based on the elastic ground with variable coefficient. In the first stage, we calculate the generalized successive approximation of the given BVP, and in the second stage we transform it into a Padé series. At the end of the study, a test problem is given to clarify the method.
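
    The Padé-series step can be illustrated on its own with a known Taylor expansion; the sketch below converts a truncated series for exp(x), chosen only because its coefficients are known, into a [3/3] Padé approximant using SciPy and compares both against the exact function. It does not reproduce the elasticity problem itself.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Turn a truncated Taylor series into a [3/3] rational (Pade) approximant.
taylor = [1.0 / factorial(k) for k in range(7)]   # exp(x) up to x^6
p, q = pade(taylor, 3)                            # numerator/denominator poly1d

x = np.linspace(0.0, 3.0, 7)
exact = np.exp(x)
taylor_vals = np.polyval(taylor[::-1], x)         # polyval wants highest-first
pade_vals = p(x) / q(x)

for xi, e, t, r in zip(x, exact, taylor_vals, pade_vals):
    print(f"x={xi:4.1f}  exp={e:9.4f}  taylor err={abs(t - e):8.4f}  "
          f"pade err={abs(r - e):8.4f}")
```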

  13. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  14. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  15. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  16. A summary of methods for approximating salt creep and disposal room closure in numerical models of multiphase flow

    Energy Technology Data Exchange (ETDEWEB)

    Freeze, G.A.; Larson, K.W. [INTERA, Inc., Albuquerque, NM (United States); Davies, P.B. [Sandia National Labs., Albuquerque, NM (United States)

    1995-10-01

    Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior because it has a theoretical basis for modeling salt deformation as a viscous process; it is, however, a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards the SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation of the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.

  17. A summary of methods for approximating salt creep and disposal room closure in numerical models of multiphase flow

    International Nuclear Information System (INIS)

    Freeze, G.A.; Larson, K.W.; Davies, P.B.

    1995-10-01

    Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior because it has a theoretical basis for modeling salt deformation as a viscous process; it is, however, a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards the SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation of the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.

  18. Reasons and Methods to Learn the Management

    Science.gov (United States)

    Li, Hongxin; Ding, Mengchun

    2010-01-01

    Reasons for learning management include: (1) it perfects the knowledge structure, (2) management is the basis of all organizations, (3) a person may be either the manager or the managed, (4) management is by no means simple knowledge, and (5) the learning of the theoretical knowledge of management cannot be replaced by the…

  19. Short overview of PSA quantification methods, pitfalls on the road from approximate to exact results

    International Nuclear Information System (INIS)

    Banov, Reni; Simic, Zdenko; Sterc, Davor

    2014-01-01

    Over time, Probabilistic Safety Assessment (PSA) models have become an invaluable companion in the identification and understanding of key nuclear power plant (NPP) vulnerabilities. PSA is an effective tool for this purpose, as it assists plant management in targeting resources where the largest benefit for plant safety can be obtained. PSA has quickly become an established technique to numerically quantify risk measures in nuclear power plants. As the complexity of PSA models increases, the computational approaches become more or less feasible. The various computational approaches can be classified in two major groups: approximate and exact (BDD-based) methods. Recently, modern commercially available PSA tools have started to provide both methods for PSA model quantification. Even though both methods are available in proven PSA tools, they must still be used carefully, since there are many pitfalls that can lead to wrong conclusions and prevent efficient use of the PSA tool. For example, typical pitfalls involve using higher-precision approximation methods and obtaining a less precise result, or mixing minimal cut sets and prime implicants in the exact computation method. The exact methods are sensitive to the selected computational paths, in which case a simple human-assisted rearrangement may help and may even turn a computationally infeasible calculation into a feasible one. Further improvements to the exact methods are possible and desirable, which opens space for new research. In this paper we show how these pitfalls may be detected and how carefully one must proceed, especially when working with large PSA models. (authors)
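
    A toy illustration of the gap between approximate and exact quantification: for a small fault tree with assumed basic-event probabilities, the exact top-event probability (obtained here by brute-force state enumeration, standing in for a BDD-based exact result) is compared with the rare-event approximation and the min-cut upper bound.

```python
from itertools import product
from math import prod

# Toy fault tree defined by minimal cut sets over five basic events.
# Basic-event probabilities are assumed values for illustration only.
p = {"A": 0.01, "B": 0.02, "C": 0.005, "D": 0.03, "E": 0.01}
cut_sets = [{"A", "B"}, {"B", "C"}, {"D"}, {"A", "C", "E"}]

def top_event(state):
    """Top event occurs if every event of some minimal cut set has failed."""
    return any(all(state[e] for e in cs) for cs in cut_sets)

# Exact top-event probability by enumerating all basic-event states.
events = sorted(p)
exact = sum(
    prod(p[e] if failed else 1.0 - p[e] for e, failed in zip(events, combo))
    for combo in product([False, True], repeat=len(events))
    if top_event(dict(zip(events, combo)))
)

# Two common approximations built from the same minimal cut sets.
cut_probs = [prod(p[e] for e in cs) for cs in cut_sets]
rare_event = sum(cut_probs)                             # first-order sum
mcub = 1.0 - prod(1.0 - q for q in cut_probs)           # min-cut upper bound

print(f"exact (enumeration)      : {exact:.6e}")
print(f"rare-event approximation : {rare_event:.6e}")
print(f"min-cut upper bound      : {mcub:.6e}")
```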

  20. Approximate Analytical Solutions for Mathematical Model of Tumour Invasion and Metastasis Using Modified Adomian Decomposition and Homotopy Perturbation Methods

    Directory of Open Access Journals (Sweden)

    Norhasimah Mahiddin

    2014-01-01

    Full Text Available The modified decomposition method (MDM) and homotopy perturbation method (HPM) are applied to obtain the approximate solution of the nonlinear model of tumour invasion and metastasis. The study highlights the significant features of the employed methods and their ability to handle nonlinear partial differential equations. The methods do not need linearization and weak nonlinearity assumptions. Although the main difference between MDM and the Adomian decomposition method (ADM) is a slight variation in the definition of the initial condition, the modification eliminates massive computation work. The approximate analytical solution obtained by MDM logically contains the solution obtained by HPM. It shows that HPM does not involve the Adomian polynomials when dealing with nonlinear problems.

  1. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    Science.gov (United States)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
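
    The recompression idea above rests on the far-field operator being numerically low-rank. A minimal sketch, with an invented well-separated geometry and kernel, shows how quickly the truncated-SVD error decays with rank; the ACA itself is not implemented here.

```python
import numpy as np

# A smooth "far-field"-like operator: interactions between two well-separated
# point clusters through an oscillatory kernel.  Geometry and wavenumber are
# invented for illustration.
rng = np.random.default_rng(0)
k = 2.0 * np.pi                                    # wavenumber
src = rng.uniform(0.0, 1.0, size=(300, 3))         # source cluster near origin
obs = rng.uniform(0.0, 1.0, size=(300, 3)) + 10.0  # well-separated observers

r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)
A = np.exp(1j * k * r) / r                         # Helmholtz-type kernel

# Truncated SVD: keep the leading singular triplets and watch the relative
# error fall off rapidly, which is what makes reduced-rank recompression pay.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
normA = np.linalg.norm(A)
for rank in (2, 4, 8, 12, 16):
    A_r = (U[:, :rank] * s[:rank]) @ Vh[:rank, :]
    rel_err = np.linalg.norm(A - A_r) / normA
    print(f"rank {rank:2d}: relative Frobenius error = {rel_err:.2e}")
```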

  2. The spectral element method for static neutron transport in AN approximation. Part I

    International Nuclear Information System (INIS)

    Barbarino, A.; Dulla, S.; Mund, E.H.; Ravetto, P.

    2013-01-01

    Highlights: ► Spectral element methods (SEMs) are extended to the neutronics of nuclear reactor cores. ► The second-order, AN formulation of neutron transport is adopted. ► Results for classical benchmark cases in 2D are presented and compared to finite elements. ► The advantages of SEM in terms of precision and convergence rate are illustrated. ► SEM constitutes a promising approach for the solution of neutron transport problems. - Abstract: Spectral element methods provide very accurate solutions of elliptic problems. In this paper we apply the method to the AN (i.e., SP2N−1) approximation of neutron transport. Numerical results for classical benchmark cases highlight its performance in comparison with finite element computations, in terms of accuracy per degree of freedom and convergence rate. All calculations presented in this paper refer to two-dimensional problems. The method can easily be extended to three-dimensional cases. The results illustrate promising features of the method for more complex transport problems.

  3. Combined Forecasting Method of Landslide Deformation Based on MEEMD, Approximate Entropy, and WLS-SVM

    Directory of Open Access Journals (Sweden)

    Shaofeng Xie

    2017-01-01

    Full Text Available Given the chaotic characteristics of the time series of landslides, a new method based on modified ensemble empirical mode decomposition (MEEMD), approximate entropy and the weighted least square support vector machine (WLS-SVM) was proposed. The method mainly started from the chaotic sequence of time-frequency analysis and improved the model performance as follows: first a deformation time series was decomposed into a series of subsequences with significantly different complexity using MEEMD. Then the approximate entropy method was used to generate a new subsequence for the combination of subsequences with similar complexity, which could effectively concentrate the component feature information and reduce the computational scale. Finally the WLS-SVM prediction model was established for each new subsequence. At the same time, phase space reconstruction theory and the grid search method were used to select the input dimension and the optimal parameters of the model, and then the superposition of each predicted value was the final forecasting result. Taking the landslide deformation data of Danba as an example, the experiments were carried out and compared with wavelet neural network, support vector machine, least square support vector machine and various combination schemes. The experimental results show that the algorithm has high prediction accuracy. It can ensure a better prediction effect even in landslide deformation periods of rapid fluctuation, and it can also better control the residual value and effectively reduce the error interval.
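
    A compact implementation of the approximate entropy statistic used above to group subsequences by complexity, applied to two synthetic signals; the MEEMD decomposition and WLS-SVM prediction stages are not reproduced here.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus' definition)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if r is None:
        r = 0.2 * np.std(x)              # commonly used default tolerance

    def phi(mm):
        # All overlapping templates of length mm, compared in Chebyshev norm.
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]),
                      axis=-1)
        counts = np.mean(dist <= r, axis=1)          # includes the self-match
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
t = np.arange(1000)
regular = np.sin(2 * np.pi * t / 50)                     # predictable signal
noisy = regular + 0.5 * rng.standard_normal(t.size)      # more complex signal

print("ApEn, regular:", round(approximate_entropy(regular), 3))
print("ApEn, noisy  :", round(approximate_entropy(noisy), 3))
```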

  4. Beam shape coefficients calculation for an elliptical Gaussian beam with 1-dimensional quadrature and localized approximation methods

    Science.gov (United States)

    Wang, Wei; Shen, Jianqi

    2018-06-01

    The use of a shaped beam for applications relying on light scattering depends much on the ability to evaluate the beam shape coefficients (BSC) effectively. Numerical techniques for evaluating the BSCs of a shaped beam, such as the quadrature, the localized approximation (LA), the integral localized approximation (ILA) methods, have been developed within the framework of generalized Lorenz-Mie theory (GLMT). The quadrature methods usually employ the 2-/3-dimensional integrations. In this work, the expressions of the BSCs for an elliptical Gaussian beam (EGB) are simplified into the 1-dimensional integral so as to speed up the numerical computation. Numerical results of BSCs are used to reconstruct the beam field and the fidelity of the reconstructed field to the given beam field is estimated. It is demonstrated that the proposed method is much faster than the 2-dimensional integrations and it can acquire more accurate results than the LA method. Limitations of the quadrature method and also the LA method in the numerical calculation are analyzed in detail.

  5. Solution of two-dimensional equations of neutron transport in 4P0-approximation of spherical harmonics method

    International Nuclear Information System (INIS)

    Polivanskij, V.P.

    1989-01-01

    The method to solve the two-dimensional equations of neutron transport using the 4P0 approximation is presented. Previously, such an approach was efficiently used for the solution of one-dimensional problems. Now an attempt is made to apply the approach to the solution of two-dimensional problems. The algorithm of the solution is given, as well as results of test neutron-physical calculations. A considerable improvement as compared with the diffusion approximation is shown. 11 refs.

  6. Application of the N-quantum approximation method to bound state problems

    International Nuclear Information System (INIS)

    Raychaudhuri, A.

    1977-01-01

    The N-quantum approximation (NQA) method is examined in the light of its application to bound state problems. Bound state wave functions are obtained as expansion coefficients in a truncated Haag expansion. From the equations of motion for the Heisenberg field and the NQA expansion, an equation satisfied by the wave function is derived. Two different bound state systems are considered. In one case, the bound state problem of two identical scalars interacting by scalar exchange is analyzed using the NQA, and an integral equation satisfied by the wave function is derived. In the nonrelativistic limit, the equation is shown to reduce to the Schroedinger equation. The equation is solved numerically, and the results are compared with those obtained for this system by other methods. The NQA method is also applied to the bound state of two spin-1/2 particles with electromagnetic interaction. The integral equation for the wave function is shown to agree with the corresponding Bethe-Salpeter equation in the nonrelativistic limit. Using the Dirac (4 x 4) matrices, the wave function is expanded in terms of structure functions, and the equation for the wave function is reduced to two disjoint sets of coupled equations for the structure functions.

  7. On the Application of Iterative Methods of Nondifferentiable Optimization to Some Problems of Approximation Theory

    Directory of Open Access Journals (Sweden)

    Stefan M. Stefanov

    2014-01-01

    Full Text Available We consider the data fitting problem, that is, the problem of approximating a function of several variables, given by tabulated data, and the corresponding problem for inconsistent (overdetermined) systems of linear algebraic equations. Such problems, connected with measurement of physical quantities, arise, for example, in physics, engineering, and so forth. A traditional approach for solving these two problems is the discrete least squares data fitting method, which is based on discrete l2-norm. In this paper, an alternative approach is proposed: with each of these problems, we associate a nondifferentiable (nonsmooth) unconstrained minimization problem with an objective function, based on discrete l1- and/or l∞-norm, respectively; that is, these two norms are used as proximity criteria. In other words, the problems under consideration are solved by minimizing the residual using these two norms. Respective subgradients are calculated, and a subgradient method is used for solving these two problems. The emphasis is on implementation of the proposed approach. Some computational results, obtained by an appropriate iterative method, are given at the end of the paper. These results are compared with the results, obtained by the iterative gradient method for the corresponding “differentiable” discrete least squares problems, that is, approximation problems based on discrete l2-norm.
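
    A minimal sketch of the proposed l1 alternative: the discrete l1 fitting problem is solved by a subgradient method and compared with the ordinary least-squares (l2) fit on synthetic data containing a few gross outliers. The data, step-size rule, and iteration count are illustrative choices, not the paper's settings.

```python
import numpy as np

# Discrete l1 data fitting  min_x ||A x - b||_1  by a normalized subgradient
# method, compared with the ordinary least-squares (l2) fit.  Synthetic data
# include a few gross outliers that the l1 criterion should resist.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)
A = np.vander(t, 3, increasing=True)             # fit a quadratic
x_true = np.array([1.0, -2.0, 3.0])
b = A @ x_true + 0.05 * rng.standard_normal(t.size)
b[::15] += 5.0                                   # inject outliers

def l1_subgradient_fit(A, b, iters=20000, step0=0.5):
    """Minimize ||A x - b||_1 with a diminishing-step subgradient method."""
    x = np.zeros(A.shape[1])
    best_x, best_val = x.copy(), np.inf
    for k in range(iters):
        residual = A @ x - b
        val = np.sum(np.abs(residual))
        if val < best_val:                        # track the best iterate seen
            best_val, best_x = val, x.copy()
        g = A.T @ np.sign(residual)               # a subgradient of the objective
        x = x - step0 / (np.sqrt(k + 1.0) * (np.linalg.norm(g) + 1e-12)) * g
    return best_x

x_l1 = l1_subgradient_fit(A, b)
x_l2, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true coefficients :", x_true)
print("l1 (subgradient)  :", np.round(x_l1, 3))
print("l2 (least squares):", np.round(x_l2, 3))
```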

  8. Solution of the point kinetics equations in the presence of Newtonian temperature feedback by Pade approximations via the analytical inversion method

    International Nuclear Information System (INIS)

    Aboanber, A E; Nahla, A A

    2002-01-01

    A method based on the Pade approximations is applied to the solution of the point kinetics equations with a time varying reactivity. The technique consists of treating explicitly the roots of the inhour formula. A significant improvement has been observed by treating explicitly the most dominant roots of the inhour equation, which usually would make the Pade approximation inaccurate. Also the analytical inversion method which permits a fast inversion of polynomials of the point kinetics matrix is applied to the Pade approximations. Results are presented for several cases of Pade approximations using various options of the method with different types of reactivity. The formalism is applicable equally well to non-linear problems, where the reactivity depends on the neutron density through temperature feedback. It was evident that the presented method is particularly good for cases in which the reactivity can be represented by a series of steps and performed quite well for more general cases

  9. Case-Based FCTF Reasoning System

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-10-01

    Full Text Available Case-based reasoning uses old information to infer answers to new problems. In case-based reasoning, a reasoner first records previous cases, then searches the case list for a previous case similar to the current one and uses it to solve the new case. Case-based reasoning means adapting old solutions to new situations. This paper proposes a reasoning system based on the case-based reasoning method. To begin, we show the theoretical structure and algorithm of the from-coarse-to-fine (FCTF) reasoning system, and then demonstrate that it is possible to successfully learn and reason about new information. Finally, we use our system to predict practical weather conditions based on previous ones, and experiments show that the prediction accuracy increases with further learning of the FCTF reasoning system.
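
    A minimal retrieve-reuse-retain sketch of case-based reasoning on invented weather-like cases; it illustrates the generic CBR cycle rather than the specific FCTF system described above.

```python
import numpy as np

# Case base with invented weather-like cases
# (features: temperature, humidity, pressure trend).
case_base = [
    {"features": [28.0, 0.80, -2.0], "outcome": "rain"},
    {"features": [31.0, 0.45,  1.0], "outcome": "clear"},
    {"features": [24.0, 0.90, -3.0], "outcome": "rain"},
    {"features": [26.0, 0.50,  0.0], "outcome": "cloudy"},
]

def retrieve(query, k=1):
    """Return the k stored cases closest to the query (Euclidean distance)."""
    dists = [np.linalg.norm(np.array(c["features"]) - np.array(query))
             for c in case_base]
    order = np.argsort(dists)[:k]
    return [case_base[i] for i in order]

def reuse(retrieved):
    """Adopt the majority outcome of the retrieved cases as the solution."""
    outcomes = [c["outcome"] for c in retrieved]
    return max(set(outcomes), key=outcomes.count)

def retain(query, outcome):
    """Learning step: store the solved case for future reasoning."""
    case_base.append({"features": list(query), "outcome": outcome})

query = [27.0, 0.85, -2.5]
solution = reuse(retrieve(query, k=3))
print("predicted outcome:", solution)
retain(query, solution)
print("case base size after learning:", len(case_base))
```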

  10. Portable Rule Extraction Method for Neural Network Decisions Reasoning

    Directory of Open Access Journals (Sweden)

    Darius PLIKYNAS

    2005-08-01

    Full Text Available Neural network (NN) methods are sometimes useless in practical applications because they are not properly tailored to the particular market's needs. We focus hereinafter specifically on financial market applications. NNs have not gained full acceptance here yet. One of the main reasons is the "Black Box" problem (the lack of explanatory power for NN decisions). There are some NN decision rule extraction methods, such as decompositional, pedagogical or eclectic ones, but they suffer from low portability of the rule extraction technique across various neural net architectures, a high level of granularity, the algorithmic sophistication of the rule extraction technique, etc. The authors propose to eliminate some known drawbacks using an innovative extension of the pedagogical approach. The idea is exposed through the use of a widespread MLP neural net (a common tool in the financial problems' domain) and a SOM (for input data space clusterization). The performance feedback of both nets is related and targeted through the iteration cycle by achieving the best matching between the decision space fragments and the input data space clusters. Three sets of rules are generated algorithmically or by fuzzy membership functions. Empirical validation on common financial benchmark problems is conducted with an appropriately prepared software solution.

  11. An extension of the fenske-hall LCAO method for approximate calculations of inner-shell binding energies of molecules

    Science.gov (United States)

    Zwanziger, Ch.; Reinhold, J.

    1980-02-01

    The approximate LCAO MO method of Fenske and Hall has been extended to an all-electron method allowing the calculation of inner-shell binding energies of molecules and their chemical shifts. Preliminary results are given.

  12. Approximating local observables on projected entangled pair states

    Science.gov (United States)

    Schwarz, M.; Buerschaper, O.; Eisert, J.

    2017-06-01

    Tensor network states are for good reasons believed to capture ground states of gapped local Hamiltonians arising in the condensed matter context, states which are in turn expected to satisfy an entanglement area law. However, the computational hardness of contracting projected entangled pair states in two- and higher-dimensional systems is often seen as a significant obstacle when devising higher-dimensional variants of the density-matrix renormalization group method. In this work, we show that for those projected entangled pair states that are expected to provide good approximations of such ground states of local Hamiltonians, one can compute local expectation values in quasipolynomial time. We therefore provide a complexity-theoretic justification of why state-of-the-art numerical tools work so well in practice. We finally turn to the computation of local expectation values on quantum computers, providing a meaningful application for a small-scale quantum computer.

  13. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    The classical set Bad of 'badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  14. Validation of the New Interpretation of Gerasimov's Nasal Projection Method for Forensic Facial Approximation Using CT Data

    DEFF Research Database (Denmark)

    Maltais Lapointe, Genevieve; Lynnerup, Niels; Hoppa, Robert D

    2016-01-01

    The most common method to predict nasal projection for forensic facial approximation is Gerasimov's two-tangent method. Ullrich H, Stephan CN (J Forensic Sci, 2011; 56: 470) argued that the method has not been properly implemented, and a revised interpretation was proposed. The aim of this study......, and the Ullrich H, Stephan CN (J Forensic Sci, 2011; 56: 470) interpretation should be used instead....

  15. Registered nurses' clinical reasoning skills and reasoning process: A think-aloud study.

    Science.gov (United States)

    Lee, JuHee; Lee, Young Joo; Bae, JuYeon; Seo, Minjeong

    2016-11-01

    As complex chronic diseases are increasing, nurses' prompt and accurate clinical reasoning skills are essential. However, little is known about the reasoning skills of registered nurses. This study aimed to determine how registered nurses use their clinical reasoning skills and to identify how the reasoning process proceeds in the complex clinical situation of hospital setting. A qualitative exploratory design was used with a think-aloud method. A total of 13 registered nurses (mean years of experience=11.4) participated in the study, solving an ill-structured clinical problem based on complex chronic patient cases in a hospital setting. Data were analyzed using deductive content analysis. Findings showed that the registered nurses used a variety of clinical reasoning skills. The most commonly used skill was 'checking accuracy and reliability.' The reasoning process of registered nurses covered assessment, analysis, diagnosis, planning/implementation, and evaluation phase. It is critical that registered nurses apply appropriate clinical reasoning skills in complex clinical practice. The main focus of registered nurses' reasoning in this study was assessing a patient's health problem, and their reasoning process was cyclic, rather than linear. There is a need for educational strategy development to enhance registered nurses' competency in determining appropriate interventions in a timely and accurate fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Case-based reasoning a concise introduction

    CERN Document Server

    López, Beatriz

    2013-01-01

    Case-based reasoning is a methodology with a long tradition in artificial intelligence that brings together reasoning and machine learning techniques to solve problems based on past experiences or cases. Given a problem to be solved, reasoning involves the use of methods to retrieve similar past cases in order to reuse their solution for the problem at hand. Once the problem has been solved, learning methods can be applied to improve the knowledge based on past experiences. In spite of being a broad methodology applied in industry and services, case-based reasoning has often been forgotten in

  17. A Method for Generating Approximate Similarity Solutions of Nonlinear Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Mazhar Iqbal

    2014-01-01

    Full Text Available Standard application of the similarity method to find solutions of PDEs mostly results in reduction to ODEs which are not easily integrable in terms of elementary or tabulated functions. Such situations usually demand solving the reduced ODEs numerically. However, there are no systematic procedures available to utilize these numerical solutions of the reduced ODE to obtain the solution of the original PDE. A practical and tractable approach is proposed to deal with such situations and is applied to obtain approximate similarity solutions to different cases of an initial-boundary value problem of unsteady gas flow through a semi-infinite porous medium.

  18. Approximation methods in loop quantum cosmology: from Gowdy cosmologies to inhomogeneous models in Friedmann–Robertson–Walker geometries

    International Nuclear Information System (INIS)

    Martín-Benito, Mercedes; Martín-de Blas, Daniel; Marugán, Guillermo A Mena

    2014-01-01

    We develop approximation methods in the hybrid quantization of the Gowdy model with linear polarization and a massless scalar field, for the case of three-torus spatial topology. The loop quantization of the homogeneous gravitational sector of the Gowdy model (according to the improved dynamics prescription) and the presence of inhomogeneities lead to a very complicated Hamiltonian constraint. Therefore, the extraction of physical results calls for the introduction of well justified approximations. We first show how to approximate the homogeneous part of the Hamiltonian constraint, corresponding to Bianchi I geometries, as if it described a Friedmann–Robertson–Walker (FRW) model corrected with anisotropies. This approximation is valid in the sector of high energies of the FRW geometry (concerning its contribution to the constraint) and for anisotropy profiles that are sufficiently smooth. In addition, for certain families of states related to regimes of physical interest, with negligible quantum effects of the anisotropies and small inhomogeneities, one can approximate the Hamiltonian constraint of the inhomogeneous system by that of an FRW geometry with a relatively simple matter content, and then obtain its solutions. (paper)

  19. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Yu-Chen, E-mail: ycshu@mail.ncku.edu.tw [Department of Mathematics, National Cheng Kung University, Tainan 701, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (South), Tainan 701, Taiwan (China); Chern, I-Liang, E-mail: chern@math.ntu.edu.tw [Department of Applied Mathematics, National Chiao Tung University, Hsin Chu 300, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (Taipei Office), Taipei 106, Taiwan (China); Chang, Chien C., E-mail: mechang@iam.ntu.edu.tw [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China)

    2014-10-15

    Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complication increases especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had tested previously in two and three dimensions, and a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.

  20. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.

  1. Application of simple approximate system analysis methods for reliability and availability improvement of reactor WWER-1000

    International Nuclear Information System (INIS)

    Manchev, B.; Marinova, B.; Nenkova, B.

    2001-01-01

    The method described in this report provides a set of simple, easily understood 'approximate' models applicable to a large class of system architectures. The approximation models are developed by constructing a Markov model of each redundant subsystem and then replacing it with a pseudo-component. Of equal importance, the models can be easily understood even by non-experts, including managers, high-level decision-makers and unsophisticated consumers. A necessary requirement for their application is that the systems be repairable and that the mean time to repair be much smaller than the mean time to failure. This is the case most often met in practice. Results of applying the 'approximate' models to a technological system of Kozloduy NPP are also presented. They compare quite favorably with the results obtained by using the SAPHIRE software
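
    As a minimal sketch of the kind of 'approximate' model described above (a hypothetical two-unit parallel, repairable subsystem with assumed failure and repair rates, not actual WWER-1000 data), the steady-state availability can be obtained from a small Markov generator matrix, and the redundant subsystem then summarized as a single pseudo-component:

```python
import numpy as np

# Hypothetical two-unit parallel redundant, repairable subsystem.
# States: 0 = both units up, 1 = one unit failed, 2 = both failed (subsystem down).
lam = 1.0e-3   # per-unit failure rate [1/h] (assumed value)
mu = 1.0e-1    # repair rate [1/h] (assumed; mu >> lam, as the method requires)

# Continuous-time Markov chain generator matrix Q (rows sum to zero).
Q = np.array([
    [-2 * lam,     2 * lam,   0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,          mu,   -mu],
])

# Steady-state probabilities pi solve pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]                       # subsystem up in states 0 and 1
pseudo_failure_rate = pi[1] * lam / availability   # effective rate of entering the down state

print(f"steady-state availability     : {availability:.6f}")
print(f"pseudo-component failure rate : {pseudo_failure_rate:.3e} 1/h")
```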

  2. Nonstandard approximation schemes for lower dimensional quantum field theories

    International Nuclear Information System (INIS)

    Fitzpatrick, D.A.

    1981-01-01

    The purpose of this thesis has been to apply two different nonstandard approximation schemes to a variety of lower-dimensional theories. In doing this, we show their applicability where standard (e.g., Feynman or Rayleigh-Schroedinger) approximation schemes are inapplicable. We have applied the well-known mean-field approximation scheme by Guralnik et al. to general lower dimensional theories - the phi^4 field theory in one dimension, and the massive and massless Thirring models in two dimensions. In each case, we derive a bound-state propagator and then expand the theory in terms of the original and bound-state propagators. The results obtained can be compared with previously known results and show, in general, reasonably good convergence. In the second half of the thesis, we develop a self-consistent quantum mechanical approximation scheme. This can be applied to any monotonic polynomial potential. It has been applied in detail to the anharmonic oscillator, and the results in several analytical domains are very good, including extensive tables of numerical results

  3. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be

    2009-06-19

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential.

  4. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    International Nuclear Information System (INIS)

    Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien

    2009-01-01

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential

  5. Wave equation dispersion inversion using a difference approximation to the dispersion-curve misfit gradient

    KAUST Repository

    Zhang, Zhendong

    2016-07-26

    We present a surface-wave inversion method that inverts for the S-wave velocity from the Rayleigh wave dispersion curve using a difference approximation to the gradient of the misfit function. We call this wave equation inversion of skeletonized surface waves because the skeletonized dispersion curve for the fundamental-mode Rayleigh wave is inverted using finite-difference solutions to the multi-dimensional elastic wave equation. The best match between the predicted and observed dispersion curves provides the optimal S-wave velocity model. Our method can invert for lateral velocity variations and also can mitigate the local minimum problem in full waveform inversion with a reasonable computation cost for simple models. Results with synthetic and field data illustrate the benefits and limitations of this method. © 2016 Elsevier B.V.
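
    As a rough illustration of the central idea (a gradient of the dispersion-curve misfit obtained by perturbing each model cell and differencing, rather than by an adjoint derivation), the sketch below uses a hypothetical `predict_dispersion` stand-in for the finite-difference elastic-wave forward solver used in the paper:

```python
import numpy as np

def predict_dispersion(s_velocity, frequencies):
    """Hypothetical forward model: returns a fundamental-mode phase-velocity
    curve for an S-wave velocity model (a placeholder, not elastic modelling)."""
    axis = np.linspace(1.0, 30.0, s_velocity.size)   # crude cell-to-frequency mapping
    return 0.92 * np.interp(frequencies, axis, s_velocity)

def misfit(s_velocity, frequencies, observed_curve):
    """Least-squares misfit between predicted and observed dispersion curves."""
    residual = predict_dispersion(s_velocity, frequencies) - observed_curve
    return 0.5 * np.sum(residual ** 2)

def misfit_gradient_fd(s_velocity, frequencies, observed_curve, h=1.0):
    """Difference approximation to the misfit gradient w.r.t. each velocity cell."""
    grad = np.zeros_like(s_velocity)
    f0 = misfit(s_velocity, frequencies, observed_curve)
    for i in range(s_velocity.size):
        perturbed = s_velocity.copy()
        perturbed[i] += h
        grad[i] = (misfit(perturbed, frequencies, observed_curve) - f0) / h
    return grad

freqs = np.linspace(5.0, 25.0, 21)
true_model = np.linspace(300.0, 800.0, 10)        # assumed "true" S velocities [m/s]
observed = predict_dispersion(true_model, freqs)  # synthetic observed curve
start_model = np.full(10, 500.0)
print(misfit_gradient_fd(start_model, freqs, observed))
```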

  6. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
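
    A minimal rejection-ABC sketch for a toy problem (inferring the mean of a normal model through a summary statistic) shows the likelihood-free idea; the prior, tolerance and summary choice are illustrative assumptions, not prescriptions from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data with an unknown mean; the sample mean is the summary statistic.
observed = rng.normal(loc=3.0, scale=1.0, size=100)
s_obs = observed.mean()

def simulate_summary(theta, rng, n=100):
    """Simulator: draw a dataset under parameter theta, return its summary statistic."""
    return rng.normal(loc=theta, scale=1.0, size=n).mean()

# Rejection ABC: draw theta from the prior, keep it when the simulated summary
# lands within tolerance eps of the observed summary (bypassing the likelihood).
eps = 0.1
prior_draws = rng.uniform(-10.0, 10.0, size=50_000)
accepted = [t for t in prior_draws if abs(simulate_summary(t, rng) - s_obs) < eps]

print(f"accepted {len(accepted)} of {prior_draws.size} draws; "
      f"approximate posterior mean = {np.mean(accepted):.3f}")
```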

  7. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
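
    For orientation, the sketch below performs a single-level weighted least-squares polynomial fit from random samples; uniform sampling and unit weights are illustrative simplifications, and the optimal sampling distribution and multilevel combination of the paper are not reproduced:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

f = lambda x: np.exp(x) * np.sin(3.0 * x)     # target function on [-1, 1]
degree = 8
n_samples = 200                               # oversampling relative to degree + 1

x = rng.uniform(-1.0, 1.0, n_samples)         # random sample locations (uniform here)
w = np.ones_like(x)                           # weights (unit weights for simplicity)

# Weighted least-squares fit in a Legendre basis.
V = legendre.legvander(x, degree)             # design matrix, shape (n_samples, degree + 1)
sw = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

xx = np.linspace(-1.0, 1.0, 500)
err = np.max(np.abs(legendre.legval(xx, coeffs) - f(xx)))
print(f"max error of the degree-{degree} weighted least-squares fit: {err:.2e}")
```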

  8. Approximate spin projected spin-unrestricted density functional theory method: Application to diradical character dependences of second hyperpolarizabilities

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Masayoshi, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Minami, Takuya, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Fukui, Hitoshi, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Yoneda, Kyohei, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Shigeta, Yasuteru, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Kishi, Ryohei, E-mail: mnaka@cheng.es.osaka-u.ac.jp [Department of Materials Engineering Science, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan); Champagne, Benoît; Botek, Edith [Laboratoire de Chimie Théorique, Facultés Universitaires Notre-Dame de la Paix (FUNDP), rue de Bruxelles, 61, 5000 Namur (Belgium)

    2015-01-22

    We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.

  9. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio; Tempone, Raul; Wolfers, Sören

    2017-01-01

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  10. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio

    2017-11-16

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  11. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

    For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have been recently seen as attractive tools for developing efficient solutions for many real world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real world processes. In a previous contribution, we have used a well known simplified architecture to show that it provides a reasonably efficient, practical and robust, multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem
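
    As a rough sketch of the idea (a single hidden layer whose units use a Mexican hat wavelet as transfer function, with fixed centres and scales and output weights fitted by linear least squares rather than by the training procedure of the contribution):

```python
import numpy as np

def mexican_hat(t):
    """Mexican hat (Ricker) wavelet used as the hidden-unit transfer function."""
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

x = np.linspace(-np.pi, np.pi, 400)
target = np.sin(2.0 * x) * np.exp(-0.1 * x ** 2)   # task to approximate (illustrative)

# Hidden layer: fixed centres and a fixed dilation (assumed, not optimized).
centres = np.linspace(-np.pi, np.pi, 25)
scale = 0.4
H = mexican_hat((x[:, None] - centres[None, :]) / scale)

# Output weights by linear least squares.
w, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ w
print(f"max approximation error: {np.max(np.abs(approx - target)):.3e}")
```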

  12. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    W. Romeijnders; L. Stougie (Leen); M. van der Vlerk

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value.

  13. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    Romeijnders, W.; Stougie, L.; van der Vlerk, M.H.

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value. However,

  14. Linguistic hesitant fuzzy multi-criteria decision-making method based on evidential reasoning

    Science.gov (United States)

    Zhou, Huan; Wang, Jian-qiang; Zhang, Hong-yu; Chen, Xiao-hong

    2016-01-01

    Linguistic hesitant fuzzy sets (LHFSs), which can be used to represent decision-makers' qualitative preferences as well as reflect their hesitancy and inconsistency, have attracted a great deal of attention due to their flexibility and efficiency. This paper focuses on a multi-criteria decision-making approach that combines LHFSs with the evidential reasoning (ER) method. After reviewing existing studies of LHFSs, a new order relationship and a Hamming distance between LHFSs are introduced and some linguistic scale functions are applied. Then, the ER algorithm is used to aggregate the distributed assessment of each alternative. Subsequently, the aggregated assessments of each alternative on the criteria are further aggregated to obtain the overall value of each alternative. Furthermore, a nonlinear programming model is developed and genetic algorithms are used to obtain the optimal weights of the criteria. Finally, two illustrative examples are provided to show the feasibility and usability of the method, and a comparative analysis with an existing method is made.
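
    The full LHFS-ER machinery is not reproduced here, but one ingredient, a normalized Hamming-type distance between two hesitant fuzzy elements, can be sketched as follows; the padding convention for sets of different lengths is an assumption:

```python
def hesitant_hamming_distance(h1, h2, pad_with_max=True):
    """Normalized Hamming-type distance between two hesitant fuzzy elements,
    i.e. sets of membership degrees in [0, 1]. The shorter set is padded
    (here with its maximum value -- an assumed convention) before comparison."""
    a, b = sorted(h1), sorted(h2)
    n = max(len(a), len(b))
    pad = max if pad_with_max else min
    a = a + [pad(a)] * (n - len(a))
    b = b + [pad(b)] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b)) / n

# Two hypothetical assessments of one alternative on one criterion.
print(hesitant_hamming_distance([0.3, 0.5], [0.4, 0.6, 0.7]))
```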

  15. Fuzzy reasoning on Horn Set

    International Nuclear Information System (INIS)

    Liu, X.; Fang, K.

    1986-01-01

    A theoretical study of fuzzy reasoning on Horn sets is presented in this paper. The authors first introduce the concepts of the λ-Horn set of clauses and λ-Input Half Lock deduction. They then use the λ-resolution method to discuss fuzzy reasoning on λ-Horn sets of clauses. It is proved that the proposed λ-Input Half Lock resolution method is complete for rules in a certain format

  16. Approximate solutions of the two-dimensional integral transport equation by collision probability methods

    International Nuclear Information System (INIS)

    Sanchez, Richard

    1977-01-01

    A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding and water, or homogenized structural material. The cells are divided into zones which are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is achieved by making additional assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems

  17. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  18. Approximated calculation of the vacuum wave function and vacuum energy of the LGT with RPA method

    International Nuclear Information System (INIS)

    Hui Ping

    2004-01-01

    The coupled cluster method is improved with the random phase approximation (RPA) to calculate the vacuum wave function and vacuum energy of (2+1)-D SU(2) lattice gauge theory. In this calculation, the trial wave function is composed of single-hollow graphs. The calculated vacuum wave functions show very good scaling behavior in the weak coupling region 1/g^2 > 1.2 from the third order to the sixth order, and the vacuum energy obtained with the RPA method is lower than the vacuum energy obtained without it, which indicates that this method is more efficient

  19. A Comparison between Effective Cross Section Calculations using the Intermediate Resonance Approximation and More Exact Methods

    Energy Technology Data Exchange (ETDEWEB)

    Haeggblom, H

    1969-02-15

    In order to investigate some aspects of the 'Intermediate Resonance Approximation' developed by Goldstein and Cohen, comparative calculations have been made using this method together with more accurate methods. The latter are as follows: a) For homogeneous materials the slowing down equation is solved in the fundamental mode approximation with the computer programme SPENG. All cross sections are given point by point. Because the spectrum can be calculated for at most 2000 energy points, the energy regions where the resonances are accurately described are limited. Isolated resonances in the region 100 to 240 eV are studied for ²³⁸U/Fe and ²³⁸U/Fe/Na mixtures. In the regions 161 to 251 eV and 701 to 1000 eV, mixtures of ²³⁸U and Na are investigated. ²³⁹Pu/Na and ²³⁹Pu/²³⁸U/Na mixtures are studied in the region 161 to 251 eV. b) For heterogeneous compositions in slab geometry the integral transport equation is solved using the FLIS programme in 22 energy groups. Thus, only one resonance can be considered in each calculation. Two resonances are considered, namely those belonging to ²³⁸U at 190 and 937 eV. The compositions are lattices of ²³⁸U and Fe plates. The computer programme DORIX is used for the calculations using the Intermediate Resonance Approximation. Calculations of reaction rates and effective cross sections are made at 0, 300 and 1100 deg K for homogeneous media and at 300 deg K for heterogeneous media. The results are compared to those obtained by using the programmes SPENG and FLIS and using the narrow resonance approximation.

  20. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  1. 3-D numerical investigation of subsurface flow in anisotropic porous media using multipoint flux approximation method

    KAUST Repository

    Negara, Ardiansyah

    2013-01-01

    Anisotropy of the hydraulic properties of subsurface geologic formations is an essential feature, established as a consequence of the different geologic processes these formations undergo over geologic time scales. With respect to petroleum reservoirs, in many cases anisotropy plays a significant role in dictating the direction of flow, which no longer depends only on the pressure gradient direction but also on the principal directions of anisotropy. Furthermore, in complex systems involving the flow of multiphase fluids in which gravity and capillarity play an important role, anisotropy can also have important influences. Therefore, there has been a great deal of motivation to consider anisotropy when solving the governing conservation laws numerically. Unfortunately, the two-point flux approximation of the finite difference approach is not capable of handling full tensor permeability fields. Lately, however, it has been possible to adapt the multipoint flux approximation, which can handle anisotropy, to the framework of finite difference schemes. In the multipoint flux approximation method, the stencil of approximation is more involved, i.e., it requires a 9-point stencil for the 2-D model and a 27-point stencil for the 3-D model. This is challenging and cumbersome when assembling the global system of equations. In this work, we apply the equation-type approach, i.e., the experimenting pressure field approach, which breaks the solution of the global problem into the solution of a multitude of local problems, significantly reducing the complexity without affecting the accuracy of the numerical solution. This approach also reduces the computational cost during the simulation. We have applied this technique to a variety of anisotropy scenarios of 3-D subsurface flow problems, and the numerical results demonstrate that the experimenting pressure field technique fits very well with the multipoint flux approximation

  2. On Approximation of Hyper-geometric Function Values of a Special Class

    Directory of Open Access Journals (Sweden)

    P. L. Ivankov

    2017-01-01

    Investigations of the arithmetic properties of hyper-geometric function values make it possible to single out two trends, namely, Siegel’s method and methods based on the effective construction of a linear approximating form. There are also methods combining both approaches mentioned. Siegel’s method allows obtaining the most general results concerning the abovementioned problems. In many cases it was used to establish the algebraic independence of the values of the corresponding functions. Although the effective methods do not allow obtaining propositions of such generality, they nevertheless have some advantages. Among these advantages one can distinguish at least two: a higher precision of the quantitative results obtained by effective methods and a possibility to study hyper-geometric functions with irrational parameters. In this paper we apply the effective construction to estimate a measure of the linear independence of hyper-geometric function values over the imaginary quadratic field. The functions themselves were chosen in a special way so that it could be possible to demonstrate a new approach to the effective construction of a linear approximating form. This approach also makes it possible to extend the well-known effective construction methods of linear approximating forms for poly-logarithms to functions of a more general type. To obtain the arithmetic result we had to establish the linear independence of the functions under consideration over the field of rational functions. It is apparently impossible to directly apply known theorems containing sufficient (and in some cases necessary and sufficient) conditions for the systems of functions appearing in the theorems mentioned. For this reason, a special technique has been developed to solve this problem. The paper presents the obtained arithmetic results concerning the values of integral functions, but, with appropriate alterations, the theorems proved can be adapted to

  3. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

    Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, each followed by a detailed solution. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.

  4. Approximate method for calculating heat conditions in the magnetic circuits of transformers and betatrons

    International Nuclear Information System (INIS)

    Loginov, V.S.

    1986-01-01

    A technique is suggested for the engineering calculation of the two-dimensional stationary temperature field of a rectangular cross section blending pile with internal heat release under nonsymmetrical cooling conditions. The area of its practical application is determined on the basis of experimental data known in the literature. Different methods for calculating the temperature distribution in a betatron magnetic circuit are compared. A graph of the maximum error of temperatures calculated from the approximate expressions, relative to the exact solution, is given

  5. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    International Nuclear Information System (INIS)

    Biondo, Elliott D.; Wilson, Paul P. H.

    2017-01-01

    In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  6. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  7. Mixed multiscale finite element methods using approximate global information based on partial upscaling

    KAUST Repository

    Jiang, Lijian

    2009-10-02

    The use of limited global information in multiscale simulations is needed when there is no scale separation. Previous approaches entail fine-scale simulations in the computation of the global information. The computation of the global information is expensive. In this paper, we propose the use of approximate global information based on partial upscaling. A requirement for partial homogenization is to capture long-range (non-local) effects present in the fine-scale solution, while homogenizing some of the smallest scales. The local information at these smallest scales is captured in the computation of basis functions. Thus, the proposed approach allows us to avoid the computations at the scales that can be homogenized. This results in coarser problems for the computation of global fields. We analyze the convergence of the proposed method. Mathematical formalism is introduced, which allows estimating the errors due to small scales that are homogenized. The proposed method is applied to simulate two-phase flows in heterogeneous porous media. Numerical results are presented for various permeability fields, including those generated using two-point correlation functions and channelized permeability fields from the SPE Comparative Project (Christie and Blunt, SPE Reserv Evalu Eng 4:308-317, 2001). We consider simple cases where one can identify the scales that can be homogenized. For more general cases, we suggest the use of upscaling on the coarse grid with the size smaller than the target coarse grid where multiscale basis functions are constructed. This intermediate coarse grid renders a partially upscaled solution that contains essential non-local information. Numerical examples demonstrate that the use of approximate global information provides better accuracy than purely local multiscale methods. © 2009 Springer Science+Business Media B.V.

  8. Low-complexity computation of plate eigenmodes with Vekua approximations and the method of particular solutions

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent

    2013-11-01

    This paper extends the method of particular solutions (MPS) to the computation of eigenfrequencies and eigenmodes of thin plates, in the framework of the Kirchhoff-Love plate theory. Specific approximation schemes are developed, with plane waves (MPS-PW) or Fourier-Bessel functions (MPS-FB). This framework also requires a suitable formulation of the boundary conditions. Numerical tests, on two plates with various boundary conditions, demonstrate that the proposed approach provides competitive results with standard numerical schemes such as the finite element method, at reduced complexity, and with large flexibility in the implementation choices.

  9. Approximate Dispersion Relations for Waves on Arbitrary Shear Flows

    Science.gov (United States)

    Ellingsen, S. À.; Li, Y.

    2017-12-01

    The method is robust and works well in situations where the tool currently used will fail. In addition to predicting the speed of waves of different lengths and directions, it is important to know something about how accurate the prediction is, and as a worst case, whether it is reasonable at all. This has not been possible before, but we provide a way to answer both questions in a straightforward manner.

  10. A public health decision support system model using reasoning methods.

    Science.gov (United States)

    Mera, Maritza; González, Carolina; Blobel, Bernd

    2015-01-01

    Public health programs must be based on the real health needs of the population. However, the design of efficient and effective public health programs is subject to the availability of information that allows users to identify, at the right time, the health issues that require special attention. The objective of this paper is to propose a case-based reasoning model for the support of decision-making in public health. The model integrates a decision-making process and case-based reasoning, reusing past experiences for promptly identifying new population health priorities. A prototype implementation of the model was performed, deploying the case-based reasoning framework jColibri. The proposed model contributes to solving problems found today when designing public health programs in Colombia. Current programs are developed under uncertain environments, as the underlying analyses are carried out on the basis of outdated and unreliable data.
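
    The prototype itself is built on the jColibri framework, but the core retrieve step of case-based reasoning can be sketched in a few lines; the attributes, weights and similarity measure below are hypothetical, not those of the public-health prototype:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A past public-health situation and the program that addressed it."""
    features: dict      # e.g. {"incidence": 0.7, "vaccination": 0.4}, values in [0, 1]
    solution: str       # the intervention that was applied

def similarity(query: dict, case: Case, weights: dict) -> float:
    """Weighted similarity: 1 minus the weighted mean absolute feature difference."""
    total = sum(weights.values())
    diff = sum(weights[k] * abs(query[k] - case.features[k]) for k in weights)
    return 1.0 - diff / total

def retrieve(query: dict, case_base: list, weights: dict, k: int = 1) -> list:
    """Retrieve the k most similar past cases (the 'retrieve' phase of CBR)."""
    return sorted(case_base, key=lambda c: similarity(query, c, weights), reverse=True)[:k]

case_base = [
    Case({"incidence": 0.8, "vaccination": 0.3}, "vaccination campaign"),
    Case({"incidence": 0.2, "vaccination": 0.9}, "health-education program"),
]
weights = {"incidence": 0.6, "vaccination": 0.4}
best = retrieve({"incidence": 0.7, "vaccination": 0.4}, case_base, weights)
print(best[0].solution)   # the reused solution, to be revised for the new situation
```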

  11. Icon arrays help younger children's proportional reasoning.

    Science.gov (United States)

    Ruggeri, Azzurra; Vagharchakian, Laurianne; Xu, Fei

    2018-06-01

    We investigated the effects of two context variables, presentation format (icon arrays or numerical frequencies) and time limitation (limited or unlimited time), on the proportional reasoning abilities of children aged 7 and 10 years, as well as adults. Participants had to select, between two sets of tokens, the one that offered the highest likelihood of drawing a gold token, that is, the set of elements with the greater proportion of gold tokens. Results show that participants performed better in the unlimited time condition. Moreover, besides a general developmental improvement in accuracy, our results show that younger children performed better when proportions were presented as icon arrays, whereas older children and adults were similarly accurate in the two presentation format conditions. Statement of contribution What is already known on this subject? There is a developmental improvement in proportional reasoning accuracy. Icon arrays facilitate reasoning in adults with low numeracy. What does this study add? Participants were more accurate when they were given more time to make the proportional judgement. Younger children's proportional reasoning was more accurate when they were presented with icon arrays. Proportional reasoning abilities correlate with working memory, approximate number system, and subitizing skills. © 2018 The British Psychological Society.

  12. How Do High School Students Solve Probability Problems? A Mixed Methods Study on Probabilistic Reasoning

    Science.gov (United States)

    Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick

    2018-01-01

    When studying a complex research phenomenon, a mixed methods design allows one to answer a broader set of research questions and to tap into different aspects of this phenomenon, compared to a monomethod design. This paper reports on how a sequential equal status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…

  13. Approximation of the inverse G-frame operator

    Indian Academy of Sciences (India)

    ... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximation of the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of the G-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.

  14. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  15. Direct application of Padé approximant for solving nonlinear differential equations.

    Science.gov (United States)

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure to apply the Padé method to find approximate solutions for nonlinear differential equations. Moreover, we present some case studies showing the strength of the method to generate highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method or variational iteration method, among others, as a tool to obtain a power series solution for post-treatment with the Padé approximant. 34L30.
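
    As a concrete sketch of the building block, the snippet below forms a [4/4] Padé approximant from Taylor coefficients and compares it with the truncated series; exp(x) is used purely for illustration, whereas the paper applies the idea to power-series solutions of nonlinear problems:

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0 (illustrative target only).
order = 8
an = [1.0 / factorial(k) for k in range(order + 1)]

# [4/4] Pade approximant: numerator p and denominator q as numpy poly1d objects.
p, q = pade(an, 4)

x = 2.5
taylor_value = sum(c * x ** k for k, c in enumerate(an))
print(f"exact value : {np.exp(x):.6f}")
print(f"Taylor (8)  : {taylor_value:.6f}")
print(f"Pade [4/4]  : {p(x) / q(x):.6f}")
```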

  16. Integrated Case Based and Rule Based Reasoning for Decision Support

    OpenAIRE

    Eshete, Azeb Bekele

    2009-01-01

    This project is a continuation of my specialization project, which was focused on studying theoretical concepts related to the case-based reasoning method, the rule-based reasoning method, and their integration. The integration of rule-based and case-based reasoning methods has shown a substantial improvement in performance over the individual methods. Verdande Technology AS wants to try integrating the rule-based reasoning method with an existing case-based system. This project focu...

  17. The local quantum-mechanical stress tensor in Thomas-Fermi approximation and gradient expansion method

    International Nuclear Information System (INIS)

    Kaschner, R.; Graefenstein, J.; Ziesche, P.

    1988-12-01

    From the local momentum balance using density functional theory, an expression for the local quantum-mechanical stress tensor (or stress field) σ(r) of non-relativistic Coulomb systems is derived within the Thomas-Fermi approximation and its generalizations, including the gradient expansion method. As an illustration, the stress field σ(r) is calculated for the jellium model of the interface K-Cs, including in particular the adhesive force between the two half-space jellia. (author). 23 refs, 1 fig

  18. Magnus approximation in the adiabatic picture

    International Nuclear Information System (INIS)

    Klarsfeld, S.; Oteo, J.A.

    1991-01-01

    A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs
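
    The adiabatic-basis transformation itself is omitted here, but the first-order Magnus step that follows it can be sketched for a hypothetical two-level, time-dependent Hamiltonian (parameters are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Time-dependent two-level Hamiltonian (hbar = 1); parameters are illustrative.
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def H(t):
    return 0.5 * np.tanh(t) * sz + 0.2 * sx    # a smooth level-crossing model

def magnus1_propagator(t0, t1, n=400):
    """First-order Magnus propagator U = exp(-i * integral of H(t) dt),
    with the integral evaluated by the trapezoidal rule on a uniform grid."""
    ts = np.linspace(t0, t1, n)
    Hs = np.array([H(t) for t in ts])
    dt = ts[1] - ts[0]
    integral = dt * (0.5 * (Hs[0] + Hs[-1]) + Hs[1:-1].sum(axis=0))
    return expm(-1j * integral)

U = magnus1_propagator(-5.0, 5.0)
psi = U @ np.array([1.0, 0.0], dtype=complex)
print("final populations:", np.abs(psi) ** 2)
print("norm preserved   :", float(np.vdot(psi, psi).real))
```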

  19. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation

    International Nuclear Information System (INIS)

    Costa, Carlos A N; Campos, Itamara S; Costa, Jessé C; Neto, Francisco A; Schleicher, Jörg; Novais, Amélia

    2013-01-01

    Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality. (paper)
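
    To indicate what the iterative branch looks like in practice, the sketch below solves a generic complex banded system with SciPy's BiCGSTAB and an incomplete-LU preconditioner; the matrix is a stand-in, not the actual 3D downward-continuation operator of the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic complex tridiagonal system standing in for one term of the Pade expansion
# (the true 3D downward-continuation operator is not reproduced here).
n = 2000
main = (4.0 + 0.3j) * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc", dtype=complex)
b = np.ones(n, dtype=complex)

# Incomplete-LU factorization used as a preconditioner for BiCGSTAB.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)

x, info = spla.bicgstab(A, b, M=M)
print("converged:", info == 0, "| residual:", np.linalg.norm(A @ x - b))
```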

  20. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
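
    The closed-form approximation derived in the paper is not reproduced here; instead, the sketch below obtains an interval of the same kind by numerically inverting the negative binomial CDF with SciPy, using an assumed overdispersion parameter k and the parameterization p = k/(k + mu):

```python
from scipy.stats import nbinom
from scipy.optimize import brentq

def nb_confidence_interval(count, k, alpha=0.05):
    """Approximate two-sided confidence interval for the mean count, obtained by
    numerically inverting the negative binomial CDF. k is an assumed overdispersion
    (size) parameter; scipy's parameterization uses p = k / (k + mu)."""
    def cdf(mu, x):
        return nbinom.cdf(x, k, k / (k + mu))

    upper = brentq(lambda mu: cdf(mu, count) - alpha / 2.0, 1e-6, 1e6)
    if count == 0:
        lower = 0.0
    else:
        lower = brentq(lambda mu: 1.0 - cdf(mu, count - 1) - alpha / 2.0, 1e-6, 1e6)
    return lower, upper

# A single fiber count of 15 with an assumed overdispersion parameter k = 8.
print(nb_confidence_interval(count=15, k=8.0))
```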

  1. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods

  2. Stable Same-Sex Friendships with Higher Achieving Partners Promote Mathematical Reasoning in Lower Achieving Primary School Children

    Science.gov (United States)

    DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik

    2015-01-01

    This study is designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and one year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal Actor-Partner Interdependence Models) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. PMID:26402901

  3. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

    A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Padé approximation, is suggested. It is shown that the specific properties of the Padé approximation in polar zones constitute an extremely favourable analytical property, essentially extending the convergence range and increasing the convergence rate as compared with polynomial approximation. The Padé approximation is a particularly natural instrument for resonance curve processing, as the resonances correspond to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions leads to an approximately twentyfold reduction of the stored numerical information as compared with the point-by-point representation at the same accuracy

  4. Influence of Three Different Methods of Teaching Physics on the Gain in Students' Development of Reasoning

    Science.gov (United States)

    Marušić, Mirko; Sliško, Josip

    2012-01-01

    The Lawson Classroom Test of Scientific Reasoning (LCTSR) was used to gauge the relative effectiveness of three different methods of pedagogy, Reading, Presenting, and Questioning (RPQ), Experimenting and Discussion (ED), and Traditional Methods (TM), on increasing students' level of scientific thinking. The data of a one-semester-long senior high-school project indicate that, for the LCTSR: (a) the RPQ group (n = 91) achieved an effect size of d = 0.30 and (b) the ED group (n = 85) attained an effect size of d = 0.64. These methods have shown that the Piagetian and Vygotskian visions of learning and teaching can go hand in hand and as such achieve respectable results. To do so, it is important to challenge the students and thus encourage the shift towards higher levels of reasoning. This aim is facilitated through class management which recognizes the importance of collaborative learning. Carrying out Vygotsky's original intention to use teaching to promote cognitive development as well as subject concepts, this research has shown that it is better to have students experience cognitive conflict through directly observed experiments than by reflecting on reported experience from popularization papers or writings found on the internet.

  5. Linguistic variables, approximate reasoning and dispositions

    Energy Technology Data Exchange (ETDEWEB)

    Zadeh, L.A.

    1983-07-01

    Test-score semantics is applied to the representation of meaning of dispositions, that is, propositions with suppressed fuzzy quantifiers, e.g. overeating causes obesity, icy roads are slippery, young men like young women, etc. The concept of a disposition plays an especially important role in the representation of commonsense knowledge. 45 references.

  6. On approximation of non-Newtonian fluid flow by the finite element method

    Science.gov (United States)

    Svácek, Petr

    2008-08-01

    In this paper the problem of numerical approximation of non-Newtonian fluid flow with a free surface is considered. Namely, the flow of fresh concrete is addressed. Industrial mixtures often behave like non-Newtonian fluids exhibiting a yield stress that needs to be overcome for the flow to take place, cf. [R.B. Bird, R.C. Armstrong, O. Hassager, Dynamics of Polymeric Liquids, vol. 1, Fluid Mechanics, Wiley, New York, 1987; R.P. Chhabra, J.F. Richardson, Non-Newtonian Flow in the Process Industries, Butterworth-Heinemann, London, 1999]. The main attention is paid to the mathematical formulation of the problem and to its discretization with the aid of the finite element method. The described numerical procedure is applied to the solution of several problems.

  7. Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    Science.gov (United States)

    Pazner, Will; Persson, Per-Olof

    2018-02-01

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require O(p^(2d)) storage and O(p^(3d)) computational work, where p is the degree of basis polynomials used, and d is the spatial dimension. Our SVD-based tensor-product preconditioner requires O(p^(d+1)) storage, O(p^(d+1)) work in two spatial dimensions, and O(p^(d+2)) work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in p per degree of freedom in 2D, and reduce the computational complexity from O(p^9) to O(p^5) in 3D. Numerical results are shown in 2D and 3D for the advection, Euler, and Navier-Stokes equations, using polynomials of degree up to p = 30. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees p.
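
    The key computational point, that a Kronecker-product approximation of a block can be applied or inverted without ever forming the full matrix, can be seen in a few lines; the matrices below are random stand-ins for the one-dimensional factors produced by the SVD-based construction:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 20                                   # 1D basis size, so the 2D block is p^2 x p^2
B = np.eye(p) + 0.1 * rng.standard_normal((p, p))   # stand-ins for the 1D factors
C = np.eye(p) + 0.1 * rng.standard_normal((p, p))
b = rng.standard_normal(p * p)

# Direct route: form kron(B, C) explicitly -- O(p^4) storage and an O(p^6) solve in 2D.
x_direct = np.linalg.solve(np.kron(B, C), b)

# Tensor-product route: (B kron C) vec(X) = vec(B X C^T) for row-major vec(),
# so the solve reduces to two p x p solves -- O(p^2) storage and O(p^3) work.
R = b.reshape(p, p)
Y = np.linalg.solve(B, R)                # apply B^{-1} from the left
X = np.linalg.solve(C, Y.T).T            # apply C^{-T} from the right
x_tensor = X.reshape(-1)

print("max difference between the two solutions:", np.max(np.abs(x_direct - x_tensor)))
```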

  8. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
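
    The AKCL algorithm itself is not reproduced here, but the enabling idea, approximating a large kernel matrix from a small random subset of columns (a Nyström-type construction), can be sketched as follows (the kernel, data and landmark count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.standard_normal((2000, 5))       # "large" dataset (illustrative size)
m = 100                                  # number of sampled landmark points

idx = rng.choice(len(X), size=m, replace=False)
C = rbf_kernel(X, X[idx])                # n x m block of the kernel matrix
W = C[idx, :]                            # m x m block on the sampled points

# Nystrom approximation K ~= C W^+ C^T; only n x m entries are ever formed.
W_pinv = np.linalg.pinv(W)

# Example: approximate one full row of K without computing the n x n matrix.
row0_approx = C[0] @ W_pinv @ C.T
row0_exact = rbf_kernel(X[:1], X)[0]
print("max error in the approximated row:", np.max(np.abs(row0_approx - row0_exact)))
```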

  9. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems

    International Nuclear Information System (INIS)

    Yin, George; Wang, Le Yi; Zhang, Hongwei

    2014-01-01

    Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided
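
    As a reminder of the basic building block behind these algorithms, here is a Robbins-Monro iteration that locates the root of a function observed only through noisy measurements; the target function and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def noisy_observation(theta):
    """Noisy measurement of g(theta) = theta - 2; only these values are observable."""
    return (theta - 2.0) + rng.normal(scale=0.5)

theta = 10.0
for n in range(1, 5001):
    a_n = 1.0 / n                 # step sizes with sum a_n = inf and sum a_n^2 < inf
    theta -= a_n * noisy_observation(theta)

print(f"estimated root: {theta:.4f} (the true root is 2)")
```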

  10. Minimal entropy approximation for cellular automata

    International Nuclear Information System (INIS)

    Fukś, Henryk

    2014-01-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)

  11. Classical and Quantum Models in Non-Equilibrium Statistical Mechanics: Moment Methods and Long-Time Approximations

    Directory of Open Access Journals (Sweden)

    Ramon F. Alvarez-Estrada

    2012-02-01

    Full Text Available We consider non-equilibrium open statistical systems, subject to potentials and to external “heat baths” (hb) at thermal equilibrium at temperature T (either with ab initio dissipation or without it). Boltzmann’s classical equilibrium distributions generate, as Gaussian weight functions in momenta, orthogonal polynomials in momenta (the position-independent Hermite polynomials Hn’s). The moments of non-equilibrium classical distributions, implied by the Hn’s, fulfill a hierarchy: for long times, the lowest moment dominates the evolution towards thermal equilibrium, either with dissipation or without it (but under a certain approximation). We revisit that hierarchy, whose solution depends on operator continued fractions. We review our generalization of that moment method to classical closed many-particle interacting systems with neither a hb nor ab initio dissipation: with initial states describing thermal equilibrium at T at large distances but non-equilibrium at finite distances, the moment method yields, approximately, irreversible thermalization of the whole system at T, for long times. Generalizations to non-equilibrium quantum interacting systems meet additional difficulties. Three of them are: (i) equilibrium distributions (represented through Wigner functions) are neither Gaussian in momenta nor known in closed form; (ii) they may depend on dissipation; and (iii) the orthogonal polynomials in momenta generated by them depend also on positions. We generalize the moment method, dealing with (i), (ii) and (iii), to some non-equilibrium one-particle quantum interacting systems. Open problems are discussed briefly.

  12. Using Differential Transform Method and Padé Approximant for Solving MHD Flow in a Laminar Liquid Film from a Horizontal Stretching Surface

    Directory of Open Access Journals (Sweden)

    Mohammad Mehdi Rashidi

    2010-01-01

    Full Text Available The purpose of this study is to approximate the stream function and temperature distribution of the MHD flow in a laminar liquid film from a horizontal stretching surface. In this paper the DTM-Padé method was used, which is a combination of the differential transform method (DTM) and the Padé approximant. The DTM solutions are only valid for small values of the independent variables. Comparison between the solutions obtained by the DTM and the DTM-Padé with the numerical solution (fourth-order Runge–Kutta) revealed that the DTM-Padé method is an excellent method for solving MHD boundary-layer equations.
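
    For readers unfamiliar with the Padé step, an [m/n] Padé approximant can be built from the coefficients of a truncated series by solving a small linear system for the denominator and then reading off the numerator. The numpy sketch below is generic (it is not the authors' DTM-Padé code).

        import numpy as np

        def pade_from_taylor(c, m, n):
            # [m/n] Pade approximant P(x)/Q(x) from series coefficients c[0..m+n],
            # with Q normalized so that q0 = 1.
            C = np.array([[c[m + i - j] if 0 <= m + i - j < len(c) else 0.0
                           for j in range(1, n + 1)] for i in range(1, n + 1)])
            rhs = -np.array([c[m + i] for i in range(1, n + 1)])
            q = np.concatenate(([1.0], np.linalg.solve(C, rhs)))       # denominator coefficients
            p = np.array([sum(q[j] * c[i - j] for j in range(min(i, n) + 1))
                          for i in range(m + 1)])                      # numerator coefficients
            return p, q   # evaluate as sum(p[i] x^i) / sum(q[j] x^j)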

  13. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Directory of Open Access Journals (Sweden)

    Danilo ePezo

    2014-11-01

    Full Text Available To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie’s method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties – such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired in granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC – which is both the most accurate and fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models – in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.

  14. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Science.gov (United States)

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
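
    As a concrete illustration of the diffusion approximation being compared (generic, not taken from any of the cited implementations), the open fraction x of N identical two-state channels with opening rate alpha and closing rate beta can be advanced with a single Euler-Maruyama step:

        import numpy as np

        def da_gating_step(x, alpha, beta, N, dt, rng):
            # Chemical-Langevin (diffusion approximation) update for the open fraction x
            # of N two-state channels; the noise term scales like 1/sqrt(N).
            drift = alpha * (1.0 - x) - beta * x
            sigma = np.sqrt((alpha * (1.0 - x) + beta * x) / N)
            x_new = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            # Crude clipping to [0,1]; how to bound the state variables is exactly one of
            # the numerical issues on which the reviewed implementations differ.
            return min(max(x_new, 0.0), 1.0)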

  15. Information Uncertainty to Compare Qualitative Reasoning Security Risk Assessment Results

    Energy Technology Data Exchange (ETDEWEB)

    Chavez, Gregory M [Los Alamos National Laboratory; Key, Brian P [Los Alamos National Laboratory; Zerkle, David K [Los Alamos National Laboratory; Shevitz, Daniel W [Los Alamos National Laboratory

    2009-01-01

    The security risk associated with malevolent acts such as those of terrorism is often void of the historical data required for a traditional PRA. Most information available to conduct security risk assessments for these malevolent acts is obtained from subject matter experts as subjective judgements. Qualitative reasoning approaches such as approximate reasoning and evidential reasoning are useful for modeling the predicted risk from information provided by subject matter experts. Absent from these approaches is a consistent means to compare the security risk assessment results. Associated with each predicted risk reasoning result is a quantifiable amount of information uncertainty which can be measured and used to compare the results. This paper explores using entropy measures to quantify the information uncertainty associated with conflict and non-specificity in the predicted reasoning results. The measured quantities of conflict and non-specificity can ultimately be used to compare qualitative reasoning results, which is important in triage studies and ultimately resource allocation. Straightforward extensions of previous entropy measures are presented here to quantify the non-specificity and conflict associated with security risk assessment results obtained from qualitative reasoning models.
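
    The report's own entropy extensions are not reproduced in this record; the two base quantities they extend are standard in Dempster-Shafer theory. The sketch below shows common textbook forms of a non-specificity measure (generalized Hartley) and one strife/discord-type conflict measure, for a basic mass assignment over frozenset focal elements (these are illustrations, not necessarily the exact measures used by the authors).

        import numpy as np

        def nonspecificity(m):
            # Generalized Hartley measure: N(m) = sum_A m(A) * log2(|A|),
            # where m maps nonempty frozenset focal elements to masses summing to 1.
            return sum(mass * np.log2(len(A)) for A, mass in m.items() if mass > 0)

        def conflict(m):
            # A strife/discord-type measure:
            # S(m) = -sum_A m(A) * log2( sum_B m(B) * |A & B| / |B| ).
            total = 0.0
            for A, mA in m.items():
                inner = sum(mB * len(A & B) / len(B) for B, mB in m.items())
                total += mA * np.log2(inner)
            return -total

        # Example body of evidence over a "risk" frame:
        # m = {frozenset({'low'}): 0.6, frozenset({'low', 'high'}): 0.4}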

  16. PWL approximation of nonlinear dynamical systems, part I: structural stability

    International Nuclear Information System (INIS)

    Storace, M; De Feo, O

    2005-01-01

    This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes the approximation method and applies it to some particularly significant dynamical systems (topological normal forms). The structural stability of the PWL approximations of such systems is investigated through a bifurcation analysis (via continuation methods)

  17. Clinical reasoning: concept analysis.

    Science.gov (United States)

    Simmons, Barbara

    2010-05-01

    This paper is a report of a concept analysis of clinical reasoning in nursing. Clinical reasoning is an ambiguous term that is often used synonymously with decision-making and clinical judgment. Clinical reasoning has not been clearly defined in the literature. Healthcare settings are increasingly filled with uncertainty, risk and complexity due to increased patient acuity, multiple comorbidities, and enhanced use of technology, all of which require clinical reasoning. Data sources. Literature for this concept analysis was retrieved from several databases, including CINAHL, PubMed, PsycINFO, ERIC and OvidMEDLINE, for the years 1980 to 2008. Rodgers's evolutionary method of concept analysis was used because of its applicability to concepts that are still evolving. Multiple terms have been used synonymously to describe the thinking skills that nurses use. Research in the past 20 years has elucidated differences among these terms and identified the cognitive processes that precede judgment and decision-making. Our concept analysis defines one of these terms, 'clinical reasoning,' as a complex process that uses cognition, metacognition, and discipline-specific knowledge to gather and analyse patient information, evaluate its significance, and weigh alternative actions. This concept analysis provides a middle-range descriptive theory of clinical reasoning in nursing that helps clarify meaning and gives direction for future research. Appropriate instruments to operationalize the concept need to be developed. Research is needed to identify additional variables that have an impact on clinical reasoning and to determine the consequences of clinical reasoning in specific situations.

  18. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm^3, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the

  19. The exact solutions and approximate analytic solutions of the (2 + 1)-dimensional KP equation based on symmetry method.

    Science.gov (United States)

    Gai, Litao; Bilige, Sudao; Jie, Yingmo

    2016-01-01

    In this paper, we successfully obtained the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on the Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and reduced it. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters are taken as special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.

  20. A New Approximation Method for Solving Variational Inequalities and Fixed Points of Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Klin-eam Chakkrid

    2009-01-01

    Full Text Available Abstract A new approximation method for solving variational inequalities and finding fixed points of nonexpansive mappings is introduced and studied. We prove a strong convergence theorem for the new iterative scheme to a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for the inverse-strongly monotone mapping, which solves some variational inequalities. Moreover, we apply our main result to obtain strong convergence to a common fixed point of a nonexpansive mapping and a strictly pseudocontractive mapping in a Hilbert space.

  1. Complex-valued derivative propagation method with approximate Bohmian trajectories: Application to electronic nonadiabatic dynamics

    Science.gov (United States)

    Wang, Yu; Chou, Chia-Chun

    2018-05-01

    The coupled complex quantum Hamilton-Jacobi equations for electronic nonadiabatic transitions are approximately solved by propagating individual quantum trajectories in real space. Equations of motion are derived through use of the derivative propagation method for the complex actions and their spatial derivatives for wave packets moving on each of the coupled electronic potential surfaces. These equations for two surfaces are converted into the moving frame with the same grid point velocities. Excellent wave functions can be obtained by making use of the superposition principle even when nodes develop in wave packet scattering.

  2. Performance approximation of pick-to-belt orderpicking systems

    NARCIS (Netherlands)

    M.B.M. de Koster (René)

    1994-01-01

    In this paper, an approximation method is discussed for the analysis of pick-to-belt orderpicking systems. The aim of the approximation method is to provide an instrument for obtaining rapid insight in the performance of designs of pick-to-belt orderpicking systems. It can be used to

  3. Reasons Internalism and the function of normative reasons

    OpenAIRE

    Sinclair, Neil

    2017-01-01

    What is the connection between reasons and motives? According to Reasons Internalism there is a non-trivial conceptual connection between normative reasons and the possibility of rationally accessing relevant motivation. Reasons Internalism is attractive insofar as it captures the thought that reasons are for reasoning with and repulsive insofar as it fails to generate sufficient critical distance between reasons and motives. Rather than directly adjudicate this dispute, I extract from it two...

  4. Reasons for discontinuation of contraceptive methods among couples with different family size and educational status.

    Science.gov (United States)

    Rizvi, Farwa; Irfan, Ghazia

    2012-01-01

    High rates of contraceptive discontinuation for reasons other than the desire for pregnancy are a public health concern because of their association with negative reproductive health outcomes. The objective of this study was to determine reasons for discontinuation of contraceptive methods among couples with different family size and educational status. This cross-sectional study was carried out at the Obstetrics/Gynaecology Out-Patient Department of the Pakistan Institute of Medical Sciences, Islamabad, from April-September 2012. Patients (241) were selected by consecutive sampling after informed written consent and approval of the Ethical Committee. The survey interview tool was a semi-structured questionnaire. The majority (68%) of women were from urban areas, and the rest were from rural areas. The mean age of these women was 29.43 +/- 5.384 years. Reasons for discontinuation of contraceptives included fear of injectable contraceptives (2.9%), contraceptive failure/pregnancy (7.46%), desire to become pregnant (63.48%), husband away at job (2.49%), health concerns/side effects (16.18%), affordability (0.83%), inconvenient to use (1.24%), acceptability (0.83%) and accessibility/lack of information (4.56%). The association of different reasons for discontinuation (chi-square test) with family size (actual number of children) was significant (p = 0.019) but not with the husband's or wife's educational status (p = 0.33 and 0.285, respectively). Keeping in mind the complex socioeconomic conditions in our country, family planning programmers and stakeholders need to identify women who strongly want to avoid a pregnancy and to find ways to help couples successfully initiate and maintain appropriate contraceptive use.

  5. Using Chebyshev polynomials and approximate inverse triangular factorizations for preconditioning the conjugate gradient method

    Science.gov (United States)

    Kaporin, I. E.

    2012-02-01

    In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
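
    The practical point of the abstract, namely that with an approximate inverse factorization the preconditioned CG iteration needs only vector operations and sparse matrix-vector products, is easy to see in a generic sketch. The code below is textbook PCG, not the author's implementation, and it omits the Chebyshev polynomial layer.

        import numpy as np

        def pcg(A, b, apply_prec, tol=1e-8, maxit=500):
            # Preconditioned conjugate gradients; apply_prec(r) returns M^{-1} r.
            # For an approximate inverse factorization M^{-1} = G.T @ G, pass
            # apply_prec = lambda r: G.T @ (G @ r): only sparse mat-vecs are needed.
            x = np.zeros_like(b, dtype=float)
            r = b - A @ x
            z = apply_prec(r)
            p = z.copy()
            rz = r @ z
            for _ in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                    break
                z = apply_prec(r)
                rz_new = r @ z
                beta = rz_new / rz
                p = z + beta * p
                rz = rz_new
            return x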

  6. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distribution of the points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample the irregular data sets in a near optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
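
    The SA update itself is not reproduced in this record. Its flavor (one data point per step and no matrix operations) is similar in spirit to a randomized Kaczmarz sweep over the expansion coefficients, sketched below purely as an illustration under that assumption; it is not the authors' scheme.

        import numpy as np

        def sequential_fit(points, values, basis, n_coef, n_sweeps=20, seed=0):
            # basis(x) returns the length-n_coef vector of basis functions at x.
            # Each step touches a single data point and uses only vector operations.
            rng = np.random.default_rng(seed)
            c = np.zeros(n_coef)
            for _ in range(n_sweeps):
                for i in rng.permutation(len(points)):
                    phi = basis(points[i])
                    r = values[i] - phi @ c          # residual at this one point
                    c += (r / (phi @ phi)) * phi     # Kaczmarz-type correction
            return c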

  7. Semantic reasoning with XML-based biomedical information models.

    Science.gov (United States)

    O'Connor, Martin J; Das, Amar

    2010-01-01

    The Extensible Markup Language (XML) is increasingly being used for biomedical data exchange. The parallel growth in the use of ontologies in biomedicine presents opportunities for combining the two technologies to leverage the semantic reasoning services provided by ontology-based tools. There are currently no standardized approaches for taking XML-encoded biomedical information models and representing and reasoning with them using ontologies. To address this shortcoming, we have developed a workflow and a suite of tools for transforming XML-based information models into domain ontologies encoded using OWL. In this study, we applied semantic reasoning methods to these ontologies to automatically generate domain-level inferences. We successfully used these methods to develop semantic reasoning methods for information models in the HIV and radiological image domains.

  8. Application of the random phase approximation to some atoms with ns^2 ground state configurations

    International Nuclear Information System (INIS)

    Wright, L.A.

    1975-01-01

    Atomic bound state properties such as excitation energies and oscillator strengths were calculated by the Random Phase Approximation (RPA), also known as the Time Dependent Hartree-Fock Approximation (TDHFA). The RPA is equivalent to describing excited states as the creation of particle-hole pairs, and the application to atoms is important for two reasons: the wide range of densities in an atom will cause the physical interpretation and mathematical approximations to be much different than with a uniform density system, such as an electron gas; and this method could detect the existence of collective states in atoms similar to those responsible for the giant dipole resonances in nuclei. The method is shown to be superior to the H-F method in three basic ways: (1) The RPA contains explicit correlations between the excited and ground states. These are not included in the H-F theory. One can apply this method to large atoms since only these correlations are explicitly included. (2) The RPA calculates excitation energies directly without recourse to highly correlated ground state wavefunctions. This is in contrast to the method of configuration mixing, which is known to have slow convergence properties. (3) Oscillator strengths and photoionization cross sections can be calculated by finding the eigenvectors corresponding to the excitation energy eigenvalues. The strength of the RPA is that the excitation energies and oscillator strengths, which are relative quantities, are calculated directly. The results for the oscillator strengths show an improvement of up to 45 percent over the H-F values and an improvement over the RPA done with Hartree wavefunctions by as much as 65 percent. The work was limited to atoms with an ns^2 ground state configuration. These atoms were He, Be, Mg and Ca

  9. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  10. A new look at the statistical assessment of approximate and rigorous methods for the estimation of stabilized formation temperatures in geothermal and petroleum wells

    International Nuclear Information System (INIS)

    Espinoza-Ojeda, O M; Santoyo, E; Andaverde, J

    2011-01-01

    Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during a borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method using a β ratio parameter to evaluate the similarity of both solutions, and (iv) the evaluation of accuracy in each method using statistical tests of significance, and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints where the approximate and rigorous methods could provide consistent SFT estimates

  11. An approximate method for calculating electron-phonon matrix element of a disordered transition metal and relevant comments on superconductivity

    International Nuclear Information System (INIS)

    Zhang, L.

    1981-08-01

    A method based on the tight-binding approximation is developed to calculate the electron-phonon matrix element for the disordered transition metals. With the method as a basis the experimental T_c data of the amorphous transition metal superconductors are re-analysed. Some comments on the superconductivity of the disordered materials are given

  12. Prospective Middle-School Mathematics Teachers' Quantitative Reasoning and Their Support for Students' Quantitative Reasoning

    Science.gov (United States)

    Kabael, Tangul; Akin, Ayca

    2018-01-01

    The aim of this research is to examine prospective mathematics teachers' quantitative reasoning, their support for students' quantitative reasoning and the relationship between them, if any. The teaching experiment was used as the research method in this qualitatively designed study. The data of the study were collected through a series of…

  13. Back-propagation neural network-based approximate analysis of true stress-strain behaviors of high-strength metallic material

    International Nuclear Information System (INIS)

    Doh, Jaeh Yeok; Lee, Jong Soo; Lee, Seung Uk

    2016-01-01

    In this study, a Back-propagation neural network (BPN) is employed to conduct an approximation of a true stress-strain curve using the load-displacement experimental data of DP590, a high-strength material used in automobile bodies and chassis. The optimized interconnection weights are obtained for the hidden layers and output layers of the BPN through intelligent learning and training of the experimental data; by using these weights, a mathematical model of the material's behavior is suggested through this feed-forward neural network. Generally, the material properties from the tensile test cannot be acquired up to the fracture region, since it is difficult to measure the cross-section area of a specimen after diffuse necking. For this reason, the plastic properties of the true stress-strain curve are extrapolated using the weighted-average method after diffuse necking. The accuracies of the BPN-based meta-models for predicting material properties are validated in terms of the Root mean square error (RMSE). By applying the approximated material properties, reliable finite element solutions can be obtained for the different shapes of the finite element models. Furthermore, a sensitivity analysis of the approximate meta-model is performed using the first-order approximate derivatives of the BPN and is compared with the results of the finite difference method. In addition, we predict the tension velocity's effect on the material property through a first-order sensitivity analysis.

  14. [Clinical reasoning in nursing, concept analysis].

    Science.gov (United States)

    Côté, Sarah; St-Cyr Tribble, Denise

    2012-12-01

    Nurses work in situations of complex care requiring great clinical reasoning abilities. In the literature, clinical reasoning is often confused with other concepts and it has no consensual definition. To conduct a concept analysis of a nurse's clinical reasoning in order to clarify, define and distinguish it from other concepts as well as to better understand clinical reasoning. Rodgers's method of concept analysis was used, after literature was retrieved using clinical reasoning, concept analysis, nurse, intensive care and decision making as keywords. The use of cognition, cognitive strategies, a systematic approach of analysis and data interpretation, and generating hypotheses and alternatives are attributes of clinical reasoning. The antecedents are experience, knowledge, memory, cues, intuition and data collection. The consequences are decision making, action, clues and problem resolution. This concept analysis helped to define clinical reasoning, to distinguish it from other concepts used synonymously and to guide future research.

  15. Approximate particle number projection in hot nuclei

    International Nuclear Information System (INIS)

    Kosov, D.S.; Vdovin, A.I.

    1995-01-01

    Heated finite systems like, e.g., hot atomic nuclei have to be described by the canonical partition function. But this is a quite difficult technical problem and, as a rule, the grand canonical partition function is used in the studies. As a result, some shortcomings of the theoretical description appear because of the thermal fluctuations of the number of particles. Moreover, in nuclei with pairing correlations the quantum number fluctuations are introduced by some approximate methods (e.g., by the standard BCS method). The exact particle number projection is very cumbersome, and an approximate number projection method for T ≠ 0 based on the formalism of thermo field dynamics is proposed. The idea of the Lipkin-Nogami method to expand any operator as a series in powers of the number operator is used. The system of equations for the coefficients of this expansion is written and the solution of the system in the next approximation after the BCS one is obtained. The method, which is of the 'projection after variation' type, is applied to a degenerate single j-shell model. 14 refs., 1 tab

  16. Methods of legitimation: how ethics committees decide which reasons count in public policy decision-making.

    Science.gov (United States)

    Edwards, Kyle T

    2014-07-01

    In recent years, liberal democratic societies have struggled with the question of how best to balance expertise and democratic participation in the regulation of emerging technologies. This study aims to explain how national deliberative ethics committees handle the practical tension between scientific expertise, ethical expertise, expert patient input, and lay public input by explaining two institutions' processes for determining the legitimacy or illegitimacy of reasons in public policy decision-making: that of the United Kingdom's Human Fertilisation and Embryology Authority (HFEA) and the United States' American Society for Reproductive Medicine (ASRM). The articulation of these 'methods of legitimation' draws on 13 in-depth interviews with HFEA and ASRM members and staff conducted in January and February 2012 in London and over Skype, as well as observation of an HFEA deliberation. This study finds that these two institutions employ different methods in rendering certain arguments legitimate and others illegitimate: while the HFEA attempts to 'balance' competing reasons but ultimately legitimizes arguments based on health and welfare concerns, the ASRM seeks to 'filter' out arguments that challenge reproductive autonomy. The notably different structures and missions of each institution may explain these divergent approaches, as may what Sheila Jasanoff (2005) terms the distinctive 'civic epistemologies' of the US and the UK. Significantly for policy makers designing such deliberative committees, each method differs substantially from that explicitly or implicitly endorsed by the institution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. The Analysis of Students Scientific Reasoning Ability in Solving the Modified Lawson Classroom Test of Scientific Reasoning (MLCTSR Problems by Applying the Levels of Inquiry

    Directory of Open Access Journals (Sweden)

    N. Novia

    2017-04-01

    Full Text Available This study aims to determine the students’ achievement in answering modified Lawson classroom test of scientific reasoning (MLCTSR) questions in overall science teaching and by every aspect of scientific reasoning abilities. There are six aspects related to the scientific reasoning abilities that were measured; they are conservation reasoning, proportional reasoning, controlling variables, combinatorial reasoning, probabilistic reasoning, and correlational reasoning. The research was also conducted to see the development of scientific reasoning by using levels of inquiry models. The students’ reasoning ability was measured using the Modified Lawson Classroom Test of Scientific Reasoning (MLCTSR). The MLCTSR is a test developed based on Lawson’s Classroom Test of Scientific Reasoning (LCTSR) of 2000, which consists of 12 multiple-choice questions. The research method chosen in this study is the descriptive quantitative research method. The research design used is a One Group Pretest-Posttest Design. The population of this study comprised all grade VII junior high school students in the 2014/2015 academic year at one junior high school in Bandung. The sample in this study is one grade VII class, class VII C. The sampling method used in this research is purposive sampling. The results showed that there is an increase in quantitative scientific reasoning, although its value is not large.

  18. Delay in a tandem queueing model with mobile queues: An analytical approximation

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    In this paper, we analyze the end-to-end delay performance of a tandem queueing system with mobile queues. Due to state-space explosion, there is no hope for an exact numerical analysis of the joint queue-length distribution. For this reason, we present an analytical approximation that is based on

  19. Resummation of perturbative QCD by pade approximants

    International Nuclear Information System (INIS)

    Gardi, E.

    1997-01-01

    In this lecture I present some of the new developments concerning the use of Padé Approximants (PA's) for resumming perturbative series in QCD. It is shown that PA's tend to reduce the renormalization scale and scheme dependence as compared to truncated series. In particular it is proven that in the limit where the β function is dominated by the 1-loop contribution, there is an exact symmetry that guarantees invariance of diagonal PA's under changing the renormalization scale. In addition it is shown that in the large-β0 approximation diagonal PA's can be interpreted as a systematic method for approximating the flow of momentum in Feynman diagrams. This corresponds to a new multiple scale generalization of the Brodsky-Lepage-Mackenzie (BLM) method to higher orders. I illustrate the method with the Bjorken sum rule and the vacuum polarization function. (author)

  20. Simultaneous perturbation stochastic approximation for tidal models

    KAUST Repository

    Altaf, M.U.

    2011-05-12

    The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the moveable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The gradient approximation based on the central difference form uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational costs required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) The SPSA method gives comparable results to the steepest descent method with little computational cost. (2) The SPSA method with little computational cost can be used to estimate a large number of parameters.

  1. Simultaneous perturbation stochastic approximation for tidal models

    KAUST Repository

    Altaf, M.U.; Heemink, A.W.; Verlaan, M.; Hoteit, Ibrahim

    2011-01-01

    The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the moveable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The gradient approximation based on the central difference form uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational costs required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) The SPSA method gives comparable results to the steepest descent method with little computational cost. (2) The SPSA method with little computational cost can be used to estimate a large number of parameters.
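
    A generic SPSA iteration with standard Spall-type gain sequences makes the "two evaluations per iteration, regardless of dimension" property explicit; the sketch below is an illustration only, not the DCSM calibration code, and all identifiers are ours.

        import numpy as np

        def spsa(loss, theta0, n_iter=200, a=0.1, c=0.1, A=20.0, alpha=0.602, gamma=0.101, seed=0):
            # Simultaneous perturbation stochastic approximation: every iteration estimates
            # the full gradient from just two loss evaluations at +/- perturbed points.
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            for k in range(n_iter):
                ak = a / (k + 1 + A) ** alpha
                ck = c / (k + 1) ** gamma
                delta = rng.choice([-1.0, 1.0], size=theta.shape)        # Rademacher perturbation
                ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
                theta = theta - ak * ghat
            return theta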

  2. Finite approximations in fluid mechanics

    International Nuclear Information System (INIS)

    Hirschel, E.H.

    1986-01-01

    This book contains twenty papers on work which was conducted between 1983 and 1985 in the Priority Research Program ''Finite Approximations in Fluid Mechanics'' of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptical and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, zonal solutions for viscous flow problems, etc. The contents include: multigrid methods for problems from fluid dynamics, development of a 2D-Transonic Potential Flow Solver; a boundary element spectral method for nonstationary viscous flows in 3 dimensions; navier-stokes computations of two-dimensional laminar flows in a channel with a backward facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow-field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; zonal solutions for viscous flow problems

  3. Effective medium super-cell approximation for interacting disordered systems: an alternative real-space derivation of generalized dynamical cluster approximation

    International Nuclear Information System (INIS)

    Moradian, Rostam

    2006-01-01

    We develop a generalized real-space effective medium super-cell approximation (EMSCA) method to treat the electronic states of interacting disordered systems. This method is general and allows randomness both in the on-site energies and in the hopping integrals. For a non-interacting disordered system, in the special case of randomness in the on-site energies, this method is equivalent to the non-local coherent potential approximation (NLCPA) derived previously. Also, for an interacting system the EMSCA method leads to the real-space derivation of the generalized dynamical cluster approximation (DCA) for a general lattice structure. We found that the original DCA and the NLCPA are two simple cases of this technique, so the EMSCA is equivalent to the generalized DCA in which interaction and randomness in the on-site energies and in the hopping integrals are included. All of the equations of this formalism are derived by using the effective medium theory in real space

  4. The Application of Approximate Entropy Theory in Defects Detecting of IGBT Module

    Directory of Open Access Journals (Sweden)

    Shengqi Zhou

    2012-01-01

    Full Text Available Defect is one of the key factors in reducing the reliability of the insulated gate bipolar transistor (IGBT) module, so developing a diagnostic method for defects inside the IGBT module is an important measure to avoid catastrophic failure and improve the reliability of power electronic converters. For this reason, a novel diagnostic method based on the approximate entropy (ApEn) theory is presented in this paper, which can provide statistical diagnosis and allow the operator to replace defective IGBT modules in a timely manner. The proposed method is achieved by analyzing the cross ApEn of the gate voltages before and after the occurrence of defects. Due to the local damage caused by aging, the intrinsic parasitic parameters of packaging materials or silicon chips inside the IGBT module, such as parasitic inductances and capacitances, may change over time, which will cause a remarkable variation in the gate voltage. That is to say, the gate voltage is closely coupled with the defects. Therefore, the variation is quantified and used as a precursor parameter to evaluate the health status of the IGBT module. Experimental results validate the correctness of the proposed method.
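
    The paper works with the cross approximate entropy of gate-voltage records taken before and after a defect appears; the single-record ApEn below (a standard textbook implementation, not the authors' code) shows the underlying template-matching computation that both variants share.

        import numpy as np

        def approximate_entropy(x, m=2, r=None):
            # ApEn(m, r) of a 1-D signal x, following Pincus' definition.
            x = np.asarray(x, dtype=float)
            N = len(x)
            if r is None:
                r = 0.2 * np.std(x)   # common default tolerance

            def phi(mm):
                # all length-mm templates and their pairwise Chebyshev distances
                templates = np.array([x[i:i + mm] for i in range(N - mm + 1)])
                dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
                C = np.mean(dist <= r, axis=1)        # fraction within tolerance, self-matches included
                return np.mean(np.log(C))

            return phi(m) - phi(m + 1)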

  5. Finite elements and approximation

    CERN Document Server

    Zienkiewicz, O C

    2006-01-01

    A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o

  6. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  7. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L^p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  8. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  9. Brain Imaging, Forward Inference, and Theories of Reasoning

    Science.gov (United States)

    Heit, Evan

    2015-01-01

    This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926

  10. Brain imaging, forward inference, and theories of reasoning.

    Science.gov (United States)

    Heit, Evan

    2014-01-01

    This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities.

  11. Semi-implicit iterative methods for low Mach number turbulent reacting flows: Operator splitting versus approximate factorization

    Science.gov (United States)

    MacArt, Jonathan F.; Mueller, Michael E.

    2016-12-01

    Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
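
    For reference, the Strang splitting pattern that the second scheme builds on is simply a symmetric composition of the two sub-steps, with the stiff reaction handled in the middle; the minimal sketch below is generic, not the authors' DNS code, and the two step functions are placeholders.

        def strang_step(u, dt, transport_step, react_step):
            # One Strang-split step for du/dt = T(u) + R(u): half step of transport,
            # full (possibly implicit) step of reaction, then another half step of transport.
            u = transport_step(u, 0.5 * dt)
            u = react_step(u, dt)
            u = transport_step(u, 0.5 * dt)
            return u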

  12. Informing Tobacco Cessation Benefit Use Interventions for Unionized Blue-Collar Workers: A Mixed-Methods Reasoned Action Approach.

    Science.gov (United States)

    Yzer, Marco; Weisman, Susan; Mejia, Nicole; Hennrikus, Deborah; Choi, Kelvin; DeSimone, Susan

    2015-08-01

    Blue-collar workers typically have high rates of tobacco use but low rates of using tobacco cessation resources available through their health benefits. Interventions to motivate blue-collar tobacco users to use effective cessation support are needed. Reasoned action theory is useful in this regard as it can identify the beliefs that shape tobacco cessation benefit use intentions. However, conventional reasoned action research cannot speak to how those beliefs can best be translated into intervention messages. In the present work, we expand the reasoned action approach by adding additional qualitative inquiry to better understand blue-collar smokers' beliefs about cessation benefit use. Across three samples of unionized blue-collar tobacco users, we identified (1) the 35 attitudinal, normative, and control beliefs that represented tobacco users' belief structure about cessation benefit use; (2) instrumental attitude as most important in explaining cessation intention; (3) attitudinal beliefs about treatment options' efficacy, health effects, and monetary implications of using benefits as candidates for message design; (4) multiple interpretations of cessation beliefs (e.g., short and long-term health effects); and (5) clear implications of these interpretations for creative message design. Taken together, the findings demonstrate how a mixed-method reasoned action approach can inform interventions that promote the use of tobacco cessation health benefits.

  13. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  14. Approximate thermodynamic state relations in partially ionized gas mixtures

    International Nuclear Information System (INIS)

    Ramshaw, John D.

    2004-01-01

    Thermodynamic state relations for mixtures of partially ionized nonideal gases are often approximated by artificially partitioning the mixture into compartments or subvolumes occupied by the pure partially ionized constituent gases, and requiring these subvolumes to be in temperature and pressure equilibrium. This intuitively reasonable procedure is easily shown to reproduce the correct thermal and caloric state equations for a mixture of neutral (nonionized) ideal gases. The purpose of this paper is to point out that (a) this procedure leads to incorrect state equations for a mixture of partially ionized ideal gases, whereas (b) the alternative procedure of requiring that the subvolumes all have the same temperature and free electron density reproduces the correct thermal and caloric state equations for such a mixture. These results readily generalize to the case of partially degenerate and/or relativistic electrons, to a common approximation used to represent pressure ionization effects, and to two-temperature plasmas. This suggests that equating the subvolume electron number densities or chemical potentials instead of pressures is likely to provide a more accurate approximation in nonideal plasma mixtures

  15. Nuclear data processing, analysis, transformation and storage with Pade-approximants

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.

    1992-01-01

    A method is described to generate rational approximants of high order with applications to neutron data handling. The problems considered are: the approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing one to compute the error at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that the experimental errors are independent and normally distributed; a method of simultaneous generation of a few rational approximants with an identical set of poles; functionals other than LSM; and two-dimensional approximation. (orig.)

  16. An approximate but efficient method to calculate free energy trends by computer simulation: Application to dihydrofolate reductase-inhibitor complexes

    Science.gov (United States)

    Gerber, Paul R.; Mark, Alan E.; van Gunsteren, Wilfred F.

    1993-06-01

    Derivatives of free energy differences have been calculated by molecular dynamics techniques. The systems under study were ternary complexes of Trimethoprim (TMP) with dihydrofolate reductases of E. coli and chicken liver, containing the cofactor NADPH. Derivatives are taken with respect to modifications of TMP, with emphasis on altering the 3-, 4- and 5-substituents of the phenyl ring. A linear approximation allows a whole set of modifications to be encompassed in a single simulation, as opposed to a full perturbation calculation, which requires a separate simulation for each modification. In the case considered here, the proposed technique requires a factor of 1000 less computing effort than a full free energy perturbation calculation. For the linear approximation to yield a significant result, the perturbation evolution has to be chosen such that the initial trend mirrors the full calculation. The generation of new atoms requires a careful treatment of the singular terms in the non-bonded interaction. The result can be represented by maps of the changed molecule, which indicate whether complex formation is favoured by moving partial charges and changing atom polarizabilities. Comparison with experimental measurements of inhibition constants reveals fair agreement in the range of values covered. However, detailed comparison fails to show a significant correlation. Possible reasons for the most pronounced deviations are given.

  17. Square well approximation to the optical potential

    International Nuclear Information System (INIS)

    Jain, A.K.; Gupta, M.C.; Marwadi, P.R.

    1976-01-01

    Approximations for obtaining T-matrix elements for a sum of several potentials in terms of the T-matrices for the individual potentials are studied. Based on model calculations for the S-wave for a sum of two separable non-local potentials with Yukawa-type form factors and a sum of two delta-function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all energy regions by the sum of the T-matrices for the individual potentials. Based on this, an approximate method for finding the T-matrix of any local potential by approximating it by a sum of a suitable number of square wells is presented. This provides an interesting way to calculate the T-matrix for any arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to the Saxon-Wood potentials and good agreement with exact results is found. (author)

  18. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.
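
    A minimal Python sketch of the idea in this record (not the authors' implementation): an explicit approximate feature map (Nyström or random Fourier features, here taken from scikit-learn) replaces the kernel matrix, and a linear ranker with a squared hinge loss is trained on pairwise feature differences. The synthetic data, the use of LinearSVC in place of the paper's primal truncated Newton solver, and all parameter values are assumptions made for illustration.

      import numpy as np
      from sklearn.kernel_approximation import Nystroem, RBFSampler
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 10))
      scores = np.sin(X[:, 0]) + X[:, 1] ** 2          # hypothetical relevance scores

      def pairwise_diffs(Z, s, n_pairs=2000):
          """Difference vectors z_i - z_j labelled by sign(s_i - s_j)."""
          i = rng.integers(0, len(Z), n_pairs)
          j = rng.integers(0, len(Z), n_pairs)
          keep = s[i] != s[j]
          return Z[i[keep]] - Z[j[keep]], np.sign(s[i[keep]] - s[j[keep]])

      for mapper in (Nystroem(gamma=0.5, n_components=100, random_state=0),
                     RBFSampler(gamma=0.5, n_components=100, random_state=0)):
          Z = mapper.fit_transform(X)                  # explicit approximate feature map
          D, y = pairwise_diffs(Z, scores)
          ranker = LinearSVC(loss="squared_hinge", fit_intercept=False, C=1.0).fit(D, y)
          acc = ranker.score(*pairwise_diffs(Z, scores))   # pairwise ordering accuracy
          print(type(mapper).__name__, round(acc, 3))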

  19. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.

  20. Theory of Endorsements and Reasoning With Uncertainty, January 1984 - January 1986

    Science.gov (United States)

    1987-11-01


  1. A holistic method to assess building energy efficiency combining D-S theory and the evidential reasoning approach

    International Nuclear Information System (INIS)

    Yao Runming; Yang Yulan; Li Baizhan

    2012-01-01

    The assessment of building energy efficiency is one of the most effective measures for reducing building energy consumption. This paper proposes a holistic method (HMEEB) for assessing and certifying energy efficiency of buildings based on the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. HMEEB has three main features: (i) it provides both a method to assess and certify building energy efficiency, and serves as an analytical tool to identify improvement opportunities; (ii) it combines a wealth of information on building energy efficiency assessment, including identification of indicators and a weighting mechanism; and (iii) it provides a method to identify and deal with inherent uncertainties within the assessment procedure. This paper demonstrates the robustness, flexibility and effectiveness of the proposed method, using two examples to assess the energy efficiency of two residential buildings, both located in the ‘Hot Summer and Cold Winter’ zone in China. The proposed certification method provides detailed recommendations for policymakers in the context of carbon emission reduction targets and promoting energy efficiency in the built environment. The method is transferable to other countries and regions, using an indicator weighting system to adjust for local climatic, economic and social factors. - Highlights: ► Assessing energy efficiency of buildings holistically. ► Applying the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. ► Handling large amounts of information and uncertainty in the energy efficiency decision-making process. ► Rigorous measures for policymakers to meet carbon emission reduction targets.

  2. SAM revisited: uniform semiclassical approximation with absorption

    International Nuclear Information System (INIS)

    Hussein, M.S.; Pato, M.P.

    1986-01-01

    The uniform semiclassical approximation is modified to take into account strong absorption. The resulting theory, very similar to the one developed by Frahn and Gross, is used to discuss heavy-ion elastic scattering at intermediate energies. The theory permits a reasonably unambiguous separation of refractive and diffractive effects. The systems 12C + 12C and 12C + 16O, which seem to exhibit a remnant of a nuclear rainbow at E = 20 MeV/N, are analysed with the theory, which is built directly on a model for the S-matrix. Simple relations between the fitted S-matrix and the underlying complex potential are derived. (Author) [pt

  3. Accelerating Approximate Bayesian Computation with Quantile Regression: application to cosmological redshift distributions

    Science.gov (United States)

    Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.

    2018-02-01

    Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of the quantiles of the distance measure as a function of the input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is then repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as the basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
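
    A toy Python sketch of the qABC idea (not the authors' code): a quantile-regression model of the distance, trained on a small pilot run, screens prior draws so that only promising parameter values are actually simulated. The simulator, prior range, quantile level and acceptance threshold below are invented for illustration.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(1)
      obs = rng.normal(loc=2.0, scale=1.0, size=200)          # "observed" data

      def simulate(theta):
          return rng.normal(loc=theta, scale=1.0, size=200)

      def distance(x):
          return abs(x.mean() - obs.mean())

      # Stage 1: small pilot run to train a model of the 20% quantile of the distance.
      pilot_theta = rng.uniform(-5, 5, size=100)
      pilot_dist = np.array([distance(simulate(t)) for t in pilot_theta])
      qmodel = GradientBoostingRegressor(loss="quantile", alpha=0.2, n_estimators=200)
      qmodel.fit(pilot_theta.reshape(-1, 1), pilot_dist)

      # Stage 2: simulate only the prior draws whose predicted quantile distance is
      # below the tolerance; the remaining draws are rejected without simulation.
      eps = 0.1
      candidates = rng.uniform(-5, 5, size=5000)
      promising = candidates[qmodel.predict(candidates.reshape(-1, 1)) < eps]
      posterior = [t for t in promising if distance(simulate(t)) < eps]
      print(f"simulated {len(promising)} of 5000 draws, accepted {len(posterior)}")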

  4. Suboptimal control of pressurized water reactor power plant using approximate model-following method

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Ogawa, Yuichi

    1987-01-01

    We attempted to develop an effective control system that can successfully manage the nuclear steam supply (NSS) system of a PWR power plant in an operational mode requiring relatively small variations of power. A procedure is proposed for synthesizing a simple yet practical suboptimal control system. The suboptimal control system is designed in two steps: application of optimal control theory based on linear state-feedback control, and use of an approximate model-following method. This procedure can appreciably reduce the complexity of the controller structure by accepting a slight deviation from optimality and by using output-feedback control. This eliminates the engineering difficulty caused by incomplete state feedback, which is sometimes encountered in practical applications of optimal state-feedback control theory to complex large-scale dynamical systems. Digital simulations and graphical studies based on the Bode diagram demonstrate the effectiveness of the suboptimal control, and the applicability of the proposed design method as well. (author)

  5. An outer approximation method for the road network design problem.

    Science.gov (United States)

    Asadi Bagloee, Saeed; Sarvi, Majid

    2018-01-01

    Best investment in the road infrastructure, or network design, is perceived as a fundamental and benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as a bilevel Discrete Network Design Problem (DNDP) of NP-hard computational complexity. We address this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the upper-level network performance function into the lower-level user equilibrium traffic assignment problem (UE-TAP) as a constraint. This results in a mixed-integer nonlinear programming (MINLP) problem which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method is proven to be highly efficient in solving the DNDP for a large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as the termination criterion), global optimum solutions are quickly reached in most of the cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis of the networks of Gao and Sioux-Falls shows that for such a non-exact method the global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well.

  6. An approximate method to estimate the minimum critical mass of fissile nuclides

    International Nuclear Information System (INIS)

    Wright, R.Q.; Jordan, W.C.

    1999-01-01

    When evaluating systems in criticality safety, it is important to approximate the answer before any analysis is performed. There is currently interest in establishing the minimum critical parameters for fissile actinides. The purpose is to describe the OB-1 method for estimating the minimum critical mass for thermal systems based on one-group calculations and 235U spheres fully reflected by water. The observation is made that for water-moderated, well-thermalized systems, the transport and leakage from the system are dominated by water. Under these conditions two fissile mixtures will have nearly the same critical volume provided the infinite media multiplication factor (k∞) for the two systems is the same. This observation allows for very simple estimates of critical concentration and mass as a function of the hydrogen-to-fissile (H/X) moderation ratio by comparison to the known 235U system.

  7. How People Reason: A Grounded Theory Study of Scientific Reasoning about Global Climate Change

    Science.gov (United States)

    Liu, Shiyu

    Scientific reasoning is crucial in both scientific inquiry and everyday life. While the majority of researchers have studied "how people reason" by focusing on their cognitive processes, factors related to the underpinnings of scientific reasoning are still under-researched. The present study aimed to develop a grounded theory that captures not only the cognitive processes during reasoning but also their underpinnings. In particular, the grounded theory and phenomenographic methodologies were integrated to explore how undergraduate students reason about competing theories and evidence on global climate change. Twenty-six undergraduate students were recruited through theoretical sampling. Constant comparative analysis of responses from interviews and written assessments revealed that participants were mostly drawn to the surface features when reasoning about evidence. While prior knowledge might not directly contribute to participants' performance on evidence evaluation, it affected their level of engagement when reading and evaluating competing arguments on climate issues. More importantly, even though all participants acknowledged the relative correctness of multiple perspectives, they predominantly favored arguments that supported their own beliefs with weak scientific reasoning about the opposing arguments. Additionally, factors such as personal interests, religious beliefs, and reading capacity were also found to have bearings on the way participants evaluated evidence and arguments. In all, this work contributes to the current endeavors in exploring the nature of scientific reasoning. Taking a holistic perspective, it provides an in-depth discussion of factors that may affect or relate to scientific reasoning processes. Furthermore, in comparison with traditional methods used in the literature, the methodological approach employed in this work brought an innovative insight into the investigation of scientific reasoning. Last but not least, this research may

  8. The Hartree-Fock seniority approximation

    International Nuclear Information System (INIS)

    Gomez, J.M.G.; Prieto, C.

    1986-01-01

    A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)

  9. The second-order polarization propagator approximation (SOPPA) method coupled to the polarizable continuum model

    DEFF Research Database (Denmark)

    Eriksen, Janus Juul; Solanko, Lukasz Michal; Nåbo, Lina J.

    2014-01-01

    2) wave function coupled to PCM, we introduce dynamical PCM solvent effects only in the Random Phase Approximation (RPA) part of the SOPPA response equations while the static solvent contribution is kept in both the RPA terms as well as in the higher order correlation matrix components of the SOPPA response equations. By dynamic terms, we refer to contributions that describe a change in environmental polarization which, in turn, reflects a change in the core molecular charge distribution upon an electronic excitation. This new combination of methods is termed PCM-SOPPA/RPA. We apply this newly defined method to the challenging cases of solvent effects on the lowest and intense electronic transitions in o-, m- and p-nitroaniline and o-, m- and p-nitrophenol and compare the performance of PCM-SOPPA/RPA with more conventional approaches. Compared to calculations based on time-dependent density...

  10. Approximate k-NN delta test minimization method using genetic algorithms: Application to time series

    CERN Document Server

    Mateo, F; Gadea, Rafael; Sovilj, Dusan

    2010-01-01

    In many real world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that intends to find a global optimum set of input variables that minimize the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation by its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
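
    A minimal Python sketch of the Delta Test criterion that the genetic algorithm in this record minimizes; here an exact nearest-neighbour search and a plain random search over variable masks stand in for the approximate k-NN and the MATLAB GA toolbox used by the authors, and the synthetic regression problem is invented.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(2)
      X = rng.normal(size=(400, 8))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + 0.05 * rng.normal(size=400)  # variables 0 and 2 relevant

      def delta_test(X, y, mask):
          """Noise-variance estimate (1/2N) * sum_i (y_i - y_NN(i))^2 in the masked subspace."""
          sel = X[:, mask]
          nn = NearestNeighbors(n_neighbors=2).fit(sel)      # each point plus its nearest neighbour
          _, idx = nn.kneighbors(sel)
          return np.mean((y - y[idx[:, 1]]) ** 2) / 2.0

      best_mask, best_dt = None, np.inf
      for _ in range(200):                                   # stand-in for the GA search
          mask = rng.random(X.shape[1]) < 0.5
          if mask.any():
              dt = delta_test(X, y, mask)
              if dt < best_dt:
                  best_mask, best_dt = mask, dt
      print("selected variables:", np.flatnonzero(best_mask), "delta test:", round(best_dt, 4))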

  11. CONFLICTING REASONS

    OpenAIRE

    Parfit, Derek

    2016-01-01

    Sidgwick believed that, when impartial reasons conflict with self-interested reasons, there are no truths about their relative strength. There are such truths, I claim, but these truths are imprecise. Many self-interested reasons are decisively outweighed by conflicting impartial moral reasons. But we often have sufficient self-interested reasons to do what would make things go worse, and we sometimes have sufficient self-interested reasons to act wrongly. If we reject Act Consequentialism, ...

  12. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  13. Speed of reasoning and its relation to reasoning ability

    NARCIS (Netherlands)

    Goldhammer, F.; Klein Entink, R.H.

    2011-01-01

    The study investigates empirical properties of reasoning speed which is conceived as the fluency of solving reasoning problems. Responses and response times in reasoning tasks are modeled jointly to clarify the covariance structure of reasoning speed and reasoning ability. To determine underlying

  14. Lagrangians for plasmas in drift-fluid approximation

    International Nuclear Information System (INIS)

    Pfirsch, D.; Correa-Restrepo, D.

    1996-10-01

    For drift waves and related instabilities conservation laws can play a crucial role. In an ideal theory these conservation laws are guaranteed when a Lagrangian can be found from which the equations for the various quantities result by Hamilton's principle. Such a Lagrangian for plasmas in drift-fluid approximation was obtained by a heuristic method in a recent paper by Pfirsch and Correa-Restrepo. In the present paper the same Lagrangian is derived from the exact multi-fluid Lagrangian via an iterative approximation procedure which resembles the standard method usually applied to the equations of motion. That method, however, does not guarantee all the conservation laws to hold. (orig.)

  15. On the WKBJ approximation

    International Nuclear Information System (INIS)

    El Sawi, M.

    1983-07-01

    A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to be leading in the order of iteration, and thus possibly has the ability of accelerating the convergence of the solution. The method is also extended for the solution of inhomogeneous equations. (author)

  16. Diagonal Pade approximations for initial value problems

    International Nuclear Information System (INIS)

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
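
    A small sketch of the underlying idea, restricted to the lowest-order case: the (1,1) diagonal Pade approximant of the time-evolution operator exp(hA) of dy/dt = Ay is (I - hA/2)^(-1)(I + hA/2), and the higher-order diagonal approximants used in this record factor into products of such linear-fractional sub-steps. The test system below is made up for illustration.

      import numpy as np
      from scipy.linalg import expm, solve

      A = np.array([[0.0, 1.0],
                    [-4.0, -0.1]])          # a lightly damped oscillator
      y = np.array([1.0, 0.0])
      h, steps = 0.05, 200

      I = np.eye(2)
      for _ in range(steps):
          y = solve(I - 0.5 * h * A, (I + 0.5 * h * A) @ y)   # one Pade(1,1) time step

      exact = expm(A * h * steps) @ np.array([1.0, 0.0])
      print("Pade(1,1):", y)
      print("exact:    ", exact)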

  17. Switching Service Providers: Reasons, Service Types, and Sequences

    African Journals Online (AJOL)

    In Keaveney's (1995) landmark study on the reasons for switching service providers, data were gathered using the critical incident technique (CIT); here the original findings are tested using a survey method. Keaveney's typology of reasons for switching is supported across a range of categories but, in this new study, the reasons ...

  18. Reasoning about Codata

    Science.gov (United States)

    Hinze, Ralf

    Programmers happily use induction to prove properties of recursive programs. To show properties of corecursive programs they employ coinduction, but perhaps less enthusiastically. Coinduction is often considered a rather low-level proof method, in particular, as it departs quite radically from equational reasoning. Corecursive programs are conveniently defined using recursion equations. Suitably restricted, these equations possess unique solutions. Uniqueness gives rise to a simple and attractive proof technique, which essentially brings equational reasoning to the coworld. We illustrate the approach using two major examples: streams and infinite binary trees. Both coinductive types exhibit a rich structure: they are applicative functors or idioms, and they can be seen as memo-tables or tabulations. We show that definitions and calculations benefit immensely from this additional structure.

  19. Research on conflict resolution of collaborative design with fuzzy case-based reasoning method

    Institute of Scientific and Technical Information of China (English)

    HOU Jun-ming; SU Chong; LIANG Shuang; WANG Wan-shan

    2009-01-01

    Collaborative design is a new style of modern mechanical design that meets the requirements of increasing competition. Designers in different places work on the same design, but conflicts appear in the design process and may interfere with it. The case-based reasoning (CBR) method, from the field of artificial intelligence, is applied to the problem of conflict resolution. However, due to the uncertainties in knowledge representation, attribute description, and similarity measures of CBR, it is very difficult to find similar cases in the case database. A fuzzy CBR method was proposed to solve the problem of conflict resolution in collaborative design, and the process of fuzzy CBR is introduced. Based on the feature attributes and their relative weights determined by a fuzzy technique, a fuzzy CBR retrieval mechanism was developed to retrieve conflict-resolution cases, which tends to enhance the functions of the case database. By indexing, weighting and defuzzifying the cases, the case similarity can be obtained; the case consistency is then measured to ensure a correct result. Finally, the fuzzy CBR method for conflict resolution is demonstrated by means of a case study, and a web-based prototype system was developed to illustrate the methodology.
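
    A much-simplified Python sketch in the spirit of the retrieval step described above: per-attribute fuzzy similarities are combined with relative weights to select the most similar past conflict-resolution case. The attributes, weights and case base are invented, and the paper's indexing, defuzzification and consistency checks are not reproduced.

      from dataclasses import dataclass

      @dataclass
      class Case:
          name: str
          attributes: dict      # attribute name -> normalized value in [0, 1]
          resolution: str

      WEIGHTS = {"cost_impact": 0.4, "schedule_impact": 0.3, "design_coupling": 0.3}

      def fuzzy_similarity(a, b):
          """Triangular similarity: 1 at equality, falling linearly to 0."""
          return max(0.0, 1.0 - abs(a - b))

      def case_similarity(query, case):
          return sum(w * fuzzy_similarity(query[k], case.attributes[k])
                     for k, w in WEIGHTS.items())

      case_base = [
          Case("C1", {"cost_impact": 0.8, "schedule_impact": 0.2, "design_coupling": 0.5},
               "negotiate cost split"),
          Case("C2", {"cost_impact": 0.1, "schedule_impact": 0.9, "design_coupling": 0.4},
               "re-sequence tasks"),
      ]
      query = {"cost_impact": 0.7, "schedule_impact": 0.3, "design_coupling": 0.6}
      best = max(case_base, key=lambda c: case_similarity(query, c))
      print(best.name, "->", best.resolution)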

  20. Effective Summation and Interpolation of Series by Self-Similar Root Approximants

    Directory of Open Access Journals (Sweden)

    Simon Gluzman

    2015-06-01

    Full Text Available We describe a simple analytical method for effective summation of series, including divergent series. The method is based on self-similar approximation theory resulting in self-similar root approximants. The method is shown to be general and applicable to different problems, as is illustrated by a number of examples. The accuracy of the method is not worse, and in many cases better, than that of Padé approximants, when the latter can be defined.

  1. Clinical Reasoning in Medicine: A Concept Analysis

    Directory of Open Access Journals (Sweden)

    Shahram Yazdani

    2018-01-01

    Full Text Available Background: Clinical reasoning plays an important role in the ability of physicians to make diagnoses and decisions. It is considered the physician’s most critical competence, but it is an ambiguous concept in medicine that needs a clear analysis and definition. Our aim was to clarify the concept of clinical reasoning in medicine by identifying its components and to differentiate it from other similar concepts. It is necessary to have an operational definition of clinical reasoning, and its components must be precisely defined in order to design successful interventions and use it easily in future research. Methods: McKenna’s nine-step model was applied to facilitate the clarification of the concept of clinical reasoning. The literature for this concept analysis was retrieved from several databases, including Scopus, Elsevier, PubMed, ISI, ISC, Medline, and Google Scholar, for the years 1995–2016 (until September 2016). An extensive search of the literature was conducted using the electronic database. Accordingly, 17 articles and one book were selected for the review. We applied McKenna’s method of concept analysis in studying clinical reasoning, so that definitional attributes, antecedents, and consequences of this concept were extracted. Results: Clinical reasoning has nine major attributes in medicine. These attributes include: (1) clinical reasoning as a cognitive process; (2) knowledge acquisition and application of different types of knowledge; (3) thinking as a part of the clinical reasoning process; (4) patient inputs; (5) context-dependent and domain-specific processes; (6) iterative and complex processes; (7) multi-modal cognitive processes; (8) professional principles; and (9) health system mandates. These attributes are influenced by the antecedents of workplace context, practice frames of reference, practice models of the practitioner, and clinical skills. The consequences of clinical reasoning are the metacognitive improvement of

  2. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    Science.gov (United States)

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins presented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only; this is proved, as the core and the folds of the protein will have two identical sides for all short sequences.

  3. Application of the first approximation of the K-harmonics method to the O+ states of 16O

    International Nuclear Information System (INIS)

    Silveira, H.V. da.

    1977-01-01

    The first (also called basic) approximation of the K-harmonics method is applied to the nucleus of 16O, taken as a system of 8 protons and 8 neutrons interacting through nuclear and Coulomb two-body potentials, in order to obtain the spectrum of the O+ states of 16O, and also the charge form factor and the root mean square charge radius [pt

  4. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  5. Spherical anharmonic oscillator in self-similar approximation

    International Nuclear Information System (INIS)

    Yukalova, E.P.; Yukalov, V.I.

    1992-01-01

    The method of self-similar approximation is applied here for calculating the eigenvalues of the three-dimensional spherical anharmonic oscillator. The advantage of this method lies in its simplicity and high accuracy. Comparison with other known analytical methods shows that this method is simpler and more accurate. 25 refs

  6. The mathematical structure of the approximate linear response relation

    International Nuclear Information System (INIS)

    Yasuda, Muneki; Tanaka, Kazuyuki

    2007-01-01

    In this paper, we study the mathematical structures of the linear response relation based on Plefka's expansion and the cluster variation method in terms of the perturbation expansion, and we show how this linear response relation approximates the correlation functions of the specified system. Moreover, by comparing the perturbation expansions of the correlation functions estimated by the linear response relation based on these approximation methods with exact perturbative forms of the correlation functions, we are able to explain why the approximate techniques using the linear response relation work well

  7. A mathematical model and an approximate method for calculating the fracture characteristics of nonmetallic materials during laser cutting

    Energy Technology Data Exchange (ETDEWEB)

    Smorodin, F.K.; Druzhinin, G.V.

    1991-01-01

    A mathematical model is proposed which describes the fracture behavior of amorphous materials during laser cutting. The model, which is based on boundary layer equations, is reduced to ordinary differential equations with the corresponding boundary conditions. The reduced model is used to develop an approximate method for calculating the fracture characteristics of nonmetallic materials.

  8. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction combines Fourier transforms in space with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
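
    A small numerical sketch of why the lowrank idea works, not the authors' algorithm: for a 1D variable-velocity model the space-wavenumber phase-shift matrix exp(i v(x)|k| dt) has small numerical rank. A truncated SVD is used here only as a diagnostic, whereas the paper selects representative spatial locations and wavenumbers; the velocity model and time step are invented.

      import numpy as np

      nx, dt = 200, 0.004
      x = np.linspace(0.0, 2.0, nx)
      v = 1.5 + 0.8 * np.exp(-((x - 1.0) ** 2) / 0.1)      # hypothetical velocity profile (km/s)
      k = 2 * np.pi * np.fft.fftfreq(nx, d=x[1] - x[0])    # wavenumbers

      W = np.exp(1j * np.outer(v, np.abs(k)) * dt)         # full space-wavenumber propagator
      s = np.linalg.svd(W, compute_uv=False)
      rank = int(np.sum(s > 1e-6 * s[0]))
      print(f"{nx}x{nx} matrix, numerical rank about {rank}")   # far smaller than nx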

  9. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  10. Conditional Density Approximations with Mixtures of Polynomials

    DEFF Research Database (Denmark)

    Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre

    2015-01-01

    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities

  11. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  12. Merging Belief Propagation and the Mean Field Approximation

    DEFF Research Database (Denmark)

    Riegler, Erwin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro

    2010-01-01

    We present a joint message passing approach that combines belief propagation and the mean field approximation. Our analysis is based on the region-based free energy approximation method proposed by Yedidia et al., which allows the same objective function (the Kullback-Leibler divergence) to be used as a starting point. In this method, message passing fixed-point equations (which correspond to the update rules in a message passing algorithm) are then obtained by imposing different region-based approximations and constraints on the mean field and belief propagation parts of the corresponding factor graph. Our results can be applied, for example, to algorithms that perform joint channel estimation and decoding in iterative receivers. This is demonstrated in a simple example.

  13. What variables can influence clinical reasoning?

    Directory of Open Access Journals (Sweden)

    Vahid Ashoorion

    2012-01-01

    Full Text Available Background: Clinical reasoning is one of the most important competencies that a physician should achieve. Many medical schools and licensing bodies try to predict it based on some general measures such as critical thinking, personality, and emotional intelligence. This study aimed at providing a model of the relationship between these constructs. Materials and Methods: Sixty-nine medical students participated in this study. A test battery was devised that consists of four parts: clinical reasoning measures, the NEO personality inventory, the Bar-On EQ inventory, and the California critical thinking questionnaire. All participants completed the tests. Correlation and multiple regression analyses were used for data analysis. Results: There are low to moderate correlations between clinical reasoning and the other variables. Emotional intelligence is the only variable that contributes to the clinical reasoning construct (r = 0.17–0.34; R2 change = 0.46, P value = 0.000). Conclusion: Although clinical reasoning can be considered a kind of thinking, no significant correlation was detected between it and the other constructs. Emotional intelligence (and its subscales) is the only variable that can be used for predicting clinical reasoning.

  14. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
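
    A heavily simplified sketch of the Gaussian-process regression step behind a GAP-style potential: a scalar "atomic energy" is learned as a function of a toy two-component descriptor using scikit-learn. Real GAP fits sums of atomic energies to DFT total energies, forces and stresses using SOAP descriptors and sparse GPs; the data and kernel settings below are invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(4)
      D = rng.uniform(0, 1, size=(200, 2))                      # toy environment descriptors
      E = np.sin(2 * np.pi * D[:, 0]) * D[:, 1] + 0.01 * rng.normal(size=200)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-4),
                                    normalize_y=True).fit(D, E)
      D_test = rng.uniform(0, 1, size=(50, 2))
      E_pred, E_std = gp.predict(D_test, return_std=True)
      print("mean predictive standard deviation:", round(float(E_std.mean()), 4))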

  15. IMPROVEMENT OF ACCURACY OF RADIATIVE HEAT TRANSFER DIFFERENTIAL APPROXIMATION METHOD FOR MULTI DIMENSIONAL SYSTEMS BY MEANS OF AUTO-ADAPTABLE BOUNDARY CONDITIONS

    Directory of Open Access Journals (Sweden)

    K. V. Dobrego

    2015-01-01

    Full Text Available The differential approximation is derived from the radiation transfer equation by averaging over the solid angle. It is one of the more effective methods for engineering calculations of radiative heat transfer in complex three-dimensional thermal power systems with selective and scattering media. A new method for improving the accuracy of the differential approximation, based on auto-adaptable boundary conditions, is introduced in the paper. The efficiency of this method is demonstrated for test 2D systems. Self-consistent auto-adaptable boundary conditions taking into consideration the nonorthogonal component of the radiation flux incident on the boundary are formulated. It is demonstrated that taking the non-orthogonal incident flux into consideration in multi-dimensional systems, such as furnaces, boilers and combustion chambers, improves the accuracy of the radiant flux simulations, especially in the zones adjacent to the edges of the chamber. Test simulations utilizing the differential approximation method with traditional boundary conditions, the new self-consistent boundary conditions and the “precise” discrete ordinates method were performed. The mean square errors of the resulting radiative fluxes calculated along the boundary of rectangular and triangular test areas were decreased 1.5–2 times by using auto-adaptable boundary conditions. Radiation flux gaps at the corner points of non-symmetric systems are revealed by using auto-adaptable boundary conditions; these cannot be captured with the conventional boundary conditions.

  16. Augmenting Ordinal Methods of Attribute Weight Approximation

    DEFF Research Database (Denmark)

    Daneilson, Mats; Ekenberg, Love; He, Ying

    2014-01-01

    of the obstacles and methods for introducing so-called surrogate weights have proliferated in the form of ordinal ranking methods for criteria weights. Considering the decision quality, one main problem is that the input information allowed in ordinal methods is sometimes too restricted. At the same time, decision makers often possess more background information, for example, regarding the relative strengths of the criteria, and might want to use that. We propose combined methods for facilitating the elicitation process and show how this provides a way to use partial information from the strength of preference

  17. Communication: On the consistency of approximate quantum dynamics simulation methods for vibrational spectra in the condensed phase.

    Science.gov (United States)

    Rossi, Mariana; Liu, Hanchao; Paesani, Francesco; Bowman, Joel; Ceriotti, Michele

    2014-11-14

    Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced back to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D2O doped with HOD and pure H2O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm(-1). Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods lets us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.

  18. PWL approximation of nonlinear dynamical systems, part II: identification issues

    International Nuclear Information System (INIS)

    De Feo, O; Storace, M

    2005-01-01

    This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes a black-box identification method based on state space reconstruction and PWL approximation, and applies it to some particularly significant dynamical systems (two topological normal forms and the Colpitts oscillator)
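
    A tiny illustration of the piecewise-linear approximation ingredient, not the identification method itself: a noisy nonlinear map is fitted by least squares in a basis of hat functions on a fixed grid. The target function, grid and noise level are invented; the papers apply the same kind of PWL fit to vector fields reconstructed from measured state trajectories.

      import numpy as np

      def hat_basis(x, knots):
          """Hat (piecewise-linear) basis: column j is the PWL interpolant of the j-th unit vector."""
          eye = np.eye(len(knots))
          return np.column_stack([np.interp(x, knots, eye[j]) for j in range(len(knots))])

      rng = np.random.default_rng(3)
      x = rng.uniform(-2, 2, 300)
      y = np.tanh(3 * x) + 0.05 * rng.normal(size=x.size)   # samples of a "black-box" map

      knots = np.linspace(-2, 2, 9)
      coef, *_ = np.linalg.lstsq(hat_basis(x, knots), y, rcond=None)

      xg = np.linspace(-2, 2, 200)
      err = np.abs(hat_basis(xg, knots) @ coef - np.tanh(3 * xg)).max()
      print("max PWL approximation error on the grid:", round(float(err), 3))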

  19. A test of the mean density approximation for Lennard-Jones mixtures with large size ratios

    International Nuclear Information System (INIS)

    Ely, J.F.

    1986-01-01

    The mean density approximation for mixture radial distribution functions plays a central role in modern corresponding-states theories. This approximation is reasonably accurate for systems that do not differ widely in size and energy ratios and which are nearly equimolar. As the size ratio increases, however, or if one approaches an infinite dilution of one of the components, the approximation becomes progressively worse, especially for the small-molecule pair. In an attempt to better understand and improve this approximation, isothermal molecular dynamics simulations have been performed on a series of Lennard-Jones mixtures. Thermodynamic properties, including the mixture radial distribution functions, have been obtained at seven compositions ranging from 5 to 95 mol%. In all cases the size ratio was fixed at two and three energy ratios were investigated, ε22/ε11 = 0.5, 1.0, and 1.5. The results of the simulations are compared with the mean density approximation, and a modification to integrals evaluated with the mean density approximation is proposed.

  20. Explicit Knowledge-based Reasoning for Visual Question Answering

    OpenAIRE

    Wang, Peng; Wu, Qi; Shen, Chunhua; Hengel, Anton van den; Dick, Anthony

    2015-01-01

    We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperform...

  1. Analytical approximate solutions for a general class of nonlinear delay differential equations.

    Science.gov (United States)

    Căruntu, Bogdan; Bota, Constantin

    2014-01-01

    We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
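
    A minimal sketch of the polynomial least squares idea applied to a delay equation: with a polynomial ansatz the residual of a pantograph-type equation is linear in the unknown coefficients, so minimizing the squared residual at collocation points reduces to ordinary linear least squares. The test equation y'(t) = -y(t) + 0.5*y(t/2) with y(0) = 1 on [0, 1], the degree and the grid are chosen here for illustration only.

      import numpy as np

      n, t = 6, np.linspace(0.0, 1.0, 50)              # polynomial degree, collocation points

      # Ansatz y(t) = 1 + sum_{k=1..n} c_k t^k satisfies y(0) = 1 automatically; the residual
      # R(t) = y'(t) + y(t) - 0.5*y(t/2) is then linear in the coefficients c_k.
      cols = [k * t ** (k - 1) + t ** k - 0.5 * (t / 2) ** k for k in range(1, n + 1)]
      A = np.column_stack(cols)
      rhs = np.full_like(t, -(0.0 + 1.0 - 0.5))        # constant-term contribution moved to the right
      c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

      p = lambda s: 1.0 + sum(c[k - 1] * s ** k for k in range(1, n + 1))
      dp = lambda s: sum(k * c[k - 1] * s ** (k - 1) for k in range(1, n + 1))
      print("max |residual| on [0, 1]:", float(np.abs(dp(t) + p(t) - 0.5 * p(t / 2)).max()))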

  2. Quasi-fractional approximation to the Bessel functions

    International Nuclear Information System (INIS)

    Guerrero, P.M.L.

    1989-01-01

    In this paper the author presents a simple quasi-fractional approximation for the Bessel functions Jν(x), (-1 ≤ ν < 0.5). This has been obtained by extending a previously published method which uses power series and asymptotic expansions simultaneously. Both functions, exact and approximate, coincide to at least two digits for positive x and ν between -1 and 0.4.

  3. Reason with me : 'Confabulation' and interpersonal moral reasoning

    NARCIS (Netherlands)

    Nyholm, S.R.

    2015-01-01

    According to Haidt’s ‘social intuitionist model’, empirical moral psychology supports the following conclusion: intuition comes first, strategic reasoning second. Critics have responded by arguing that intuitions can depend on non-conscious reasons, that not being able to articulate one’s reasons

  4. Structure of the optimized effective Kohn-Sham exchange potential and its gradient approximations

    International Nuclear Information System (INIS)

    Gritsenko, O.; Van Leeuwen, R.; Baerends, E.J.

    1996-01-01

    An analysis of the structure of the optimized effective Kohn-Sham exchange potential vx and its gradient approximations is presented. The potential is decomposed into the Slater potential vS and the response of vS to density variations, vresp. The latter exhibits peaks that reflect the atomic shell structure. Kohn-Sham exchange potentials derived from current gradient approaches for the exchange energy are shown to be quite reasonable for the Slater potential, but they fail to approximate the response part, which leads to poor overall potentials. Improved potentials are constructed by a direct fit of vx with a gradient-dependent Pade approximant form. The potentials obtained possess proper asymptotic and scaling properties and reproduce the shell structure of the exact vx. 44 refs., 7 figs., 4 tabs

  5. Validation of a patient interview for assessing reasons for antipsychotic discontinuation and continuation

    Directory of Open Access Journals (Sweden)

    Matza LS

    2012-07-01

    Full Text Available Louis S Matza,1 Glenn A Phillips,2 Dennis A Revicki,1 Haya Ascher-Svanum,3 Karen G Malley,4 Andrew C Palsgrove,1 Douglas E Faries,3 Virginia Stauffer,3 Bruce J Kinon,3 A George Awad,5 Richard SE Keefe,6 Dieter Naber7 1Outcomes Research, United BioSource Corporation, Bethesda, MD, 2Formerly with Eli Lilly and Company, Indianapolis, IN, 3Eli Lilly and Company, Indianapolis, IN, 4Malley Research Programming, Inc, Rockville, MD, USA; 5Department of Psychiatry and Behavioral Sciences, University of Toronto, Toronto, Canada; 6Duke University Medical Center, Durham NC, USA; 7Universitaetsklinikum Hamburg-Eppendorf, Hamburg, Germany Introduction: The Reasons for Antipsychotic Discontinuation Interview (RAD-I) was developed to assess patients’ perceptions of reasons for discontinuing or continuing an antipsychotic. The current study examined reliability and validity of domain scores representing three factors contributing to these treatment decisions: treatment benefits, adverse events, and distal reasons other than direct effects of the medication. Methods: Data were collected from patients with schizophrenia or schizoaffective disorder and their treating clinicians. For approximately 25% of patients, a second rater completed the RAD-I for assessment of inter-rater reliability. Results: All patients (n = 121; 81 discontinuation, 40 continuation) reported at least one reason for discontinuation or continuation (mean = 2.8 reasons for discontinuation; 3.4 for continuation). Inter-rater reliability was supported (kappas = 0.63–1.0). Validity of the discontinuation domain scores was supported by associations with symptom measures (the Positive and Negative Syndrome Scale for Schizophrenia, the Clinical Global Impression – Schizophrenia Scale; r = 0.30 to 0.51; all P < 0.01), patients’ primary reasons for discontinuation, and adverse events. However, the continuation domain scores were not significantly associated with these other indicators

  6. Pisa Question and Reasoning Skill

    Directory of Open Access Journals (Sweden)

    Ersoy Esen

    2017-01-01

    Full Text Available The objective of the study is to determine the level of reasoning skills of secondary school students. This research was conducted during the 2015-2016 academic year with the participation of 51 students in total, from a province in the Black Sea region of Turkey, selected by a random sampling method. The case study method was used, since it describes an existing situation. Content analysis, one of the qualitative research methods, was carried out. In order to ensure the validity of the scope, the agreement percentage formula was used and expert opinions were sought. The problem named Holiday, from Chapter 1 of the normal units in Problem Solving Questions from PISA (Program for International Student Assessment) [35], is used as the data collection tool for the study. The problem named Holiday consists of two questions. The applied problems were evaluated according to the mathematical reasoning stages of TIMSS (2003). The findings suggest that the students use proportional reasoning while solving the problems and use geometric shapes to facilitate the solution. When they come across related problems, they create connections between the problems based on the results of the previous problem. In conclusion, the students perform cross-checks to ensure that their solutions to the problems are accurate.

  7. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  8. The triangular density to approximate the normal density: decision rules-of-thumb

    International Nuclear Information System (INIS)

    Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.

    2003-01-01

    In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that produces surprisingly accurate performance in many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extensions of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that the use of the approximation generates. Such rules-of-thumb, which are useful in risk and reliability analysis and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will provoke other practitioners to explore its possible use in various domains and applications.
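
    A quick way to see how well a symmetric triangular density can stand in for the normal is to match variances: a symmetric triangular density on [-a, a] has variance a^2/6, so a = sqrt(6)*sigma reproduces the normal variance. The short sketch below (an illustration in this spirit, not taken from the paper, which also studies flared and asymmetric variants) compares the two densities numerically.

        import numpy as np

        def triangular_pdf(x, a):
            # symmetric triangular density on [-a, a] with mode 0
            return np.where(np.abs(x) < a, (a - np.abs(x)) / a**2, 0.0)

        def normal_pdf(x, sigma=1.0):
            return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

        sigma = 1.0
        a = np.sqrt(6.0) * sigma           # variance matching: a^2/6 = sigma^2
        x = np.linspace(-4.0, 4.0, 2001)
        err = np.abs(triangular_pdf(x, a) - normal_pdf(x, sigma))
        print("variance-matched half-width a =", a)
        print("max absolute density error    =", err.max())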

  9. To Reason or Not to Reason: Is Autobiographical Reasoning Always Beneficial?

    Science.gov (United States)

    McLean, Kate C.; Mansfield, Cade D.

    2011-01-01

    Autobiographical reasoning has been found to be a critical process in identity development; however, the authors suggest that existing research shows that such reasoning may not always be critical to another important outcome: well-being. The authors describe characteristics of people such as personality and age, contexts such as conversations,…

  10. Development of a numerical method for Navier-Stokes equations in the anelastic approximation: application to Rayleigh-Taylor instabilities

    International Nuclear Information System (INIS)

    Hammouch, Z.

    2012-01-01

    The 'anelastic' approximation filters out acoustic waves through an asymptotic development of the Navier-Stokes equations, thereby increasing the usable time step in numerical simulations of developing hydrodynamic instabilities. The anelastic equations for a two-fluid mixture in the case of the Rayleigh-Taylor instability are established. The linear stability of Rayleigh-Taylor flow is studied, for the first time, for perfect fluids in the anelastic approximation. We define the Stokes problem resulting from the Navier-Stokes equations without the nonlinear terms (a part of the buoyancy is retained); its ellipticity is demonstrated, and the eigenmodes and the invariance related to the pressure are detailed. Uzawa's method is extended to the anelastic approximation; it exhibits the decoupling of the velocities in 3D, the particular case k = 0, and the spurious pressure modes. Passing to multiple domains allowed the transmission conditions to be established. The algorithms and their implementation in the existing program are validated by comparing the Uzawa operator coded in Fortran and in Mathematica against an experiment with incompressible fluids and against results from anelastic and compressible numerical simulations. A study of the influence of the initial stratification of the two fluids on the development of the Rayleigh-Taylor instability is initiated. (author) [fr]

  11. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  12. On Born approximation in black hole scattering

    Science.gov (United States)

    Batic, D.; Kelkar, N. G.; Nowakowski, M.

    2011-12-01

    A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordström and Reissner-Nordström-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.

  13. Low-temperature excitations within the Bethe approximation

    International Nuclear Information System (INIS)

    Biazzo, I; Ramezanpour, A

    2013-01-01

    We propose the variational quantum cavity method to construct a minimal energy subspace of wavevectors that are used to obtain some upper bounds for the energy cost of the low-temperature excitations. Given a trial wavefunction we use the cavity method of statistical physics to estimate the Hamiltonian expectation and to find the optimal variational parameters in the subspace of wavevectors orthogonal to the lower-energy wavefunctions. To this end, we write the overlap between two wavefunctions within the Bethe approximation, which allows us to replace the global orthogonality constraint with some local constraints on the variational parameters. The method is applied to the transverse Ising model and different levels of approximations are compared with the exact numerical solutions for small systems. (paper)

  14. The high intensity approximation applied to multiphoton ionization

    International Nuclear Information System (INIS)

    Brandi, H.S.; Davidovich, L.; Zagury, N.

    1980-08-01

    It is shown that the most commonly used high intensity approximations as applied to ionization by strong electromagnetic fields are related. The applicability of the steepest descent method in these approximations, and the relation between them and first-order perturbation theory, are also discussed. (Author) [pt]

  15. Application of improved Vogel’s approximation method in minimization of rice distribution costs of Perum BULOG

    Science.gov (United States)

    Nahar, J.; Rusyaman, E.; Putri, S. D. V. E.

    2018-03-01

    This research was conducted at Perum BULOG Sub-Divre Medan, the implementing institution of the Raskin program for several regencies and cities in North Sumatera. Raskin is a program for distributing rice to the poor. In order to minimize rice distribution costs, the rice should be allocated optimally. The method used in this study consists of the Improved Vogel Approximation Method (IVAM) to determine the initial feasible solution, and the Modified Distribution (MODI) method to test the optimality of the solution. This study aims to determine whether the IVAM method can provide savings, i.e. cost efficiency, in rice distribution. The calculation with IVAM yields an optimum cost of Rp945.241.715,5, which is lower than the company's calculated cost of Rp958.073.750,40. Thus, the use of IVAM can save rice distribution costs of Rp12.832.034,9.
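
    For orientation, the underlying model is the classical balanced transportation problem: minimize the total shipping cost subject to supply (row) and demand (column) constraints. IVAM supplies a good initial basic feasible solution and MODI then verifies optimality; the toy sketch below (illustrative data, not the Perum BULOG figures) simply solves a small instance as a linear program, which gives the optimum cost that any VAM/IVAM-plus-MODI procedure should reach.

        import numpy as np
        from scipy.optimize import linprog

        cost   = np.array([[4.0, 8.0, 8.0],        # unit shipping costs (toy data)
                           [16.0, 24.0, 16.0],
                           [8.0, 16.0, 24.0]])
        supply = np.array([76.0, 82.0, 77.0])       # warehouse stocks
        demand = np.array([72.0, 102.0, 61.0])      # regional requirements (balanced)

        m, n = cost.shape
        A_eq, b_eq = [], []
        for i in range(m):                          # each supply must be shipped out
            row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
            A_eq.append(row); b_eq.append(supply[i])
        for j in range(n):                          # each demand must be met
            col = np.zeros(m * n); col[j::n] = 1.0
            A_eq.append(col); b_eq.append(demand[j])

        res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                      bounds=(0, None), method="highs")
        print("optimal distribution cost:", res.fun)
        print("allocation:")
        print(res.x.reshape(m, n))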

  16. Gluons from logarithmic slopes of F2 in the NLL approximation

    International Nuclear Information System (INIS)

    Golec-Biernat, K.

    1994-02-01

    We make a critical, next-to-leading order, study of the accuracy of the 'Prytz' relation, which is frequently used to extract the gluon distribution at small x from the logarithmic slopes of the structure function F2. We find that the simple relation is not generally valid in the HERA regime, but show that it is a reasonable approximation for gluons which are sufficiently singular at small x. (author). 9 refs, 3 figs

  17. Reasons for Whistleblowing: A Qualitative Study

    Directory of Open Access Journals (Sweden)

    Ali BALTACI

    2017-04-01

    Whistleblowing has become a commonly encountered concept in recent times. Negative behaviors and actions can be experienced in any organization, and whistleblowing, as a communication process, is a kind of ethical behavior. Whistleblowing is the transmission of an unfavorable situation discovered in the organization to either internal or external authorities. An examination of the reasons for employees' whistleblowing is important for a better understanding of this concept; hence, this research focuses on the reasons for whistleblowing. In addition, the reasons for avoiding whistleblowing were also investigated. This research, which is designed as a qualitative study, is based on the phenomenological approach. Interviews were conducted using an open-ended, semi-structured interview form. The research was conducted on 20 teachers, 12 administrators, and 7 inspectors. The data were analyzed using the content analysis method. As a result of the research, the individual, organizational and social reasons for whistleblowing have been differentiated. Among the individual reasons for whistleblowing are considerations of protecting and gaining interests. Organizational reasons include business ethics and the expectation of subsequent promotion. Social reasons encompass social benefits, social justice, and religious belief. Reasons for avoiding whistleblowing vary based on retaliation and worry. This research is considered important because it is believed to be the first qualitative study to approach the reasons for whistleblowing. The results of this research have revealed gaps in the understanding of this area for future studies.

  18. A concept analysis of abductive reasoning.

    Science.gov (United States)

    Mirza, Noeman A; Akhtar-Danesh, Noori; Noesgaard, Charlotte; Martin, Lynn; Staples, Eric

    2014-09-01

    To describe an analysis of the concept of abductive reasoning. In the discipline of nursing, abductive reasoning has received only philosophical attention and remains a vague concept. In addition to deductive and inductive reasoning, abductive reasoning is not recognized even in prominent nursing knowledge development literature. Therefore, what abductive reasoning is and how it can inform nursing practice and education was explored. Concept analysis. Combinations of specific keywords were searched in Web of Science, CINAHL, PsychINFO, PubMed, Medline and EMBASE. The analysis was conducted in June 2012 and only literature before this period was included. No time limits were set. Rodger's evolutionary method for conducting concept analysis was used. Twelve records were included in the analysis. The most common surrogate term was retroduction, whereas related terms included intuition and pattern and similarity recognition. Antecedents consisted of a complex, puzzling situation and a clinician with creativity, experience and knowledge. Consequences included the formation of broad hypotheses that enhance understanding of care situations. Overall, abductive reasoning was described as the process of hypothesis or theory generation and evaluation. It was also viewed as inference to the best explanation. As a new approach, abductive reasoning could enhance reasoning abilities of novice clinicians. It can not only incorporate various ways of knowing but also its holistic approach to learning appears to be promising in problem-based learning. As nursing literature on abductive reasoning is predominantly philosophical, practical consequences of abductive reasoning warrant further research. © 2014 John Wiley & Sons Ltd.

  19. Effects of Inquiry-Based Agriscience Instruction on Student Scientific Reasoning

    Science.gov (United States)

    Thoron, Andrew C.; Myers, Brian E.

    2012-01-01

    The purpose of this study was to determine the effect of inquiry-based agriscience instruction on student scientific reasoning. Scientific reasoning is defined as the use of the scientific method, inductive, and deductive reasoning to develop and test hypotheses. Developing scientific reasoning skills can provide learners with a connection to the…

  20. Effects analysis fuzzy inference system in nuclear problems using approximate reasoning

    International Nuclear Information System (INIS)

    Guimaraes, Antonio C.F.; Franklin Lapa, Celso Marcelo

    2004-01-01

    In this paper a fuzzy inference system modeling technique applied to failure mode and effects analysis (FMEA) is introduced for nuclear reactor problems. This method uses the concept of a pure fuzzy logic system to treat the traditional FMEA parameters: probability of occurrence, severity, and detection. The auxiliary feed-water system of a typical two-loop pressurized water reactor (PWR) was used as a practical example in this analysis. The central result is a conceptual comparison between the traditional risk priority number (RPN) and the fuzzy risk priority number (FRPN) obtained from expert opinion. The set of results demonstrates the great potential of the inference system and the advantage of the gray approach in this class of problems.
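
    As a rough illustration of the idea (not the authors' rule base or membership functions), the snippet below computes the crisp RPN = O x S x D and then a minimal zero-order Sugeno-style fuzzy risk score: occurrence, severity and detection are fuzzified with simple triangular membership functions on assumed 1-10 scales, and two rules are combined with min/max-style operators.

        def tri(x, a, b, c):
            # triangular membership function with support [a, c] and peak at b
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def low(x):  return tri(x, 0.0, 1.0, 5.5)     # assumed 1-10 rating scales
        def high(x): return tri(x, 5.5, 10.0, 11.0)

        def fuzzy_rpn(o, s, d):
            # Rule 1: IF O high AND S high AND D high THEN risk high (score 9)
            # Rule 2: IF O low  AND S low  AND D low  THEN risk low  (score 1)
            w_high = min(high(o), high(s), high(d))
            w_low  = min(low(o), low(s), low(d))
            if w_high + w_low == 0.0:
                return 5.0                             # no rule fires: neutral score
            return (9.0 * w_high + 1.0 * w_low) / (w_high + w_low)

        o, s, d = 7, 8, 6                              # expert ratings on 1-10 scales
        print("traditional RPN:", o * s * d)
        print("illustrative fuzzy risk score:", round(fuzzy_rpn(o, s, d), 2))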

  1. Clinical reasoning and its application to nursing: concepts and research studies.

    Science.gov (United States)

    Banning, Maggi

    2008-05-01

    Clinical reasoning may be defined as "the process of applying knowledge and expertise to a clinical situation to develop a solution" [Carr, S., 2004. A framework for understanding clinical reasoning in community nursing. J. Clin. Nursing 13 (7), 850-857]. Several forms of reasoning exist each has its own merits and uses. Reasoning involves the processes of cognition or thinking and metacognition. In nursing, clinical reasoning skills are an expected component of expert and competent practise. Nurse research studies have identified concepts, processes and thinking strategies that might underpin the clinical reasoning used by pre-registration nurses and experienced nurses. Much of the available research on reasoning is based on the use of the think aloud approach. Although this is a useful method, it is dependent on ability to describe and verbalise the reasoning process. More nursing research is needed to explore the clinical reasoning process. Investment in teaching and learning methods is needed to enhance clinical reasoning skills in nurses.

  2. Identification of approximately duplicate material records in ERP systems

    Science.gov (United States)

    Zong, Wei; Wu, Feng; Chu, Lap-Keung; Sculli, Domenic

    2017-03-01

    The quality of master data is crucial for the accurate functioning of the various modules of an enterprise resource planning (ERP) system. This study addresses specific data problems arising from the generation of approximately duplicate material records in ERP databases. Such problems are mainly due to the firm's lack of unique and global identifiers for the material records, and to the arbitrary assignment of alternative names for the same material by various users. Traditional duplicate detection methods are ineffective in identifying such approximately duplicate material records because these methods typically rely on string comparisons of each field. To address this problem, a machine learning-based framework is developed to recognise semantic similarity between strings and to further identify and reunify approximately duplicate material records - a process referred to as de-duplication in this article. First, the keywords of the material records are extracted to form vectors of discriminating words. Second, a machine learning method using a probabilistic neural network is applied to determine the semantic similarity between these material records. The approach was evaluated using data from a real case study. The test results indicate that the proposed method outperforms traditional algorithms in identifying approximately duplicate material records.
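
    The paper's classifier is a probabilistic neural network trained on extracted keyword vectors; as a much simpler stand-in that conveys the idea of semantic rather than exact string matching, the sketch below flags candidate duplicate material records by cosine similarity of character-n-gram TF-IDF vectors. Records, threshold and settings are illustrative only.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        records = [
            "hex bolt M8 x 40 stainless steel DIN 933",
            "stainless hex-head bolt M8x40 (DIN933)",
            "ball bearing 6204 2RS sealed",
        ]

        tfidf = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))  # character n-grams
        X = tfidf.fit_transform(records)
        sim = cosine_similarity(X)

        threshold = 0.6                      # illustrative cut-off, would need tuning
        for i in range(len(records)):
            for j in range(i + 1, len(records)):
                if sim[i, j] >= threshold:
                    print(f"possible duplicates ({sim[i, j]:.2f}):", records[i], "|", records[j])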

  3. The adiabatic approximation in multichannel scattering

    International Nuclear Information System (INIS)

    Schulte, A.M.

    1978-01-01

    Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus the Coriolis coupling is neglected and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)

  4. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and in the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
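
    A minimal experiment in this spirit, using a small feed-forward network to approximate a known non-linear function so that the approximation error can be measured directly (library and settings are illustrative, not those used by the authors):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        x = rng.uniform(-3.0, 3.0, size=(2000, 1))
        y = np.sin(x[:, 0]) + 0.1 * x[:, 0] ** 2          # known non-linear target, no noise

        net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                           max_iter=5000, random_state=0)
        net.fit(x, y)

        x_test = np.linspace(-3.0, 3.0, 601).reshape(-1, 1)
        y_true = np.sin(x_test[:, 0]) + 0.1 * x_test[:, 0] ** 2
        err = np.abs(net.predict(x_test) - y_true)
        print("max approximation error :", err.max())     # direct measure of accuracy
        print("mean approximation error:", err.mean())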

  5. The Impact of Short-Sale Constraints on Asset Allocation Strategies via the Backward Markov Chain Approximation Method

    OpenAIRE

    Carl Chiarella; Chih-Ying Hsiao

    2005-01-01

    This paper considers an asset allocation strategy over a finite period under investment uncertainty and short-sale constraints as a continuous time stochastic control problem. Investment uncertainty is characterised by a stochastic interest rate and inflation risk. If there are no short-sale constraints, the optimal asset allocation strategy can be solved analytically. We consider several kinds of short-sale constraints and employ the backward Markov chain approximation method to explore the ...

  6. On the logos: a naïve view on ordinary reasoning and fuzzy logic

    CERN Document Server

    Trillas, Enric

    2017-01-01

    This book offers an inspiring and naïve view on language and reasoning. It presents a new approach to ordinary reasoning that follows the author’s former work on fuzzy logic. Starting from a pragmatic scientific view on meaning as a quantity, and the common sense reasoning from a primitive notion of inference, which is shared by both laypeople and experts, the book shows how this can evolve, through the addition of more and more suppositions, into various formal and specialized modes of precise, imprecise, and approximate reasoning. The logos are intended here as a synonym for rationality, which is usually shown by the processes of questioning, guessing, telling, and computing. Written in a discursive style and without too many technicalities, the book presents a number of reflections on the study of reasoning, together with a new perspective on fuzzy logic and Zadeh’s “computing with words” grounded in both language and reasoning. It also highlights some mathematical developments supporting this vie...

  7. An Approximate Method for Pitch-Damping Prediction

    National Research Council Canada - National Science Library

    Danberg, James

    2003-01-01

    ...) method for predicting the pitch-damping coefficients has been employed. The CFD method provides important details necessary to derive the correlation functions that are unavailable from the current experimental database...

  8. APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION

    Directory of Open Access Journals (Sweden)

    Mădălina Roxana Buneci

    2016-12-01

    The purpose of this paper is to provide a set of Maple procedures to construct approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere.
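
    The paper's procedures are written in Maple; purely for orientation, the classical gore construction for a sphere can be sketched in a few lines. With n gores, the flattened gore is parameterized by meridian arc length s = R*phi and half-width w(phi) = (pi*R/n)*cos(phi), where phi is the latitude. The Python sketch below (an assumption-laden illustration, not the authors' general procedure) generates the outline of one such gore.

        import numpy as np

        def sphere_gore_outline(radius=1.0, n_gores=12, samples=50):
            """Planar outline (x = meridian arc length, y = half-width) of one sphere gore."""
            phi = np.linspace(-np.pi / 2, np.pi / 2, samples)    # latitude
            s = radius * phi                                     # meridian arc length
            w = (np.pi * radius / n_gores) * np.cos(phi)         # gore half-width
            # closed outline: go up one edge and back down the other
            x = np.concatenate([s, s[::-1]])
            y = np.concatenate([w, -w[::-1]])
            return np.column_stack([x, y])

        outline = sphere_gore_outline(radius=1.0, n_gores=12)
        print(outline.shape)        # (100, 2) planar points ready for plotting or cutting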

  9. Using Relational Reasoning Strategies to Help Improve Clinical Reasoning Practice.

    Science.gov (United States)

    Dumas, Denis; Torre, Dario M; Durning, Steven J

    2018-05-01

    Clinical reasoning-the steps up to and including establishing a diagnosis and/or therapy-is a fundamentally important mental process for physicians. Unfortunately, mounting evidence suggests that errors in clinical reasoning lead to substantial problems for medical professionals and patients alike, including suboptimal care, malpractice claims, and rising health care costs. For this reason, cognitive strategies by which clinical reasoning may be improved-and that many expert clinicians are already using-are highly relevant for all medical professionals, educators, and learners.In this Perspective, the authors introduce one group of cognitive strategies-termed relational reasoning strategies-that have been empirically shown, through limited educational and psychological research, to improve the accuracy of learners' reasoning both within and outside of the medical disciplines. The authors contend that relational reasoning strategies may help clinicians to be metacognitive about their own clinical reasoning; such strategies may also be particularly well suited for explicitly organizing clinical reasoning instruction for learners. Because the particular curricular efforts that may improve the relational reasoning of medical students are not known at this point, the authors describe the nature of previous research on relational reasoning strategies to encourage the future design, implementation, and evaluation of instructional interventions for relational reasoning within the medical education literature. The authors also call for continued research on using relational reasoning strategies and their role in clinical practice and medical education, with the long-term goal of improving diagnostic accuracy.

  10. Improved radiative corrections for (e,e'p) experiments: Beyond the peaking approximation and implications of the soft-photon approximation

    International Nuclear Information System (INIS)

    Weissbach, F.; Hencken, K.; Rohe, D.; Sick, I.; Trautmann, D.

    2006-01-01

    Analyzing (e,e'p) experimental data involves corrections for radiative effects which change the interaction kinematics and which have to be carefully considered in order to obtain the desired accuracy. Missing momentum and energy due to bremsstrahlung have so far often been incorporated into the simulations and the experimental analyses using the peaking approximation. It assumes that all bremsstrahlung is emitted in the direction of the radiating particle. In this article we introduce a full angular Monte Carlo simulation method which overcomes this approximation. As a test, the angular distribution of the bremsstrahlung photons is reconstructed from H(e,e'p) data. Its width is found to be underestimated by the peaking approximation and described much better by the approach developed in this work. The impact of the soft-photon approximation on the photon angular distribution is found to be minor as compared to the impact of the peaking approximation. (orig.)

  11. Approximation of the Doppler broadening function by Frobenius method; Aproximacao da funcao de alargamento doppler atraves do metodo de Frobenius

    Energy Technology Data Exchange (ETDEWEB)

    Palma, Daniel A.P. [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis/RJ (CEFET), RJ (Brazil)]. E-mail: dpalma@cefeteq.br; Martinez, Aquilino S.; Silva, Fernando C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br; fernando@lmn.con.ufrj.br

    2005-07-01

    An analytical approximation of the Doppler broadening function ψ(x,ξ) is proposed. This approximation is based on the solution of the differential equation for ψ(x,ξ) using the Frobenius method and the variation of parameters. The analytical form derived for ψ(x,ξ) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances, mainly for the calculation of multigroup parameters and resonance self-shielding factors, the latter being used to correct microscopic cross-section measurements obtained by the activation technique. (author)
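
    For reference, the Doppler broadening function being approximated is usually written as ψ(x, ξ) = (ξ / (2√π)) ∫ exp(−ξ²(x − y)²/4) / (1 + y²) dy, with the integral taken over the whole real line. A direct numerical quadrature such as the sketch below provides the benchmark against which an analytical approximation of this kind can be checked (the sketch is generic, not the authors' Frobenius-based expression).

        import numpy as np
        from scipy.integrate import quad

        def psi(x, xi):
            # Doppler broadening function via direct numerical quadrature
            integrand = lambda y: np.exp(-0.25 * xi**2 * (x - y)**2) / (1.0 + y**2)
            value, _ = quad(integrand, -np.inf, np.inf)
            return 0.5 * xi / np.sqrt(np.pi) * value

        for x, xi in [(0.0, 0.05), (10.0, 0.15), (40.0, 0.5)]:
            print(f"psi({x}, {xi}) = {psi(x, xi):.6e}")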

  12. Finite element approximation to the even-parity transport equation

    International Nuclear Information System (INIS)

    Lewis, E.E.

    1981-01-01

    This paper studies the finite element method, a procedure for reducing partial differential equations to sets of algebraic equations suitable for solution on a digital computer. The differential equation is cast into the form of a variational principle, the resulting domain then subdivided into finite elements. The dependent variable is then approximated by a simple polynomial, and these are linked across inter-element boundaries by continuity conditions. The finite element method is tailored to a variety of transport problems. Angular approximations are formulated, and the extent of ray effect mitigation is examined. Complex trial functions are introduced to enable the inclusion of buckling approximations. The ubiquitous curved interfaces of cell calculations, and coarse mesh methods are also treated. A concluding section discusses limitations of the work to date and suggests possible future directions

  13. A rational approximation of the effectiveness factor

    DEFF Research Database (Denmark)

    Wedel, Stig; Luss, Dan

    1980-01-01

    A fast, approximate method of calculating the effectiveness factor for arbitrary rate expressions is presented. The method does not require any iterative or interpolative calculations. It utilizes the well known asymptotic behavior for small and large Thiele moduli to derive a rational function...
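
    To make the asymptotic-matching idea concrete: for a first-order reaction in a slab, the exact effectiveness factor is eta(phi) = tanh(phi)/phi, which tends to 1 for small Thiele modulus phi and to 1/phi for large phi. A rational approximation is built precisely to bridge these two limits; the snippet below (a generic illustration, not the authors' expression) tabulates the exact curve against its two asymptotes.

        import numpy as np

        phi = np.logspace(-2, 2, 9)                  # Thiele modulus
        eta_exact = np.tanh(phi) / phi               # slab geometry, first-order reaction
        eta_small = np.ones_like(phi)                # small-phi asymptote
        eta_large = 1.0 / phi                        # large-phi asymptote

        for p, e, a_small, a_large in zip(phi, eta_exact, eta_small, eta_large):
            print(f"phi={p:8.3f}  eta={e:.4f}  (asymptotes: {a_small:.4f}, {a_large:.4f})")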

  14. Sherlock Holmes' methods of deductive reasoning applied to medical diagnostics.

    Science.gov (United States)

    Miller, L

    1985-03-01

    Having patterned the character of Sherlock Holmes after one of his professors, Sir Arthur Conan Doyle, himself a physician, incorporated many of the didactic qualities of the 19th century medical diagnostician into the character of Holmes. In this paper I explore Holmes's techniques of deductive reasoning and their basis in 19th and 20th century medical diagnostics.

  15. Investigating Students' Reasoning about Acid-Base Reactions

    Science.gov (United States)

    Cooper, Melanie M.; Kouyoumdjian, Hovig; Underwood, Sonia M.

    2016-01-01

    Acid-base chemistry is central to a wide range of reactions. If students are able to understand how and why acid-base reactions occur, it should provide a basis for reasoning about a host of other reactions. Here, we report the development of a method to characterize student reasoning about acid-base reactions based on their description of…

  16. Evaluation of the dynamic responses of high rise buildings with respect to the direct methods for soil-foundation-structure interaction effects and comparison with the approximate methods

    Directory of Open Access Journals (Sweden)

    Jahangir Khazaei

    2017-08-01

    In dynamic analysis, the soil medium is often left unmodeled because of its unbounded extent and the complexity of soil behavior, so important effects are neglected, even though the behavior of the soil under the structure plays an important role in the response of the structure during an earthquake. In fact, the soil layers and the soil-foundation-structure interaction phenomena can increase the seismic forces applied during earthquakes, which has been examined with different methods. In this paper, the effects of soil-foundation-structure interaction on a steel high-rise building have been modeled using Abaqus software for nonlinear dynamic analysis, with the finite element direct method and simulation of an infinite boundary condition for the soil medium, and also with the approximate Cone model. In the direct method, soil, structure, and foundation are modeled altogether. On the other hand, for the Cone model, as a simple model, dynamic stiffness coefficients have been employed to simulate the soil by considering springs and dashpots in all degrees of freedom. The results show that considering soil-foundation-structure interaction increases the maximum lateral displacement of the structure, and that the friction coefficient of the soil-foundation interface can alter the responses of the structure. It was also observed that the results of the approximate methods show good agreement for engineering demand parameters.
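
    In simplified (cone-type) models the soil is replaced by frequency-independent springs and dashpots attached to the foundation. For a rigid circular foundation of radius r0 on an elastic half-space with shear modulus G and Poisson's ratio nu, the classical static stiffnesses usually taken as reference values are computed below; the numbers are illustrative only and are not the building or soil parameters used in the paper.

        def static_stiffnesses(G, nu, r0):
            """Classical static stiffnesses of a rigid circular footing on an elastic half-space."""
            return {
                "vertical":   4.0 * G * r0 / (1.0 - nu),
                "horizontal": 8.0 * G * r0 / (2.0 - nu),
                "rocking":    8.0 * G * r0**3 / (3.0 * (1.0 - nu)),
                "torsion":    16.0 * G * r0**3 / 3.0,
            }

        # illustrative soil and foundation parameters
        G, nu, r0 = 60e6, 0.35, 8.0          # Pa, dimensionless, m
        for mode, k in static_stiffnesses(G, nu, r0).items():
            print(f"{mode:10s} stiffness: {k:.3e}")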

  17. Reachability in Biochemical Dynamical Systems by Quantitative Discrete Approximation (extended abstract)

    Directory of Open Access Journals (Sweden)

    L. Brim

    2011-09-01

    In this paper, a novel computational technique for the finite discrete approximation of continuous dynamical systems, suitable for a significant class of biochemical dynamical systems, is introduced. The method is parameterized so that the imposed level of approximation can be controlled, and with increasing parameter value the approximation converges to the original continuous system. By employing this approximation technique, we present algorithms solving the reachability problem for biochemical dynamical systems. The presented method and algorithms are evaluated on several exemplary biological models and on a real case study.

  18. Research of Uncertainty Reasoning in Pineapple Disease Identification System

    Science.gov (United States)

    Liu, Liqun; Fan, Haifeng

    In order to deal with the uncertainty of evidences mostly existing in pineapple disease identification system, a reasoning model based on evidence credibility factor was established. The uncertainty reasoning method is discussed,including: uncertain representation of knowledge, uncertain representation of rules, uncertain representation of multi-evidences and update of reasoning rules. The reasoning can fully reflect the uncertainty in disease identification and reduce the influence of subjective factors on the accuracy of the system.
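
    One widely used way to implement this kind of evidence-credibility update is the MYCIN-style certainty-factor calculus, sketched below as a generic illustration (the paper's exact combination rules may differ): a rule's conclusion inherits CF_rule x CF_evidence, and two confirming conclusions with non-negative certainty factors combine as CF1 + CF2*(1 - CF1).

        def propagate(cf_rule, cf_evidence):
            # certainty factor passed to a rule's conclusion (only positive evidence contributes)
            return cf_rule * max(0.0, cf_evidence)

        def combine(cf1, cf2):
            # parallel combination of two confirming (non-negative) certainty factors
            return cf1 + cf2 * (1.0 - cf1)

        # two hypothetical rules pointing at the same pineapple disease, with uncertain evidence
        cf_a = propagate(0.8, 0.7)    # e.g. "leaf spots observed" (CF 0.7), rule CF 0.8
        cf_b = propagate(0.6, 0.9)    # e.g. "fruit rot observed"  (CF 0.9), rule CF 0.6
        print("combined certainty for the diagnosis:", round(combine(cf_a, cf_b), 3))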

  19. Carlson iterating rational approximation and performance analysis of fractional operator with arbitrary order

    International Nuclear Information System (INIS)

    He Qiu-Yan; Yuan Xiao; Yu Bo

    2017-01-01

    The performance analysis of the generalized Carlson iterating process, which can realize the rational approximation of a fractional operator with arbitrary order, is presented in this paper. The reasons why the generalized Carlson iterating function possesses such excellent properties as self-similarity and exponential symmetry are also explained. K-index, P-index, O-index, and a complexity index are introduced to support the performance analysis. Considering nine different operational orders and choosing an appropriate rational initial impedance for a certain operational order, the rational approximation impedance functions calculated by the iterating function satisfy computational rationality, positive reality, and operational validity. They are therefore capable of reproducing the operational performance of fractional operators and of being physically realized. The approximation performance of the impedance function with respect to the ideal fractional operator, and the circuit network complexity, are also exhibited. (paper)

  20. Approximate convex hull of affine iterated function system attractors

    International Nuclear Information System (INIS)

    Mishkinis, Anton; Gentil, Christian; Lanquetin, Sandrine; Sokolov, Dmitry

    2012-01-01

    Highlights: ► We present an iterative algorithm to approximate affine IFS attractor convex hull. ► Elimination of the interior points significantly reduces the complexity. ► To optimize calculations, we merge the convex hull images at each iteration. ► Approximation by ellipses increases speed of convergence to the exact convex hull. ► We present a method of the output convex hull simplification. - Abstract: In this paper, we present an algorithm to construct an approximate convex hull of the attractors of an affine iterated function system (IFS). We construct a sequence of convex hull approximations for any required precision using the self-similarity property of the attractor in order to optimize calculations. Due to the affine properties of IFS transformations, the number of points considered in the construction is reduced. The time complexity of our algorithm is a linear function of the number of iterations and the number of points in the output approximate convex hull. The number of iterations and the execution time increases logarithmically with increasing accuracy. In addition, we introduce a method to simplify the approximate convex hull without loss of accuracy.
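
    The authors' algorithm iterates convex-hull approximations directly from the IFS maps; a much cruder but easy-to-run alternative, shown below purely for comparison, samples the attractor with the chaos game and takes the convex hull of the sample points (the IFS is an illustrative Sierpinski-type example, not one from the paper).

        import numpy as np
        from scipy.spatial import ConvexHull

        # three affine contractions x -> A x + b defining a Sierpinski-triangle IFS
        maps = [(0.5 * np.eye(2), np.array([0.0, 0.0])),
                (0.5 * np.eye(2), np.array([0.5, 0.0])),
                (0.5 * np.eye(2), np.array([0.25, 0.5]))]

        rng = np.random.default_rng(0)
        x = np.zeros(2)
        points = []
        for k in range(20000):                    # chaos game: apply randomly chosen maps
            A, b = maps[rng.integers(len(maps))]
            x = A @ x + b
            if k > 100:                           # discard transient iterations
                points.append(x.copy())

        hull = ConvexHull(np.array(points))
        print("approximate convex hull vertices:")
        print(np.array(points)[hull.vertices])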

  1. Self-consistent approximations beyond the CPA: Part II

    International Nuclear Information System (INIS)

    Kaplan, T.; Gray, L.J.

    1982-01-01

    This paper concentrates on a self-consistent approximation for random alloys developed by Kaplan, Leath, Gray, and Diehl. The construction of the augmented space formalism for a binary alloy is sketched, and the notation to be used derived. Using the operator methods of the augmented space, the self-consistent approximation is derived for the average Green's function, and for evaluating the self-energy, taking into account the scattering by clusters of excitations. The particular cluster approximation desired is derived by treating the scattering by the excitations with S_T exactly. Fourier transforms on the disorder-space cluster-site labels solve the self-consistent set of equations. Expansion to short range order in the alloy is also discussed. A method to reduce the problem to a computationally tractable form is described.

  2. Application of the first approximation of the K-harmonics method to the 0+ states of 16O

    International Nuclear Information System (INIS)

    Alcaras, J.A.C.; Silveira, H.V. da.

    1977-01-01

    The energy levels of the 0+ states, the charge form factor and the root mean square charge radius of 16O were calculated in the first approximation of the K-harmonics method. The calculations were done for six different potentials. The results obtained for the ground state energy, charge form factor and rms charge radius are in agreement with the experimental results, but this is not the case for the energies of the 0+ excited states.

  3. Uncertain deduction and conditional reasoning.

    Science.gov (United States)

    Evans, Jonathan St B T; Thompson, Valerie A; Over, David E

    2015-01-01

    There has been a paradigm shift in the psychology of deductive reasoning. Many researchers no longer think it is appropriate to ask people to assume premises and decide what necessarily follows, with the results evaluated by binary extensional logic. Most everyday and scientific inference is made from more or less confidently held beliefs and not assumptions, and the relevant normative standard is Bayesian probability theory. We argue that the study of "uncertain deduction" should directly ask people to assign probabilities to both premises and conclusions, and report an experiment using this method. We assess this reasoning by two Bayesian metrics: probabilistic validity and coherence according to probability theory. On both measures, participants perform above chance in conditional reasoning, but they do much better when statements are grouped as inferences, rather than evaluated in separate tasks.
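
    A worked example of the probabilistic (coherence) standard for modus ponens: given P(p) and P(q|p), the law of total probability constrains the coherent probability of the conclusion to the interval [P(p)*P(q|p), P(p)*P(q|p) + 1 - P(p)]. The snippet below checks whether a participant's stated conclusion probability falls inside that interval; the numbers are made up for illustration and are not data from the experiment.

        def coherence_interval(p_p, p_q_given_p):
            # coherent bounds on P(q) for modus ponens, from the law of total probability
            lo = p_p * p_q_given_p
            hi = lo + (1.0 - p_p)
            return lo, hi

        p_p, p_q_given_p = 0.8, 0.9       # participant's premise probabilities
        p_q_stated = 0.5                  # participant's stated conclusion probability

        lo, hi = coherence_interval(p_p, p_q_given_p)
        print(f"coherent interval for P(q): [{lo:.2f}, {hi:.2f}]")
        print("coherent response?", lo <= p_q_stated <= hi)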

  4. Approximated treatment of the Pauli principle effects in elastic collisions

    International Nuclear Information System (INIS)

    Schechter, H.

    1984-08-01

    Exact microscopic methods like the RGM (Resonating Group Method) and the GCM (Generator Coordinate Method) and approximate methods like the OCM (Orthogonality Condition Model) are used to study the effects of the Pauli Principle in α-16O elastic scattering. Using V2 and BL nucleon-nucleon interactions, nucleus-nucleus effective potentials are obtained from RGM 'exact' wave functions and also from an approximate method developed previously. Using these potentials in the OCM Saito equation, phase shifts are calculated for partial waves Λ = 0, 1, ... 11, in the energy range 0 … [pt]

  5. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)

  6. An inductive algorithm for smooth approximation of functions

    International Nuclear Information System (INIS)

    Kupenova, T.N.

    2011-01-01

    An inductive algorithm is presented for smooth approximation of functions, based on the Tikhonov regularization method and applied to a specific kind of the Tikhonov parametric functional. The discrepancy principle is used for estimation of the regularization parameter. The principle of heuristic self-organization is applied for assessment of some parameters of the approximating function
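
    A minimal illustration of two ingredients mentioned here, Tikhonov regularization plus the discrepancy principle, using ordinary ridge-regularized least squares rather than the paper's specific parametric functional: the regularization parameter is chosen as the smallest value whose residual norm reaches the assumed noise level.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        t = np.linspace(0.0, 1.0, n)
        A = np.vander(t, 12, increasing=True)          # ill-conditioned design matrix
        x_true = rng.standard_normal(12)
        noise_level = 0.05
        b = A @ x_true + noise_level * rng.standard_normal(n)

        def tikhonov(A, b, lam):
            # Tikhonov / ridge solution x = (A^T A + lam I)^{-1} A^T b
            k = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)

        delta = noise_level * np.sqrt(n)               # expected residual norm
        for lam in np.logspace(-12, 2, 57):            # discrepancy principle: smallest lam
            x = tikhonov(A, b, lam)                    # whose residual reaches delta
            if np.linalg.norm(A @ x - b) >= delta:
                print(f"chosen lambda = {lam:.2e}, residual = {np.linalg.norm(A @ x - b):.3f}")
                break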

  7. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren

    2017-01-01

    , obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose

  8. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
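
    The core idea, stripped to a toy case, is a Robbins-Monro style update that moves the parameter along a simulated ascent direction until simulated summary statistics match the observed ones. The sketch below recovers the mean of a Gaussian from an observed sample mean; it is only meant to convey the mechanism and is not the authors' algorithm, tuning, or demographic application.

        import numpy as np

        rng = np.random.default_rng(2)
        s_obs = 3.2                        # observed summary statistic (sample mean)

        def simulate_summary(theta, n=200):
            # simulate data under theta and return the same summary statistic
            return rng.normal(theta, 1.0, size=n).mean()

        theta = 0.0                        # starting value
        for k in range(1, 2001):
            a_k = 5.0 / (k + 50.0)         # decreasing gain sequence
            theta += a_k * (s_obs - simulate_summary(theta))

        print("stochastic-approximation estimate:", round(theta, 3), " (target 3.2)")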

  9. Sherlock Holmes's Methods of Deductive Reasoning Applied to Medical Diagnostics

    Science.gov (United States)

    Miller, Larry

    1985-01-01

    Having patterned the character of Sherlock Holmes after one of his professors, Sir Arthur Conan Doyle, himself a physician, incorporated many of the didactic qualities of the 19th century medical diagnostician into the character of Holmes. In this paper I explore Holmes's techniques of deductive reasoning and their basis in 19th and 20th century medical diagnostics. PMID:3887762

  10. On the convergence of multigroup discrete-ordinates approximations

    International Nuclear Information System (INIS)

    Victory, H.D. Jr.; Allen, E.J.; Ganguly, K.

    1987-01-01

    Our analysis is divided into two distinct parts which we label for convenience as Part A and Part B. In Part A, we demonstrate that the multigroup discrete-ordinates approximations are well-defined and converge to the exact transport solution in any subcritical setting. For the most part, we focus on transport in two-dimensional Cartesian geometry. A Nystroem technique is used to extend the discrete ordinates multigroup approximates to all values of the angular and energy variables. Such an extension enables us to employ collectively compact operator theory to deduce stability and convergence of the approximates. In Part B, we perform a thorough convergence analysis for the multigroup discrete-ordinates method for an anisotropically-scattering subcritical medium in slab geometry. The diamond-difference and step-characteristic spatial approximation methods are each studied. The multigroup neutron fluxes are shown to converge in a Banach space setting under realistic smoothness conditions on the solution. This is the first thorough convergence analysis for the fully-discretized multigroup neutron transport equations

  11. Markov chain Monte Carlo with the Integrated Nested Laplace Approximation

    KAUST Repository

    Gómez-Rubio, Virgilio

    2017-10-06

    The Integrated Nested Laplace Approximation (INLA) has established itself as a widely used method for approximate inference on Bayesian hierarchical models which can be represented as a latent Gaussian model (LGM). INLA is based on producing an accurate approximation to the posterior marginal distributions of the parameters in the model and some other quantities of interest by using repeated approximations to intermediate distributions and integrals that appear in the computation of the posterior marginals. INLA focuses on models whose latent effects are a Gaussian Markov random field. For this reason, we have explored alternative ways of expanding the number of possible models that can be fitted using the INLA methodology. In this paper, we present a novel approach that combines INLA and Markov chain Monte Carlo (MCMC). The aim is to consider a wider range of models that can be fitted with INLA only when some of the parameters of the model have been fixed. We show how new values of these parameters can be drawn from their posterior by using conditional models fitted with INLA and standard MCMC algorithms, such as Metropolis–Hastings. Hence, this will extend the use of INLA to fit models that can be expressed as a conditional LGM. Also, this new approach can be used to build simpler MCMC samplers for complex models as it allows sampling only on a limited number of parameters in the model. We will demonstrate how our approach can extend the class of models that could benefit from INLA, and how the R-INLA package will ease its implementation. We will go through simple examples of this new approach before we discuss more advanced applications with datasets taken from the relevant literature. In particular, INLA within MCMC will be used to fit models with Laplace priors in a Bayesian Lasso model, imputation of missing covariates in linear models, fitting spatial econometrics models with complex nonlinear terms in the linear predictor and classification of data with

  12. Markov chain Monte Carlo with the Integrated Nested Laplace Approximation

    KAUST Repository

    Gómez-Rubio, Virgilio; Rue, Haavard

    2017-01-01

    The Integrated Nested Laplace Approximation (INLA) has established itself as a widely used method for approximate inference on Bayesian hierarchical models which can be represented as a latent Gaussian model (LGM). INLA is based on producing an accurate approximation to the posterior marginal distributions of the parameters in the model and some other quantities of interest by using repeated approximations to intermediate distributions and integrals that appear in the computation of the posterior marginals. INLA focuses on models whose latent effects are a Gaussian Markov random field. For this reason, we have explored alternative ways of expanding the number of possible models that can be fitted using the INLA methodology. In this paper, we present a novel approach that combines INLA and Markov chain Monte Carlo (MCMC). The aim is to consider a wider range of models that can be fitted with INLA only when some of the parameters of the model have been fixed. We show how new values of these parameters can be drawn from their posterior by using conditional models fitted with INLA and standard MCMC algorithms, such as Metropolis–Hastings. Hence, this will extend the use of INLA to fit models that can be expressed as a conditional LGM. Also, this new approach can be used to build simpler MCMC samplers for complex models as it allows sampling only on a limited number of parameters in the model. We will demonstrate how our approach can extend the class of models that could benefit from INLA, and how the R-INLA package will ease its implementation. We will go through simple examples of this new approach before we discuss more advanced applications with datasets taken from the relevant literature. In particular, INLA within MCMC will be used to fit models with Laplace priors in a Bayesian Lasso model, imputation of missing covariates in linear models, fitting spatial econometrics models with complex nonlinear terms in the linear predictor and classification of data with

  13. A partition function approximation using elementary symmetric functions.

    Directory of Open Access Journals (Sweden)

    Ramu Anandakrishnan

    In statistical mechanics, the canonical partition function Z can be used to compute equilibrium properties of a physical system. Calculating Z, however, is in general computationally intractable, since the computation scales exponentially with the number of particles N in the system. A commonly used method for approximating equilibrium properties is the Monte Carlo (MC) method. For some problems the MC method converges slowly, requiring a very large number of MC steps. For such problems the computational cost of the Monte Carlo method can be prohibitive. Presented here is a deterministic algorithm, the direct interaction algorithm (DIA), for approximating the canonical partition function Z in a polynomial number of operations. The DIA approximates the partition function as a combinatorial sum of products known as elementary symmetric functions (ESFs), which can themselves be computed in a polynomial number of operations. The DIA was used to compute equilibrium properties for the isotropic 2D Ising model, and the accuracy of the DIA was compared to that of the basic Metropolis Monte Carlo method. Our results show that the DIA may be a practical alternative for some problems where the Monte Carlo method converges slowly and computational speed is a critical constraint, such as for very large systems or web-based applications.
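
    Elementary symmetric functions themselves are cheap to generate: e_0, ..., e_N are just the coefficients of the product over i of (1 + x_i t), so they can be built up term by term with quadratic cost in N. The sketch below shows this construction only; it is not the DIA itself, and the weights are illustrative.

        def elementary_symmetric(values):
            """Coefficients e_0..e_N of prod_i (1 + x_i * t), built by repeated multiplication."""
            e = [1.0]                          # e_0 = 1
            for x in values:
                e.append(0.0)
                # multiply the current polynomial by (1 + x*t), highest degree first
                for k in range(len(e) - 1, 0, -1):
                    e[k] += x * e[k - 1]
            return e

        weights = [0.5, 1.25, 2.0, 0.75]       # e.g. Boltzmann-like factors (illustrative)
        print(elementary_symmetric(weights))    # [e_0, e_1, e_2, e_3, e_4]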

  14. Heuristic reasoning

    CERN Document Server

    2015-01-01

    How can we advance knowledge? Which methods do we need in order to make new discoveries? How can we rationally evaluate, reconstruct and offer discoveries as a means of improving the ‘method’ of discovery itself? And how can we use findings about scientific discovery to boost funding policies, thus fostering a deeper impact of scientific discovery itself? The respective chapters in this book provide readers with answers to these questions. They focus on a set of issues that are essential to the development of types of reasoning for advancing knowledge, such as models for both revolutionary findings and paradigm shifts; ways of rationally addressing scientific disagreement, e.g. when a revolutionary discovery sparks considerable disagreement inside the scientific community; frameworks for both discovery and inference methods; and heuristics for economics and the social sciences.

  15. Summary of Time Period-Based and Other Approximation Methods for Determining the Capacity Value of Wind and Solar in the United States: September 2010 - February 2012

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, J.; Porter, K.

    2012-03-01

    This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak - sometimes over a period of months or the average of multiple years.
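
    A bare-bones version of the time-period approximation: take the highest-load hours of the year and report the average wind (or solar) output during those hours as a fraction of nameplate capacity. The data below are synthetic; the methods summarized in the paper differ in the number of hours, months and years averaged.

        import numpy as np

        rng = np.random.default_rng(3)
        hours = 8760
        load = 800 + 300 * rng.random(hours)            # MW, synthetic system load
        wind = 100 * rng.beta(2.0, 5.0, size=hours)     # MW output of a 100 MW wind plant

        def capacity_value_top_hours(load, gen, capacity, n_hours=100):
            # average generation during the n highest-load hours, per unit of capacity
            peak_idx = np.argsort(load)[-n_hours:]
            return gen[peak_idx].mean() / capacity

        cv = capacity_value_top_hours(load, wind, capacity=100.0, n_hours=100)
        print(f"approximate capacity value: {cv:.1%} of nameplate")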

  16. Extended Finite Element Method with Simplified Spherical Harmonics Approximation for the Forward Model of Optical Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Wei Li

    2012-01-01

    An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with a simplified spherical harmonics approximation (SPN). In the XFEM scheme for the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the overly fine mesh conforming to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM thus lends itself to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging.

  17. Globally convergent optimization algorithm using conservative convex separable diagonal quadratic approximations

    NARCIS (Netherlands)

    Groenwold, A.A.; Wood, D.W.; Etman, L.F.P.; Tosserams, S.

    2009-01-01

    We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by

  18. Electronic and Optical Properties of CuO Based on DFT+U and GW Approximation

    International Nuclear Information System (INIS)

    Ahmad, F; Agusta, M K; Dipojono, H K

    2016-01-01

    We report ab initio calculations of the electronic structure and optical properties of monoclinic CuO based on DFT+U and the GW approximation. CuO is an antiferromagnetic material with strong electron correlations. Our calculations show that DFT+U and the GW approximation are sufficiently reliable to investigate the material properties of CuO. The band gap calculated with DFT+U for reasonable values of U is slightly underestimated. The use of the GW approximation requires adjustment of the U value to obtain a realistic result. Hybridization of Cu 3dxz and 3dyz with O 2p plays an important role in the formation of the band gap. The optical properties calculated with DFT+U and GW corrections, obtained by solving the Bethe-Salpeter equation, are in good agreement with the calculated electronic properties and the experimental results. (paper)

  19. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
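
    The basic rejection flavour of ABC fits in a few lines: draw a parameter from the prior, simulate data under it, and keep the draw if a summary-statistic distance to the observed data is below a tolerance. The toy below estimates a normal mean; prior, summary statistic and tolerance are all illustrative choices.

        import numpy as np

        rng = np.random.default_rng(4)
        data_obs = rng.normal(1.5, 1.0, size=100)       # "observed" data
        s_obs = data_obs.mean()                          # summary statistic

        accepted = []
        for _ in range(20000):
            theta = rng.uniform(-5.0, 5.0)               # draw from the prior
            s_sim = rng.normal(theta, 1.0, size=100).mean()
            if abs(s_sim - s_obs) < 0.05:                # tolerance epsilon
                accepted.append(theta)

        accepted = np.array(accepted)
        print("accepted draws:", accepted.size)
        print("approximate posterior mean:", accepted.mean().round(3))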

  20. On an Approximate Solution Method for the Problem of Surface and Groundwater Combined Movement with Exact Approximation on the Section Line

    Directory of Open Access Journals (Sweden)

    L.L. Glazyrina

    2016-12-01

    In this paper, an initial-boundary value problem for two coupled nonlinear parabolic equations is considered. One of the equations is set on a bounded domain Ω in R2, while the other is set along a curve lying in Ω. Both equations are parabolic equations with double degeneration: the degeneration can be present in the space operator and, furthermore, the nonlinear function under the sign of the partial derivative with respect to the variable t can vanish. This problem has an applied character: such a structure is needed to describe the combined movement of surface water and groundwater. In this case, the desired function determines the level of water above a given impermeable bottom, and the section models the riverbed. The Boussinesq equation has been used for the mathematical description of the groundwater filtration process in the domain Ω; a diffusion analogue of the Saint-Venant system has been used on the section to describe the change of the water level in the open channel. Earlier, the authors proved theorems on the existence and uniqueness of a generalized solution of the considered problem in function classes which are called strengthened Sobolev spaces in the literature. To obtain these results, we used the technique created by the German mathematicians (H.W. Alt, S. Luckhaus, F. Otto) to establish the well-posedness of problems with double degeneration. In this paper, we propose and investigate an approximate solution method for the above-stated problem. The method is constructed using semidiscretization with respect to the variable t and the finite element method for the space variables. The domain has been triangulated with triangles, and the mesh has been set on the section line. On each segment of the section line lying between neighbouring mesh points, we have constructed, on both sides of the segment, triangles with a common side which matches with…

  1. Sharp Bounds for Symmetric and Asymmetric Diophantine Approximation

    Institute of Scientific and Technical Information of China (English)

    Cornelis KRAAIKAMP; Ionica SMEETS

    2011-01-01

    In 2004, Tong found bounds for the approximation quality of a regular continued fraction convergent to a rational number, expressed as bounds involving both the previous and the next approximation. The authors sharpen his results with a geometric method and give both sharp upper and lower bounds. The asymptotic frequencies with which these bounds occur are also calculated.
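
    The bounds in question concern the classical approximation coefficients theta_n = q_n^2 |x - p_n/q_n| of the convergents p_n/q_n; the following sketch (ours, not the authors') computes the convergents of a rational number and the corresponding coefficients.

```python
# Continued fraction convergents and their approximation coefficients.
from fractions import Fraction

def convergents(x, n_terms=8):
    """Convergents p_k/q_k of the regular continued fraction of x (a Fraction)."""
    terms, t = [], x
    for _ in range(n_terms):
        a = int(t)
        terms.append(a)
        if t == a:
            break
        t = 1 / (t - a)
    p_prev, q_prev, p, q = 1, 0, terms[0], 1
    convs = [Fraction(p, q)]
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        convs.append(Fraction(p, q))
    return convs

x = Fraction(355, 113)            # a rational target, as in Tong's setting
for c in convergents(x):
    theta = c.denominator ** 2 * abs(x - c)   # approximation coefficient
    print(f"{c}   theta_n = {float(theta):.6f}")
```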

  2. Approximate radiative solutions of the Einstein equations

    International Nuclear Information System (INIS)

    Kuusk, P.; Unt, V.

    1976-01-01

    In this paper the external field of a bounded source emitting gravitational radiation is considered. A successive approximation method is used to integrate the Einstein equations in Bondi's coordinates (Bondi et al, Proc. R. Soc.; A269:21 (1962)). A method of separation of angular variables is worked out and the approximate Einstein equations are reduced to key equations. The losses of mass, momentum, and angular momentum due to gravitational multipole radiation are found. It is demonstrated that in the case of proper treatment a real mass occurs instead of a mass aspect in a solution of the Einstein equations. In an appendix Bondi's new function is given in terms of sources. (author)

  3. Deconstructing climate misinformation to identify reasoning errors

    Science.gov (United States)

    Cook, John; Ellerton, Peter; Kinkead, David

    2018-02-01

    Misinformation can have significant societal consequences. For example, misinformation about climate change has confused the public and stalled support for mitigation policies. When people lack the expertise and skill to evaluate the science behind a claim, they typically rely on heuristics such as substituting judgment about something complex (i.e. climate science) with judgment about something simple (i.e. the character of people who speak about climate science) and are therefore vulnerable to misleading information. Inoculation theory offers one approach to effectively neutralize the influence of misinformation. Typically, inoculations convey resistance by providing people with information that counters misinformation. In contrast, we propose inoculating against misinformation by explaining the fallacious reasoning within misleading denialist claims. We offer a strategy based on critical thinking methods to analyse and detect poor reasoning within denialist claims. This strategy includes detailing argument structure, determining the truth of the premises, and checking for validity, hidden premises, or ambiguous language. Focusing on argument structure also facilitates the identification of reasoning fallacies by locating them in the reasoning process. Because this reason-based form of inoculation is based on general critical thinking methods, it offers the distinct advantage of being accessible to those who lack expertise in climate science. We applied this approach to 42 common denialist claims and find that they all demonstrate fallacious reasoning and fail to refute the scientific consensus regarding anthropogenic global warming. This comprehensive deconstruction and refutation of the most common denialist claims about climate change is designed to act as a resource for communicators and educators who teach climate science and/or critical thinking.

  4. Diagnostic reasoning using qualitative causal models

    International Nuclear Information System (INIS)

    Sudduth, A.L.

    1992-01-01

    The application of expert systems to reasoning problems involving real-time data from plant measurements has been a topic of much research, but few practical systems have been deployed. One obstacle to wider use of expert systems in applications involving real-time data is the lack of adequate knowledge representation methodologies for dynamic processes. Knowledge bases composed mainly of rules have disadvantages when applied to dynamic processes and real-time data. This paper describes a methodology for the development of qualitative causal models that can be used as knowledge bases for reasoning about process dynamic behavior. These models provide a systematic method for knowledge base construction, considerably reducing the engineering effort required. They also offer much better opportunities for verification and validation of the knowledge base, thus increasing the possibility of the application of expert systems to reasoning about mission critical systems. Starting with the Signed Directed Graph (SDG) method that has been successfully applied to describe the behavior of diverse dynamic processes, the paper shows how certain non-physical behaviors that result from abstraction may be eliminated by applying causal constraint to the models. The resulting Extended Signed Directed Graph (ESDG) may then be compiled to produce a model for use in process fault diagnosis. This model based reasoning methodology is used in the MOBIAS system being developed by Duke Power Company under EPRI sponsorship. 15 refs., 4 figs
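
    To make the Signed Directed Graph idea concrete, here is a toy sketch (not the MOBIAS system, and with an invented miniature process model) that propagates a hypothesized fault through an SDG and scores how well it explains the observed deviations.

```python
# Minimal SDG-based fault diagnosis sketch; nodes are process variables,
# edges carry a sign describing how a deviation propagates.
from collections import deque

# edge (u, v, sign): a positive deviation in u drives v in direction `sign`
SDG = [
    ("feed_flow", "tank_level", +1),
    ("tank_level", "outlet_flow", +1),
    ("valve_fault", "outlet_flow", -1),
    ("outlet_flow", "downstream_pressure", +1),
]

def predict(root, direction):
    """Propagate a +1/-1 deviation at `root` through the SDG (no feedback loops)."""
    state = {root: direction}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for a, b, sign in SDG:
            if a == u and b not in state:
                state[b] = state[u] * sign
                queue.append(b)
    return state

observed = {"tank_level": +1, "outlet_flow": +1, "downstream_pressure": +1}
for fault in ("feed_flow", "valve_fault"):
    pred = predict(fault, +1)
    score = sum(pred.get(v, 0) == s for v, s in observed.items())
    print(fault, "explains", score, "of", len(observed), "observed deviations")
```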

  5. An angularly refineable phase space finite element method with approximate sweeping procedure

    International Nuclear Information System (INIS)

    Kophazi, J.; Lathouwers, D.

    2013-01-01

    An angularly refineable phase space finite element method is proposed to solve the neutron transport equation. The method combines the advantages of two recently published schemes. The angular domain is discretized into small patches and patch-wise discontinuous angular basis functions are restricted to these patches, i.e. there is no overlap between basis functions corresponding to different patches. This approach yields block diagonal Jacobians with small block size and retains the possibility for SN-like approximate sweeping of the spatially discontinuous elements in order to provide efficient preconditioners for the solution procedure. On the other hand, the preservation of the full FEM framework (as opposed to collocation into a high-order SN scheme) retains the possibility of the Galerkin interpolated connection between phase space elements at arbitrary levels of discretization. Since the basis vectors are not orthonormal, a generalization of the Riemann procedure is introduced to separate the incoming and outgoing contributions in case of unstructured meshes. However, due to the properties of the angular discretization, the Riemann procedure can be avoided at a large fraction of the faces and this fraction rapidly increases as the level of refinement increases, contributing to the computational efficiency. In this paper the properties of the discretization scheme are studied with uniform refinement using an iterative solver based on the S2 sweep order of the spatial elements. The fourth order convergence of the scalar flux is shown as anticipated from earlier schemes and the rapidly decreasing fraction of required Riemann faces is illustrated. (authors)

  6. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the earlier obtained self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties

  7. An improved saddlepoint approximation.

    Science.gov (United States)

    Gillespie, Colin S; Renshaw, Eric

    2007-08-01

    Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second-order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
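
    For orientation, the basic first-order saddlepoint density approximation that the paper refines looks as follows when applied to a Gamma distribution, whose cumulant generating function is known in closed form; the shape and rate values below are arbitrary, and the paper's second-order inversion and simplex-based refinements are not reproduced.

```python
# First-order saddlepoint density approximation, checked against the exact
# Gamma(k, rate=lam) density.
import numpy as np
from scipy.stats import gamma

k, lam = 3.0, 2.0                           # shape and rate (arbitrary)

def K(s):                                   # cumulant generating function, s < lam
    return -k * np.log(1.0 - s / lam)

def saddlepoint_density(x):
    s_hat = lam - k / x                     # solves K'(s) = x analytically here
    K2 = k / (lam - s_hat) ** 2             # K''(s_hat)
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2.0 * np.pi * K2)

for x in (0.5, 1.5, 3.0):
    exact = gamma.pdf(x, a=k, scale=1.0 / lam)
    print(f"x={x}: saddlepoint {saddlepoint_density(x):.4f}, exact {exact:.4f}")
```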

  8. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Directory of Open Access Journals (Sweden)

    Hayashi Takeshi

    2013-01-01

    Full Text Available Abstract Background Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and, therefore, joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This would require an extension of the prediction model for single-trait GBV to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devised both an MCMC iteration and a variational approximation for Bayesian estimation of parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation were referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits with multi-trait analysis was comparable to or lower than that with single-trait analysis, depending on the setting of the prior probability that a SNP has zero

  9. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities.

    Science.gov (United States)

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2013-12-01

    Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Parameter Estimation for Partial Differential Equations by Collage-Based Numerical Approximation

    Directory of Open Access Journals (Sweden)

    Xiaoyan Deng

    2009-01-01

    into a minimization problem of a function of several variables after the partial differential equation is approximated by a differential dynamical system. Then numerical schemes for solving this minimization problem are proposed, including grid approximation and ant colony optimization. The proposed schemes are applied to a parameter estimation problem for the Belousov-Zhabotinskii equation, and the results show that the proposed approximation method is efficient for both linear and nonlinear partial differential equations with respect to unknown parameters. At worst, the presented method provides an excellent starting point for traditional inversion methods that must first select a good starting point.

  11. Vaping Topography and Reasons of Use among Adults in Klang Valley, Malaysia

    Science.gov (United States)

    Zainol Abidin, Najihah; Abidin, Emilia Zainal; Zulkifli, Aziemah; Syed Ismail, Sharifah Norkhadijah; Karuppiah, Karmegam; Amer Nordin, Amer Siddiq; Musbah, Zuraidah; Zulkipli, Nur Fadhilah; Praveena, Sarva Mangala; Rasdi, Irniza; Abd Rahman, Anita

    2018-02-26

    Background: Consistency and accuracy of results in assessing health risks due to vaping or e-cigarette use are difficult to achieve without established consumption data. The present report covers baseline data on vaping topography and reasons for use among local users in Klang Valley, Malaysia. Methods: An 80-item survey regarding socio-demographic characteristics, smoking topography and reasons for e-cigarette use was employed to assess e-cigarette users recruited from several public universities and private organisations. The survey questionnaire was self-administered. Data were analysed using statistical software. Results: Eighty-six current e-cigarette users participated, with more than half (51.2%) of them aged ≥ 25 years. Significant proportions of the sample were single (51.2%), had a tertiary education level (63.5%) and had a household income of less than USD1000 per month (65.2%). The median duration of e-cigarette use was less than a year; users drew approximately 50 puffs per day and refilled twice a day. The majority (74%) used e-liquids containing nicotine at a concentration of 6 μg/mL. Daily users spent USD18-23 per month. Reasons for using the e-cigarette included enjoyment of the products (85.9%), the perception of lower toxicity than tobacco (87%), and the fact that it was a cheaper smoking alternative (61%). Conclusion: The data on e-cigarette smoking topography obtained in this study are novel. The main reasons for use were users' enjoyment of e-cigarettes, preparation for quitting smoking, the perception of low toxicity and of a healthier smoking substitute, and lower cost in the long run. The results establish basic knowledge of the local vaping topography and provide reference material for future e-cigarette-related research. Creative Commons Attribution License

  12. Approximate solution to neutron transport equation with linear anisotropic scattering

    International Nuclear Information System (INIS)

    Coppa, G.; Ravetto, P.; Sumini, M.

    1983-01-01

    A method to obtain an approximate solution to the transport equation, when both sources and collisions show a linearly anisotropic behavior, is outlined and the possible implications for numerical calculations in applied neutronics as well as shielding evaluations are investigated. The form of the differential system of equations taken by the method is quite handy and looks simpler and more manageable than any other technique available today. To go deeper into the efficiency of the method, some typical calculations concerning the critical dimensions of multiplying systems are then performed and the results are compared with those obtained from the classical SN approximations. The outcome of such calculations leads us to think of interesting developments of the method, which could be quite useful as an alternative to other widespread approximate procedures, for any geometry, but especially for curved ones. (author)

  13. X-ray multiaxial stress analysis by means of polynomial approximation and an application to plane stress problem

    International Nuclear Information System (INIS)

    Yoshioka, Yasuo; Sasaki, Toshihiko; Kuramoto, Makoto.

    1984-01-01

    A new polynomial approximation method was proposed for the X-ray multiaxial stress analysis, in which the effect of the stress gradient along the penetration depth of X-rays was taken into account. Three basic assumptions were made: (1) the stress gradient is linear with respect to the depth from the specimen surface, (2) the penetration depth of X-rays is a function of sin²φ and (3) the strain measured by X-rays corresponds to the weighted average strain over the intensity of the diffracted X-rays. Consequently, the stress state within the thin layer near the surface was expressed by making use of three surface stresses and six stress gradients in the present method. The average strains measured by X-rays were approximated by third-order polynomial equations in sin²φ using a least-squares method at several φ angles in the specimen coordinate system. Since the coefficients of these polynomials include the nine stress components mentioned above, it is possible to solve them as simultaneous equations. The calculating process of this method is simpler than that of the integral method. An X-ray plane stress problem was analyzed as an application of the present method, and the residual stress distribution on a shot-peened steel plate was actually measured by use of Cr-Kα X-rays to verify the analysis. The result showed that the compressive residual stress near the surface determined by the present method was smaller than the weighted average stress given by the sin²φ method because of the steep stress gradient. The present method is useful to obtain a reasonable value of stress for specimens with steep stress gradients near the surface. (author)
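
    In the same spirit, fitting measured strains against sin²φ with a third-order polynomial by least squares can be sketched as follows; the synthetic strain values, noise level and coefficients are invented and are unrelated to the shot-peened steel measurements reported above.

```python
# Least-squares cubic fit of strain versus sin^2(phi) for synthetic data.
import numpy as np

phi = np.deg2rad(np.linspace(0.0, 45.0, 10))            # tilt angles
s2 = np.sin(phi) ** 2
true_strain = -1.2e-3 + 2.0e-3 * s2 - 0.5e-3 * s2 ** 2  # synthetic, with curvature
measured = true_strain + np.random.default_rng(1).normal(0, 2e-5, s2.size)

coeffs = np.polyfit(s2, measured, deg=3)                 # third-order polynomial fit
fit = np.poly1d(coeffs)
print("fitted coefficients (highest power first):", coeffs)
print("slope d(strain)/d(sin^2 phi) at the surface:", fit.deriv()(0.0))
```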

  14. The levels of the reason in Islam: An answer to the critique of the Islamic reason of Arkoun

    Directory of Open Access Journals (Sweden)

    Halilović Seid

    2016-01-01

    Full Text Available Mohammed Arkoun, who was the professor of Islamic Studies at the New Sorbonne University for many years, can be considered one of the most influential reformist thinkers of the contemporary Islamic world. In the light of his most fundamental views about the critique of the Islamic reason, he brought about many changes in the methodology of understanding the intellectual and cultural inheritance of Islam in most expert circles in the West and throughout the Islamic world. He writes in detail about this and says that the epistemological foundations and traditional analytical tools of Islam lack any kind of value today. From his epistemological standpoint, modern man, he says, sees them as irrational. He emphasizes that traditional Muslim theologians and jurisprudents have erroneously been teaching that the Islamic reason is an absolute reason and that it is not connected to any historical contexts. In the same vein, he attempts to prove that the reason that the Qur'an mentions is simply a practical and empirical reason. In this article, by using the philosophical analytical method, we will examine the content of some of the most important works of Arkoun. In those, he has explained in detail his critique of the Islamic reason. While answering his criticism, we will explain that the Qur'an and the totality of the Islamic scientific inheritance give cosmological value to the different levels of the reason and do not in any manner reduce truth and knowledge to the level of instrumental and empirical reason. We will talk about 11 types of reason that have been mentioned in Islam. These are the following: conceptual reason, theoretical reason, practical reason, metaphysical reason, common sense, universal reason, particular reason, empirical reason, instrumental reason, intuitive reason and sacred reason. In contrast to Arkoun, who considers Western thought to be the standard by means of which one must reconstruct the Islamic reason, we

  15. On transparent potentials: a Born approximation study

    International Nuclear Information System (INIS)

    Coudray, C.

    1980-01-01

    In the framework of the inverse scattering problem at fixed energy, a class of potentials transparent in the Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Amongst them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in the Born approximation are then stated, and bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy and for large values of the angular momentum, are found to be independent of the energy

  16. Semiclassical initial value approximation for Green's function.

    Science.gov (United States)

    Kay, Kenneth G

    2010-06-28

    A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.

  17. Motivation and Reasons to Quit: Predictive Validity among Adolescent Smokers

    Science.gov (United States)

    Turner, Lindsey R.; Mermelstein, Robin

    2004-01-01

    Objectives: To examine reasons to quit among adolescents in a smoking cessation program, and whether reasons were associated with subsequent cessation. Methods: Participants were 351 adolescents. At baseline, adolescents reported motivation, reasons to quit, and stage of change for cessation. Quit status was assessed at end of treatment. Results…

  18. Coefficients Calculation in Pascal Approximation for Passive Filter Design

    Directory of Open Access Journals (Sweden)

    George B. Kasapoglu

    2018-02-01

    Full Text Available The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has a non-equiripple magnitude, in contrast to most well-known approximations, such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method that calculates the coefficients of the Pascal function. Two examples are presented for the passive design to illustrate the advantages and the disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables, which are created to define the normalized values of these elements for the Pascal approximation, as Zverev had done for the Chebyshev, Elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.

  19. Approximate Bisimulation for High-Level Datapaths in Intelligent Transportation Systems

    Directory of Open Access Journals (Sweden)

    Hui Deng

    2013-01-01

    Full Text Available A relation called approximate bisimulation is proposed to achieve behavior and structure optimization for a type of high-level datapath whose data exchange processes are expressed by nonlinear polynomial systems. The high-level datapaths are divided into small blocks with a partitioning method and then represented by polynomial transition systems. A standardized form based on Ritt-Wu's method is developed to represent the equivalence relation for the high-level datapaths. Furthermore, we establish an approximate bisimulation relation within a controllable error range and express the approximation with an error control function, which is processed by Sostools. Meanwhile, the error is controlled through tuning the equivalence restrictions. An example of high-level datapaths demonstrates the efficiency of our method.

  20. Evidential reasoning research on intrusion detection

    Science.gov (United States)

    Wang, Xianpei; Xu, Hua; Zheng, Sheng; Cheng, Anyu

    2003-09-01

    In this paper, we focus on two fields: the Dempster-Shafer (D-S) theory of evidence and network intrusion detection. We discuss how to apply this form of probabilistic reasoning, as an AI technology, to intrusion detection systems (IDS). The paper establishes the application model, describes the new mechanism of reasoning and decision-making, and analyses how to implement the model based on the detection of synscan activities on the network. The results suggest that, provided reasonable probability values are assigned at the beginning, the engine can, according to the rules of evidence combination and hierarchical reasoning, compute belief values and finally inform the administrators of the nature of the traced activities -- intrusions, normal activities or abnormal activities.
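
    The evidence-combination step referred to above is Dempster's rule; a minimal sketch with two invented sensor mass functions over the frame {intrusion, normal, abnormal} (not the paper's synscan data) is shown below.

```python
# Dempster's rule of combination for two bodies of evidence.
from itertools import product

def combine(m1, m2):
    """Dempster's rule: focal elements are frozensets, masses sum to 1."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

I, N, A = "intrusion", "normal", "abnormal"
m_sensor1 = {frozenset({I}): 0.6, frozenset({I, A}): 0.3, frozenset({I, N, A}): 0.1}
m_sensor2 = {frozenset({I}): 0.5, frozenset({N}): 0.2, frozenset({I, N, A}): 0.3}

m, k = combine(m_sensor1, m_sensor2)
for focal, mass in m.items():
    print(set(focal), round(mass, 3))
print("conflict mass k =", round(k, 3))
```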

  1. Non-Equilibrium Liouville and Wigner Equations: Moment Methods and Long-Time Approximations

    Directory of Open Access Journals (Sweden)

    Ramon F. Álvarez-Estrada

    2014-03-01

    Full Text Available We treat the non-equilibrium evolution of an open one-particle statistical system, subject to a potential and to an external “heat bath” (hb) with negligible dissipation. For the classical equilibrium Boltzmann distribution, Wc,eq, a non-equilibrium three-term hierarchy for moments fulfills Hermiticity, which allows one to justify an approximate long-time thermalization. That gives partial dynamical support to Boltzmann’s Wc,eq, out of the set of classical stationary distributions, Wc;st, also investigated here, for which neither Hermiticity nor that thermalization hold, in general. For closed classical many-particle systems without hb (by using Wc,eq), the long-time approximate thermalization for three-term hierarchies is justified and yields an approximate Lyapunov function and an arrow of time. The largest part of the work treats an open quantum one-particle system through the non-equilibrium Wigner function, W. Weq for a repulsive finite square well is reported. W’s (< 0 in various cases) are assumed to be quasi-definite functionals regarding their dependences on momentum (q). That yields orthogonal polynomials, HQ,n(q), for Weq (and for stationary Wst), non-equilibrium moments, Wn, of W and hierarchies. For the first excited state of the harmonic oscillator, its stationary Wst is a quasi-definite functional, and the orthogonal polynomials and three-term hierarchy are studied. In general, the non-equilibrium quantum hierarchies (associated with Weq) for the Wn’s are not three-term ones. As an illustration, we outline a non-equilibrium four-term hierarchy and its solution in terms of generalized operator continued fractions. Such structures also allow one to formulate long-time approximations, but make it more difficult to justify thermalization. For large thermal and de Broglie wavelengths, the dominant Weq and a non-equilibrium equation for W are reported: the non-equilibrium hierarchy could plausibly be a three-term one and possibly not

  2. An approximate block Newton method for coupled iterations of nonlinear solvers: Theory and conjugate heat transfer applications

    Science.gov (United States)

    Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.

    2009-12-01

    A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
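
    To illustrate the general idea of coupling black-box solvers, and not the ABN derivation itself, the toy sketch below compares a block Gauss-Seidel fixed-point coupling with a Newton iteration on the interface residual whose Jacobian is approximated by finite differences; the two "solvers" are invented scalar maps, not heat transfer codes.

```python
# Toy modular coupling: fixed-point (Gauss-Seidel) versus Newton on the
# coupling residual with a finite-difference Jacobian.
import numpy as np

def solver_1(y):                 # black-box subproblem 1: returns x given y
    return np.cos(y) + 0.5 * y

def solver_2(x):                 # black-box subproblem 2: returns y given x
    return 0.8 * np.sin(x) - 0.3

def residual(z):                 # coupling residual on the interface variables
    x, y = z
    return np.array([x - solver_1(y), y - solver_2(x)])

# block Gauss-Seidel coupling of the two modules
x, y = 0.0, 0.0
for _ in range(30):
    x = solver_1(y)
    y = solver_2(x)
print("Gauss-Seidel residual:", np.linalg.norm(residual(np.array([x, y]))))

# Newton iteration on the coupled residual, Jacobian by finite differences
z, h = np.zeros(2), 1e-7
for _ in range(8):
    r = residual(z)
    J = np.column_stack([(residual(z + h * e) - r) / h for e in np.eye(2)])
    z = z - np.linalg.solve(J, r)
print("Newton residual:", np.linalg.norm(residual(z)))
```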

  3. Data Representations, Transformations, and Statistics for Visual Reasoning

    CERN Document Server

    Maciejewski, Ross

    2011-01-01

    Analytical reasoning techniques are methods by which users explore their data to obtain insight and knowledge that can directly support situational awareness and decision making. Recently, the analytical reasoning process has been augmented through the use of interactive visual representations and tools which utilize cognitive, design and perceptual principles. These tools are commonly referred to as visual analytics tools, and the underlying methods and principles have roots in a variety of disciplines. This chapter provides an introduction to young researchers as an overview of common visual

  4. A Linguistic Truth-Valued Temporal Reasoning Formalism and Its Implementation

    Science.gov (United States)

    Lu, Zhirui; Liu, Jun; Augusto, Juan C.; Wang, Hui

    Temporality and uncertainty are important features of many real world systems. Solving problems in such systems requires the use of formal mechanisms such as logic systems, statistical methods or other reasoning and decision-making methods. In this paper, we propose a linguistic truth-valued temporal reasoning formalism to enable the management of both features concurrently, using a linguistic truth-valued logic and a temporal logic. We also provide a backward reasoning algorithm which allows the answering of user queries. A simple but realistic scenario in a smart home application is used to illustrate our work.

  5. The Padé approximant in theoretical physics

    CERN Document Server

    Baker, George Allen

    1970-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat
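
    For reference, the textbook construction of an [L/M] Padé approximant from Taylor coefficients amounts to solving a small linear system; the sketch below is our own illustration (not code from the book) and builds the [2/2] approximant of exp(x).

```python
# [L/M] Pade approximant from Taylor coefficients c_0..c_{L+M}.
import numpy as np
from math import factorial

def pade(c, L, M):
    """Return numerator a[0..L] and denominator b[0..M] (with b[0] = 1)."""
    c = np.asarray(c, dtype=float)
    # denominator: sum_{j=1..M} b_j c_{L+k-j} = -c_{L+k},  k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # numerator: a_n = sum_{j=0..min(n,M)} b_j c_{n-j},  n = 0..L
    a = np.array([sum(b[j] * c[n - j] for j in range(min(n, M) + 1))
                  for n in range(L + 1)])
    return a, b

c = [1.0 / factorial(n) for n in range(5)]     # Taylor coefficients of exp(x)
a, b = pade(c, 2, 2)
x = 1.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print("[2/2] Pade of exp at x=1:", approx, " exact:", np.exp(1.0))
```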

  6. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    Science.gov (United States)

    Liu, Haofei; Sun, Wei

    2017-08-01

    Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of material models, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in abaqus.
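
    As a generic illustration of approximating a material tangent by perturbation, the sketch below differentiates a compressible neo-Hookean Kirchhoff stress with respect to the deformation gradient by central differences; the constitutive model and constants are invented, and the additional transformation needed for the tangent consistent with the Green-Naghdi rate used in the paper is not reproduced.

```python
# Finite-difference approximation of d(tau)/dF for a simple hyperelastic model.
import numpy as np

mu, lam = 1.0, 2.0   # illustrative material constants

def kirchhoff(F):
    B = F @ F.T
    J = np.linalg.det(F)
    return mu * (B - np.eye(3)) + lam * np.log(J) * np.eye(3)

def numerical_tangent(F, h=1e-6):
    A = np.zeros((3, 3, 3, 3))            # A[i,j,k,l] ~ d(tau_ij)/d(F_kl)
    for k in range(3):
        for l in range(3):
            dF = np.zeros((3, 3)); dF[k, l] = h
            A[:, :, k, l] = (kirchhoff(F + dF) - kirchhoff(F - dF)) / (2 * h)
    return A

F = np.eye(3) + 0.05 * np.random.default_rng(2).standard_normal((3, 3))
A = numerical_tangent(F)
print("tangent component A[0,0,0,0] =", A[0, 0, 0, 0])
```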

  7. Practical implementation of a higher order transverse leakage approximation

    International Nuclear Information System (INIS)

    Prinsloo, Rian H.; Tomašević

    2011-01-01

    Transverse integrated nodal diffusion methods currently represent the standard in full core neutronic simulation. The primary shortcoming in this approach, be it via the Analytic Nodal Method or Nodal Expansion Method, is the utilization of the quadratic transverse leakage approximation. This approach, although proven to work well for typical LWR problems, is not consistent with the formulation of nodal methods and can cause accuracy and convergence problems. In this work an improved, consistent quadratic leakage approximation is formulated, which derives from the class of higher order nodal methods developed some years ago. In this new approach, only information relevant to describing the transverse leakage terms in the zero-order nodal equations is obtained from the higher order formalism. The method yields accuracy comparable to full higher order methods, but does not suffer from the same computational burden which these methods typically incur. (author)

  8. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    Science.gov (United States)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  9. Characteristics of Women with Unwanted Pregnancies and Reasons for Contraceptive Method Discontinuation: Sample of a Rural Area

    Directory of Open Access Journals (Sweden)

    Semra Ay

    2012-06-01

    Full Text Available AIM: The aim of the study was to explore the prevalence and characteristics of women with unwanted pregnancy in a rural area and to examine pregnant women's contraceptive method preferences, satisfaction with the methods and reasons for method discontinuation. METHOD: This study was carried out in the rural area of Manisa city between January and June 2011 among women who agreed to participate in the study. The sample of the research is composed of 239 pregnant women. The research was a descriptive, cross-sectional field study and the data were collected using a questionnaire prepared by the researcher. Data were gathered through face-to-face interviews with the women at their homes. Statistical analyses were undertaken using SPSS version 11.5. Descriptive analysis, Pearson's chi-square (χ²) test, Fisher's exact test, and the t-test were used for statistical evaluation. RESULTS: Of the 239 pregnancies, 64 (26.8%) were unwanted pregnancies. The mean age of women was 25.0±5.0 and 29.0±5.4 years for wanted and unwanted pregnancies, respectively. Women with unwanted pregnancies were older, less educated, had less educated husbands, had a lower income level, had more pregnancies and deliveries, and had less than two years between their births. Unwanted pregnancies were observed in women using the coitus interruptus method (53.1%), effective contraceptive methods (54.3%) and no method (16.3%) (p<0.05). The most common reasons for discontinuation reported by pregnant women were as follows: side effects of the methods, disapproval of the husband, pregnancy occurring while using the method, and belief that the contraceptive methods are ineffective. CONCLUSION: In order to reduce the number of unwanted pregnancies and induced abortions, which adversely affect women's health, an appropriate contraception method must be employed. Health care providers should identify women with unwanted pregnancy to understand women's concerns and experiences using contraception. This

  10. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
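
    For context, the GUM first-order law of propagation of uncertainty, u_c^2(y) = sum_i (df/dx_i)^2 u^2(x_i) for independent inputs, can be sketched as follows for a toy measurement model; the model and numerical values are invented, and the Bayesian counterparts discussed in the paper are not reproduced here.

```python
# GUM-style first-order propagation of uncertainty for y = f(x1, x2).
import numpy as np

def f(x):                      # toy measurement model, e.g. power P = V^2 / R
    V, R = x
    return V ** 2 / R

x_hat = np.array([10.0, 50.0])          # estimates of V (volt) and R (ohm)
u = np.array([0.05, 0.2])               # standard uncertainties (assumed independent)

# numerical sensitivity coefficients df/dx_i by central differences
grad = np.empty_like(x_hat)
for i in range(x_hat.size):
    h = 1e-6 * max(abs(x_hat[i]), 1.0)
    e = np.zeros_like(x_hat); e[i] = h
    grad[i] = (f(x_hat + e) - f(x_hat - e)) / (2 * h)

y = f(x_hat)
u_c = np.sqrt(np.sum((grad * u) ** 2))
print(f"y = {y:.4f} with combined standard uncertainty u_c = {u_c:.4f}")
```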

  11. Efficient approximation of random fields for numerical applications

    KAUST Repository

    Harbrecht, Helmut; Peters, Michael; Siebenmorgen, Markus

    2015-01-01

    We consider the rapid computation of separable expansions for the approximation of random fields. We compare approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. We provide an a-posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples validate and quantify the considered methods.
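
    A minimal sketch of the pivoted Cholesky low-rank approximation, stopped when the remaining trace (the a-posteriori error indicator mentioned above) falls below a tolerance; the Gaussian covariance kernel and tolerance are invented for illustration.

```python
# Pivoted Cholesky low-rank approximation C ~ L L^T with trace-based stopping.
import numpy as np

def pivoted_cholesky(C, tol=1e-6, max_rank=None):
    n = C.shape[0]
    max_rank = max_rank or n
    d = np.diag(C).astype(float).copy()   # remaining diagonal of the error C - L L^T
    L = np.zeros((n, max_rank))
    for m in range(max_rank):
        i = int(np.argmax(d))
        if d[i] <= 0 or d.sum() < tol:
            return L[:, :m], d.sum()
        L[:, m] = (C[:, i] - L[:, :m] @ L[i, :m]) / np.sqrt(d[i])
        d -= L[:, m] ** 2
    return L, d.sum()

x = np.linspace(0.0, 1.0, 200)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.3 ** 2))   # Gaussian covariance
L, trace_err = pivoted_cholesky(C, tol=1e-6)
print("rank:", L.shape[1], " trace error:", trace_err,
      " Frobenius error:", np.linalg.norm(C - L @ L.T))
```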

  13. Approximating terminological queries

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; Van Harmelen, Frank

    2002-01-01

    Current proposals for languages to encode terminological knowledge in intelligent systems support logical reasoning for answering user queries about objects and classes. An application of these languages on the World Wide Web, however, is hampered by the limitations of logical reasoning in terms

  14. NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, Max La Cour [Technical Univ. of Denmark, Lyngby (Denmark); Villa, Umberto E. [Univ. of Texas, Austin, TX (United States); Engsig-Karup, Allan P. [Technical Univ. of Denmark, Lyngby (Denmark); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-22

    The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. The previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.

  15. Reasoning robots the art and science of programming robotic agents

    CERN Document Server

    Thielscher, Michael

    2005-01-01

    The book provides an in-depth and uniform treatment of a mathematical model for reasoning robotic agents. The book also contains an introduction to a programming method and system based on this model. The mathematical model, known as the "Fluent Calculus," describes how to use classical first-order logic to set up symbolic models of dynamic worlds and to represent knowledge of actions and their effects. Robotic agents use this knowledge and their reasoning facilities to make decisions when following high-level, long-term strategies. The book covers the issues of reasoning about sensor input, acting under incomplete knowledge and uncertainty, planning, intelligent troubleshooting, and many other topics. The mathematical model is supplemented by a programming method which allows readers to design their own reasoning robotic agents. The usage of this method, called "FLUX," is illustrated by many example programs. The book includes the details of an implementation of FLUX using the standard programming language...

  16. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Full Text Available Background: Medication errors due to different reasons may alter the outcome of all patients, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died due to a lethal medication error. Unfortunately, these types of errors are not rare and have some preventable causes, including the lack of suitable and sufficient training and practice of medical students and some failures in the medical students' educational curriculum. Conclusion: Some important reasons are discussed here because their consequences can be tremendous. We found that most of them are easily preventable. If one is aware of the method of use, complications, dosage and contraindications of drugs, most of these fatal errors can be minimized.

  17. Accounting for dropout reason in longitudinal studies with nonignorable dropout.

    Science.gov (United States)

    Moore, Camille M; MaWhinney, Samantha; Forster, Jeri E; Carlson, Nichole E; Allshouse, Amanda; Wang, Xinshuo; Routy, Jean-Pierre; Conway, Brian; Connick, Elizabeth

    2017-08-01

    Dropout is a common problem in longitudinal cohort studies and clinical trials, often raising concerns of nonignorable dropout. Selection, frailty, and mixture models have been proposed to account for potentially nonignorable missingness by relating the longitudinal outcome to time of dropout. In addition, many longitudinal studies encounter multiple types of missing data or reasons for dropout, such as loss to follow-up, disease progression, treatment modifications and death. When clinically distinct dropout reasons are present, it may be preferable to control for both dropout reason and time to gain additional clinical insights. This may be especially interesting when the dropout reason and dropout times differ by the primary exposure variable. We extend a semi-parametric varying-coefficient method for nonignorable dropout to accommodate dropout reason. We apply our method to untreated HIV-infected subjects recruited to the Acute Infection and Early Disease Research Program HIV cohort and compare longitudinal CD4+ T cell count in injection drug users to nonusers with two dropout reasons: anti-retroviral treatment initiation and loss to follow-up.

  18. Elementary School Children's Reasoning about Social Class: A Mixed-Methods Study

    Science.gov (United States)

    Mistry, Rashmita S.; Brown, Christia S.; White, Elizabeth S.; Chow, Kirby A.; Gillen-O'Neel, Cari

    2015-01-01

    The current study examined children's identification and reasoning about their subjective social status (SSS), their beliefs about social class groups (i.e., the poor, middle class, and rich), and the associations between the two. Study participants were 117 10- to 12-year-old children of diverse racial, ethnic, and socioeconomic backgrounds…

  19. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    Science.gov (United States)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to develop efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce a good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation to Schwarz Waveform Relaxation methods. Two versions are proposed respectively based on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of efficiency and accuracy of these methods.

  20. An adaptive meshfree method for phase-field models of biomembranes. Part I: Approximation with maximum-entropy basis functions

    OpenAIRE

    Rosolen, A.; Peco, C.; Arroyo, M.

    2013-01-01

    We present an adaptive meshfree method to approximate phase-field models of biomembranes. In such models, the Helfrich curvature elastic energy, the surface area, and the enclosed volume of a vesicle are written as functionals of a continuous phase-field, which describes the interface in a smeared manner. Such functionals involve up to second-order spatial derivatives of the phase-field, leading to fourth-order Euler–Lagrange partial differential equations (PDE). The solutions develop sharp i...