Hybridization of the probability perturbation method with gradient information
DEFF Research Database (Denmark)
Johansen, Kent; Caers, J.; Suzuki, S.
2007-01-01
Geostatistically based history-matching methods make it possible to devise history-matching strategies that will honor geologic knowledge about the reservoir. However, the performance of these methods is known to be impeded by slow convergence rates resulting from the stochastic nature of the alg...
International Nuclear Information System (INIS)
Sabouri, Pouya
2013-01-01
This thesis presents a comprehensive study of sensitivity/uncertainty analysis of reactor performance parameters (e.g. k-effective) with respect to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data Files and the uncertainties inherently associated with the data they contain, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insight into the underlying physical phenomena associated with the formalisms used. (author)
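The deterministic propagation described in the abstract above is conventionally done with the first-order "sandwich" formula, variance = SᵀCS, combining a sensitivity vector with a nuclear-data covariance matrix. The sketch below is a generic illustration of that formula only; the sensitivity and covariance numbers are invented for the example and are not taken from the thesis or the OECD benchmarks.

```python
def sandwich_variance(S, C):
    """First-order uncertainty propagation: sigma^2 = S^T C S, where S is the
    sensitivity vector of a response (e.g. k-effective) to the nuclear-data
    parameters and C is their covariance matrix."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# Illustrative numbers only: two parameters with 2% and 1% standard
# deviations and no correlation between them.
S = [1.0, 2.0]                      # relative sensitivities (made up)
C = [[0.02**2, 0.0],
     [0.0,     0.01**2]]            # relative covariance matrix (made up)
variance = sandwich_variance(S, C)  # 1*0.0004*1 + 2*0.0001*2 = 0.0008
```

The relative standard deviation of the response is then `variance ** 0.5`, here about 2.8%.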
Nayfeh, Ali H
2008-01-01
1. Introduction; 2. Straightforward Expansions and Sources of Nonuniformity; 3. The Method of Strained Coordinates; 4. The Methods of Matched and Composite Asymptotic Expansions; 5. Variation of Parameters and Methods of Averaging; 6. The Method of Multiple Scales; 7. Asymptotic Solutions of Linear Equations; References and Author Index; Subject Index
International Nuclear Information System (INIS)
Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.
2013-01-01
The first goal of this paper is to present an exact method able to precisely evaluate very small reactivity effects (<10 pcm) with a Monte Carlo code. It was decided to implement the exact perturbation theory in TRIPOLI-4 and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux and therefore relies not on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method as implemented in TRIPOLI-4 is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark was used to test the accuracy of the method against the 'direct' estimation of the perturbation. Once again the IFP-based method shows good agreement, for a calculation time far shorter than that of the 'direct' method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows very small reactivity perturbations to be calculated with high precision. It also offers the possibility to split the reactivity into contributions from individual isotopes and reactions. Other applications of this perturbation method, such as the calculation of exact kinetic parameters (βeff, Λeff) and sensitivity parameters, are presented and tested.
Introduction to perturbation methods
Holmes, M
1995-01-01
This book is an introductory graduate text dealing with many of the perturbation methods currently used by applied mathematicians, scientists, and engineers. The author has based his book on a graduate course he has taught several times over the last ten years to students in applied mathematics, engineering sciences, and physics. The only prerequisite for the course is a background in differential equations. Each chapter begins with an introductory development involving ordinary differential equations. The book covers traditional topics, such as boundary layers and multiple scales. However, it also contains material arising from current research interest. This includes homogenization, slender body theory, symbolic computing, and discrete equations. One of the more important features of this book is contained in the exercises. Many are derived from problems of up-to-date research and are from a wide range of application areas.
Perturbation theory and collision probability formalism. Vol. 2
Energy Technology Data Exchange (ETDEWEB)
Nasr, M [National Center for Nuclear Safety and Radiation Control, Atomic Energy Authority, Cairo (Egypt)
1996-03-01
Perturbation theory is commonly used in evaluating reactivity effects, particularly those resulting from small and localized perturbations in multiplying media, e.g. in small-sample reactivity measurements. The Boltzmann integral transport equation is generally used for evaluating the direct and adjoint fluxes in the heterogeneous lattice cells to be used in the perturbation equations. When applying perturbation theory in this formalism, a term involving the perturbation effects on the spatial transfer kernel arises. This term is difficult to evaluate correctly, since it involves an integration over the entire system. The main advantage of perturbation theory, namely that the integration procedure is limited to the perturbed region, is thus of no practical use in such cases. In the present work, the perturbation equation in the collision probability formalism is analyzed. A mathematical treatment of the term in question is performed, and a new expression for this term, which can be estimated easily, is derived.
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Demand and choice probability generating functions for perturbed consumers
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2011-01-01
This paper considers demand systems for utility-maximizing consumers equipped with additive linearly perturbed utility of the form U(x)+m⋅x and faced with general budget constraints x ∈ B. Given compact budget sets, the paper provides necessary as well as sufficient conditions for a demand genera...
Perturbation methods for power and reactivity reconstruction
International Nuclear Information System (INIS)
Palmiotti, G.; Salvatores, M.; Estiot, J.C.; Broccoli, U.; Bruna, G.; Gomit, J.M.
1987-01-01
This paper deals with recent developments and applications of perturbation methods. Two types of methods are used. The first is an explicit method, which allows the explicit reconstruction of a perturbed flux using a linear combination of a library of functions; in our application, these functions are the harmonics (i.e., the high-order eigenfunctions of the system). The second type is based on Generalized Perturbation Theory (GPT) and needs the calculation of an importance function for each integral parameter of interest. Recent developments of a particularly useful high-order formulation make it possible to obtain satisfactory results even for very large perturbations.
Small-sample-worth perturbation methods
International Nuclear Information System (INIS)
1985-01-01
It has been assumed that the perturbed region, R_p, is large enough so that: (1) even without a great deal of biasing there is a substantial probability that an average source neutron will enter it; and (2) once having entered, the neutron is likely to make several collisions in R_p during its lifetime. Unfortunately, neither assumption is valid for the typical configurations one encounters in small-sample-worth experiments. In such experiments one measures the reactivity change induced when a very small void in a critical assembly is filled with a sample of some test material. Only a minute fraction of the fission-source neutrons ever gets into the sample and, of those neutrons that do, most emerge uncollided. Monte Carlo small-sample perturbation computations are described.
On-Shell Methods in Perturbative QCD
International Nuclear Information System (INIS)
Bern, Zvi; Dixon, Lance J.; Kosower, David A.
2007-01-01
We review on-shell methods for computing multi-parton scattering amplitudes in perturbative QCD, utilizing their unitarity and factorization properties. We focus on aspects which are useful for the construction of one-loop amplitudes needed for phenomenological studies at the Large Hadron Collider
Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Gubbins, M.E.
1965-09-01
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control-rod worths particularly in mind. As a basis for discussion, a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations, control-rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF9 digital computer. (author)
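The correlation technique described above works because two runs that share the same random histories produce strongly correlated estimates, so their difference has far less variance than a difference of independent runs. The toy sketch below illustrates only that variance-reduction idea on an invented scalar "score" (not a transport calculation, and not the report's actual method):

```python
import math
import random

def estimate_worth(n, delta, correlated, seed):
    """Toy estimate of the change in a mean score when an 'absorption'
    parameter s is perturbed from 1.0 to 1.0 + delta. With correlated=True
    both runs reuse the same random histories, so most of the statistical
    noise cancels in the difference; with False the runs are independent."""
    rng_a = random.Random(seed)
    rng_b = random.Random(seed if correlated else seed + 10_000)
    score = lambda s, x: math.exp(-s * x)
    base = sum(score(1.0, rng_a.random()) for _ in range(n)) / n
    pert = sum(score(1.0 + delta, rng_b.random()) for _ in range(n)) / n
    return pert - base

# Scatter of the difference estimator over independent seeds: the
# correlated version has a far smaller spread for the same history count.
corr = [estimate_worth(1000, 0.01, True, s) for s in range(20)]
indep = [estimate_worth(1000, 0.01, False, s) for s in range(20)]
```

With correlated histories every seed already resolves the sign of the small worth; the independent differences are dominated by noise of like magnitude.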
Methods and applications of analytical perturbation theory
International Nuclear Information System (INIS)
Kirchgraber, U.; Stiefel, E.
1978-01-01
This monograph on perturbation theory is based on various courses and lectures held by the authors at the ETH, Zurich and at the University of Texas, Austin. Its principal intention is to inform application-minded mathematicians, physicists and engineers about recent developments in this field. The reader is not assumed to have mathematical knowledge beyond what is presented in standard courses on analysis and linear algebra. Chapter I treats the transformations of systems of differential equations and the integration of perturbed systems in a formal way. These tools are applied in Chapter II to celestial mechanics and to the theory of tops and gyroscopic motion. Chapter III is devoted to the discussion of Hamiltonian systems of differential equations and exposes the algebraic aspects of perturbation theory showing also the necessary modifications of the theory in case of singularities. The last chapter gives the mathematical justification for the methods developed in the previous chapters and investigates important questions such as error estimations for the solutions and asymptotic stability. Each chapter ends with useful comments and an extensive reference to the original literature. (HJ)
Imprecise Probability Methods for Weapons UQ
Energy Technology Data Exchange (ETDEWEB)
Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-13
Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.
International Nuclear Information System (INIS)
Yokoyama, Keiichi; Sugita, Akihiro; Yamada, Hidetaka; Teranishi, Yoshiaki; Yokoyama, Atsushi
2007-01-01
A preparatory study on the quantum control of the selective transition K(4S_1/2) → K(4P_J) (J = 1/2, 3/2) in an intense laser field is reported. To generate high-average-power femtosecond laser pulses with sufficient field intensity, a Ti:Sapphire regenerative amplifier system with a repetition rate of 1 kHz was constructed. The bandwidth and pulse energy are shown to meet the values required for the completely selective transition with 100% population inversion. A preliminary experiment on the selective excitation shows that the fringe pattern formed by a phase-related pulse pair depends on the laser intensity, indicating that the perturbative behavior of the excitation probabilities is no longer valid and the laser intensity reaches the non-perturbative region. (author)
New Methods in Non-Perturbative QCD
Energy Technology Data Exchange (ETDEWEB)
Unsal, Mithat [North Carolina State Univ., Raleigh, NC (United States)
2017-01-31
In this work, we investigate the properties of quantum chromodynamics (QCD) using newly developed mathematical and physical formalisms. Almost all of the mass in the visible universe emerges from QCD, which has a completely negligible microscopic mass content. An intimately related issue in QCD is the quark confinement problem. Answers to non-perturbative questions in QCD have remained largely elusive despite much effort over the years. It is also believed that the usual perturbation theory is inadequate to address these kinds of problems. Perturbation theory gives a divergent asymptotic series (even when the theory is properly renormalized), and there are non-perturbative phenomena which never appear at any order in perturbation theory. Recently, a fascinating bridge between perturbation theory and non-perturbative effects has been found: a formalism called resurgence theory in mathematics tells us that perturbative data and non-perturbative data are intimately related. Translating this to the language of quantum field theory, it turns out that non-perturbative information is present in a coded form in perturbation theory and can be decoded. We take advantage of this feature, which is particularly useful for understanding some unresolved mysteries of QCD from first principles. In particular, we use: a) circle compactifications, which provide a semi-classical window to study confinement and mass-gap problems, and calculable prototypes of the deconfinement phase transition; b) resurgence theory and transseries, which provide a unified framework for perturbative and non-perturbative expansions; c) analytic continuation of path integrals and Lefschetz thimbles, which may be useful to address the sign problem in QCD at finite density.
Modified method of perturbed stationary states. I
International Nuclear Information System (INIS)
Green, T.A.
1978-10-01
The reaction coordinate approach of Mittleman is used to generalize the method of Perturbed Stationary States. A reaction coordinate is defined for each state in the scattering expansion in terms of parameters which depend on the internuclear separation. These are to be determined from a variational principle described by Demkov. The variational result agrees with that of Bates and McCarroll in the limit of separated atoms, but is generally different elsewhere. The theory is formulated for many-electron systems, and the construction of the scattering expansion is discussed for simple one-, two-, and three-electron systems. The scattering expansion and the Lagrangian for the radial scattering functions are given in detail for a heteronuclear one-electron system. 2 figures
Monte Carlo methods to calculate impact probabilities
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
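The "super-sizing" idea mentioned above (counting encounters within an enlarged capture radius and rescaling) can be illustrated with a deliberately schematic Monte Carlo sketch. Real impact-probability codes sample full elliptic, inclined orbits; the toy below uses only coplanar circular orbits with uniform random phases, and every number in it is invented for illustration:

```python
import math
import random

def close_approach_fraction(r1, r2, d, n=2000, seed=1):
    """Fraction of uniformly sampled phase pairs for which two bodies on
    coplanar circular orbits of radii r1 and r2 (about the same center)
    are separated by less than an enlarged capture radius d. A schematic
    stand-in for Monte Carlo super-sizing, not the authors' algorithm."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        th1 = rng.uniform(0.0, 2.0 * math.pi)
        th2 = rng.uniform(0.0, 2.0 * math.pi)
        dx = r1 * math.cos(th1) - r2 * math.cos(th2)
        dy = r1 * math.sin(th1) - r2 * math.sin(th2)
        if math.hypot(dx, dy) < d:
            hits += 1
    return hits / n
```

For radii 1.0 and 1.2 the separation ranges over [0.2, 2.2], so the estimated fraction is exactly 0 below the minimum, exactly 1 above the maximum, and strictly between for intermediate capture radii; in a real calculation the hit fraction for the inflated radius is then scaled back to the physical cross-section.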
A Parameter Robust Method for Singularly Perturbed Delay Differential Equations
Directory of Open Access Journals (Sweden)
Erdogan Fevzi
2010-01-01
Uniform finite difference methods are constructed via nonstandard finite difference methods for the numerical solution of a singularly perturbed quasilinear initial value problem for delay differential equations. A numerical method is constructed for this problem which involves appropriate Bakhvalov meshes on each time subinterval. The method is shown to be uniformly convergent with respect to the perturbation parameter. A numerical example is solved using the presented method, and the computed result is compared with the exact solution of the problem.
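The benefit of a layer-adapted mesh of the kind named above can be shown on a much simpler model than the paper's: the sketch below applies backward Euler to the toy stiff problem ε u′ + u = 1, u(0) = 0 (no delay), once on a uniform mesh and once on a simplified Bakhvalov-type graded mesh. The grading formula is a common textbook variant chosen for illustration, not the paper's exact construction.

```python
import math

def solve(eps, mesh):
    """Backward Euler for eps*u' + u = 1, u(0) = 0, whose exact solution is
    1 - exp(-t/eps); returns the maximum nodal error on the given mesh."""
    u, err = 0.0, 0.0
    for k in range(1, len(mesh)):
        h = mesh[k] - mesh[k - 1]
        u = (eps * u + h) / (eps + h)        # implicit (A-stable) step
        err = max(err, abs(u - (1.0 - math.exp(-mesh[k] / eps))))
    return err

def bakhvalov_mesh(eps, N, sigma=2.0):
    """Graded mesh: points condensed logarithmically inside the boundary
    layer [0, sigma*eps*ln N], uniform outside (simplified Bakhvalov-type)."""
    half = N // 2
    layer = [-sigma * eps * math.log(1.0 - (1.0 - 1.0 / N) * i / half)
             for i in range(half + 1)]
    tau = layer[-1]
    outer = [tau + (1.0 - tau) * i / half for i in range(1, half + 1)]
    return layer + outer

eps, N = 0.01, 32
err_uniform = solve(eps, [i / N for i in range(N + 1)])
err_graded = solve(eps, bakhvalov_mesh(eps, N))
```

With ε = 0.01 and N = 32 the uniform mesh leaves the layer unresolved (error of order 0.2 at the first node), while the graded mesh resolves it with the same number of points.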
Probability evolution method for exit location distribution
Zhu, Jinjie; Chen, Zhen; Liu, Xianbin
2018-03-01
The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, while noise of finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise but may show certain deviations for large noise. Finally, some possible ways to improve our method are discussed.
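The brute-force baseline that the interface/reinjection method above is designed to accelerate is a direct Euler-Maruyama simulation of escape, as in the hedged sketch below. The dynamics (a linearly stable point in the interval (-1, 1)), the noise strength, and the path count are all invented for illustration:

```python
import math
import random

def exit_stats(eps, n_paths, dt=0.01, seed=2):
    """Direct Euler-Maruyama simulation of noise-induced exit from (-1, 1)
    for dx = -x dt + sqrt(2*eps) dW, starting at x = 0. Returns the fraction
    of exits through the right end and the mean exit time. For eps -> 0 the
    cost of this baseline grows exponentially, which is what interface
    methods avoid."""
    rng = random.Random(seed)
    right, total_time = 0, 0.0
    for _ in range(n_paths):
        x, t = 0.0, 0.0
        while abs(x) < 1.0:
            x += -x * dt + math.sqrt(2.0 * eps * dt) * rng.gauss(0.0, 1.0)
            t += dt
        right += x > 0.0
        total_time += t
    return right / n_paths, total_time / n_paths

frac_right, mean_time = exit_stats(eps=0.5, n_paths=200)
```

Since the toy potential is symmetric, roughly half the exits occur on each side; an exit-location histogram over the boundary would be the quantity the paper's method computes efficiently.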
Perturbation method for fuel evolution and shuffling analysis
International Nuclear Information System (INIS)
Gandini, A.
1987-01-01
A perturbation methodology is described by which the behaviour of a reactor system during burnup can be analyzed, making use of Generalized Perturbation Theory (GPT) codes already available in the linear domain. Typical quantities that can be studied with the proposed methodology are the amount of a specified material at the end of a cycle, the fluence in a specified region, and the residual reactivity at the end of the reactor life cycle. The potential of the method for fuel-shuffling studies is also described. (author)
On the resolvents methods in quantum perturbation calculations
International Nuclear Information System (INIS)
Burzynski, A.
1979-01-01
This paper gives a systematic review of resolvent methods in quantum perturbation calculations. The case of a discrete spectrum of the Hamiltonian is considered specially (the case least considered in the literature). Calculations of quantum transitions using the resolvent formalism, quantum transitions between states from particular subspaces, and the shifts of energy levels are shown. The main ideas of the stationary perturbation theory developed by Lippmann and Schwinger are considered too. (author)
Mandal, Anirban; Hunt, Katharine L. C.
2018-05-01
For a perturbed quantum system initially in the ground state, the coefficient c_k(t) of excited state k in the time-dependent wave function separates into adiabatic and nonadiabatic terms. The adiabatic term a_k(t) accounts for the adjustment of the original ground state to form the new ground state of the instantaneous Hamiltonian H(t), by incorporating excited states of the unperturbed Hamiltonian H_0 without transitions; a_k(t) follows the adiabatic theorem of Born and Fock. The nonadiabatic term b_k(t) describes excitation into another quantum state k; b_k(t) is obtained as an integral containing the time derivative of the perturbation. The true transition probability is given by |b_k(t)|^2, as first stated by Landau and Lifshitz. In this work, we contrast |b_k(t)|^2 and |c_k(t)|^2. The latter is the norm-square of the entire excited-state coefficient, which is used for the transition probability within Fermi's golden rule. Calculations are performed for a perturbing pulse consisting of a cosine or sine wave in a Gaussian envelope. When the transition frequency ω_k0 is on resonance with the frequency ω of the cosine wave, |b_k(t)|^2 and |c_k(t)|^2 rise almost monotonically to the same final value; the two are intertwined, but they are out of phase with each other. Off resonance (when ω_k0 ≠ ω), |b_k(t)|^2 and |c_k(t)|^2 differ significantly during the pulse. They oscillate out of phase and reach different maxima but then fall off to equal final values after the pulse has ended, when a_k(t) ≡ 0. If ω_k0 ω. While the transition probability is rising, the midpoints between successive maxima and minima fit Gaussian functions of the form a exp[-b(t - d)^2]. To our knowledge, this is the first analysis of nonadiabatic transition probabilities during a perturbing pulse.
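The on- versus off-resonance behaviour described above can be reproduced qualitatively by integrating a two-level system through a Gaussian-envelope cosine pulse. The sketch below computes only the final |c_1|^2 (the full excited-state population, not the paper's separation into adiabatic and nonadiabatic parts), and all pulse parameters are invented for illustration:

```python
import math

def transition_probability(omega0, omega, v0=0.05, tau=2.0, t0=10.0,
                           t_end=20.0, dt=0.005):
    """RK4 integration of a two-level system i dc/dt = H(t) c with
    H = [[0, V(t)], [V(t), omega0]] and a Gaussian-envelope cosine pulse
    V(t) = v0*cos(omega*t)*exp(-(t - t0)^2/(2*tau^2)). Returns |c1|^2
    after the pulse, i.e. the final transition probability."""
    def deriv(t, c0, c1):
        v = v0 * math.cos(omega * t) * math.exp(-(t - t0)**2 / (2.0 * tau**2))
        return -1j * v * c1, -1j * (v * c0 + omega0 * c1)

    c0, c1, t = 1.0 + 0.0j, 0.0 + 0.0j, 0.0
    for _ in range(int(round(t_end / dt))):
        k1a, k1b = deriv(t, c0, c1)
        k2a, k2b = deriv(t + dt / 2, c0 + dt / 2 * k1a, c1 + dt / 2 * k1b)
        k3a, k3b = deriv(t + dt / 2, c0 + dt / 2 * k2a, c1 + dt / 2 * k2b)
        k4a, k4b = deriv(t + dt, c0 + dt * k3a, c1 + dt * k3b)
        c0 += dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
        c1 += dt / 6 * (k1b + 2 * k2b + 2 * k3b + k4b)
        t += dt
    return abs(c1) ** 2

p_resonant = transition_probability(omega0=5.0, omega=5.0)  # driven on resonance
p_detuned = transition_probability(omega0=5.0, omega=1.0)   # far off resonance
```

On resonance the pulse leaves a permanent transition probability of a few percent for this weak coupling; far off resonance the transient excitation returns almost entirely to the ground state once the pulse has ended, mirroring the equal-final-values behaviour described in the abstract.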
Perturbation method for periodic solutions of nonlinear jerk equations
International Nuclear Information System (INIS)
Hu, H.
2008-01-01
A Lindstedt-Poincaré type perturbation method with bookkeeping parameters is presented for determining accurate analytical approximate periodic solutions of some third-order (jerk) differential equations with cubic nonlinearities. In the process of the solution, higher-order approximate angular frequencies are obtained by Newton's method. A typical example is given to illustrate the effectiveness and simplicity of the proposed method.
A semi perturbative method for QED
Jora, Renata; Schechter, Joseph
2014-01-01
We compute the QED beta function using a new method of functional integration. It turns out that in this procedure the beta function contains only the coefficients of the first two orders and thus corresponds to a new renormalization scheme, one long supposed to exist.
An Operator Perturbation Method of Polarized Line Transfer V ...
Indian Academy of Sciences (India)
The ALI (Approximate Lambda Iteration) method is applied to resonance scattering in spectral lines formed in the presence of weak magnetic fields. The method is based on an operator perturbation approach, and can efficiently give solutions for oriented vector magnetic fields in the solar atmosphere.
(G′/G)-expansion method and travelling wave solutions for the perturbed nonlinear Schrödinger's equation
Indian Academy of Sciences (India)
Abstract. In this paper, we construct the travelling wave solutions to the perturbed nonlinear Schrödinger's equation (NLSE) with Kerr law nonlinearity by the extended (G′/G)-expansion method. Based on this method, we obtain abundant exact travelling wave solutions of the NLSE with Kerr law nonlinearity with arbitrary ...
Non-perturbative methods applied to multiphoton ionization
International Nuclear Information System (INIS)
Brandi, H.S.; Davidovich, L.; Zagury, N.
1982-09-01
The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, where multiphoton ionization and tunnel auto-ionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rates of atoms in the presence of linearly and circularly polarized light is presented. (Author)
Variational configuration interaction methods and comparison with perturbation theory
International Nuclear Information System (INIS)
Pople, J.A.; Seeger, R.; Krishnan, R.
1977-01-01
A configuration interaction (CI) procedure which includes all single and double substitutions from an unrestricted Hartree-Fock single determinant is described. This has the feature that Møller-Plesset perturbation results to second and third order are obtained in the first CI iterative cycle. The procedure also avoids the necessity of a full two-electron integral transformation. A simple expression for correcting the final CI energy for lack of size consistency is proposed. Finally, calculations on a series of small molecules are presented to compare these CI methods with perturbation theory.
Perturbation method for calculating impurity binding energy in an ...
Indian Academy of Sciences (India)
Nilanjan Sil
2017-12-18
In the present paper, we have studied the binding energy of the shallow donor hydrogenic impurity, which is confined in an inhomogeneous cylindrical quantum dot (CQD) of GaAs-AlxGa1−xAs. The perturbation method is used to calculate the binding energy within the framework of the effective mass ...
Application of New Variational Homotopy Perturbation Method For ...
African Journals Online (AJOL)
This paper discusses the application of the New Variational Homotopy Perturbation Method (NVHPM) for solving integro-differential equations. The advantage of the new scheme is that it does not require discretization, linearization, or any restrictive assumption of any form before it is applied. Several test problems are ...
Diagrammatic perturbation methods in networks and sports ranking combinatorics
International Nuclear Information System (INIS)
Park, Juyong
2010-01-01
Analytic and computational tools developed in statistical physics are being increasingly applied to the study of complex networks. Here we present recent developments in the diagrammatic perturbation methods for the exponential random graph models, and apply them to the combinatoric problem of determining the ranking of nodes in directed networks that represent pairwise competitions
Generalized perturbation theory (GPT) methods. A heuristic approach
International Nuclear Information System (INIS)
Gandini, A.
1987-01-01
Wigner first proposed a perturbation theory as early as 1945 to study fundamental quantities such as the reactivity worths of different materials. The first formulation, CPT (conventional perturbation theory), is based on universal quantum mechanics concepts. Since that early conception, significant contributions have been made to CPT, in particular by Soodak, who rendered a heuristic interpretation of the adjoint function (leading to what is referred to as the GPT method, for generalized perturbation theory). The author illustrates the GPT methodology in a variety of linear and nonlinear domains encountered in nuclear reactor analysis, beginning with the familiar linear neutron field and then generalizing the methodology to other linear and nonlinear fields using heuristic arguments. The author believes that the inherent simplicity and elegance of the heuristic derivation, although intended here for reactor physics problems, might be usefully adopted in collateral fields, and includes such examples.
Singular perturbations introduction to system order reduction methods with applications
Shchepakina, Elena; Mortell, Michael P
2014-01-01
These lecture notes provide a fresh approach to investigating singularly perturbed systems using asymptotic and geometrical techniques. It gives many examples and step-by-step techniques, which will help beginners move to a more advanced level. Singularly perturbed systems appear naturally in the modelling of many processes that are characterized by slow and fast motions simultaneously, for example, in fluid dynamics and nonlinear mechanics. This book’s approach consists in separating out the slow motions of the system under investigation. The result is a reduced differential system of lesser order. However, it inherits the essential elements of the qualitative behaviour of the original system. Singular Perturbations differs from other literature on the subject due to its methods and wide range of applications. It is a valuable reference for specialists in the areas of applied mathematics, engineering, physics, biology, as well as advanced undergraduates for the earlier parts of the book, and graduate stude...
Shiryaev, A N
1996-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs for a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.
Singular perturbation methods for nonlinear dynamic systems with time delays
International Nuclear Information System (INIS)
Hu, H.Y.; Wang, Z.H.
2009-01-01
This review article surveys recent advances in the dynamics and control of time-delay systems, with emphasis on singular perturbation methods, such as the method of multiple scales, the method of averaging, and two newly developed methods, the energy analysis and the pseudo-oscillator analysis. Some examples are given to demonstrate the advantages of the methods. Comparisons with other methods show that these methods lead to easier computations and more accurate predictions of the local dynamics of time-delay systems near a Hopf bifurcation.
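To give a concrete flavor of one surveyed technique, here is the textbook application of the method of multiple scales to a lightly damped linear oscillator (a standard illustration, not an example taken from the article itself):

```latex
% Method of multiple scales for  x'' + 2\epsilon x' + x = 0,  0 < \epsilon \ll 1.
% Introduce the slow time T = \epsilon t and expand
%   x(t) = x_0(t,T) + \epsilon\, x_1(t,T) + \cdots
\begin{align*}
O(1):\quad & \partial_t^2 x_0 + x_0 = 0
  \;\Rightarrow\; x_0 = A(T)\,e^{it} + \mathrm{c.c.},\\
O(\epsilon):\quad & \partial_t^2 x_1 + x_1
  = -2\,\partial_t\partial_T x_0 - 2\,\partial_t x_0
  = -2i\bigl(A'(T) + A(T)\bigr)e^{it} + \mathrm{c.c.}
\end{align*}
% Suppressing the secular (resonant) forcing requires A' + A = 0, so
% A(T) = A(0)e^{-T} and
\[
  x(t) \approx a_0\, e^{-\epsilon t}\cos(t + \phi_0),
\]
% a uniformly valid approximation, free of the secular terms such as
% \epsilon t \cos t that a straightforward expansion would produce.
```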
Further comments on the sequential probability ratio testing methods
Energy Technology Data Exchange (ETDEWEB)
Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics
1997-05-23
The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).
Green's functions in quantum chemistry - I. The Σ perturbation method
International Nuclear Information System (INIS)
Sebastian, K.L.
1978-01-01
As an improvement over the Hartree-Fock approximation, a Green's function method - the Σ perturbation method - is investigated for molecular calculations. The method is applied to the hydrogen molecule and to the π-electron system of ethylene under the PPP approximation. It is found that when the algebraic approximation is used, the energy obtained is better than that of the HF approach, but not as good as that of the configuration-interaction method. The main advantage of this procedure is that it is free of the most serious defect of the HF method, viz. incorrect dissociation limits. (K.B.)
Sound Attenuation in Elliptic Mufflers Using a Regular Perturbation Method
Banerjee, Subhabrata; Jacobi, Anthony M.
2012-01-01
The study of sound attenuation in an elliptical chamber involves the solution of the Helmholtz equation in elliptic coordinates. The eigensolutions for such problems involve the Mathieu and modified Mathieu functions, whose computation poses a considerable challenge. An alternative method to solve such problems is proposed in this paper. The elliptical cross-section of the muffler is treated as a perturbed circle, enabling the use of a regular perturbatio...
Green's function method for perturbed Korteweg-de Vries equation
International Nuclear Information System (INIS)
Cai Hao; Huang Nianning
2003-01-01
The x-derivatives of the squared Jost solutions are eigenfunctions with zero eigenvalue of the linearized equation derived from the perturbed Korteweg-de Vries equation. A method similar to the Green's function formalism is introduced to show the completeness of the squared Jost solutions in multi-soliton cases. It is not related to the Lax equations directly, and thus it is useful for dealing with nonlinear equations with complicated Lax pairs.
Systems of evolution equations and the singular perturbation method
International Nuclear Information System (INIS)
Mika, J.
Several fundamental theorems important for the solution of linear evolution equations in Banach space are presented. An algorithm is derived by expanding the solution of a system of singularly perturbed evolution equations into an asymptotic series with respect to a small positive parameter, and the asymptotic convergence of the approximate solution to the exact solution is shown. Singularly perturbed evolution equations of the resonance type are analysed. Special consideration is given to the asymptotic equivalence, as the small parameter approaches zero, of the P1 equations obtained as the first-order approximation when the spherical harmonics method is applied to the linear Boltzmann equation, and the diffusion equations of linear transport theory. (J.B.)
Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods
Bhatnagar, S; Prashanth, L A
2013-01-01
Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for reader from sim...
Foundations of quantum chromodynamics: Perturbative methods in gauge theories
International Nuclear Information System (INIS)
Muta, T.
1986-01-01
This volume develops the techniques of perturbative QCD in great detail, starting with field theory. Aside from extensive treatments of the renormalization group technique, the operator product expansion formalism, and their applications to short-distance reactions, this book provides a comprehensive introduction to gauge field theories. Examples and exercises are provided to amplify the discussions on important topics. Contents: Introduction; Elements of Quantum Chromodynamics; The Renormalization Group Method; Asymptotic Freedom; Operator Product Expansion Formalism; Applications; Renormalization Scheme Dependence; Factorization Theorem; Further Applications; Power Corrections; Infrared Problem.
Approximate solution fuzzy pantograph equation by using homotopy perturbation method
Jameel, A. F.; Saaban, A.; Ahadkulov, H.; Alipiah, F. M.
2017-09-01
In this paper, the Homotopy Perturbation Method (HPM) is modified and formulated to find the approximate solution of fuzzy delay differential equations (FDDEs) involving a fuzzy pantograph equation. The solution obtained by HPM is in the form of an infinite series that converges to the actual solution of the FDDE, which is one of the benefits of this method. In addition, it can be used for solving high-order fuzzy delay differential equations directly, without reduction to a first-order system. Moreover, the accuracy of HPM can be assessed without needing the exact solution. The HPM is studied for fuzzy initial value problems involving the pantograph equation. Using the properties of fuzzy set theory, we reformulate the standard approximate method of HPM and obtain the approximate solutions. The effectiveness of the proposed method is demonstrated for a third-order fuzzy pantograph equation.
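For orientation, the HPM series construction can be sketched on the crisp (non-fuzzy) linear test problem y' + y = 0, y(0) = 1, where the homotopy recursion reduces to repeated integration and the partial sums converge to e^(-t). This is a generic illustration, not the fuzzy pantograph setting of the paper:

```python
from math import exp

def hpm_terms(n_terms):
    """Build the HPM series terms v_0, v_1, ... for y' + y = 0, y(0) = 1.

    With the homotopy H(v, p) = v' - y0' + p*(y0' + v) and v = sum_n p^n v_n,
    collecting powers of p gives v_0 = 1 (the initial condition) and the
    recursion v_{n+1}(t) = -integral_0^t v_n(s) ds.
    Each term is stored as a list of polynomial coefficients in t.
    """
    terms = [[1.0]]  # v_0(t) = 1
    for _ in range(n_terms - 1):
        prev = terms[-1]
        # integrate term by term, then negate: t^k -> -t^(k+1)/(k+1)
        nxt = [0.0] + [-c / (k + 1) for k, c in enumerate(prev)]
        terms.append(nxt)
    return terms

def hpm_eval(terms, t):
    """Evaluate the partial sum of the HPM series at time t (p set to 1)."""
    total = 0.0
    for coeffs in terms:
        total += sum(c * t**k for k, c in enumerate(coeffs))
    return total

terms = hpm_terms(12)
print(hpm_eval(terms, 1.0))   # close to exp(-1) ≈ 0.367879
```

For this linear problem the HPM terms coincide with the Taylor series of the exact solution, which is why the convergence claimed in the abstract is easy to observe here.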
Hybrid perturbation methods based on statistical time series models
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
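As a sketch of the prediction component, here is a minimal additive Holt-Winters recursion on synthetic data. This is a generic illustration, not the orbit-error series of the paper, and the initial states are taken from the known generating process rather than estimated, to keep the example short:

```python
def holt_winters_additive(y, m, alpha, beta, gamma, level0, trend0, season0):
    """Additive Holt-Winters smoothing.

    y                       observed series
    m                       season length
    level0/trend0/season0   initial level, trend and m seasonal offsets
    Returns the final (level, trend, seasonals) after processing y.
    """
    level, trend = level0, trend0
    season = list(season0)            # season[t % m] is the offset at time t
    for t, obs in enumerate(y):
        prev_level = level
        s = season[t % m]
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - level) + (1 - gamma) * s
    return level, trend, season

def forecast(level, trend, season, m, n_obs, h):
    """h-step-ahead forecast made at the end of the observed series."""
    return level + h * trend + season[(n_obs + h - 1) % m]

# synthetic series: linear trend plus a period-4 seasonal pattern
m, a, b = 4, 10.0, 0.5
s_pat = [1.0, -2.0, 0.5, 0.5]         # seasonal offsets, summing to zero
y = [a + b * t + s_pat[t % m] for t in range(40)]

# initialize from the generating process (in practice: estimate from the data)
level, trend, season = holt_winters_additive(
    y, m, 0.3, 0.1, 0.2, level0=a - b, trend0=b, season0=s_pat)
print(forecast(level, trend, season, m, len(y), 1))  # one-step-ahead forecast
```

On this noise-free series the recursion tracks the level, trend and seasonal offsets exactly, so the forecasts reproduce the generating process; on real residual series the smoothing parameters damp the noise instead.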
Performance prediction of electrohydrodynamic thrusters by the perturbation method
International Nuclear Information System (INIS)
Shibata, H.; Watanabe, Y.; Suzuki, K.
2016-01-01
In this paper, we present a novel method for analyzing electrohydrodynamic (EHD) thrusters. The method is based on a perturbation technique applied to a set of drift-diffusion equations, similar to the one introduced in our previous study on estimating breakdown voltage. The thrust-to-current ratio is generalized to represent the performance of EHD thrusters. We have compared the thrust-to-current ratio obtained theoretically with that obtained from the proposed method under atmospheric air conditions, and we have obtained good quantitative agreement. Also, we have conducted a numerical simulation in more complex thruster geometries, such as the dual-stage thruster developed by Masuyama and Barrett [Proc. R. Soc. A 469, 20120623 (2013)]. We quantitatively clarify the fact that if the magnitude of a third electrode voltage is low, the effective gap distance shortens, whereas if the magnitude of the third electrode voltage is sufficiently high, the effective gap distance lengthens.
Investigation of collisional excitation-transfer processes in a plasma by laser perturbation method
International Nuclear Information System (INIS)
Sakurai, Takeki
1983-01-01
The theoretical background and the experimental method of the laser perturbation method applied to the study of collisional excitation-transfer processes in plasma are explained. The atomic density at a specified level can be evaluated theoretically. By using the theoretical results and the experimentally obtained data, the total attenuation probability, the collisional transfer probability and the natural emission probability were estimated. For the experiments, continuous-wave (CW) and pulsed lasers are employed. A pulsed dye laser makes it possible to observe the attenuation curve directly and to tune into resonance with any atomic line. Initially, the experimental studies were made on a He-Ne discharge. The pulsed dye laser has been used for the excitation of alkali atoms. The first application of the pulsed laser to the study of plasma physics was the study on He. The cross section of disalignment has also been studied by the laser perturbation method. The alignment of atoms, step and cascade transfer, the confinement of radiation, and the optogalvanic effect are discussed in this paper. (Kato, T.)
The transmission probability method in one-dimensional cylindrical geometry
International Nuclear Information System (INIS)
Rubin, I.E.
1983-01-01
The collision probability method widely used in solving the problems of neutron transport in a reactor cell is reliable for simple cells with a small number of zones. Increasing the number of zones and taking into account the anisotropy of scattering greatly increase the scope of the calculations. In order to reduce the computation time, the transmission probability method is suggested for flux calculation in one-dimensional cylindrical geometry, taking into account scattering anisotropy. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The use of the transmission probability method allows the angular and spatial dependences of neutron distributions to be represented completely without increasing the scope of the calculations. The method is especially effective in solving multi-group problems.
Jump probabilities in the non-Markovian quantum jump method
International Nuclear Information System (INIS)
Haerkoenen, Kari
2010-01-01
The dynamics of a non-Markovian open quantum system described by a general time-local master equation is studied. The propagation of the density operator is constructed in terms of two processes: (i) deterministic evolution and (ii) evolution of a probability density functional in the projective Hilbert space. The analysis provides a derivation for the jump probabilities used in the recently developed non-Markovian quantum jump (NMQJ) method (Piilo et al 2008 Phys. Rev. Lett. 100 180402).
Bayesian maximum posterior probability method for interpreting plutonium urinalysis data
International Nuclear Information System (INIS)
Miller, G.; Inkret, W.C.
1996-01-01
A new internal dosimetry code for interpreting urinalysis data in terms of radionuclide intakes is described for the case of plutonium. The mathematical method is to maximise the Bayesian posterior probability using an entropy function as the prior probability distribution. A software package (MEMSYS) developed for image reconstruction is used. Some advantages of the new code are that it ensures positive calculated dose, it smooths out fluctuating data, and it provides an estimate of the propagated uncertainty in the calculated doses. (author)
Information-theoretic methods for estimating complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology proves to be very useful, and has led to the recent development of information-theoretic methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Beyond perturbation introduction to the homotopy analysis method
Liao, Shijun
2003-01-01
Solving nonlinear problems is inherently difficult, and the stronger the nonlinearity, the more intractable solutions become. Analytic approximations often break down as nonlinearity becomes strong, and even perturbation approximations are valid only for problems with weak nonlinearity. This book introduces a powerful new analytic method for nonlinear problems-homotopy analysis-that remains valid even with strong nonlinearity. In Part I, the author starts with a very simple example, then presents the basic ideas, detailed procedures, and the advantages (and limitations) of homotopy analysis. Part II illustrates the application of homotopy analysis to many interesting nonlinear problems. These range from simple bifurcations of a nonlinear boundary-value problem to the Thomas-Fermi atom model, Volterra's population model, Von Kármán swirling viscous flow, and nonlinear progressive waves in deep water. Although the homotopy analysis method has been verified in a number of prestigious journals, it has yet to be ...
Developing feasible loading patterns using perturbation theory methods
International Nuclear Information System (INIS)
White, J.R.; Avila, K.M.
1990-01-01
This work illustrates an approach to core reload design that combines the power of integer programming with the efficiency of generalized perturbation theory. The main use of the method is as a tool to help the design engineer identify feasible loading patterns with minimum time and effort. The technique is highly successful for the burnable poison (BP) loading problem, but the unpredictable behavior of the branch-and-bound algorithm degrades overall performance for large problems. Unfortunately, the combined fuel shuffling plus BP optimization problem falls into this latter classification. Overall, however, the method shows great promise for significantly reducing the manpower time required for the reload design process. And it may even give the further benefit of better designs and improved performance
Calculating the albedo characteristics by the method of transmission probabilities
International Nuclear Information System (INIS)
Lukhvich, A.A.; Rakhno, I.L.; Rubin, I.E.
1983-01-01
The possibility of using the method of transmission probabilities for calculating the albedo characteristics of homogeneous and heterogeneous zones is studied. The transmission probabilities method is a numerical method for solving the transport equation in integral form. All calculations have been conducted in a one-group approximation for planes and rods with different optical thicknesses and capture-to-scattering ratios. The calculations for plane and cylindrical geometries have shown that the numerical method of transmission probabilities can be used for calculating the albedo characteristics of homogeneous and heterogeneous zones with high accuracy. In this case the computing time is minimal, even in cylindrical geometry, if interpolation of the characteristics is used for first-path neutrons.
COMPARATIVE ANALYSIS OF ESTIMATION METHODS OF PHARMACY ORGANIZATION BANKRUPTCY PROBABILITY
Directory of Open Access Journals (Sweden)
V. L. Adzhienko
2014-01-01
Full Text Available. The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict the financial crisis of a pharmacy organization. The probability of pharmacy organization bankruptcy was estimated using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.
Application of a perturbation method for realistic dynamic simulation of industrial robots
Waiboer, R.R.; Aarts, Ronald G.K.M.; Jonker, Jan B.
2005-01-01
This paper presents the application of a perturbation method for the closed-loop dynamic simulation of a rigid-link manipulator with joint friction. In this method the perturbed motion of the manipulator is modelled as a first-order perturbation of the nominal manipulator motion. A non-linear finite
Formulation of nonlinear chromaticity in circular accelerators by canonical perturbation method
International Nuclear Information System (INIS)
Takao, Masaru
2005-01-01
The formulation of nonlinear chromaticity in circular accelerators based on the canonical perturbation method is presented. Since the canonical perturbation method directly relates the tune shift to the perturbation Hamiltonian, it greatly simplifies the calculation of the nonlinear chromaticity. The obtained integral representation for nonlinear chromaticity can be systematically extended to higher orders
Generalized perturbation theory based on the method of cyclic characteristics
Energy Technology Data Exchange (ETDEWEB)
Assawaroongruengchot, M.; Marleau, G. [Institut de Genie Nucleaire, Departement de Genie Physique, Ecole Polytechnique de Montreal, 2900 Boul. Edouard-Montpetit, Montreal, Que. H3T 1J4 (Canada)
2006-07-01
A GPT algorithm for estimation of eigenvalues and reaction-rate ratios is developed for the neutron transport problems in 2D fuel assemblies with isotropic scattering. In our study the GPT formulation is based on the integral transport equations. The mathematical relationship between the generalized flux importance and generalized source importance functions is applied to transform the generalized flux importance transport equations into the integro-differential forms. The resulting adjoint and generalized adjoint transport equations are then solved using the method of cyclic characteristics (MOCC). Because of the presence of negative adjoint sources, a biasing/decontamination scheme is applied to make the generalized adjoint functions positive in such a way that it can be used for the multigroup re-balance technique. To demonstrate the efficiency of the algorithms, perturbative calculations are performed on a 17 x 17 PWR lattice. (authors)
Acoustofluidics 13: Analysis of acoustic streaming by perturbation methods.
Sadhal, S S
2012-07-07
In this Part 13 of the tutorial series "Acoustofluidics--exploiting ultrasonic standing wave forces and acoustic streaming in microfluidic systems for cell and particle manipulation," the streaming phenomenon is presented from an analytical standpoint, and perturbation methods are developed for analyzing such flows. Acoustic streaming is the phenomenon that takes place when a steady flow field is generated by the absorption of an oscillatory field. This can happen either by attenuation (quartz wind) or by interaction with a boundary. The latter type of streaming can also be generated by an oscillating solid in an otherwise still fluid medium, or by a vibrating enclosure of a fluid body. While we address the first kind of streaming, our focus is largely on the second kind, from a practical standpoint, for application to microfluidic systems. In this Focus article, we limit the analysis to one- and two-dimensional problems in order to understand the analytical techniques with examples that most easily illustrate the streaming phenomenon.
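For orientation, the perturbation structure behind such streaming analyses can be summarized schematically (a standard second-order decomposition, stated here for context and not reproduced from the article):

```latex
% Expand the flow in the small oscillation amplitude \epsilon:
%   \mathbf{u} = \epsilon\,\mathbf{u}_1 + \epsilon^2\,\mathbf{u}_2 + \cdots
% O(\epsilon): linear acoustics, purely oscillatory with zero time average:
\[
  \rho_0\,\partial_t \mathbf{u}_1 = -\nabla p_1 + \mu \nabla^2 \mathbf{u}_1 .
\]
% O(\epsilon^2), averaged over an acoustic period \langle\,\cdot\,\rangle:
\[
  \mu \nabla^2 \langle \mathbf{u}_2\rangle - \nabla\langle p_2\rangle
  = \rho_0 \bigl\langle (\mathbf{u}_1\!\cdot\!\nabla)\,\mathbf{u}_1
    + \mathbf{u}_1(\nabla\!\cdot\!\mathbf{u}_1) \bigr\rangle .
\]
% The steady streaming \langle\mathbf{u}_2\rangle is thus a creeping flow
% driven by the time-averaged Reynolds stress of the first-order acoustic field.
```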
Stability Analysis of Nonuniform Rectangular Beams Using Homotopy Perturbation Method
Directory of Open Access Journals (Sweden)
Seval Pinarbasi
2012-01-01
Full Text Available. The design of slender beams, that is, beams with large laterally unsupported lengths, is commonly controlled by stability limit states. Beam buckling, also called "lateral torsional buckling," is different from column buckling in that a beam not only displaces laterally but also twists about its axis during buckling. The coupling between twist and lateral displacement makes the stability analysis of beams more complex than that of columns. For this reason, most of the analytical studies in the literature on beam stability concentrate on simple cases: uniform beams with ideal boundary conditions and simple loadings. This paper shows that complex beam stability problems, such as lateral torsional buckling of rectangular beams with variable cross-sections, can successfully be solved using the homotopy perturbation method (HPM).
Thermal disadvantage factor calculation by the multiregion collision probability method
International Nuclear Information System (INIS)
Ozgener, B.; Ozgener, H.A.
2004-01-01
A multi-region collision probability formulation that is capable of applying the white boundary condition directly is presented and applied to thermal neutron transport problems. The disadvantage factors computed are compared with their counterparts calculated by S_N methods with both direct and indirect application of the white boundary condition. The results of the ABH method and of the collision probability method with indirect application of the white boundary condition are also considered, and comparisons with benchmark Monte Carlo results are carried out. The studies show that the proposed formulation is capable of calculating the thermal disadvantage factor with sufficient accuracy without resorting to the fictitious scattering outer-shell approximation associated with the indirect application of the white boundary condition in collision probability solutions.
Numerical perturbative methods in the quantum theory of physical systems
International Nuclear Information System (INIS)
Adam, G.
1980-01-01
During the last two decades, the development of digital electronic computers has led to the deployment of new, distinct methods in theoretical physics. These methods, based on the advances of modern numerical analysis as well as on specific equations describing physical processes, have made it possible to perform precise calculations of high complexity which have completed and sometimes changed our image of many physical phenomena. Our efforts have concentrated on the development of numerical methods with such intrinsic performance as to allow a successful approach to some key issues in present theoretical physics on smaller computation systems. The basic principle of such methods is to translate, into the language of numerical analysis, the theory of perturbations, which is suited to numerical rather than analytical computation. This idea has been illustrated by working out two problems which arise from the time-independent Schroedinger equation in the non-relativistic approximation, for quantum systems with a small number of particles and for systems with a large number of particles, respectively. In the first case, we are led to the numerical solution of some quadratic ordinary differential equations (first section of the thesis) and, in the second case, to the solution of some secular equations in the Brillouin zone (second section). (author)
International Nuclear Information System (INIS)
Takac, S.M.
1972-01-01
The method is based on perturbation of the reactor cell by a few up to a few tens of percent. Measurements were performed for square lattice cells of the zero-power reactors Anna, NORA and RB, with metal uranium and uranium oxide fuel elements and with water, heavy water and graphite moderators. The character and functional dependence of the perturbations were obtained from the experimental results. Zero perturbation was determined by extrapolation, thus obtaining the real physical neutron flux distribution in the reactor cell. A simple diffusion theory for partial plate cell perturbation was developed for verification of the perturbation method. The results of these calculations proved that introducing the perturbation sample into the fuel flattens the thermal neutron density, to a degree dependent on the amplitude of the applied perturbation. The extrapolation applied to the perturbed distributions was found to be justified.
The method of modular characteristic direction probabilities in MPACT
Energy Technology Data Exchange (ETDEWEB)
Liu, Z. [School of Nuclear Science and Technology, Xi'an Jiaotong University, No. 28 Xianning west road, Xi'an, Shaanxi 710049 (China); Kochunas, B.; Collins, B.; Downar, T. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2200 Bonisteel, Ann Arbor, MI 48109 (United States); Wu, H. [School of Nuclear Science and Technology, Xi'an Jiaotong University, No. 28 Xianning west road, Xi'an, Shaanxi 710049 (China)
2013-07-01
The method of characteristic direction probabilities (CDP) is based on a modular ray tracing technique which combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC). This past year CDP was implemented in the transport code MPACT for 2-D and 3-D transport calculations. By coupling only the fine-mesh regions passed by the characteristic rays in a particular direction, the scale of the probability matrices is much smaller than in the CPM. At the same time, the CDP has the same capacity as the MOC for dealing with complicated geometries, because the same modular ray tracing techniques are used. Results from the C5G7 benchmark problems are given for different cases to show the accuracy and efficiency of the CDP compared to the MOC. For the cases examined, the CDP and MOC methods were seen to differ in k-eff by about 1-20 pcm, and the computational efficiency of the CDP appears to be better than that of the MOC for some problems. However, in other problems, particularly when the CDP matrices have to be recomputed from changing cross sections, the CDP does not perform as well. This indicates an area of future work. (authors)
Perturbation Method of Analysis Applied to Substitution Measurements of Buckling
Energy Technology Data Exchange (ETDEWEB)
Persson, Rolf
1966-11-15
Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² ≫ τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ ≫ L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = −1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
Perturbation methods and closure approximations in nonlinear systems
International Nuclear Information System (INIS)
Dubin, D.H.E.
1984-01-01
In the first section of this thesis, Hamiltonian theories of guiding-center and gyro-center motion are developed using modern symplectic methods and Lie transformations. Littlejohn's techniques, combined with the theory of resonant interaction and island overlap, are used to explore the problem of adiabatic invariance and the onset of stochasticity. As an example, the breakdown of invariance due to resonance between drift motion and gyromotion in a tokamak is considered. A Hamiltonian is developed for motion in a straight magnetic field with electrostatic perturbations in the gyrokinetic ordering, from which nonlinear gyrokinetic equations are constructed which have the property of phase-space preservation, useful for computer simulation. Energy invariants are found and various limits of the equations are considered. In the second section, statistical closure theories are applied to simple dynamical systems. The logistic map is used as an example because of its universal properties and simple quadratic nonlinearity. The first closure considered is the direct interaction approximation (DIA) of Kraichnan, which is found to fail when applied to the logistic map because it cannot approximate the bounded support of the map's equilibrium distribution. By imposing a periodicity constraint on a Langevin form of the DIA, a new stable closure is developed.
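The equilibrium distribution that defeats the DIA here is, for the fully chaotic logistic map, the bounded arcsine law ρ(x) = 1/(π√(x(1−x))) with mean 1/2; its low-order statistics, which any viable closure must reproduce, are easy to check numerically (a generic illustration, not code from the thesis):

```python
def logistic_orbit_mean(x0, n, burn=1000):
    """Long-time average of the fully chaotic logistic map x -> 4x(1-x).

    For r = 4 the invariant density is the arcsine law on [0, 1], whose
    mean is 1/2; the bounded support of this density is exactly what the
    DIA closure fails to capture.
    """
    x = x0
    for _ in range(burn):              # discard the transient
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

print(logistic_orbit_mean(0.1234, 200_000))   # ≈ 0.5 (arcsine-law mean)
```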
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated from different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
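The role of the heating coefficient can be sketched on a toy problem with an analytic marginal likelihood: a single Gaussian observation with a Gaussian prior. The power posterior at heating coefficient β targets prior × likelihood^β, and thermodynamic integration recovers the log evidence as the integral over β of the mean log-likelihood. All numerical choices below (data value, β grid, chain length) are illustrative:

```python
import math, random

random.seed(1)
y, sig, tau = 1.0, 1.0, 1.0        # datum, likelihood sd, prior sd

def log_like(theta):
    return -0.5 * math.log(2 * math.pi * sig ** 2) - (y - theta) ** 2 / (2 * sig ** 2)

def log_prior(theta):
    return -0.5 * math.log(2 * math.pi * tau ** 2) - theta ** 2 / (2 * tau ** 2)

def mean_loglike(beta, n=4000, burn=500):
    """Metropolis chain targeting prior(theta) * likelihood(theta)**beta."""
    theta, total = 0.0, 0.0
    for i in range(n + burn):
        prop = theta + random.gauss(0.0, 1.0)
        dlp = (log_prior(prop) + beta * log_like(prop)
               - log_prior(theta) - beta * log_like(theta))
        if math.log(random.random()) < dlp:
            theta = prop
        if i >= burn:
            total += log_like(theta)
    return total / n

betas = [i / 20 for i in range(21)]            # heating coefficients 0 .. 1
es = [mean_loglike(b) for b in betas]
log_z = sum(0.5 * (es[i] + es[i + 1]) * (betas[i + 1] - betas[i])
            for i in range(len(betas) - 1))    # trapezoidal rule over beta

# analytic evidence for this conjugate model: y ~ N(0, sig^2 + tau^2)
exact = (-0.5 * math.log(2 * math.pi * (sig ** 2 + tau ** 2))
         - y ** 2 / (2 * (sig ** 2 + tau ** 2)))
```

With β = 0 the chain samples the prior and with β = 1 the conventional posterior, matching the description in the abstract.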
The maximum entropy method of moments and Bayesian probability theory
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
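The moment-matching step can be sketched as a small dual-ascent loop: adjust the Lagrange multipliers λ until the density p(x) ∝ exp(−λ₁x − λ₂x²) on a grid reproduces the target moments. With targets E[x] = 0 and E[x²] = 1 the result should approach a standard Gaussian (λ₂ → 1/2). The grid, step size, and iteration count are arbitrary choices, not part of the cited method:

```python
import math

xs = [-6.0 + 12.0 * i / 400 for i in range(401)]   # grid on [-6, 6]
dx = xs[1] - xs[0]
targets = [0.0, 1.0]                               # desired E[x], E[x^2]

lam = [0.0, 0.0]                                   # Lagrange multipliers
for step in range(2000):
    w = [math.exp(-lam[0] * x - lam[1] * x * x) for x in xs]
    z = sum(w) * dx                                # normalization
    m1 = sum(wi * x for wi, x in zip(w, xs)) * dx / z
    m2 = sum(wi * x * x for wi, x in zip(w, xs)) * dx / z
    # raising lam[k] suppresses the k-th moment, so push it up
    # whenever the current moment exceeds its target
    lam[0] += 0.1 * (m1 - targets[0])
    lam[1] += 0.1 * (m2 - targets[1])
```

After convergence `lam` is close to (0, 0.5), i.e. the standard normal density, which is indeed the maximum-entropy density with these two moments.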
DEFF Research Database (Denmark)
Farrokhzad, F.; Mowlaee, P.; Barari, Amin
2011-01-01
The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of governing differential equations used in engineering problems are simplified, and this process produces noise in the obtained answers. This paper deals with the solution of the second-order differential equation governing beam deformation using four analytical approximate methods, namely the Homotopy Perturbation Method (HPM), the Variational Iteration Method (VIM) and the Optimal Homotopy Asymptotic Method (OHAM). The comparisons of the results reveal that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations.
METHOD OF FOREST FIRES PROBABILITY ASSESSMENT WITH POISSON LAW
Directory of Open Access Journals (Sweden)
A. S. Plotnikova
2016-01-01
The article describes a method for estimating forest fire burn probability based on the Poisson distribution. The λ parameter is taken to be the mean daily number of fires detected for each Forest Fire Danger Index class within a specific period of time. Thus, λ was calculated separately for the spring, summer and autumn seasons. Multi-annual daily Forest Fire Danger Index values together with an EO-derived hot-spot map were the input data for the statistical analysis. The major result of the study is the generation of a database of forest fire burn probability. Results were validated against daily EO data on forest fires detected over Irkutsk oblast in 2013. The daily weighted average probability was shown to be linked with the daily number of detected forest fires. Meanwhile, a number of fires were found to develop when the estimated probability was low. A possible explanation of this phenomenon is provided.
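Given a fitted λ for a danger class, the Poisson model turns the mean daily count into a burn probability directly. A minimal sketch with invented λ values per Forest Fire Danger Index class:

```python
import math

# Hypothetical mean daily fire counts per Fire Danger Index class
lam_by_class = {"I": 0.02, "II": 0.08, "III": 0.25, "IV": 0.7, "V": 1.6}

def p_at_least_one(lam, days=1):
    """P(N >= 1) for N ~ Poisson(lam * days): 1 - P(N = 0)."""
    return 1.0 - math.exp(-lam * days)

probs = {cls: p_at_least_one(lam) for cls, lam in lam_by_class.items()}
```

The probability increases monotonically with λ, so higher danger classes map to higher daily burn probabilities, as in the article's database.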
Robust Trajectory Design in Highly Perturbed Environments Leveraging Continuation Methods, Phase I
National Aeronautics and Space Administration — Research is proposed to investigate continuation methods to improve the robustness of trajectory design algorithms for spacecraft in highly perturbed dynamical...
Comments on the sequential probability ratio testing methods
Energy Technology Data Exchange (ETDEWEB)
Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics
1996-07-01
In this paper the classical sequential probability ratio testing method (SPRT) is reconsidered. Every individual boundary-crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. integrating these individual decisions. The procedure is recommended for use when the user (1) would like to be informed continuously about the tested hypothesis and (2) would like to reach a final conclusion with a high confidence level. (Author).
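The classical ingredients being reconsidered are easy to sketch: accumulate the log-likelihood ratio and stop at Wald's boundaries log(β/(1−α)) and log((1−β)/α). A minimal Bernoulli example; the error rates and hypothesized failure rates are illustrative:

```python
import math, random

random.seed(7)

alpha, beta = 0.05, 0.05            # target error probabilities
p0, p1 = 0.1, 0.3                   # failure rates under H0 and H1
lo = math.log(beta / (1 - alpha))   # accept-H0 boundary
hi = math.log((1 - beta) / alpha)   # accept-H1 boundary

def sprt(samples):
    """Return ('H0'|'H1'|'undecided', n_used) for a stream of 0/1 observations."""
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= hi:
            return "H1", n
        if llr <= lo:
            return "H0", n
    return "undecided", len(samples)

# data actually generated under H1
data = [1 if random.random() < p1 else 0 for _ in range(500)]
decision, n_used = sprt(data)
```

Each boundary crossing of such a test is what the paper treats as one unit of evidence for the subsequent Bayes update.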
Application of Classical and Lie Transform Methods to Zonal Perturbation in the Artificial Satellite
San-Juan, J. F.; San-Martin, M.; Perez, I.; Lopez-Ochoa, L. M.
2013-08-01
A scalable second-order analytical orbit propagator program is being developed. This analytical orbit propagator combines modern perturbation methods, based on the canonical frame of the Lie transform, with classical perturbation methods, as a function of orbit type or the requirements of a space mission, such as catalog maintenance operations, long-period evolution, and so on. As a first step in the validation of part of our orbit propagator, in this work we consider only the perturbation produced by the zonal harmonic coefficients of the Earth's gravity potential, so that it is possible to analyze the behaviour of the perturbation methods involved in the corresponding analytical theories.
Optimal Monotonicity-Preserving Perturbations of a Given Runge–Kutta Method
Higueras, Inmaculada; Ketcheson, David I.; Kocsis, Tihamér A.
2018-01-01
Perturbed Runge–Kutta methods (also referred to as downwind Runge–Kutta methods) can guarantee monotonicity preservation under larger step sizes relative to their traditional Runge–Kutta counterparts. In this paper we study the question of how to optimally perturb a given method in order to increase the radius of absolute monotonicity (a.m.). We prove that for methods with zero radius of a.m., it is always possible to give a perturbation with positive radius. We first study methods for linear problems and then methods for nonlinear problems. In each case, we prove upper bounds on the radius of a.m., and provide algorithms to compute optimal perturbations. We also provide optimal perturbations for many known methods.
A Modified Computational Scheme for the Stochastic Perturbation Finite Element Method
Directory of Open Access Journals (Sweden)
Feng Wu
A modified computational scheme of the stochastic perturbation finite element method (SPFEM) is developed for structures with low-level uncertainties. The proposed scheme can provide second-order estimates of the mean and variance without differentiating the system matrices with respect to the random variables. When the proposed scheme is used, it involves a finite number of analyses of deterministic systems. In the case of one random variable with a symmetric probability density function, the proposed computational scheme can even provide a result with fifth-order accuracy. Compared with the traditional computational scheme of SPFEM, the proposed scheme is more convenient for numerical implementation. Four numerical examples demonstrate that the proposed scheme can be used in linear or nonlinear structures with correlated or uncorrelated random variables.
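The paper's scheme itself is not reproduced here, but the underlying idea (second-order moments from a few deterministic solves, with no differentiation of system matrices) can be illustrated with a three-point Gauss-Hermite rule for a single Gaussian variable; the toy response u(a) = a² is ours:

```python
import math

def moments_from_solves(u, mu, sigma):
    """Mean and variance of u(a), a ~ N(mu, sigma^2), from three
    deterministic evaluations (3-point Gauss-Hermite quadrature)."""
    nodes = [mu - math.sqrt(3.0) * sigma, mu, mu + math.sqrt(3.0) * sigma]
    weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
    m1 = sum(w * u(x) for w, x in zip(weights, nodes))
    m2 = sum(w * u(x) ** 2 for w, x in zip(weights, nodes))
    return m1, m2 - m1 ** 2

# toy "structural response"; the rule is exact for low-order polynomials
mean, var = moments_from_solves(lambda a: a * a, 0.0, 1.0)
```

For a ~ N(0, 1) and u(a) = a² the exact answers are E[u] = 1 and Var[u] = 2, and the three deterministic evaluations recover both, with no derivatives of u required.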
International Nuclear Information System (INIS)
Bertschinger, E.
1987-01-01
Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
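For a single linear constraint the construction reduces to the Hoffman-Ribak correction: draw an unconstrained realization and add a covariance-weighted term so the constraint holds exactly. A sketch on a 1-D lattice with a Gaussian covariance; the lattice size, correlation length, and constrained "peak" height are all illustrative:

```python
import math, random

random.seed(3)
n, ell = 16, 3.0

# Gaussian covariance on a 1-D lattice, plus a small diagonal nugget
# for numerical stability of the factorization
C = [[math.exp(-((i - j) ** 2) / (2 * ell ** 2)) for j in range(n)] for i in range(n)]
for i in range(n):
    C[i][i] += 1e-8

# Cholesky factorization C = L L^T (plain Python; fine for small n)
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = math.sqrt(s) if i == j else s / L[j][j]

# unconstrained realization y = L z with z ~ N(0, I)
z = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

# Hoffman-Ribak correction for one constraint: field value c at site k
k, c = 8, 2.5                      # e.g. a "peak" of height 2.5 sigma
corr = (c - y[k]) / C[k][k]
y_c = [y[i] + C[i][k] * corr for i in range(n)]
```

The corrected field `y_c` passes exactly through the constrained value while retaining the correct Gaussian statistics elsewhere, which is what makes the construction useful for planting rare objects in N-body initial conditions.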
Estimation of CANDU reactor zone controller level by generalized perturbation method
International Nuclear Information System (INIS)
Kim, Do Heon; Kim, Jong Kyung; Choi, Hang Bok; Roh, Gyu Hong; Yang, Won Sik
1998-01-01
The zone controller level change due to refueling operation has been studied using a generalized perturbation method. The generalized perturbation method provides sensitivity of zone power to individual refueling operation and incremental change of zone controller level. By constructing a system equation for each zone power, the zone controller level change was obtained. The details and a proposed model for future work are described
Comparison of two perturbation methods to estimate the land surface modeling uncertainty
Su, H.; Houser, P.; Tian, Y.; Kumar, S.; Geiger, J.; Belvedere, D.
2007-12-01
In land surface modeling, it is almost impossible to simulate the land surface processes without any error, because the earth system is highly complex and the physics of the land processes is not yet sufficiently understood. In most cases, people want to know not only the model output but also the uncertainty in the modeling, to estimate how reliable the modeling is. Ensemble perturbation is an effective way to estimate the uncertainty in land surface modeling, since land surface models are highly nonlinear, which makes the analytical approach inapplicable to this estimation. The ideal perturbation noise has a zero-mean Gaussian distribution; however, this requirement cannot be satisfied if the perturbed variables in a land surface model have physical boundaries, because part of the perturbation noise has to be removed to feed the land surface models properly. Two different perturbation methods are employed in our study to investigate their impact on quantifying land surface modeling uncertainty, based on the Land Information System (LIS) framework developed by the NASA/GSFC land team. One perturbation method is the built-in algorithm named "STATIC" in LIS version 5; the other is a new perturbation algorithm which was recently developed to minimize the overall bias in the perturbation by incorporating additional information from the whole time series of the perturbed variable. The statistical properties of the perturbation noise generated by the two different algorithms are investigated thoroughly by using a large ensemble size on a NASA supercomputer, and the corresponding uncertainty estimates based on the two perturbation methods are then compared. Their further impacts on data assimilation are also discussed. Finally, an optimal perturbation method is suggested.
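The difficulty with bounded variables is easy to reproduce: zero-mean Gaussian noise that is clipped to a physical range no longer has zero mean. A minimal sketch with soil-moisture-like bounds [0, 1]; the clipping rule is illustrative, not the LIS algorithm:

```python
import random

random.seed(42)

def clipped_perturbation_mean(value, sd, n):
    """Mean of n Gaussian perturbations of `value`, clipped to [0, 1]."""
    total = 0.0
    for _ in range(n):
        total += min(1.0, max(0.0, value + random.gauss(0.0, sd)))
    return total / n

# near the lower physical bound, clipping removes the left tail only
mean_near_bound = clipped_perturbation_mean(0.05, 0.1, 20000)
mean_mid_range = clipped_perturbation_mean(0.50, 0.1, 20000)

bias_near_bound = mean_near_bound - 0.05   # positive: ensemble is biased high
bias_mid_range = mean_mid_range - 0.50     # ~0: both tails survive
```

This asymmetric truncation is exactly the overall bias that the newer perturbation algorithm in the abstract is designed to minimize.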
The application of probability methods for safeguards purposes
International Nuclear Information System (INIS)
Rumyantsev, A.N.
1976-01-01
The authors consider possible ways of applying probability methods to solve problems involved in accounting for nuclear materials. The increase in the flow of nuclear materials subject to IAEA safeguards makes it necessary to increase the accuracy of determination of the actual quantities of nuclear materials at all stages of their processing and use. It is proposed that the IAEA's automated system of accounting for nuclear materials, based on accounting information for each material balance zone and the results of random experimental checks performed by IAEA inspectors, be supplemented with mathematical models of the flow of nuclear materials in each balance zone based on the data supplied for each facility in the balance zone when it was placed under safeguards. The statistical error in determining the material balance and the material unaccounted for can be considerably reduced in this way even if the experimental control methods are retained. (author)
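The balance-zone accounting sketched above leads to a material-unaccounted-for (MUF) statistic whose measurement variance propagates additively over the balance terms. A hedged sketch with invented figures:

```python
import math

# hypothetical balance-zone entries (kg of nuclear material, 1-sigma errors)
begin_inv, s_begin = 100.0, 0.4
receipts,  s_rec   = 250.0, 0.6
shipments, s_ship  = 240.0, 0.6
end_inv,   s_end   = 109.2, 0.4

# material unaccounted for, and its propagated measurement uncertainty
muf = (begin_inv + receipts) - (shipments + end_inv)
sigma_muf = math.sqrt(s_begin**2 + s_rec**2 + s_ship**2 + s_end**2)

# simple significance test: flag if |MUF| exceeds 3 sigma
flagged = abs(muf) > 3.0 * sigma_muf
```

Reducing σ_MUF (for example, via the model-based supplements the abstract proposes) directly tightens this detection threshold without changing the experimental controls.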
Neutron transport by collision probability method in complicated geometries
International Nuclear Information System (INIS)
Constantin, Marin
2000-01-01
For the first-flight collision probability (FFCP) method, the memory requirements and execution time increase rapidly with the number of discrete regions. Generally, the use of the method is restricted to the cell/supercell level. However, the amazing developments in both computer hardware and computer architecture allow a real extension of the problem domain and a more detailed treatment of the geometry. Two ways are discussed in the paper: the direct design of new codes and the improvement of old mainframe versions. The author's experience is focused on improving the performance of the 3D integral transport code PIJXYZ (from an old version to a modern one) and on the design and development of the 2D transport code CP2D in recent years. In the first case an optimization process was performed before the parallelization. In the second, a modular design and the newest techniques (factorization of the geometry, the macrobands method, the mobile set of chords, automatic calculation of the integration error, optimal algorithms for the innermost programming level, the mixed method for the tracking process and CP calculation, etc.) were adopted. In both cases the parallelization uses a network of PCs. Some short examples of CP2D and PIJXYZ calculations are presented: the reactivity void effect in typical CANDU cells using a multi-stratified coolant model, a problem of adjacent fuel assemblies, and 3D simulation of CANDU reactivity devices. (author)
Homogenized parameters of light water fuel elements computed by a perturbative (perturbation) method
International Nuclear Information System (INIS)
Koide, Maria da Conceicao Michiyo
2000-01-01
A new analytic formulation for material parameters homogenization of the two dimensional and two energy-groups diffusion model has been successfully used as a fast computational tool for recovering the detailed group fluxes in full reactor cores. The homogenization method which has been proposed does not require the solution of the diffusion problem by a numerical method. As it is generally recognized that currents at assembly boundaries must be computed accurately, a simple numerical procedure designed to improve the values of currents obtained by nodal calculations is also presented. (author)
An Introduction to Perturbative Methods in Gauge Theories
International Nuclear Information System (INIS)
T Muta
1998-01-01
This volume develops the techniques of perturbative QCD in great pedagogical detail starting with field theory. Aside from extensive treatments of the renormalization group technique, the operator product expansion formalism and their applications to short-distance reactions, this book provides a comprehensive introduction to gauge theories. Examples and exercises are provided to amplify the discussions on important topics. This is an ideal textbook on the subject of quantum chromodynamics and is essential for researchers and graduate students in high energy physics, nuclear physics and mathematical physics
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both are among the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
Directory of Open Access Journals (Sweden)
Mustafa Kemal BAHAR
2010-06-01
In this study, the effects of an applied electric field on an isolated square quantum well were investigated by analytic and perturbative methods. The energy eigenvalues and wave functions in the quantum well were found by the perturbative method. Later, the electric field effects were investigated by the analytic method, and the results of the perturbative and analytic methods were compared. Besides the two sets of results agreeing with each other, it was observed that an externally applied electric field significantly changes the electronic properties of the system.
Probability Density Function Method for Observing Reconstructed Attractor Structure
Institute of Scientific and Technical Information of China (English)
陆宏伟; 陈亚珠; 卫青
2004-01-01
The probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time the PDF method has been put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is large enough. A cluster effect mechanism is presented to explain this phenomenon. Study of the shapes of the PDFs clearly indicates that the time delay plays a more important role than the embedding dimension in the reconstruction. Results demonstrate that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential for the analyzed cardiac system.
International Nuclear Information System (INIS)
Dehghan, Mehdi; Shakeri, Fatemeh
2007-01-01
In this work, the solution of an inverse problem concerning a diffusion equation with source control parameters is presented. The homotopy perturbation method is employed to solve this equation. This method changes a difficult problem into a simple problem which can be easily solved. In this procedure, according to the homotopy technique, a homotopy with an embedding parameter p ∈ [0,1] is constructed, and this parameter is considered a 'small parameter', so the method is called the homotopy perturbation method, which can take full advantage of the traditional perturbation method and the homotopy technique. The approximations obtained by the proposed method are uniformly valid not only for small parameters, but also for very large parameters. The fact that this technique, in contrast to traditional perturbation methods, does not require a small parameter in the system leads to wide applications in nonlinear equations
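The flavor of the expansion can be shown on a textbook initial-value problem, u′ = u², u(0) = 1, with exact solution 1/(1−t): writing u = Σ pⁿuₙ and collecting powers of the embedding parameter p yields the recurrence coded below. This toy problem is ours, not the paper's inverse diffusion problem:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists in t."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) +
            (b[i] if i < len(b) else 0.0) for i in range(n)]

def integrate(a):
    """Antiderivative with zero constant term (u_n(0) = 0 for n >= 1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

def hpm_terms(n_terms):
    """Homotopy terms u_n for u' = u^2, u(0) = 1; u = sum of u_n at p = 1."""
    terms = [[1.0]]                     # u_0 = 1 carries the initial condition
    for n in range(n_terms - 1):
        rhs = [0.0]                     # coefficient of p^n in u^2
        for i in range(n + 1):
            rhs = poly_add(rhs, poly_mul(terms[i], terms[n - i]))
        terms.append(integrate(rhs))    # solve u_{n+1}' = rhs
    return terms

def eval_series(terms, t):
    return sum(sum(c * t ** k for k, c in enumerate(p)) for p in terms)

approx = eval_series(hpm_terms(8), 0.1)   # partial sum 1 + t + ... + t^7
exact = 1.0 / (1.0 - 0.1)
```

Here the terms come out as uₙ = tⁿ, so the homotopy series reproduces the geometric series of the exact solution, illustrating the p-power bookkeeping the abstract describes.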
Analytic methods in applied probability in memory of Fridrikh Karpelevich
Suhov, Yu M
2002-01-01
This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable
International Nuclear Information System (INIS)
Voskresenskaya, O.O.
2002-01-01
It is shown that the relations between the probabilities of A_2π atom creation in ns states, derived neglecting the strong interaction between pions, hold practically unchanged if the strong interaction is taken into account in the first order of perturbation theory. The formulation of the Deser equation for the energy-level shift of hadronic atoms (HA) is given in terms of the effective range of the strong interaction and the relative correction to the Coulomb wave function of the HA at the origin caused by the strong interaction. (author)
Commutator perturbation method in the study of vibrational-rotational spectra of diatomic molecules
International Nuclear Information System (INIS)
Matamala-Vasquez, A.; Karwowski, J.
2000-01-01
The commutator perturbation method, an algebraic version of the Van Vleck-Primas perturbation method, expressed in terms of ladder operators, has been applied to solving the eigenvalue problem of the Hamiltonian describing the vibrational-rotational motion of a diatomic molecule. The physical model used in this work is based on Dunham's approach. The method facilitates obtaining both energies and eigenvectors in an algebraic way
The perturbed angular correlation method - a modern technique in studying solids
International Nuclear Information System (INIS)
Unterricker, S.; Hunger, H.J.
1979-01-01
Starting from theoretical fundamentals, the differential perturbed angular correlation method is explained. Using the probe nucleus ¹¹¹Cd, the magnetic dipole interaction in FeₓAl₁₋ₓ alloys and the electric quadrupole interaction in Cd have been measured. The perturbed angular correlation method is a modern nuclear measuring method and can be applied in studying ordering processes, phase transformations and radiation damage in metals, semiconductors and insulators
Choice Probability Generating Functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
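For the multinomial logit ARUM the CPGF is the logsumexp function, and its gradient is the softmax choice probabilities, so the gradient property is easy to verify numerically. The utility values are arbitrary:

```python
import math

def cpgf(u):
    """Logit choice-probability generating function G(u) = log sum_j exp(u_j)."""
    m = max(u)                                    # stabilize the exponentials
    return m + math.log(sum(math.exp(x - m) for x in u))

def choice_probs(u, h=1e-6):
    """Gradient of the CPGF by central differences = choice probabilities."""
    probs = []
    for j in range(len(u)):
        up = list(u); up[j] += h
        dn = list(u); dn[j] -= h
        probs.append((cpgf(up) - cpgf(dn)) / (2 * h))
    return probs

u = [1.0, 0.0, -0.5]                  # systematic utilities of three alternatives
p = choice_probs(u)
softmax = [math.exp(x - cpgf(u)) for x in u]   # closed-form logit probabilities
```

The numerical gradient matches the closed-form logit probabilities and sums to one, which is the defining CPGF property the paper generalizes.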
International Nuclear Information System (INIS)
Killingbeck, J.
1979-01-01
By using the methods of perturbation theory it is possible to construct simple formulae for the numerical integration of the Schroedinger equation, and also to calculate expectation values solely by means of simple eigenvalue calculations. (Auth.)
A perturbation method for dark solitons based on a complete set of the squared Jost solutions
International Nuclear Information System (INIS)
Ao Shengmei; Yan Jiaren
2005-01-01
A perturbation method for dark solitons is developed, which is based on the construction and the rigorous proof of the complete set of squared Jost solutions. The general procedure for obtaining the adiabatic solution of the perturbed nonlinear Schroedinger⁺ equation, the time-evolution equations of the dark soliton parameters, and a formula for calculating the first-order correction are given. The method can also overcome the difficulties resulting from the non-vanishing boundary condition
International Nuclear Information System (INIS)
Santos, Adimir dos; Borges, A.A.
2000-01-01
A new method for the calculation of sensitivity coefficients is developed. The new method is a combination of two methodologies used for calculating these coefficients, which are the differential and the generalized perturbation theory methods. The proposed method utilizes as integral parameter the average flux in an arbitrary region of the system. Thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, φ(ξ), with respect to σ are calculated using the perturbation method and the functional derivatives of this generic integral parameter with respect to σ and φ are calculated using the differential method. The new method merges the advantages of the differential and generalized perturbation theory methods and eliminates their disadvantages. (author)
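The kind of sensitivity coefficient being constructed can be illustrated on a one-group toy model k = νΣf/Σa, where a first-order (perturbation-style) coefficient is checked against direct recalculation. The cross-section values are invented and the model is ours, not the paper's:

```python
nu_sigma_f, sigma_a = 0.0035, 0.0025     # illustrative macroscopic data (1/cm)

def k_inf(nsf, sa):
    """One-group infinite-medium multiplication factor."""
    return nsf / sa

k0 = k_inf(nu_sigma_f, sigma_a)

# first-order sensitivity of k to sigma_a:
# S = (sigma_a / k) * dk/dsigma_a = -1 for this model
S_analytic = (sigma_a / k0) * (-nu_sigma_f / sigma_a ** 2)

# direct method: recompute k with sigma_a perturbed by 0.1%
d = 0.001 * sigma_a
S_direct = ((k_inf(nu_sigma_f, sigma_a + d) - k0) / k0) / (d / sigma_a)
```

The two estimates agree to first order; combined differential/perturbation methods like the one in the abstract aim to get such coefficients without one full recalculation per parameter.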
International Nuclear Information System (INIS)
Borges, Antonio Andrade
1998-01-01
A new method for the calculation of sensitivity coefficients is developed. The new method is a combination of two methodologies used for calculating these coefficients, which are the differential and the generalized perturbation theory methods. The method utilizes as integral parameter the average flux in an arbitrary region of the system. Thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, Φ, with respect to σ are calculated using the perturbation method and the functional derivatives of this generic integral parameter with respect to σ and Φ are calculated using the differential method. (author)
Perturbative methods applied for sensitive coefficients calculations in thermal-hydraulic systems
International Nuclear Information System (INIS)
Andrade Lima, F.R. de
1993-01-01
The differential formalism and the Generalized Perturbation Theory (GPT) are applied to sensitivity analysis of thermal-hydraulics problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactors cores, used in COBRA-IV-I code, are conveniently written. The importance function related to the response of interest and the sensitivity coefficient of this response with respect to various selected parameters are obtained by using Differential and Generalized Perturbation Theory. The comparison among the results obtained with the application of these perturbative methods and those obtained directly with the model developed in COBRA-IV-I code shows a very good agreement. (author)
International Nuclear Information System (INIS)
Lima, Fernando R.A.; Lira, Carlos A.B.O.; Gandini, Augusto
1995-01-01
During the last two decades perturbative methods have become an efficient tool for performing sensitivity analysis in nuclear reactor safety problems. In this paper, a comparative study taking into account perturbation formalisms (Differential and Matricial Methods and Generalized Perturbation Theory - GPT) is considered. A few applications are then described to analyze the sensitivity of some functions relevant to thermal-hydraulic designs or safety analysis of nuclear reactor cores and steam generators. The behaviours of the nuclear reactor cores and steam generators are simulated, respectively, by the COBRA-IV-I and GEVAP codes. Results of sensitivity calculations have shown good agreement when compared to those obtained directly by using the mentioned codes. Thus, a significant saving in computational time can be obtained when performing sensitivity analysis in nuclear power plants with perturbative methods. (author). 25 refs., 5 tabs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Noor
2008-01-01
We suggest and analyze a technique combining the variational iteration method and the homotopy perturbation method. This method is called the variational homotopy perturbation method (VHPM). We use this method for solving higher dimensional initial boundary value problems with variable coefficients. The developed algorithm is quite efficient and is practically well suited for use in these problems. The proposed scheme finds the solution without any discretization, transformation, or restrictive assumptions, and avoids round-off errors. Several examples are given to check the reliability and efficiency of the proposed technique.
Extended Krenciglowa-Kuo method and perturbation expansion of Q-box
International Nuclear Information System (INIS)
Shimizu, Genki; Otsuka, Takaharu; Takayanagi, Kazuo
2015-01-01
The Extended Krenciglowa-Kuo (EKK) method is a microscopic method to construct the energy-independent effective Hamiltonian H_eff; provided with an exact Q-box of the system, we can show which eigenstates are described by the H_eff given by the EKK method. In actual calculations, however, we can calculate the Q-box only up to a finite order in perturbation theory. In this work, we examine the EKK method with the approximate Q-box, and show that the perturbative calculation of the Q-box does not harm the convergence properties of the EKK iterative method. (author)
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
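The logistic form underlying these model equations can be sketched in a few lines. The coefficients below are hypothetical placeholders for illustration only, not values from the report's 46,704 fitted equations.

```python
import math

def drought_probability(winter_flow, b0=2.5, b1=-0.004):
    """Probability that summer flow falls below a drought threshold,
    from a fitted logistic model p = 1 / (1 + exp(-(b0 + b1*x))).
    b0 and b1 here are hypothetical, not from the report's tables."""
    z = b0 + b1 * winter_flow
    return 1.0 / (1.0 + math.exp(-z))

# With b1 < 0, a wetter winter implies a lower summer drought probability.
p_low_flow = drought_probability(200.0)
p_high_flow = drought_probability(2000.0)
```

In the report's usage, each basin and month pair has its own fitted pair of coefficients; this sketch only shows how a single equation converts a winter streamflow into a summer drought probability.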
Deriving average soliton equations with a perturbative method
International Nuclear Information System (INIS)
Ballantyne, G.J.; Gough, P.T.; Taylor, D.P.
1995-01-01
The method of multiple scales is applied to periodically amplified, lossy media described by either the nonlinear Schroedinger (NLS) equation or the Korteweg-de Vries (KdV) equation. An existing result for the NLS equation, derived in the context of nonlinear optical communications, is confirmed. The method is then applied to the KdV equation and the result is confirmed numerically.
Utilization of the perturbation method for determination of the buckling of heterogeneous reactors
International Nuclear Information System (INIS)
Gheorghe, R.
1975-01-01
Evaluation of material buckling for heterogeneous nuclear reactors is a key problem for reactor physicists. Several methods have been elaborated in this direction: the bi-group method, the heterogeneous method, and perturbation methods. Of these, the most widely employed is the perturbation method, which is presented in this paper and applied to parameter calculations for a new cell type in which the fuel is positioned in the marginal area and the moderator in the centre. It is based on the technique of progressive substitution. Advantages of the method: the buckling comes out clearly, and large defects due to differences between the perturbed fluxes Φ and the unperturbed flux Φ_0 can be corrected by an iterative procedure; using a modified bi-group theory, the effects of other parameters can also be clearly described.
Perturbation methods and the Melnikov functions for slowly varying oscillators
International Nuclear Information System (INIS)
Lakrad, Faouzi; Charafi, Moulay Mustapha
2005-01-01
A new approach to obtaining the Melnikov function for homoclinic orbits in slowly varying oscillators is proposed. The present method applies the Lindstedt-Poincare method to determine an approximation of homoclinic solutions. It is shown that the resultant Melnikov condition is the same as that obtained in the usual way involving distance functions in three dimensions by Wiggins and Holmes [Homoclinic orbits in slowly varying oscillators. SIAM J Math Anal 1987;18(3):612].
Enhanced Multistage Homotopy Perturbation Method: Approximate Solutions of Nonlinear Dynamic Systems
Directory of Open Access Journals (Sweden)
Daniel Olvera
2014-01-01
Full Text Available We introduce a new approach called the enhanced multistage homotopy perturbation method (EMHPM), which is based on the homotopy perturbation method (HPM) and the use of time subintervals to find the approximate solution of differential equations with strong nonlinearities. We also study the convergence of our proposed EMHPM approach based on the value of the control parameter h by following the homotopy analysis method (HAM). At the end of the paper, we compare the derived EMHPM approximate solutions of some nonlinear physical systems with their corresponding numerical integration solutions obtained by using the classical fourth-order Runge-Kutta method via the amplitude-time response curves.
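The classical fourth-order Runge-Kutta scheme used as the benchmark above can be sketched as follows. The Duffing-type oscillator here is an illustrative stand-in, not necessarily one of the paper's test systems.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def duffing(t, y):
    """Undamped Duffing-type oscillator x'' + x + x**3 = 0 as a system."""
    x, v = y
    return [v, -x - x ** 3]

# Integrate from x(0) = 1, v(0) = 0 to t = 10 with a small fixed step.
y, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(1000):
    y = rk4_step(duffing, t, y, h)
    t += h
```

For this conservative system the energy E = v²/2 + x²/2 + x⁴/4 should stay close to its initial value 0.75, which gives a quick sanity check on the integrator.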
International Nuclear Information System (INIS)
Belendez, A.; Hernandez, A.; Belendez, T.; Neipp, C.; Marquez, A.
2008-01-01
He's homotopy perturbation method is used to calculate higher-order approximate periodic solutions of a nonlinear oscillator with discontinuity for which the elastic force term is proportional to sgn(x). We find that He's homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones is demonstrated and discussed. Only one iteration leads to high accuracy of the solutions, with a maximal relative error for the approximate period of less than 1.56% for all values of oscillation amplitude, while this relative error is 0.30% for the second iteration and as low as 0.057% when the third-order approximation is considered. Comparison of the results obtained using this method with those obtained by different harmonic balance methods reveals that He's homotopy perturbation method is very effective and convenient.
Analysis of Diffusion Problems using Homotopy Perturbation and Variational Iteration Methods
DEFF Research Database (Denmark)
Barari, Amin; Poor, A. Tahmasebi; Jorjani, A.
2010-01-01
In this paper, variational iteration method and homotopy perturbation method are applied to different forms of diffusion equation. The diffusion equations have found wide applications in heat transfer problems, theory of consolidation and many other problems in engineering. The methods proposed...
International Nuclear Information System (INIS)
Biazar, J.; Eslami, M.; Aminikhah, H.
2009-01-01
In this article, He's homotopy perturbation method is applied to solve systems of Volterra integral equations of the first kind. Some non-linear examples are prepared to illustrate the efficiency and simplicity of the method. Applying the method to linear systems is so easy that no example is needed.
International Nuclear Information System (INIS)
Biazar, J.; Ghazvini, H.
2009-01-01
In this paper, He's homotopy perturbation method is applied to solve systems of Volterra integral equations of the second kind. Some examples are presented to illustrate the ability of the method for both linear and non-linear systems. The results reveal that the method is very effective and simple.
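For a linear Volterra equation of the second kind, the HPM series reduces to the classical successive-approximation (Neumann) series. A toy numerical version of that iteration, not the authors' code, might look like this:

```python
import math

def solve_volterra(f, K, x_grid, iterations=20):
    """Successive approximations u_{n+1}(x) = f(x) + int_0^x K(x,t) u_n(t) dt
    for a Volterra equation of the second kind, with the integral
    evaluated by the composite trapezoidal rule on a uniform grid."""
    h = x_grid[1] - x_grid[0]
    u = [f(x) for x in x_grid]          # zeroth approximation u_0 = f
    for _ in range(iterations):
        new = []
        for i, x in enumerate(x_grid):
            s = 0.0
            for j in range(i):          # trapezoid over [x_grid[0], x_grid[i]]
                s += 0.5 * h * (K(x, x_grid[j]) * u[j]
                                + K(x, x_grid[j + 1]) * u[j + 1])
            new.append(f(x) + s)
        u = new
    return u

# u(x) = 1 + int_0^x u(t) dt has the exact solution u(x) = exp(x).
xs = [i * 0.01 for i in range(101)]
u = solve_volterra(lambda x: 1.0, lambda x, t: 1.0, xs)
```

The kernel K(x, t) = 1 and forcing f(x) = 1 make the exact answer e^x available for checking; the iteration converges because Volterra operators are contracting on finite intervals.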
Application of homotopy-perturbation method to nonlinear population dynamics models
International Nuclear Information System (INIS)
Chowdhury, M.S.H.; Hashim, I.; Abdulaziz, O.
2007-01-01
In this Letter, the homotopy-perturbation method (HPM) is employed to derive approximate series solutions of nonlinear population dynamics models. The nonlinear models considered are the multispecies Lotka-Volterra equations. The accuracy of this method is examined by comparison with the available exact and the fourth-order Runge-Kutta method (RK4)
Atomic and magnetic configurational energetics by the generalized perturbation method
DEFF Research Database (Denmark)
Ruban, Andrei V.; Shallcross, Sam; Simak, S.I.
2004-01-01
in the framework of the Korringa-Kohn-Rostoker method within the atomic sphere and coherent potential approximations. This is demonstrated with calculations of ordering energies, short-range order parameters, and transition temperatures in the CuZn, CuAu, CuPd, and PtCo systems. Furthermore, we show that the GPM...
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.
2013-01-01
Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is focused on in the present paper in virtue of probability density evolution method (PDEM......), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-mega-watt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels is investigated through the scheme of extreme value...... distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, the comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...
International Nuclear Information System (INIS)
Kwok, K.S.; Bernard, J.A.; Lanning, D.D.
1992-01-01
The perturbed reactivity method is a general technique for the estimation of reactivity. It is particularly suited to the determination of a reactor's initial degree of subcriticality and was developed to facilitate the automated startup of both spacecraft and multi-modular reactors using model-based control laws. It entails perturbing a shutdown reactor by the insertion of reactivity at a known rate and then estimating the initial degree of subcriticality from observation of the resulting reactor period. While similar to inverse kinetics, the perturbed reactivity method differs in that the net reactivity present in the core is treated as two separate entities. The first is that associated with the known perturbation; this quantity, together with the observed period and the reactor's describing parameters, are the inputs to the method's implementing algorithm. The second entity, which is the algorithm's output, is the sum of all other reactivities, including those resulting from inherent feedback and the initial degree of subcriticality. During an automated startup, feedback effects will be minimal. Hence, when applied to a shutdown reactor, the output of the perturbed reactivity method will be a constant that is equal to the initial degree of subcriticality. This is a major advantage because repeated estimates can be made of this one quantity and signal smoothing techniques can be applied to enhance accuracy. In addition to describing the theoretical basis for the perturbed reactivity method, factors involved in its implementation, such as the movement of control devices other than those used to create the perturbation, source estimation, and techniques for data smoothing, are presented.
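The period-to-reactivity step at the heart of such estimators can be illustrated with the inhour relation. The one-delayed-group constants below are illustrative round numbers, not the describing parameters of any particular reactor, and this is a sketch of the general relation, not the paper's algorithm.

```python
def reactivity_from_period(T, beta=0.0065, lam=0.08, Lambda=1e-4):
    """One-delayed-group inhour relation: net reactivity (dk/k) implied
    by an observed stable reactor period T in seconds.
    beta  : delayed-neutron fraction (illustrative)
    lam   : effective precursor decay constant, 1/s (illustrative)
    Lambda: prompt neutron generation time, s (illustrative)"""
    return Lambda / T + beta / (1.0 + lam * T)

rho_fast = reactivity_from_period(10.0)    # short period -> larger reactivity
rho_slow = reactivity_from_period(1000.0)  # long period  -> smaller reactivity
```

In the perturbed reactivity method the quantity of interest is the difference between this net reactivity and the known inserted ramp; for a shutdown core with negligible feedback that difference is a constant, the initial degree of subcriticality, which can then be smoothed over repeated estimates.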
Application of a Perturbation Method for Realistic Dynamic Simulation of Industrial Robots
International Nuclear Information System (INIS)
Waiboer, R. R.; Aarts, R. G. K. M.; Jonker, J. B.
2005-01-01
This paper presents the application of a perturbation method for the closed-loop dynamic simulation of a rigid-link manipulator with joint friction. In this method the perturbed motion of the manipulator is modelled as a first-order perturbation of the nominal manipulator motion. A non-linear finite element method is used to formulate the dynamic equations of the manipulator mechanism. In a closed-loop simulation the driving torques are generated by the control system. Friction torques at the actuator joints are introduced at the stage of perturbed dynamics. For a mathematical model of the friction torques we implemented the LuGre friction model, which accounts for both the sliding and pre-sliding regimes. To illustrate the method, the motion of a six-axis industrial Staeubli robot is simulated. The manipulation task implies transferring a laser spot along a straight line with a trapezoidal velocity profile. The computed trajectory tracking errors are compared with measured values, where in both cases the tip position is computed from the joint angles using a nominal kinematic robot model. It is found that a closed-loop simulation using a non-linear finite element model of this robot is very time-consuming due to the small time step of the discrete controller. Using the perturbation method with the linearised model, a substantial reduction of the computer time is achieved without loss of accuracy.
Lattice field theories: non-perturbative methods of analysis
International Nuclear Information System (INIS)
Weinstein, M.
1978-01-01
A lecture is given on the possible extraction of interesting physical information from quantum field theories by studying their semiclassical versions. From the beginning, the problem of solving for the spectrum of states of any given continuum quantum field theory is considered as a giant Schroedinger problem, and then some nonperturbative methods for diagonalizing the Hamiltonian of the theory are explained without recourse to semiclassical approximations. The notion of a lattice appears as an artifice to handle the problems associated with the familiar infrared and ultraviolet divergences of continuum quantum field theory, and in fact for all but gauge theories. 18 references
The comparison of MCNP perturbation technique with MCNP difference method in critical calculation
International Nuclear Information System (INIS)
Liu Bin; Lv Xuefeng; Zhao Wei; Wang Kai; Tu Jing; Ouyang Xiaoping
2010-01-01
For a nuclear fission system, we calculated the Δk_eff arising from changes in system material composition by two different approaches, the MCNP perturbation technique and the MCNP difference method. For each material composition change, we made four different runs, each with a different number of cycles or a different number of neutrons generated per cycle, and then compared the two Δk_eff values obtained by the two approaches. When a material composition change in any particular cell of the fission system is small compared to the material composition of the whole system, in other words, when the change can be treated as a small perturbation, the Δk_eff results obtained from the MCNP perturbation technique are much quicker, more efficient, and more reliable than those from the MCNP difference method. When the change is significant compared to the composition of the whole system, both the MCNP perturbation technique and the MCNP difference method give satisfactory results; however, for runs with the same number of cycles and neutrons per cycle, the results from the MCNP perturbation technique are systematically smaller than those from the MCNP difference method. To further confirm our results from MCNP4C, we ran the exact same MCNP4C input file in MCNP5; the MCNP5 results were identical to those from MCNP4C. Caution is needed when using the MCNP perturbation technique to calculate Δk_eff for material composition changes that are large relative to the whole fission system, even when the composition changes of any particular cell still meet the criteria of the MCNP perturbation technique.
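The variance advantage of a correlated, perturbation-style estimate over differencing two independent runs can be seen in a toy model. The response g below is an arbitrary stand-in function of a parameter p, not a criticality calculation, and the sample counts are illustrative.

```python
import math, random

def g(x, p):
    return math.exp(-p * x)  # toy "response" depending on a parameter p

def delta_independent(n, p, dp):
    """Difference of two independent Monte Carlo runs (analogue of the
    difference method): the statistical noise of both runs survives."""
    r1, r2 = random.Random(1), random.Random(2)
    a = sum(g(r1.random(), p + dp) for _ in range(n)) / n
    b = sum(g(r2.random(), p) for _ in range(n)) / n
    return a - b

def delta_correlated(n, p, dp):
    """Same random histories scored at both parameter values (analogue
    of a perturbation estimator): the noise largely cancels."""
    r = random.Random(1)
    return sum(g(x, p + dp) - g(x, p)
               for x in (r.random() for _ in range(n))) / n

# Exact difference of the two means for x ~ Uniform(0, 1).
exact = (1 - math.exp(-1.01)) / 1.01 - (1 - math.exp(-1.0))  # ~ -2.63e-3
d_corr = delta_correlated(10000, 1.0, 0.01)
d_ind = delta_independent(10000, 1.0, 0.01)
```

Because the small difference rides on two large noisy means, the independent estimate needs far more histories to resolve it, which mirrors why a correlated perturbation estimate of a small Δk_eff is more efficient than subtracting two separate runs.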
Born approximation to a perturbative numerical method for the solution of the Schroedinger equation
International Nuclear Information System (INIS)
Adam, Gh.
1978-01-01
A step-function perturbative numerical method (SF-PN method) is developed for the solution of the Cauchy problem for the second-order linear differential equation in normal form. An important point stressed in the present paper, which seems to have been previously ignored in the literature devoted to PN methods, is the close connection between the first-order perturbation theory of the PN approach and the well-known Born approximation, and, in general, the connection between the various orders of the PN corrections and the Neumann series. (author)
The multistage homotopy-perturbation method: A powerful scheme for handling the Lorenz system
International Nuclear Information System (INIS)
Chowdhury, M.S.H.; Hashim, I.; Momani, S.
2009-01-01
In this paper, a new reliable algorithm based on an adaptation of the standard homotopy-perturbation method (HPM) is presented. The HPM is treated as an algorithm in a sequence of intervals (i.e. time step) for finding accurate approximate solutions to the famous Lorenz system. Numerical comparisons between the multistage homotopy-perturbation method (MHPM) and the classical fourth-order Runge-Kutta (RK4) method reveal that the new technique is a promising tool for the nonlinear systems of ODEs.
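The multistage idea, restarting a truncated series on short subintervals instead of trusting it globally, can be shown on a scalar toy problem whose series solution is explicit. This is a sketch of the principle only, not the Lorenz computation of the paper.

```python
import math

def series_step(y0, h, terms=4):
    """Truncated series solution of y' = -y over one subinterval:
    y(h) = y0 * sum_{k=0}^{terms} (-h)^k / k!.  In the MHPM the
    homotopy-perturbation series plays this role for each interval."""
    return y0 * sum((-h) ** k / math.factorial(k) for k in range(terms + 1))

def multistage(y0, t_end, n_steps, terms=4):
    """Apply the truncated series on n_steps subintervals, restarting
    from the endpoint of each interval (the 'multistage' adaptation)."""
    y, h = y0, t_end / n_steps
    for _ in range(n_steps):
        y = series_step(y, h, terms)
    return y

one_shot = series_step(1.0, 5.0, terms=4)    # series applied on [0, 5] at once
staged = multistage(1.0, 5.0, 50, terms=4)   # same series on 50 subintervals
exact = math.exp(-5.0)
```

The single-interval series is wildly wrong at t = 5, while the restarted version tracks the exact solution closely, which is exactly the behavior the MHPM exploits for the chaotic Lorenz system.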
Born approximation to a perturbative numerical method for the solution of the Schrodinger equation
International Nuclear Information System (INIS)
Adam, Gh.
1978-05-01
A perturbative numerical (PN) method is given for the solution of a regular one-dimensional Cauchy problem arising from the Schroedinger equation. The present method uses a step-function approximation for the potential. Global, free of scaling difficulty, forward and backward PN algorithms are derived within first-order perturbation theory (Born approximation). A rigorous analysis of the local truncation errors is performed. This shows that the order of accuracy of the method is equal to four. In between the mesh points, the global formula for the wavefunction is accurate within O(h^4), while that for the first-order derivative is accurate within O(h^3). (author)
New numerical method for iterative or perturbative solution of quantum field theory
International Nuclear Information System (INIS)
Hahn, S.C.; Guralnik, G.S.
1999-01-01
A new computational idea for continuum quantum field theories is outlined. This approach is based on the lattice source Galerkin methods developed by Garcia, Guralnik and Lawson. The method has many promising features, including treating fermions on a relatively symmetric footing with bosons. As a spin-off of the technology developed for 'exact' solutions, the numerical methods used have a special-case application to perturbation theory. We are in the process of developing an entirely numerical approach to evaluating graphs to high perturbative order. (authors)
Critical review of the probability of causation method
International Nuclear Information System (INIS)
Cox, L.A. Jr.; Fiksel, J.R.
1985-01-01
In a more controversial report than the others in the study, the authors use one scientific discipline to review the work of another discipline. Their proposal recognizes the imprecision that develops in moving from group to individual interpretations of causal effects by substituting the term assigned share for probability of causation. The authors conclude that the use of a formula will not provide reliable measures of risk attribution in individual cases. The gap between scientific certainty and assigning shares of responsibility must be filled by subjective value judgments supplied by the scientists. 22 references, 2 figures, 4 tables
Preparation of Stable Amyloid-β Oligomers Without Perturbative Methods.
Kotler, Samuel A; Ramamoorthy, Ayyalusamy
2018-01-01
Soluble amyloid-β (Aβ) oligomers have become a focal point in the study of Alzheimer's disease due to their ability to elicit cytotoxicity. A number of recent studies have concentrated on the structural characterization of soluble Aβ oligomers to gain insight into their mechanism of toxicity. Consequently, providing reproducible protocols for the preparation of such oligomers is of utmost importance. The method presented in this chapter details a protocol for preparing an Aβ oligomer, with a primarily disordered secondary structure, without the need for chemical modification or amino acid substitution. Due to the stability of these disordered Aβ oligomers and the reproducibility with which they form, they are amenable for biophysical and high-resolution structural characterization.
Directory of Open Access Journals (Sweden)
R. Darzi; A. Neamaty
2010-01-01
Full Text Available We applied the variational iteration method and the homotopy perturbation method to solve Sturm-Liouville eigenvalue and boundary value problems. The main advantage of these methods is the flexibility to give approximate and exact solutions to both linear and nonlinear problems without linearization or discretization. The results show that both methods are simple and effective.
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2013-01-01
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice...... probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...
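In the simplest special case, the multinomial logit, the CPGF is G(u) = Σ_i exp(u_i) and the gradient of log G gives the familiar softmax choice probabilities. A minimal sketch of that special case follows; the paper's cross-nested constructions generalize this G, and the function below is not from the paper.

```python
import math

def mnl_choice_probabilities(utilities):
    """Multinomial-logit special case of a CPGF: with G(u) = sum_i exp(u_i),
    the choice probabilities are the gradient of log G, i.e. the softmax
    P_i = exp(u_i) / sum_j exp(u_j)."""
    m = max(utilities)                         # subtract the max for stability
    expu = [math.exp(u - m) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

p = mnl_choice_probabilities([1.0, 2.0, 0.5])  # alternative 2 is most attractive
```

The result quoted in the abstract, that any ARUM can be approximated by a cross-nested logit, amounts to replacing this additive G with a nested sum of powered terms while keeping the same gradient-of-log-G recipe for the probabilities.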
DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.
2008-06-01
For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E. 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ɛ²), where ɛ is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ɛ).
Brémaud, Pierre
2017-01-01
The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. Although the examples treated in the book relate to the possible applications, in the communication and computing sciences, in operations research and in physics, this book is in the first instance concerned with theory. The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.
The method of rigged spaces in singular perturbation theory of self-adjoint operators
Koshmanenko, Volodymyr; Koshmanenko, Nataliia
2016-01-01
This monograph presents the newly developed method of rigged Hilbert spaces as a modern approach in singular perturbation theory. A key notion of this approach is the Lax-Berezansky triple of Hilbert spaces embedded one into another, which specifies the well-known Gelfand topological triple. All kinds of singular interactions described by potentials supported on small sets (like the Dirac δ-potentials, fractals, singular measures, high degree super-singular expressions) admit a rigorous treatment only in terms of the equipped spaces and their scales. The main idea of the method is to use singular perturbations to change inner products in the starting rigged space, and the construction of the perturbed operator by the Berezansky canonical isomorphism (which connects the positive and negative spaces from a new rigged triplet). The approach combines three powerful tools of functional analysis based on the Birman-Krein-Vishik theory of self-adjoint extensions of symmetric operators, the theory of singular quadra...
Yield strength measurement of shock-loaded metal by flyer-impact perturbation method
Ma, Xiaojuan; Shi, Zhan
2018-06-01
Yield strength is one of the most important physical properties of a solid material, especially far from its melting line. The flyer-impact perturbation method measures material yield strength on the basis of the correlation between the yield strength under shock compression and the damping of oscillatory perturbations in the shape of a shock front passing through the material. We used flyer-impact experiments on targets with machined grooves on the impact surface to shock 6061-T6 aluminum to between 32 and 61 GPa, and recorded the evolution of the shock-front perturbation amplitude in the sample with electric pins. Simulations using the elastic-plastic model can be matched to the experiments, explaining well the form of the perturbation decay and constraining the yield strength of 6061-T6 aluminum to 1.31-1.75 GPa. These results are in agreement with values obtained from reshock and release wave profiles. We conclude that the flyer-impact perturbation method is indeed a new means to measure material strength.
Application of the perturbation iteration method to boundary layer type problems.
Pakdemirli, Mehmet
2016-01-01
The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
Non-standard perturbative methods for the effective potential in λφ⁴ QFT
International Nuclear Information System (INIS)
Okopinska, A.
1986-07-01
The effective potential in scalar QFT is calculated with the non-standard perturbative methods and compared with the conventional loop expansion. In spacetime dimensions 0 and 1 the results are compared with the ''exact'' effective potential obtained numerically. In 4 dimensions we show that λφ⁴ theory is non-interacting. (author)
A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map
Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng
2017-06-01
The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map which is realized on the finite precision device (e.g. computer) will suffer from dynamical degradation, which refers to short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both state variables and system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image can achieve high security, which is competitive to the recently proposed image encryption algorithms.
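The state-perturbation half of such a scheme can be sketched as follows. This is an illustration of the idea, not the paper's exact double-perturbation algorithm (which also perturbs the system parameters), and the grid size, seeds, and mixing rule are assumptions.

```python
def baker_digital(x, y, N):
    """One iteration of a fixed-precision Baker map on an N x N integer
    grid, the digital analogue of B(x, y) = (2x mod 1, (y + [2x]) / 2)."""
    half = N // 2
    if x < half:
        return 2 * x, y // 2
    return 2 * x - N, y // 2 + half

def logistic_digital(s, N, mu=3.99):
    """Digitized logistic map supplying the perturbing sequence."""
    u = s / N
    return int(mu * u * (1.0 - u) * N) % N

def perturbed_orbit(x, y, s, N, steps):
    """Mix the logistic output into the Baker state each step to break
    up the short cycles caused by finite precision."""
    orbit = []
    for _ in range(steps):
        x, y = baker_digital(x, y, N)
        s = logistic_digital(s, N)
        x, y = (x ^ s) % N, (y ^ (s >> 1)) % N   # state perturbation
        orbit.append((x, y))
    return orbit

orbit = perturbed_orbit(3, 7, 11, N=256, steps=1000)
```

Without the XOR mixing, the digitized map quickly falls onto a short cycle determined by the precision; the external chaotic sequence keeps kicking the state off those cycles, which is the degradation-reduction effect the paper quantifies statistically.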
Perturbation method for calculation of narrow-band impedance and trapped modes
International Nuclear Information System (INIS)
Heifets, S.A.
1987-01-01
An iterative method for calculation of the narrow-band impedance is described for a system with a small variation in boundary conditions, such that the variation can be considered a perturbation. The results are compared with numerical calculations. The method is used to relate the origin of the trapped modes to the degeneracy of the spectrum of the unperturbed system. The method can also be applied to transverse impedance calculations. 6 refs., 6 figs., 1 tab
Directory of Open Access Journals (Sweden)
Abdoul R. Ghotbi
2008-01-01
Full Text Available Due to the wide range of interest in the use of bioeconomic models to gain insight into the scientific management of renewable resources like fisheries and forestry, the homotopy perturbation method is employed to approximate the solution of the ratio-dependent predator-prey system with constant-effort prey harvesting. The results are compared with those obtained by the Adomian decomposition method and show that the new method requires fewer computations.
Regularization and computational methods for precise solution of perturbed orbit transfer problems
Woollands, Robyn Michele
The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these
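The Picard fixed-point structure that these solvers iterate can be shown with a plain trapezoid grid standing in for the Chebyshev machinery. This is a toy sketch of the iteration, not the dissertation's modified Chebyshev-Picard algorithms.

```python
import math

def picard(f, y0, t_grid, sweeps=30):
    """Picard iteration y_{k+1}(t) = y0 + int_0^t f(s, y_k(s)) ds on a
    uniform grid.  The modified Chebyshev-Picard method replaces this
    trapezoidal cumulative quadrature with Chebyshev-polynomial
    quadrature, which is what gives it its large domain of convergence."""
    h = t_grid[1] - t_grid[0]
    y = [y0 for _ in t_grid]
    for _ in range(sweeps):
        new, acc = [y0], 0.0
        for i in range(1, len(t_grid)):
            acc += 0.5 * h * (f(t_grid[i - 1], y[i - 1]) + f(t_grid[i], y[i]))
            new.append(y0 + acc)
        y = new
    return y

ts = [i * 0.01 for i in range(101)]
sol = picard(lambda t, y: y, 1.0, ts)   # y' = y, y(0) = 1, so y(1) ~ e
```

Note that no Jacobian or state transition matrix appears anywhere in the sweep, which is the feature the dissertation exploits to avoid Newton-type shooting.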
Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method
Energy Technology Data Exchange (ETDEWEB)
Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)
2017-05-15
The kernel density was determined based on sampling points obtained in a Markov chain simulation and was taken as the importance sampling function. A Kriging metamodel was constructed with greater refinement in the vicinity of the limit state. The failure probability was then calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find the parameter of the kernel density. To assess the adequacy of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was quantified.
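A minimal sketch of the final importance-sampling step, with a one-dimensional standard normal variable and a known limit state standing in for the Kriging metamodel, and a normal density centred at the limit state standing in for the fitted kernel density (all assumptions for illustration):

```python
import math, random

def failure_prob_is(b, n=20000, seed=1):
    """Importance-sampling estimate of P[X > b] for X ~ N(0,1), sampling
    from N(b, 1) centred on the limit state and reweighting by the
    density ratio phi(x) / q(x) = exp(b^2/2 - b*x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(b, 1.0)                    # sample near the limit state
        if x > b:                                # indicator of failure
            total += math.exp(0.5 * b * b - b * x)   # importance weight
    return total / n

pf = failure_prob_is(3.0)
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))    # Phi(-3) ~ 1.35e-3
```

Centring the sampling density on the limit state concentrates samples in the failure region, which is exactly what the kernel density built from Markov chain points achieves in the paper.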
Risk prediction, safety analysis and quantitative probability methods - a caveat
International Nuclear Information System (INIS)
Critchley, O.H.
1976-01-01
Views are expressed on the use of quantitative techniques for the determination of value judgements in nuclear safety assessments, hazard evaluation, and risk prediction. Caution is urged when attempts are made to quantify value judgements in the field of nuclear safety. Criteria are given for the meaningful application of reliability methods, but doubts are expressed about their application to safety analysis, risk prediction and design guidance for experimental or prototype plant. Doubts are also expressed about some concomitant methods of population dose evaluation. The complexities of new designs of nuclear power plants make the problem of safety assessment more difficult, but some possible approaches are suggested as alternatives to the quantitative techniques criticized. (U.K.)
Improved Monte Carlo-perturbation method for estimation of control rod worths in a research reactor
International Nuclear Information System (INIS)
Kalcheva, Silva; Koonen, Edgar
2009-01-01
A hybrid method dedicated to improving the experimental technique for estimation of control rod worths in a research reactor is presented. The method uses a combination of Monte Carlo technique and perturbation theory. The perturbation method is used to obtain the equation for the relative efficiency of control rod insertion. A series of coefficients describing the axial absorption profile is used to correct the equation for a composite rod having a complicated burn-up irradiation history. These coefficients have to be determined, either by experiment or by some theoretical/numerical method. In the present paper they are derived from the macroscopic absorption cross-sections obtained from detailed Monte Carlo calculations by MCNPX 2.6.F of the axial burn-up profile during control rod life. The method is validated on measurements of control rod worths at the BR2 reactor. Comparison with direct MCNPX evaluations of control rod worths is also presented.
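The rod worth extracted from two such calculations follows the standard static reactivity difference formula; a minimal sketch with hypothetical keff values for the rod withdrawn and inserted:

```python
def rod_worth_pcm(k_out, k_in):
    """Static reactivity worth of a control rod from two multiplication
    factors: rho = (k_out - k_in) / (k_out * k_in), expressed in pcm (1e-5)."""
    return (k_out - k_in) / (k_out * k_in) * 1e5

# hypothetical keff values: rod fully withdrawn vs. fully inserted
worth = rod_worth_pcm(1.00250, 0.98100)
```

The same formula applies whether the two keff values come from measurement or from direct Monte Carlo runs, which is what makes the comparison in the paper possible.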
Energy Technology Data Exchange (ETDEWEB)
Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)
1974-12-15
The probability method chosen for analysing reactor system reliability is considered realistic since it is based on verified experimental data; it is, in essence, a statistical method. The probability method developed takes into account the probability distributions of the permitted levels of relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general and was applied to the thermal safety analysis of a reactor system. This analysis makes it possible to examine the basic properties of the system under different operating conditions, expressed in the form of probabilities; these show the reliability of the system as a whole as well as the reliability of each component.
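For independent components in series, the reliability of the whole system follows from the component probabilities by multiplication; a minimal sketch with hypothetical values (a stand-in for the parameter-level distributions of the paper):

```python
def series_reliability(component_reliabilities):
    """Reliability of a series system: every component must stay within its
    permitted parameter levels, so (assuming independence) the component
    probabilities multiply."""
    r = 1.0
    for p in component_reliabilities:
        r *= p
    return r

# hypothetical per-component probabilities of staying within permitted levels
r_sys = series_reliability([0.999, 0.995, 0.990])
```

The system reliability is always below the weakest component's, which is why a per-component breakdown of the kind described above is useful.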
Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs
Hadjimichael, Yiannis
2017-09-30
A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.
2012-01-01
PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator
International Nuclear Information System (INIS)
Gurjao, Emir Candeia
1996-02-01
The differential and GPT (Generalized Perturbation Theory) formalisms of perturbation theory were applied in this work to a simplified U-tubes steam generator model to perform sensitivity analysis. The adjoint and importance equations, with the corresponding expressions for the sensitivity coefficients, were derived for this steam generator model. The system was numerically solved in a Fortran program, called GEVADJ, in order to calculate the sensitivity coefficients. A transient loss of forced primary coolant in the nuclear power plant Angra-1 was used as an example case. The average and final values of the functionals secondary pressure and enthalpy were studied in relation to changes in the secondary feedwater flow, enthalpy and total volume in the secondary circuit. Absolute variations in the above functionals were calculated using the perturbative methods, considering the variations in the feedwater flow and total secondary volume. Comparison with the same variations obtained via the direct model showed in general good agreement, demonstrating the potentiality of perturbative methods for sensitivity analysis of nuclear systems. (author)
Estimation of functional failure probability of passive systems based on subset simulation method
International Nuclear Information System (INIS)
Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing
2012-01-01
In order to solve the problem of multi-dimensional epistemic uncertainties and small functional failure probabilities of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
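The product-of-conditional-probabilities idea can be sketched on a scalar toy problem, estimating P[X > 3] for X ~ N(0,1) (exact value about 1.35e-3). The random-walk Metropolis step below is a simplified stand-in for the modified Metropolis sampler normally used in subset simulation:

```python
import math, random

def subset_simulation(g, n=1000, p0=0.1, seed=2, max_levels=10):
    """Estimate the small probability P[g(X) > 0] for X ~ N(0,1) as a
    product of conditional probabilities p0 over adaptive intermediate
    levels, using Metropolis chains to sample each conditional level."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(max_levels):
        xs.sort(key=g, reverse=True)
        n_keep = int(p0 * n)
        level = 0.5 * (g(xs[n_keep - 1]) + g(xs[n_keep]))   # adaptive threshold
        if level >= 0.0:                                    # failure region reached
            return prob * sum(1 for x in xs if g(x) > 0.0) / n
        prob *= p0
        seeds, xs = xs[:n_keep], []
        for s in seeds:                                     # grow chains from seeds
            x = s
            for _ in range(n // n_keep):
                cand = x + rng.gauss(0.0, 1.0)
                # Metropolis accept for N(0,1) truncated to g > level
                if (g(cand) > level and
                        rng.random() < math.exp(0.5 * (x * x - cand * cand))):
                    x = cand
                xs.append(x)
    return prob

pf = subset_simulation(lambda x: x - 3.0)   # P[X > 3] ~ 1.35e-3
```

Each level only has to estimate a probability around p0 = 0.1, so far fewer samples are needed than the millions a direct Monte Carlo estimate of a 1e-3 probability would require.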
Application of Multistage Homotopy Perturbation Method to the Chaotic Genesio System
Directory of Open Access Journals (Sweden)
M. S. H. Chowdhury
2012-01-01
Full Text Available Finding accurate solutions of a chaotic system with existing numerical methods is very hard because of its complex dynamical behavior. In this paper, the multistage homotopy-perturbation method (MHPM) is applied to the chaotic Genesio system. The MHPM is a simple, reliable modification of the standard homotopy-perturbation method (HPM), in which the HPM is treated as an algorithm applied on a sequence of intervals to find accurate approximate solutions of the chaotic Genesio system. Numerical comparisons between the MHPM and the classical fourth-order Runge-Kutta (RK4) solutions are made. The results reveal that the new technique is a promising tool for nonlinear chaotic systems of ordinary differential equations.
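The multistage idea, restarting a truncated series on each subinterval, can be illustrated on the linear test equation y' = -y, for which the HPM components reduce to Taylor terms (a toy stand-in for the Genesio system):

```python
import math

def mhpm_decay(y0, t_end, n_stages=10, order=5):
    """Multistage series solution of y' = -y: on each subinterval the
    HPM-style components reproduce the Taylor terms y * (-h)^k / k!,
    and the series is restarted from the last value (the multistage idea)."""
    h = t_end / n_stages
    y = y0
    for _ in range(n_stages):
        term, total = y, y
        for k in range(1, order + 1):
            term *= -h / k            # next series component
            total += term
        y = total                     # restart from the end of the stage
    return y

y = mhpm_decay(1.0, 2.0)              # exact: exp(-2) ~ 0.13534
```

A single series of the same order applied over the whole interval would be far less accurate; restarting keeps the local step small, which is why the multistage variant can track chaotic trajectories where the plain HPM series diverges.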
Directory of Open Access Journals (Sweden)
Jing Wang
2013-01-01
Full Text Available The image reconstruction problem in electrical impedance tomography (EIT) is mathematically a typical nonlinear ill-posed inverse problem. In this paper, a novel iterative regularization scheme based on the homotopy perturbation technique, namely the homotopy perturbation inversion method, is applied to the EIT image reconstruction problem. To verify its feasibility and effectiveness, simulations of image reconstruction have been performed for different locations, sizes, and numbers of inclusions, as well as for robustness to data noise. Numerical results indicate that this method can overcome numerical instability and is robust to data noise in EIT image reconstruction. Moreover, compared with the classical Landweber iteration method, our approach improves the convergence rate. The results are promising.
Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.
2018-05-01
Based on the minimal length concept, inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in the β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β-band are studied as functions of the free parameters. Introducing the minimal length concept within a QPM makes the model very flexible and a powerful approach to describe nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables us to obtain a physical minimum of the potential, in comparison with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound limit.
2014-01-01
Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented as a probability model can better reflect the authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs; the circuit-simulation-based probability isomorphism avoids using the traditional possible world model. Finally, based on the algorithm of probability subgraph isomorphism, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism
International Nuclear Information System (INIS)
Wasastjerna, F.; Lux, I.
1980-03-01
A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)
Nikazad, T; Davidi, R; Herman, G T
2012-03-01
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.
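A minimal block-iterative projection sketch, with Cimmino-type averaging of the row projections inside each block and cyclic control over blocks; the acceleration and perturbation-resilience machinery of the paper is omitted:

```python
def block_projection_solve(A, b, blocks, sweeps=200):
    """Block-iterative projections for A x = b: within each block the
    orthogonal projections onto the rows' hyperplanes are averaged
    (a Cimmino-type step), and the blocks are visited cyclically."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for block in blocks:
            step = [0.0] * n
            for i in block:
                row = A[i]
                r = b[i] - sum(row[j] * x[j] for j in range(n))
                nrm2 = sum(a * a for a in row)
                for j in range(n):
                    step[j] += r / nrm2 * row[j]    # projection onto row i
            for j in range(n):
                x[j] += step[j] / len(block)        # averaged block update
    return x

A = [[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]]
b = [3.0, 4.0, 0.0]                  # consistent system with solution (1, 1)
x = block_projection_solve(A, b, blocks=[[0, 1], [2]])
```

For a consistent system the iterates converge to a solution, matching the first convergence result quoted above; the paper's contribution is keeping that convergence under summable perturbations of the iterates, which enables the acceleration.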
International Nuclear Information System (INIS)
Cuce, Erdem; Cuce, Pinar Mert
2015-01-01
Highlights: • Homotopy perturbation method has been applied to porous fins. • Dimensionless efficiency and effectiveness expressions have been developed for the first time. • Effects of porous and convection parameters on thermal analysis have been clarified. • Ratio of porous fin to solid fin heat transfer rate has been given for various cases. • Reliability and practicality of the homotopy perturbation method have been illustrated. - Abstract: In our previous works, the thermal performance of straight fins with both constant and temperature-dependent thermal conductivity was investigated in detail, and dimensionless analytical expressions for fin efficiency and fin effectiveness were developed for the first time in the literature via the homotopy perturbation method. In this study, the previous works are extended to porous fins. Governing equations have been formulated using Darcy’s model. The dimensionless temperature distribution along the length of the porous fin has been determined as a function of the porosity and convection parameters. The ratio of porous fin to solid fin heat transfer rate has also been evaluated as a function of the thermo-geometric fin parameter. The results have been compared with those of the finite difference method for a specific case, and excellent agreement has been observed. The expressions developed are useful to thermal engineers for preliminary assessment of thermophysical systems, instead of consuming time on heat conduction problems governed by strongly nonlinear differential equations.
Homotopy perturbation method for free vibration analysis of beams on elastic foundation
International Nuclear Information System (INIS)
Ozturk, Baki; Coskun, Safa Bozkurt; Koc, Mehmet Zahid; Atay, Mehmet Tarik
2010-01-01
In this study, the homotopy perturbation method (HPM) is applied to free vibration analysis of beams on an elastic foundation. This numerical method is applied to a previously available case study. Analytical solutions and frequency factors are evaluated for different ratios of the axial load N acting on the beam to the Euler buckling load N_r. The application of HPM to the particular problem in this study gives results which are in excellent agreement with the analytical solutions, the variational iteration method (VIM) solutions for the case considered, and the differential transform method (DTM) results available in the literature.
Directory of Open Access Journals (Sweden)
Seyd Ghasem Enayati
2017-01-01
Full Text Available In this paper, two powerful analytical methods, the modified homotopy perturbation method (MHPM) and the amplitude frequency formulation (AFF), are introduced to derive approximate solutions of a system of ordinary differential equations that appears in mechanical applications. These methods convert a difficult problem into a simple one that can be easily handled. The obtained solutions are compared with the numerical fourth-order Runge-Kutta method to show the applicability and accuracy of both MHPM and AFF in solving this sample problem. The results attained in this paper confirm the idea that MHPM and AFF are powerful mathematical tools that can be applied to linear and nonlinear problems.
Directory of Open Access Journals (Sweden)
Anant Kant Shukla
2014-11-01
Full Text Available We obtain approximate analytical solutions of two mathematical models of the dynamics of tobacco use and relapse including peer pressure using the homotopy perturbation method (HPM and the homotopy analysis method (HAM. To enlarge the domain of convergence we apply the Padé approximation to the HPM and HAM series solutions. We show graphically that the results obtained by both methods are very accurate in comparison with the numerical solution for a period of 30 years.
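The Padé step that enlarges the domain of convergence can be sketched for a [2/2] approximant built directly from Maclaurin coefficients; the example series is exp(x), not the HPM/HAM series of the paper:

```python
import math

def pade22(c):
    """[2/2] Padé approximant from Maclaurin coefficients c[0..4].
    The denominator 1 + b1*x + b2*x^2 solves the 2x2 linear system
    c2*b1 + c1*b2 = -c3 and c3*b1 + c2*b2 = -c4 (Cramer's rule)."""
    det = c[2] * c[2] - c[1] * c[3]
    b1 = (-c[3] * c[2] + c[4] * c[1]) / det
    b2 = (-c[4] * c[2] + c[3] * c[3]) / det
    a0 = c[0]
    a1 = c[1] + b1 * c[0]
    a2 = c[2] + b1 * c[1] + b2 * c[0]
    return lambda x: (a0 + a1 * x + a2 * x * x) / (1.0 + b1 * x + b2 * x * x)

# Maclaurin coefficients of exp(x): 1/k!
approx = pade22([1.0, 1.0, 0.5, 1.0 / 6.0, 1.0 / 24.0])
```

At x = 1 the [2/2] approximant 19/7 is closer to e than the order-4 partial sum 2.7083, and the advantage grows away from the origin, which is the effect exploited to extend the HPM and HAM series here.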
Determination of the most reactivity control rod by pseudo-harmonics perturbation method
International Nuclear Information System (INIS)
Freire, Fernando S.; Silva, Fernando C.; Martinez, Aquilino S.
2005-01-01
Frequently it is necessary to compute the change in core multiplication caused by a change in the core temperature or composition. Even when this perturbation is localized, such as a control rod inserted into the core, one does not have to repeat the original criticality calculation; instead we can use the well-known pseudo-harmonics perturbation method to express the corresponding change in the multiplication factor in terms of the neutron flux expanded in the basis vectors characterizing the unperturbed core. We may therefore compute control rod worths to find the rod with the highest reactivity worth, which is needed to calculate the fast shutdown margin. In this work we propose a simple and precise method to identify the control rod with the highest reactivity worth. (author)
Energy Technology Data Exchange (ETDEWEB)
Perruchot-Triboulet, S.; Sanchez, R.
1997-12-01
The modification of the isotopic composition or the temperature, or even accounting for cross-section uncertainties, in one part of a nuclear reactor core affects the value of the effective multiplication factor. A new tool allows the analysis of the reactivity effect generated by the modification of the system. With the help of the direct and adjoint fluxes, a detailed balance of reactivity between the compared systems is done for each isotopic cross-section. After the presentation of the direct and adjoint transport equations in the context of the multigroup transport code APOLLO2, this note describes the method, based on perturbation theory, for the analysis of the reactivity variation. An example application is also given. (author).
Approximate solution of generalized Ginzburg-Landau-Higgs system via homotopy perturbation method
Energy Technology Data Exchange (ETDEWEB)
Lu Juhong [School of Physics and Electromechanical Engineering, Shaoguan Univ., Guangdong (China); Dept. of Information Engineering, Coll. of Lishui Professional Tech., Zhejiang (China); Zheng Chunlong [School of Physics and Electromechanical Engineering, Shaoguan Univ., Guangdong (China); Shanghai Inst. of Applied Mathematics and Mechanics, Shanghai Univ., SH (China)
2010-04-15
Using the homotopy perturbation method, a class of nonlinear generalized Ginzburg-Landau-Higgs systems (GGLH) is considered. Firstly, by introducing a homotopic transformation, the nonlinear problem is changed into a system of linear equations. Secondly, by selecting a suitable initial approximation, the approximate solution with arbitrary degree accuracy to the generalized Ginzburg-Landau-Higgs system is derived. Finally, another type of homotopic transformation to the generalized Ginzburg-Landau-Higgs system reported in previous literature is briefly discussed. (orig.)
Perturbation method utilization in the analysis of the Convertible Spectral Shift Reactor (RCVS)
International Nuclear Information System (INIS)
Bruna, G.B; Legendre, J.F.; Porta, J.; Doriath, J.Y.
1988-01-01
In the framework of preliminary feasibility studies on a new core concept, techniques derived from perturbation theory prove very useful in the calculation and physical analysis of project parameters. In the present work, we show some applications of these methods to studies of the RCVS (Reacteur Convertible a Variation de Spectre - Convertible Spectral Shift Reactor) concept. In particular, we present the search for a few-group, project-type energy structure and the splitting of reactivity effects into individual components [fr
Soliton solutions of the two-dimensional KdV-Burgers equation by homotopy perturbation method
International Nuclear Information System (INIS)
Molabahrami, A.; Khani, F.; Hamedi-Nezhad, S.
2007-01-01
In this Letter, He's homotopy perturbation method (HPM) was applied to find soliton solutions of the two-dimensional Korteweg-de Vries-Burgers equation (tdKdVB) for given initial conditions. Numerical solutions of the equation were obtained. The obtained solutions, in comparison with the exact solutions, show remarkable accuracy. The results reveal that the HPM is very effective and simple.
Core design and operation optimization methods based on time-dependent perturbation theory
International Nuclear Information System (INIS)
Greenspan, E.
1983-08-01
A general approach for the optimization of nuclear reactor core design and operation is outlined; it is based on two cornerstones: a newly developed time-dependent (or burnup-dependent) perturbation theory for nonlinear problems and a successive iteration technique. The resulting approach is capable of handling realistic reactor models using computational methods of any degree of sophistication desired, while accounting for all the constraints imposed. Three general optimization strategies, differing in the way they handle the constraints, are formulated. (author)
Analysis of radionuclide transport through fissured porous media with a perturbation method
Energy Technology Data Exchange (ETDEWEB)
Banat, M [JGC Corp., Tokyo (Japan)
1995-04-01
This paper presents a specific procedure for obtaining solutions for the transport of radionuclides in a fissured porous medium. The concentration profiles are deduced for a wide range of Peclet numbers using a perturbation method with multiple time scales. Results show clearly that, because of the increase of longitudinal dispersion, the radionuclide moves faster than in the case of zero dispersion (i.e. an infinite Peclet number). The main purpose of this paper is to demonstrate the practical advantage of the present calculation method with respect to the classical numerical and analytical methods used for radionuclide transport. (author).
International Nuclear Information System (INIS)
Claro, L.H.; Alvim, A.C.M.; Thome, Z.D.
1988-08-01
The objective of this work is to study the effect of intense perturbations, such as control rod insertion in the core of PWR reactors, through a perturbation approach consisting of a modified version of the pseudo-harmonics method. A typical one-dimensional PWR reactor model was used as a reference state, from which two perturbations were imposed, simulating gray and black control rod insertion. In the first case, eigenvalue convergence was achieved at the eighth order of approximation, and the perturbed fluxes and eigenvalue estimates agreed very well with direct calculation results. The second case tested represents a very intense localized perturbation. Oscillations in keff were observed as the order of approximation increased, and the method failed to converge. The results obtained indicate that the pseudo-harmonics method can be used to compute two-group fluxes and the fundamental eigenvalue of perturbed states resulting from gray control rod insertion in PWR reactors. The method is limited, however, by perturbation intensity, as other perturbation methods are. (author) [pt
A discrete homotopy perturbation method for non-linear Schrodinger equation
Directory of Open Access Journals (Sweden)
H. A. Wahab
2015-12-01
Full Text Available A general analysis of the non-linear Schrodinger equation is made by the homotopy perturbation method, taking advantage of the initial guess, the appearance of the embedding parameter, and different choices of the linear operator for the approximate solution. We do not depend upon the Adomian polynomials and find the linear forms of the components without those calculations. The discretised forms of the nonlinear Schrodinger equation allow us either to apply any numerical technique to the discretised forms or to proceed with a perturbation solution of the problem. The discretised forms obtained by the constructed homotopy provide the linear parts of the components of the solution series, and hence a new discretised form is obtained. The general discretised form for the NLSE allows us to choose any initial guess and obtain the solution in closed form.
Local and accumulated truncation errors in a class of perturbative numerical methods
International Nuclear Information System (INIS)
Adam, G.; Adam, S.; Corciovei, A.
1980-01-01
The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series was truncated. In the present paper rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)
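The local-versus-accumulated distinction is the familiar one from one-step integrators; forward Euler (used below as a generic stand-in, not SF-PNM itself) has local truncation error O(h^2) but accumulated error O(h), so halving the step roughly halves the final error:

```python
import math

def euler_global_error(h):
    """Forward Euler for y' = y, y(0) = 1 on [0, 1]; returns |y_N - e|.
    Each step commits an O(h^2) local error; over N = 1/h steps the
    accumulated error is O(h)."""
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * y
    return abs(y - math.e)

e1 = euler_global_error(0.01)
e2 = euler_global_error(0.005)      # half the step, roughly half the error
```

Bounds of exactly this kind (local order one higher than accumulated order) are what the paper establishes rigorously for SF-PNM(K).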
Method to Calculate Accurate Top Event Probability in a Seismic PSA
Energy Technology Data Exchange (ETDEWEB)
Jung, Woo Sik [Sejong Univ., Seoul (Korea, Republic of)
2014-05-15
ACUBE (Advanced Cutset Upper Bound Estimator) calculates the top event probability and importance measures from cutsets by dividing the cutsets into major and minor groups depending on the cutset probability: the cutsets with higher probability form the major group and the others the minor group. The major cutsets are converted into a Binary Decision Diagram (BDD), so their contribution to the top event probability and importance measures is calculated exactly; the minor group is treated with an approximation such as the min cut upper bound (MCUB). The two partial results are then combined. The ACUBE algorithm is useful for decreasing the conservatism that is caused by approximating the top event probability and importance measure calculations with given cutsets. By applying the ACUBE algorithm to seismic PSA cutsets, the accuracy of the top event probability and importance measures can be significantly improved. This study shows that careful attention should be paid, and an appropriate method provided, in order to avoid significant overestimation of the top event probability. Due to the strengths explained in this study, ACUBE has become a vital tool for calculating a more accurate CDF from seismic PSA cutsets than the conventional probability calculation method.
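The conservatism of the min cut upper bound for high-probability seismic cutsets, and the exact inclusion-exclusion alternative that is feasible for a small major group, can be sketched as follows (basic event probabilities hypothetical):

```python
from itertools import combinations

def mcub(cutsets, p):
    """Min cut upper bound: 1 - prod(1 - P(cutset)), treating cutsets
    as if they were independent."""
    prod = 1.0
    for cs in cutsets:
        q = 1.0
        for e in cs:
            q *= p[e]
        prod *= 1.0 - q
    return 1.0 - prod

def exact_top(cutsets, p):
    """Exact top event probability by inclusion-exclusion over the cutsets
    (feasible only for a small 'major group', as with the BDD in ACUBE)."""
    total = 0.0
    for r in range(1, len(cutsets) + 1):
        for combo in combinations(cutsets, r):
            events = set().union(*combo)
            q = 1.0
            for e in events:
                q *= p[e]
            total += (-1) ** (r + 1) * q
    return total

p = {"A": 0.3, "B": 0.3, "C": 0.3}          # high seismic failure probabilities
cutsets = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
pf_exact = exact_top(cutsets, p)
pf_ub = mcub(cutsets, p)
```

With these values the exact result is 0.216 while MCUB gives about 0.246; the gap widens as event probabilities grow, which is exactly the seismic-PSA overestimation that motivates the hybrid treatment.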
International Nuclear Information System (INIS)
Zheng Qiyan; Zhang Lijun; Huang Weiqi; Yin Qingliao
2010-01-01
The assessment procedure for aircraft crash events in siting for nuclear power plants, and the methods of probability determination in the two stages of preliminary screening and detailed evaluation, are introduced in this paper. In addition to general air traffic, airport operations and aircraft in corridors, the probability of aircraft crash from military operations in military airspaces is considered here. (authors)
Calculation of transition probabilities using the multiconfiguration Dirac-Fock method
International Nuclear Information System (INIS)
Kim, Yong Ki; Desclaux, Jean Paul; Indelicato, Paul
1998-01-01
The performance of the multiconfiguration Dirac-Fock (MCDF) method in calculating transition probabilities of atoms is reviewed. In general, the MCDF wave functions will lead to transition probabilities accurate to ∼ 10% or better for strong, electric-dipole allowed transitions for small atoms. However, it is more difficult to get reliable transition probabilities for weak transitions. Also, some MCDF wave functions for a specific J quantum number may not reduce to the appropriate L and S quantum numbers in the nonrelativistic limit. Transition probabilities calculated from such MCDF wave functions for nonrelativistically forbidden transitions are unreliable. Remedies for such cases are discussed
International Nuclear Information System (INIS)
Noack, K.
1981-01-01
The perturbation source method is used within the Monte Carlo method to calculate small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates, even in cases where other methods fail, for example for geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered [ru
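Why positive correlation helps can be seen with common random numbers on a toy score function (an illustration of the correlation idea only, not the perturbation source method itself): the variance of the estimated difference collapses when both systems see the same histories.

```python
import random

def score(shift, rng):
    """Toy particle 'history' score that depends weakly on a geometry shift."""
    x = rng.gauss(0.0, 1.0)
    return (x + shift) ** 2

def difference(correlated, n=50000, shift=0.01, seed=3):
    """Sample mean and variance of the per-history difference between the
    perturbed and unperturbed score, with or without shared random numbers."""
    rng1 = random.Random(seed)
    rng2 = random.Random(seed if correlated else seed + 1)
    d = [score(shift, rng1) - score(0.0, rng2) for _ in range(n)]
    m = sum(d) / n
    var = sum((di - m) ** 2 for di in d) / (n - 1)
    return m, var

m_c, v_c = difference(True)     # same histories in both systems
m_i, v_i = difference(False)    # independent histories
```

With shared histories the difference per sample is 2*shift*x + shift**2, so its variance scales with shift**2 instead of with the full score variance; this is the effect the perturbation source method achieves within a single transport game.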
Brus, D.J.; Gruijter, de J.J.
2003-01-01
In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be
DEFF Research Database (Denmark)
Reck, Kasper; Thomsen, Erik Vilain; Hansen, Ole
2011-01-01
The scalar wave equation, or Helmholtz equation, describes within a certain approximation the electromagnetic field distribution in a given system. In this paper we show how to solve the Helmholtz equation in complex geometries using conformal mapping and the homotopy perturbation method. The solution of the mapped Helmholtz equation is found by solving an infinite series of Poisson equations using two-dimensional Fourier series. The solution is entirely based on analytical expressions and is not mesh dependent. The analytical results are compared to a numerical (finite element method) solution.
International Nuclear Information System (INIS)
Noack, K.
1982-01-01
The perturbation source method may be a powerful Monte Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.
Determination of Periodic Solution for Tapered Beams with Modified Iteration Perturbation Method
Directory of Open Access Journals (Sweden)
Mohammad Mehdi Mashinchi Joubari
2015-01-01
Full Text Available In this paper, we implement the Modified Iteration Perturbation Method (MIPM) to approximate the periodic behavior of a tapered beam. This problem is formulated as a nonlinear ordinary differential equation with linear and nonlinear terms. The solution is quickly convergent and does not require complicated calculations. Comparing the results of the MIPM with the exact solution shows that this method is effective and convenient. It is also predicted that MIPM can potentially be used in the accurate analysis of strongly nonlinear oscillation problems.
Directory of Open Access Journals (Sweden)
Claude Rodrigue Bambe Moutsinga
2018-01-01
Full Text Available Most existing multivariate models in finance are based on diffusion models. These models typically lead to the need of solving systems of Riccati differential equations. In this paper, we introduce an efficient method for solving systems of stiff Riccati differential equations. In this technique, a combination of Laplace transform and homotopy perturbation methods is considered as an algorithm to the exact solution of the nonlinear Riccati equations. The resulting technique is applied to solving stiff diffusion model problems that include interest rates models as well as two and three-factor stochastic volatility models. We show that the present approach is relatively easy, efficient and highly accurate.
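The homotopy perturbation iteration at the heart of this technique can be sketched on a scalar Riccati equation. Below is a minimal sketch of the plain HPM series (without the Laplace-transform step used in the paper) for y' = 1 + y², y(0) = 0, whose exact solution is tan(t); the implementation choices are my own.

```python
import math
from fractions import Fraction

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (constant first)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_int(a):
    """Integrate a polynomial from 0 to t (prepend a zero constant term)."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(a)]

def hpm_riccati(order):
    """Homotopy perturbation series for y' = 1 + y^2, y(0) = 0.
    Embedding y' - 1 - p*y^2 = 0 and expanding y = sum_k p^k y_k gives
      y0 = t,   y_{k+1} = integral of sum_{i+j=k} y_i * y_j."""
    terms = [poly_int([Fraction(1)])]           # y0(t) = t
    for k in range(order):
        rhs = [Fraction(0)]
        for i in range(k + 1):
            prod = poly_mul(terms[i], terms[k - i])
            rhs = [a + b for a, b in
                   zip(rhs + [Fraction(0)] * len(prod),
                       prod + [Fraction(0)] * len(rhs))]
        terms.append(poly_int(rhs))
    total = [Fraction(0)] * len(terms[-1])      # sum all series terms
    for t_ in terms:
        for k, c in enumerate(t_):
            total[k] += c
    return lambda t: sum(float(c) * t**k for k, c in enumerate(total))

y = hpm_riccati(3)
# The truncated series t + t^3/3 + 2t^5/15 + 17t^7/315 matches
# math.tan(t) closely for small t.
```

For stiff systems this naive series loses accuracy quickly, which is precisely the motivation for combining HPM with the Laplace transform as the abstract describes.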
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As case study we will solve four ordinary differential equations, and we will show that the proposed solutions have good accuracy, even we will obtain an exact solution. In the sequel, we will see that the square residual error for the approximate solutions, belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
Optimization of Candu fuel management with gradient methods using generalized perturbation theory
International Nuclear Information System (INIS)
Chambon, R.; Varin, E.; Rozon, D.
2005-01-01
CANDU fuel management problems are solved using a time-average representation of the core. Optimization problems based on this representation were defined in the early nineties. The mathematical programming approach using generalized perturbation theory (GPT) that was developed then has been implemented in the reactor code DONJON. The use of the augmented Lagrangian (AL) method is presented and evaluated in this paper. This approach is mandatory for new constraint problems. Combined with the classical Lemke method, it proves to be very efficient in reaching the optimal solution in a very limited number of iterations. (authors)
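The augmented Lagrangian machinery referred to above can be illustrated on a toy equality-constrained problem. The quadratic objective and constraint below are invented for illustration; the paper applies AL to GPT-based fuel management constraints.

```python
def augmented_lagrangian(mu=10.0, outer=20, inner=3000, lr=0.02):
    """Minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0
    using the augmented Lagrangian  L = f + lam*g + (mu/2)*g^2.
    Each outer iteration minimizes L (here by plain gradient descent)
    and then updates the multiplier: lam <- lam + mu * g."""
    x = y = lam = 0.0
    for _ in range(outer):
        for _ in range(inner):
            g = x + y - 1.0
            gx = 2.0 * x + lam + mu * g   # dL/dx
            gy = 2.0 * y + lam + mu * g   # dL/dy
            x -= lr * gx
            y -= lr * gy
        lam += mu * (x + y - 1.0)         # multiplier update
    return x, y, lam

x, y, lam = augmented_lagrangian()
# Converges to the constrained optimum x = y = 0.5, with multiplier
# lam = -1 (from the stationarity condition 2*0.5 + lam = 0).
```

The appeal of AL over a pure penalty method is visible even here: the penalty weight mu stays moderate while the multiplier update drives the constraint violation to zero.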
Numerical simulation of the regularized long wave equation by He's homotopy perturbation method
Energy Technology Data Exchange (ETDEWEB)
Inc, Mustafa [Department of Mathematics, Firat University, 23119 Elazig (Turkey)], E-mail: minc@firat.edu.tr; Ugurlu, Yavuz [Department of Mathematics, Firat University, 23119 Elazig (Turkey)
2007-09-17
In this Letter, we present the homotopy perturbation method (HPM for short) for obtaining the numerical solution of the RLW equation. We obtain the exact and numerical solutions of the Regularized Long Wave (RLW) equation for a certain initial condition. The initial approximation can be freely chosen with possible unknown constants, which can be determined by imposing the boundary and initial conditions. Comparison of the results with those of other methods has led us to significant consequences. The numerical solutions are compared with the known analytical solutions.
Shen, Tonghao; Su, Neil Qiang; Wu, Anan; Xu, Xin
2014-03-05
In this work, we first review the perturbative treatment of an oscillator with cubic anharmonicity. It is shown that there is a quantum-classical correspondence in terms of mean displacement, mean-squared displacement, and the corresponding variance in first-order perturbation theory, provided that the amplitude of the classical oscillator is fixed at the zeroth-order quantum mechanical energy EQM(0). This correspondence condition is realized by proposing the extended Langevin dynamics (XLD), where the key is to construct a proper driving force. It is assumed that the driving force adopts a simple harmonic form with its amplitude chosen according to EQM(0), while the driving frequency is chosen as the harmonic frequency. The latter can be improved by using the natural frequency of the system in response to the potential if its anharmonicity is strong. By comparing to accurate numerical results from discrete variable representation calculations for a set of diatomic species, it is shown that the present method is able to capture a large part of the anharmonicity, being competitive with the wave-function-based vibrational second-order perturbation theory, for the whole frequency range from ∼4400 cm⁻¹ (H2) to ∼160 cm⁻¹ (Na2). XLD shows a substantial improvement over classical molecular dynamics, which ceases to work for hard modes when zero-point energy effects are significant. Copyright © 2013 Wiley Periodicals, Inc.
A Newton-Based Extremum Seeking MPPT Method for Photovoltaic Systems with Stochastic Perturbations
Directory of Open Access Journals (Sweden)
Heng Li
2014-01-01
Full Text Available Microcontroller-based maximum power point tracking (MPPT) has been the most popular MPPT approach due to its high flexibility and efficiency across different photovoltaic (PV) systems. It is well known that PV systems typically operate under a range of uncertain environmental parameters and disturbances, which implies that MPPT controllers generally suffer from unknown stochastic perturbations. To address this issue, a novel Newton-based stochastic extremum seeking MPPT method is proposed. Treating stochastic perturbations as excitation signals, the proposed MPPT controller has a good tolerance of stochastic perturbations in nature. Different from the conventional gradient-based extremum seeking MPPT algorithm, the convergence rate of the proposed controller can be entirely user-assignable rather than determined by the unknown power map. The stability and convergence of the proposed controller are rigorously proved. We further discuss the effects of partial shading and PV module ageing on the proposed controller. Numerical simulations and experiments are conducted to show the effectiveness of the proposed MPPT algorithm.
Analysis of 2D reactor core using linear perturbation theory and nodal finite element methods
International Nuclear Information System (INIS)
Adrian Mugica; Edmundo del Valle
2005-01-01
In this work the multigroup steady state neutron diffusion equations are solved using the nodal finite element method (NFEM) and Linear Perturbation Theory (LPT) for XY geometry. The NFEM used corresponds to the Raviart-Thomas schemes RT0 and RT1, interpolating 5 and 12 parameters respectively in each node of the space discretization. The accuracy of these methods is related to the dimension of the space approximation and the mesh size; therefore, using fine meshes and the RT0 or RT1 nodal methods leads to a large and interesting eigenvalue problem. The finite element method used to discretize the weak formulation of the diffusion equations is the Galerkin one. The algebraic structure of the discrete eigenvalue problem is obtained and solved using the Wielandt technique and the BiCGSTAB iterative method from the SPARSKIT package developed by Yousef Saad. The results obtained with LPT show good agreement with the results obtained directly for the perturbed problem. In fact, the CPU time to solve a single problem, the unperturbed and the perturbed one, is practically the same, but when one is focused on shuffling two different assemblies in the core many times, the LPT technique becomes quite useful for getting good approximations in a short time. This particular problem was solved for one quarter-core with NFEM. Thus, the computer program based on LPT can be used as an analysis tool in fuel reload optimization or combinatory analysis to get reload patterns in nuclear power plants, once it has been coupled with the thermohydraulic aspects needed to simulate a real problem accurately. The maximum differences between NFEM and LPT for the three LWR reactor cores are about 250 pcm. This quantity is considered an acceptable value for this kind of analysis. (authors)
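The appeal of linear perturbation theory for eigenvalue problems can be seen on a small symmetric matrix: to first order, the change of an eigenvalue under a perturbation δA is φᵀδAφ for the normalized unperturbed eigenvector φ, so re-solving the perturbed problem is unnecessary. The toy matrices below are made up; the paper's eigenvalue problem is the discretized multigroup diffusion equation.

```python
def dominant_eig(A, iters=500):
    """Power iteration for the dominant eigenpair of a symmetric matrix."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n))
                  for i in range(n))           # Rayleigh quotient
    return lam, v

A  = [[4.0, 1.0, 0.0],
      [1.0, 3.0, 1.0],
      [0.0, 1.0, 2.0]]
dA = [[0.01, 0.0, 0.0],                        # small symmetric perturbation
      [0.0, -0.02, 0.0],
      [0.0, 0.0, 0.01]]

lam0, v0 = dominant_eig(A)
# First-order (linear perturbation theory) estimate of the shift:
dlam_lpt = sum(v0[i] * dA[i][j] * v0[j] for i in range(3) for j in range(3))
Ap = [[A[i][j] + dA[i][j] for j in range(3)] for i in range(3)]
lam1, _ = dominant_eig(Ap)
# dlam_lpt agrees with the exact change lam1 - lam0 up to second order
# in the perturbation, without re-solving for each shuffle.
```

This is exactly why shuffling many assembly pairs is cheap with LPT: one adjoint/forward solve of the unperturbed core prices every candidate perturbation.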
Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.
Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay
2018-04-17
In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possible ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be potentially conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power and also give closed-form solutions for predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than that using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
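The univariate, completers-only flavor of Bayesian predictive probability that the paper generalizes has a simple closed form for binary outcomes: a Beta posterior and a beta-binomial predictive distribution over the remaining patients. The sketch below uses made-up trial numbers and a flat prior.

```python
from math import comb, lgamma, exp

def beta_binom_pmf(k, m, a, b):
    """P(k future responders out of m) under a Beta(a, b) posterior."""
    return comb(m, k) * exp(
        lgamma(a + k) + lgamma(b + m - k) - lgamma(a + b + m)
        - (lgamma(a) + lgamma(b) - lgamma(a + b)))

def predictive_probability(x, n, n_max, x_success, a=1.0, b=1.0):
    """Bayesian predictive probability that the trial succeeds:
    x responders seen in n patients so far; success is declared if the
    final count out of n_max reaches x_success.  Beta(a, b) prior."""
    m = n_max - n                       # patients still to enroll
    a_post, b_post = a + x, b + n - x   # Beta posterior after n patients
    need = max(0, x_success - x)        # future responders still needed
    return sum(beta_binom_pmf(k, m, a_post, b_post)
               for k in range(need, m + 1))

# Hypothetical interim look: 12/20 responders observed, and the trial
# succeeds if the final count reaches at least 24 of 40 patients.
pp = predictive_probability(x=12, n=20, n_max=40, x_success=24)
# Compare pp against futility/efficacy cutoffs to decide whether to stop.
```

The paper's contribution is the longitudinal extension: replacing the completers-only posterior with one that also borrows information from on-going subjects' earlier time points.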
International Nuclear Information System (INIS)
Kalcheva, Silva; Koonen, Edgar
2008-01-01
A hybrid method dedicated to improve the experimental technique for estimation of control rod worths in a research reactor is presented. The method uses a combination of Monte Carlo technique and perturbation theory. The perturbation theory is used to obtain the relation between the relative rod efficiency and the buckling of the reactor with partially inserted rod. A series of coefficients, describing the axial absorption profile are used to correct the buckling for an arbitrary composite rod, having complicated burn up irradiation history. These coefficients have to be determined - by experiment or by using some theoretical/numerical method. In the present paper they are derived from the macroscopic absorption cross sections, obtained from detailed Monte Carlo calculations by MCNPX 2.6.F of the axial burn up profile during control rod life. The method is validated on measurements of control rod worths at the BR2 reactor. Comparison with direct Monte Carlo evaluations of control rod worths is also presented. The uncertainties, arising from the used approximations in the presented hybrid method are discussed. (authors)
A perturbation-based substep method for coupled depletion Monte-Carlo codes
International Nuclear Information System (INIS)
Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano
2017-01-01
Highlights: • The GPT method allows calculating the sensitivity coefficients to any perturbation. • The full Jacobian of sensitivities, cross sections (XS) to concentrations, may be obtained. • The time dependent XS is obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time-point. In reality, however, both quantities would change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on a collision history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in time dependent cross section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.
Papasotiriou, P. J.; Geroyannis, V. S.
We implement Hartle's perturbation method for the computation of relativistic rigidly rotating neutron star models. The program has been written in SCILAB (© INRIA ENPC), a matrix-oriented high-level programming language. The numerical method is described in great detail and is applied to many models in slow or fast rotation. We show that, although the method is perturbative, it gives accurate results for all practical purposes and should prove an efficient tool for computing rapidly rotating pulsars.
Non perturbative method for radiative corrections applied to lepton-proton scattering
International Nuclear Information System (INIS)
Chahine, C.
1979-01-01
We present a new, non-perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non-perturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ33 resonance in e + p scattering and radiative corrections to the Feynman scale invariant F2 structure function for the kinematics of two recent high energy muon experiments.
Large-order perturbation theory
International Nuclear Information System (INIS)
Wu, T.T.
1982-01-01
The original motivation for studying the asymptotic behavior of the coefficients of perturbation series came from quantum field theory. An overview is given of some of the attempts to understand quantum field theory beyond finite-order perturbation series. At least in the case of the Thirring model, and probably in general, the full content of a relativistic quantum field theory cannot be recovered from its perturbation series. This difficulty, however, does not occur in quantum mechanics, and the anharmonic oscillator is used to illustrate the methods used in large-order perturbation theory. Two completely different methods are discussed: the first uses the WKB approximation, and the second involves the statistical analysis of Feynman diagrams. The first is well developed and gives detailed information about the desired asymptotic behavior, while the second is still in its infancy and gives instead information about the distribution of vertices of the Feynman diagrams.
International Nuclear Information System (INIS)
Gratreau, P.
1987-01-01
The motion of charged particles in a magnetized plasma column, such as that of a magnetic mirror trap or a tokamak, is determined in the framework of canonical perturbation theory through a method of variation of constants which preserves energy conservation and symmetry invariance. The choice of a frame of coordinates close to that of the magnetic coordinates allows a relatively precise determination of the guiding-center motion with a low-order approximation in the adiabatic parameter. A Hamiltonian formulation of the equations of motion is obtained.
Energy Technology Data Exchange (ETDEWEB)
Takac, S M; Krcevinac, S B [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Yugoslavia)
1966-07-15
Measurements of thermal neutron density distributions were carried out in a variety of reactor cells by the newly developed cell perturbation method. The big geometrical and nuclear differences between the considered cells served as a very good testing ground for both the theory and experiments. The final experimental results are compared with a 'THERMOS'-type of calculation and in one case with the K-7 TRANSPO. In lattices L-1, L-2 and L-3 a very good agreement was reached with the results of K-7 THERMOS, while in lattice L-4, because of its complexity, the agreement was within the quoted errors (author)
International Nuclear Information System (INIS)
Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei
2012-01-01
In this paper, the combined method of response surface and importance sampling is applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model is presented for parameter failure of the physical process in the thermodynamic system, from which the combined response surface and importance sampling algorithm is established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system are also presented. The parameter failure probability of the purification water system in a nuclear reactor is obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, achieving satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method. (authors)
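The importance-sampling half of the combination can be sketched in isolation: to estimate a small failure probability, samples are drawn from a density shifted toward the failure region and reweighted by the likelihood ratio. The toy limit state below (a Gaussian tail) is hypothetical, not the paper's thermodynamic model.

```python
import math
import random

def failure_prob_is(threshold, n=100_000, seed=7):
    """P(X > threshold) for X ~ N(0,1), estimated by importance sampling
    with the shifted proposal N(threshold, 1).
    Likelihood ratio:  phi(x) / phi(x - threshold)
                     = exp(-threshold*x + threshold^2 / 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)        # sample near the failure region
        if x > threshold:                    # indicator of failure
            total += math.exp(-threshold * x + threshold**2 / 2)
    return total / n

p = failure_prob_is(4.0)
# The exact tail probability is about 3.17e-5.  Crude Monte Carlo with
# the same n would observe only a handful of failures, while the
# importance-sampling estimate is already tight.
```

The response surface plays the complementary role in the paper: it supplies a cheap surrogate limit state so that each sample does not require a full system simulation.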
Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods
DEFF Research Database (Denmark)
Nielsen, Søren R.K.; Sørensen, John Dalsgaard
Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.
An adjusted probability method for the identification of sociometric status in classrooms
García Bacete, F.J.; Cillessen, A.H.N.
2017-01-01
Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of
DEVELOPMENT OF THE PROBABILITY-GEOGRAPHICAL FORECAST METHOD FOR DANGEROUS WEATHER PHENOMENA
Directory of Open Access Journals (Sweden)
Elena S. Popova
2015-12-01
Full Text Available This paper presents a scheme of the probability-geographical forecast method for dangerous weather phenomena. Two general stages in the realization of this method are discussed. It is emphasized that the method under development responds to pressing questions of modern weather forecasting: the forecast is carried out for a specific point in space and the corresponding moment in time.
Wang, Yajie; Shi, Yunbo; Yu, Xiaoyu; Liu, Yongjie
2016-01-01
Currently, tracking in photovoltaic (PV) systems suffers from some problems such as high energy consumption, poor anti-interference performance, and large tracking errors. This paper presents a solar PV tracking system on the basis of an improved perturbation and observation method, which maximizes photoelectric conversion efficiency. According to the projection principle, we design a sensor module with a light-intensity-detection module for environmental light-intensity measurement. The effect of environmental factors on the system operation is reduced, and intelligent identification of the weather is realized. This system adopts the discrete-type tracking method to reduce power consumption. A mechanical structure with a level-pitch double-degree-of-freedom is designed, and attitude correction is performed by closed-loop control. A worm-and-gear mechanism is added, and the reliability, stability, and precision of the system are improved. Finally, the perturbation and observation method designed and improved by this study was tested by simulated experiments. The experiments verified that the photoelectric sensor resolution can reach 0.344°, the tracking error is less than 2.5°, the largest improvement in the charge efficiency can reach 44.5%, and the system steadily and reliably works. PMID:27327657
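The basic perturb-and-observe loop that this work improves on fits in a few lines: perturb the operating voltage, observe the power change, and keep stepping in whichever direction increases power. The power curve below is a made-up stand-in for a real panel's P-V characteristic.

```python
def pv_power(v):
    """Toy PV power curve with a single maximum (hypothetical stand-in
    for a panel's P-V characteristic); peak at v = 17.0 V, P = 100 W."""
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0, step=0.5, iterations=60):
    """Classic P&O MPPT: step the voltage, keep the direction while
    power rises, reverse it when power falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:                # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(v0=12.0)
# The operating point climbs to the maximum power point and then
# oscillates within one step of the true optimum at 17 V.
```

The residual oscillation around the optimum and the sensitivity to fast irradiance changes are exactly the weaknesses that the sensor-assisted, discrete-tracking design in the abstract addresses.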
8th International Conference on Soft Methods in Probability and Statistics
Giordani, Paolo; Vantaggi, Barbara; Gagolewski, Marek; Gil, María; Grzegorzewski, Przemysław; Hryniewicz, Olgierd
2017-01-01
This proceedings volume is a collection of peer-reviewed papers presented at the 8th International Conference on Soft Methods in Probability and Statistics (SMPS 2016), held in Rome (Italy). The book is dedicated to data science, which aims at developing automated methods to analyze massive amounts of data and to extract knowledge from them. It shows how data science employs various programming techniques and methods of data wrangling, data visualization, machine learning, probability and statistics. The soft methods proposed in this volume represent a collection of tools in these fields that can also be useful for data science.
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
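The simplest of the surveyed machines, k-NN probability estimation, fits in a few lines: the class probability at a point is the fraction of its k nearest training points carrying that class label. A pure-Python sketch on synthetic one-dimensional data (all data choices here are hypothetical, not from the paper's simulations):

```python
import random

def knn_probability(x, data, k):
    """Estimate P(class = 1 | x) as the fraction of the k nearest
    training points (by distance in x) that carry label 1."""
    neighbors = sorted(data, key=lambda d: abs(d[0] - x))[:k]
    return sum(label for _, label in neighbors) / k

# Synthetic training set: class 0 ~ N(-1, 1), class 1 ~ N(+1, 1),
# so the true posterior is P(1 | x) = 1 / (1 + exp(-2x)).
rng = random.Random(42)
data = ([(rng.gauss(-1.0, 1.0), 0) for _ in range(500)]
        + [(rng.gauss(1.0, 1.0), 1) for _ in range(500)])

p_mid = knn_probability(0.0, data, k=100)    # symmetric point: near 0.5
p_right = knn_probability(2.0, data, k=100)  # deep in class-1 territory
# p_mid tracks the true posterior 0.5 and p_right approaches 1,
# without any parametric model that could be misspecified.
```

The consistency results the paper reviews formalize exactly this behavior: as n grows with k/n → 0 and k → ∞, the neighbor fraction converges to the true conditional probability.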
Manzoni, Francesco; Ryde, Ulf
2018-03-01
We have calculated relative binding affinities for eight tetrafluorophenyl-triazole-thiogalactoside inhibitors of galectin-3 with the alchemical free-energy perturbation approach. We obtain a mean absolute deviation from experimental estimates of only 2-3 kJ/mol and a correlation coefficient (R²) of 0.5-0.8 for seven relative affinities spanning a range of up to 11 kJ/mol. We also studied the effect of using different methods to calculate the charges of the inhibitor and different sizes of the perturbed group (the atoms that are described by soft-core potentials and are allowed to have differing coordinates). However, the various approaches gave rather similar results and it is not possible to point out one approach as consistently and significantly better than the others. Instead, we suggest that such small and reasonable variations in the computational method can be used to check how stable the calculated results are and to obtain a more accurate estimate of the uncertainty than if performing only one calculation with a single computational setup.
Unification of field theory and maximum entropy methods for learning probability densities
Kinney, Justin B.
2014-01-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...
Probability approaching method (PAM) and its application on fuel management optimization
International Nuclear Information System (INIS)
Liu, Z.; Hu, Y.; Shi, G.
2004-01-01
For the multi-cycle reloading optimization problem, a new solving scheme is presented. The multi-cycle problem is decoupled into a number of relatively independent mono-cycle issues; this non-linear programming problem with complex constraints is then solved by a new algorithm, the probability approaching method (PAM), which is based on probability theory. Results on a simplified core model demonstrate the effectiveness of this new multi-cycle optimization scheme. (authors)
An evaluation method for tornado missile strike probability with stochastic correction
International Nuclear Information System (INIS)
Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo
2017-01-01
An efficient evaluation method for the probability of a tornado missile strike without using the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, the Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, QV(r), of a missile located at position r, where the local wind speed is V. In contrast, the annual exceedance probability of local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). Then, we finally obtain the annual probability of tornado missile strike on a structure with the convolutional integration of the product of QV(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm the validity, and to quantitatively verify the results for two extreme cases in which an object is located just in the vicinity of or far away from the structure.
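The final convolution step can be sketched numerically; the conditional strike probability Q_V and the wind-speed density p(V) below are hypothetical stand-ins for the outputs of the missile and hazard analysis codes, chosen only so the integral has a checkable value.

```python
# Sketch of the convolution step: the annual strike probability is the
# integral over wind speed V of Q_V(r) * p(V). Both functions are
# hypothetical assumptions, not outputs of the actual codes.
import math

def q_conditional(v):
    """Hypothetical conditional strike probability at wind speed v (m/s)."""
    return 0.0 if v < 40.0 else 1.0 - math.exp(-(v - 40.0) / 30.0)

def p_density(v):
    """Hypothetical annual wind-speed probability density (exponential)."""
    return math.exp(-v / 15.0) / 15.0

def annual_strike_probability(v_max=200.0, n=4000):
    """Trapezoidal integration of Q_V * p(V) over V."""
    h = v_max / n
    ys = [q_conditional(i * h) * p_density(i * h) for i in range(n + 1)]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

print(f"{annual_strike_probability():.2e}")
```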
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
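As a rough sketch of the weight construction compared above, stabilized weights divide an estimate of the marginal exposure density by the conditional density given the confounders; the data-generating model and use of the true conditional density below are illustrative assumptions.

```python
# Minimal sketch of stabilized inverse probability weights for a continuous
# exposure using the normal-density approach. The data-generating numbers
# are illustrative assumptions.
import math
import random
import statistics

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

random.seed(0)
# Confounder L and exposure A = 2 + 0.5*L + standard-normal noise
L = [random.gauss(0, 1) for _ in range(5000)]
A = [2 + 0.5 * l + random.gauss(0, 1) for l in L]

# Numerator: marginal density of A (from its sample moments);
# denominator: conditional density of A given L (here the true model).
mu_a, sd_a = statistics.fmean(A), statistics.stdev(A)
weights = [normal_pdf(a, mu_a, sd_a) / normal_pdf(a, 2 + 0.5 * l, 1.0)
           for a, l in zip(A, L)]

print(round(statistics.fmean(weights), 2))  # stabilized weights average near 1
```

A mean weight far from 1 is a standard diagnostic that the numerator or denominator density is misspecified, which connects to the paper's finding that the distributional choice matters under heteroscedasticity.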
International Nuclear Information System (INIS)
Caldarola, L.
1976-01-01
A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t given that it was down at time t' (t' ≤ t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)
Probability-neighbor method of accelerating geometry treatment in reactor Monte Carlo code RMC
International Nuclear Information System (INIS)
She, Ding; Li, Zeguang; Xu, Qi; Wang, Kan; Yu, Ganglin
2011-01-01
The probability neighbor method (PNM) is proposed in this paper to accelerate the geometry treatment of Monte Carlo (MC) simulation and is validated in the self-developed reactor Monte Carlo code RMC. During MC simulation by either the ray-tracking or the delta-tracking method, large amounts of time are spent finding out which cell a particle is located in. The traditional way is to search cells one by one in a sequence defined in advance. However, this procedure becomes very time-consuming when the system contains a large number of cells. Considering that particles have different probabilities of entering different cells, the PNM optimizes the searching sequence, i.e., the cells with larger probability are searched preferentially. The PNM is implemented in the RMC code, and the numerical results show that considerable geometry-treatment time is saved in MC calculations of complicated systems; the method is especially effective in delta-tracking simulation. (author)
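The idea of searching cells in order of entry probability can be sketched as follows; the 1-D slab geometry and the simple hit-count tally are simplified assumptions for illustration, not the RMC implementation.

```python
# Sketch of the probability-neighbor idea: instead of scanning cells in a
# fixed order, search them in descending order of observed entry frequency.
from collections import defaultdict

class ProbabilityNeighborLocator:
    def __init__(self, cells):
        self.cells = cells            # {cell_id: contains(point) predicate}
        self.hits = defaultdict(int)  # entry tallies per cell

    def locate(self, point):
        order = sorted(self.cells, key=lambda c: -self.hits[c])
        for cell_id in order:         # most frequently entered cells first
            if self.cells[cell_id](point):
                self.hits[cell_id] += 1
                return cell_id
        raise ValueError("point outside geometry")

# 1-D toy geometry: three slab cells
cells = {"slab0": lambda x: 0 <= x < 1,
         "slab1": lambda x: 1 <= x < 2,
         "slab2": lambda x: 2 <= x < 3}
loc = ProbabilityNeighborLocator(cells)
# Frequently entered cells migrate to the front of the search order
print([loc.locate(x) for x in (0.5, 1.5, 1.7, 1.2, 0.1)])
```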
Emission probability determination of ¹³³Ba by the sum-peak method
Energy Technology Data Exchange (ETDEWEB)
Silva, R.L. da; Almeida, M.C.M. de; Delgado, J.U.; Poledna, R.; Araujo, M.T.F.; Trindade, O.L.; Veras, E.V. de; Santos, A.; Rangel, J.; Ferreira Filho, A.L., E-mail: ronaldo@ird.gov.br, E-mail: marcandida@yahoo.com.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2016-07-01
The National Laboratory for Ionizing Radiation Metrology (LNMRI/IRD/CNEN) has several measurement methods in order to ensure low uncertainties in its results. Through gamma spectrometry analysis by the absolute sum-peak method, the standardization of ¹³³Ba activity and the determination of its emission probabilities at different energies were performed with reduced uncertainties. The advantages of radionuclide calibration by an absolute method are accuracy, low uncertainties, and the fact that no radionuclide reference standards are needed. ¹³³Ba is used in research laboratories for detector calibration in different work areas. The uncertainties for the activity and for the emission probability results are lower than 1%. (author)
International Nuclear Information System (INIS)
Ozgener, B.; Ozgener, H.A.
2005-01-01
A multiregion, multigroup collision probability method with white boundary condition is developed for thermalization calculations of light-water-moderated reactors. Hydrogen scattering is treated by Nelkin's kernel, while scattering from other nuclei is assumed to obey the free-gas scattering kernel. The isotropic-return (white) boundary condition is applied directly by using the appropriate collision probabilities. Comparisons with alternative numerical methods show the validity of the present formulation. Comparisons with some experimental results indicate that the present formulation is capable of calculating disadvantage factors that are closer to the experimental results than those of alternative methods.
International Nuclear Information System (INIS)
Rossi, Lubianka Ferrari Russo
2014-01-01
The main goal of this study is to introduce a new method for calculating sensitivity coefficients through the union of the differential method and generalized perturbation theory, the two methods generally used in reactor physics to obtain such quantities. Separately, these two methods have issues that make the sensitivity coefficient calculation slow or computationally exhaustive. However, by putting them together, it is possible to remedy these issues and build a new equation for the sensitivity coefficient. The method introduced in this study was applied to a PWR reactor, where a sensitivity analysis was performed for the ²³⁹Pu production and conversion rate during 120 days (1 cycle) of burnup. The computational code used for both the burnup and sensitivity analyses, CINEW, was developed in this study, and all results were compared with codes widely used in reactor physics, such as CINDER and SERPENT. The new mathematical method for calculating sensitivity coefficients and the code CINEW provide good numerical agility as well as good efficiency and reliability, since the new method, when compared with traditional ones, provides satisfactory results, even when the other methods use different mathematical approaches. The burnup analysis performed with CINEW was compared with CINDER, showing acceptable variation, though CINDER presents some computational issues due to the period in which it was built. The originality of this study is the application of the method to problems involving temporal dependence and, not least, the elaboration of the first national code for burnup and sensitivity analysis. (author)
The application of probability methods with a view to improving the quality of equipment
International Nuclear Information System (INIS)
Carnino, A.; Gachot, B.; Greppo, J.-F.; Guitton, J.
1976-01-01
After stating that reliability and availability can be considered as parameters allowing the quality of equipment to be estimated, the chief aspects of the use of probability methods in the field of quality are described. These methods are mainly applied at the design, operation and maintenance level of the equipment, as well as at the compilation stage of the corresponding data.
Fourth-order perturbative extension of the single-double excitation coupled-cluster method
International Nuclear Information System (INIS)
Derevianko, Andrei; Emmons, Erik D.
2002-01-01
Fourth-order many-body corrections to matrix elements for atoms with one valence electron are derived. The obtained diagrams are classified using a coupled-cluster-inspired separation into contributions from n-particle excitations from the lowest-order wave function. The complete set of fourth-order diagrams involves only connected single, double, and triple excitations and disconnected quadruple excitations. Approximately half of the fourth-order diagrams are not accounted for by the popular coupled-cluster method truncated at single and double excitations (CCSD). Explicit formulas are tabulated for the entire set of fourth-order diagrams missed by the CCSD method and its linearized version, i.e., contributions from connected triple and disconnected quadruple excitations. A partial summation scheme of the derived fourth-order contributions to all orders of perturbation theory is proposed.
Directory of Open Access Journals (Sweden)
Wenzhen Chen
2013-01-01
The singularly perturbed method (SPM) is proposed to obtain the analytical solution for the delayed supercritical process of a nuclear reactor with temperature feedback and small step reactivity inserted. The relation between the reactivity and time is derived. Also, the neutron density (or power) and the average density of delayed neutron precursors as functions of reactivity are presented. The variations of neutron density (or power) and temperature with time are calculated, plotted, and compared with those given by the accurate solution and other analytical methods. It is shown that the results of the SPM are valid and accurate over a large range, and that the SPM is simpler than the methods in the previous literature.
Perturbative method for the derivation of quantum kinetic theory based on closed-time-path formalism
International Nuclear Information System (INIS)
Koide, Jun
2002-01-01
Within the closed-time-path formalism, a perturbative method is presented which reduces the microscopic field theory to the quantum kinetic theory. In order to make this reduction, the expectation value of a physical quantity must be calculated under the condition that the Wigner distribution function is fixed, because it is the independent dynamical variable in the quantum kinetic theory. It is shown that when a nonequilibrium Green function in the form of the generalized Kadanoff-Baym ansatz is utilized, this condition appears as a cancellation of a certain part of the contributions in the diagrammatic expression of the expectation value. Together with the quantum kinetic equation, which can be derived in the closed-time-path formalism, this method provides a basis for the kinetic-theoretical description.
A perturbation method to the tent map based on Lyapunov exponent and its application
Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu
2015-10-01
Perturbation imposed on a chaotic system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaotic function, the Chebyshev map, for post-processing. If the output value of the Chebyshev map falls into a certain range, it is sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves the good cryptographic properties of a pseudo-random sequence. As a result, it weakens the strong correlation caused by finite precision and effectively compensates for the dynamical degradation of the digital chaotic system. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
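A minimal sketch of the feedback scheme follows; the feedback range and the mapping of the Chebyshev output onto the tent-map parameter are assumptions chosen for illustration, not the values used in the paper.

```python
# Pure-Python sketch of the perturbation scheme: the tent map drives a
# Chebyshev map, and Chebyshev outputs landing in a chosen range are fed
# back as the tent-map parameter. Ranges and mappings are assumptions.
import math

def perturbed_tent_sequence(x0=0.37, mu=1.99, n=10):
    x, out = x0, []
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)  # tent map on [0, 1]
        # Chebyshev T4 applied to x rescaled onto [-1, 1]
        t = max(-1.0, min(1.0, 2.0 * x - 1.0))
        y = math.cos(4.0 * math.acos(t))
        if 0.8 < abs(y) < 1.0:          # feedback condition (assumed)
            mu = 1.5 + 0.5 * abs(y)     # keeps the parameter in (1.5, 2.0)
        out.append(x)
    return out

print([round(v, 3) for v in perturbed_tent_sequence()])
```

Because the parameter never exceeds 2, every iterate stays in the unit interval, which is the invariant the perturbation must preserve.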
A method for estimating failure rates for low probability events arising in PSA
International Nuclear Information System (INIS)
Thorne, M.C.; Williams, M.M.R.
1995-01-01
The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of demands during the failure-free test period and T is the failure-free test period. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary.
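With a conjugate Gamma prior on a Poisson failure rate, the zero-failure posterior and the 1/T rule of thumb can be reproduced in a few lines; the prior choices below are illustrative, not the specific priors of the paper.

```python
# Sketch of the Bayesian zero-failure estimate: with a Gamma(a, b) prior on
# the failure rate and zero failures observed over test time T, the
# posterior is Gamma(a + failures, b + T). Prior choices are assumptions.
def posterior_mean_rate(a, b, failures, T):
    """Posterior mean of a Poisson failure rate under a Gamma(a, b) prior."""
    return (a + failures) / (b + T)

T = 10_000.0  # hours on test, no failures observed
print(posterior_mean_rate(1.0, 0.0, 0, T))  # 1/T rule of thumb: 0.0001 per hour
print(posterior_mean_rate(0.5, 0.0, 0, T))  # Jeffreys-type prior: 0.5/T = 5e-05
```

The a = 1, b = 0 prior is improper but yields a proper posterior here, and makes explicit one set of conditions under which the 1/T rule is exact.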
International Nuclear Information System (INIS)
Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing
2012-01-01
In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
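A generic (non-adaptive) importance-sampling sketch of a small failure probability conveys the core idea of sampling preferentially in the failure region; the standard-normal limit state and the shifted sampling density below are assumptions for illustration, not the AP1000 model.

```python
# Importance-sampling sketch: estimate P(X > 3) for X ~ N(0, 1) by sampling
# from the shifted density N(3, 1) and reweighting by the likelihood ratio.
import math
import random

def failure_probability_is(n=100_000, shift=3.0, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > 3.0:  # failure region
            # likelihood ratio phi(x) / phi(x - shift)
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

est = failure_probability_is()
print(f"{est:.5f}")  # the exact tail probability is about 0.00135
```

Crude Monte Carlo would need millions of samples to resolve a probability this small; centering the sampling density on the failure region is what makes the estimate cheap, and the adaptive method in the paper automates that centering from pre-samples.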
International Nuclear Information System (INIS)
Sakane, Shinichi; Yezdimer, Eric M.; Liu, Wenbin; Barriocanal, Jose A.; Doren, Douglas J.; Wood, Robert H.
2000-01-01
The ab initio/classical free energy perturbation (ABC-FEP) method proposed previously by Wood et al. [J. Chem. Phys. 110, 1329 (1999)] uses classical simulations to calculate solvation free energies within an empirical potential model, then applies free energy perturbation theory to determine the effect of changing the empirical solute-solvent interactions to corresponding interactions calculated from ab initio methods. This approach allows accurate calculation of solvation free energies using an atomistic description of the solvent and solute, with interactions calculated from first principles. Results can be obtained at a feasible computational cost without making use of approximations such as a continuum solvent or an empirical cavity formation energy. As such, the method can be used far from ambient conditions, where the empirical parameters needed for approximate theories of solvation may not be available. The sources of error in the ABC-FEP method are the approximations in the ab initio method, the finite sample of configurations, and the classical solvent model. This article explores the accuracy of various approximations used in the ABC-FEP method by comparing to the experimentally well-known free energy of hydration of water at two state points (ambient conditions, and 973.15 K and 600 kg/m3). The TIP4P-FQ model [J. Chem. Phys. 101, 6141 (1994)] is found to be a reliable solvent model for use with this method, even at supercritical conditions. Results depend strongly on the ab initio method used: a gradient-corrected density functional theory is not adequate, but a localized MP2 method yields excellent agreement with experiment. Computational costs are reduced by using a cluster approximation, in which ab initio pair interaction energies are calculated between the solute and up to 60 solvent molecules, while multi-body interactions are calculated with only a small cluster (5 to 12 solvent molecules). Sampling errors for the ab initio contribution to
International Nuclear Information System (INIS)
Fathizadeh, M.; Aroujalian, A.
2012-01-01
The boundary layer convective heat transfer equations with low pressure gradient over a flat plate are solved using the Homotopy Perturbation Method, which is one of the semi-exact methods. The nonlinear equations of momentum and energy, solved simultaneously via the Homotopy Perturbation Method, give results in good agreement with those obtained from numerical methods. Using this method, a general equation in terms of the Pr number and the pressure gradient (λ) is derived, which can be used to investigate velocity and temperature profiles in the boundary layer.
Using the probability method for multigroup calculations of reactor cells in a thermal energy range
International Nuclear Information System (INIS)
Rubin, I.E.; Pustoshilova, V.S.
1984-01-01
The possibility of using the transmission probability method with interpolation for determining the spatial-energy neutron flux distribution in cells of thermal heterogeneous reactors is considered. The results of multigroup calculations of several uranium-water plane and cylindrical cells with different fuel enrichment in the thermal energy range are given. High accuracy is obtained with low computer time consumption. The use of the transmission probability method is particularly reasonable in algorithms of programs compiled for computers with a significant reserve of internal memory.
An Adjusted Probability Method for the Identification of Sociometric Status in Classrooms
Directory of Open Access Journals (Sweden)
Francisco J. García Bacete
2017-10-01
Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of each sociometric group, the sources of discrepant classifications between methods, the behavioral profiles of discrepant and consistent cases between methods, and age differences. Method: We compared the GB adjusted probability method with the standard score model proposed by Coie and Dodge (CD) and the probability score model proposed by Newcomb and Bukowski (NB). The GB method is an adaptation of the NB method: cutoff scores are derived from the distribution of raw liked-most and liked-least scores in each classroom instead of using the fixed and absolute scores of the NB method. The criteria for neglected status are also modified by the GB method. Participants were 569 children (45% girls) from 23 elementary school classrooms (13 Grades 1–2, 10 Grades 5–6). Results: We found agreement as well as differences between the three methods. The CD method yielded discrepancies in the classifications because of its dependence on z-scores and composite dimensions. The NB method was less optimal in the validation of the behavioral characteristics of the sociometric groups, because of its fixed cutoffs for identifying preferred, rejected, and controversial children, and because it does not differentiate between positive and negative nominations for neglected children. The GB method addressed some of the limitations of the other two methods. It improved the classification of neglected students, as well as of discrepant cases of the preferred, rejected, and controversial groups. Agreement between methods was higher with the oldest children. Conclusion: GB is a valid sociometric method, as evidenced by the behavior profiles of the sociometric status groups identified with this method.
Directory of Open Access Journals (Sweden)
Elise Cormie-Bowins
2012-10-01
We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA) so that they can be run on an NVIDIA graphics processing unit (GPU). From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
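The Jacobi-style iteration for reachability probabilities can be sketched directly from the fixed-point equations x_i = 1 for goal states and x_i = sum_j P[i][j] * x_j otherwise; the 3-state chain below is a toy assumption.

```python
# Fixed-point iteration for reachability probabilities in a Markov chain.
# Goal states are absorbing with probability 1; other states average over
# their successors. The 3-state chain is a toy example.
def jacobi_reachability(P, goal, iters=200):
    n = len(P)
    x = [1.0 if i in goal else 0.0 for i in range(n)]
    for _ in range(iters):
        x = [1.0 if i in goal else sum(P[i][j] * x[j] for j in range(n))
             for i in range(n)]
    return x

# States: 0 transient, 1 goal (absorbing), 2 absorbing non-goal
P = [[0.5, 0.3, 0.2],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
probs = jacobi_reachability(P, goal={1})
print(round(probs[0], 4))  # analytic value: 0.3 / (1 - 0.5) = 0.6
```

For the transient state, the fixed point solves x0 = 0.5*x0 + 0.3, so the iteration converges geometrically to 0.6; this is the dense linear solve that the paper parallelizes on the GPU.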
DEFF Research Database (Denmark)
Sørup, Hjalte Jomo Danielsen; Georgiadis, Stylianos; Gregersen, Ida Bülow
2017-01-01
Urban water infrastructure has very long planning horizons, and planning is thus very dependent on reliable estimates of the impacts of climate change. Many urban water systems are designed using time series with a high temporal resolution. To assess the impact of climate change on these systems, similarly high-resolution precipitation time series for future climate are necessary. Climate models cannot at their current resolutions provide these time series at the relevant scales. Known methods for stochastic downscaling of climate change to urban hydrological scales have known shortcomings in constructing realistic climate-changed precipitation time series at the sub-hourly scale. In the present study we present a deterministic methodology to perturb historical precipitation time series at the minute scale to reflect non-linear expectations to climate change. The methodology shows good skill...
Use of perturbative methods to break down the variation of reactivity between two systems
International Nuclear Information System (INIS)
Perruchot-Triboulet, S.; Sanchez, R.
1997-01-01
The modification of the isotopic composition or the temperature, or accounting for cross-section uncertainties, in one part of a nuclear reactor core affects the value of the effective multiplication factor. A new tool allows the analysis of the reactivity effect generated by the modification of the system. With the help of the direct and adjoint fluxes, a detailed balance of reactivity between the compared systems is done for each isotopic cross section. After the presentation of the direct and adjoint transport equations in the context of the multigroup transport code APOLLO2, this note describes the method, based on perturbation theory, for the analysis of the reactivity variation. An example application is also given. (author)
Directory of Open Access Journals (Sweden)
D. Sarsri
2016-03-01
This paper presents a methodological approach to compute the stochastic eigenmodes of large FE models with parameter uncertainties, based on the coupling of the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The first two statistical moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical results illustrating the accuracy and efficiency of the proposed coupled methodological procedures for large FE models with uncertain parameters are presented.
Energy Technology Data Exchange (ETDEWEB)
Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
Disadvantage factors for square lattice cells using a collision probability method
International Nuclear Information System (INIS)
Raghav, H.P.
1976-01-01
The flux distribution in an infinite square lattice consisting of cylindrical fuel rods and moderator is calculated using a collision probability method. Neutrons are assumed to be monoenergetic, and both the sources and scattering are assumed to be isotropic. Carlvik's method is used for the calculation of the collision probabilities. The important features of the method are that the square boundary is treated exactly and that the contribution of the surrounding cells is calculated explicitly. The method is programmed in the computer code CELLC, which carries out the integration by Simpson's rule. The convergence and accuracy of CELLC are assessed by computing disadvantage factors for the well-known Thie lattices and comparing the results with Monte Carlo and other integral transport theory methods used elsewhere. It is demonstrated that it is not correct to apply the white boundary condition in the Wigner-Seitz cell for low pitch and low cross sections. (orig.)
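The CELLC code itself is not reproduced here, but the quadrature rule the abstract names is standard. A minimal composite Simpson's rule, applied to a smooth test integrand of the kind that appears in collision-probability kernels:

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

# e.g. an exponential attenuation term, integrable in closed form for checking
approx = simpson(lambda x: math.exp(-2.0 * x), 0.0, 1.0)
```

Simpson's rule is fourth-order accurate (and exact for cubics), which is why it suffices for the smooth geometric integrands of exact collision-probability evaluation.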
Directory of Open Access Journals (Sweden)
Cinicioglu Esma Nur
2014-01-01
Full Text Available Dempster−Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, a fact that appeals to researchers in the operations research community seeking potential application areas. However, the lack of a decision theory for belief functions gives rise to the need for probability transformation methods in decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used, but in the same probability distribution when the plausibility transformation is used. This result shows that the choice of combination rule and probability transformation method may have a significant effect on decision making, since it may change which decision alternative is selected. The result is illustrated via an example of missile type identification.
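The two transforms the abstract compares are short to state in code. A minimal sketch on a hypothetical consonant (nested focal sets) mass function, not the paper's missile-identification data:

```python
def pignistic(m):
    """Pignistic transform: BetP(x) = sum over focal sets A containing x of m(A)/|A|."""
    bet = {}
    for A, mass in m.items():
        for x in A:
            bet[x] = bet.get(x, 0.0) + mass / len(A)
    return bet

def plausibility_transform(m):
    """Plausibility transform: Pl_P(x) = Pl({x}) / sum over y of Pl({y})."""
    pl = {}
    for A, mass in m.items():
        for x in A:
            pl[x] = pl.get(x, 0.0) + mass
    total = sum(pl.values())
    return {x: v / total for x, v in pl.items()}

# Hypothetical consonant mass function on the frame {a, b, c}: {a} ⊂ {a,b} ⊂ {a,b,c}
m = {frozenset('a'): 0.4, frozenset('ab'): 0.3, frozenset('abc'): 0.3}
print(pignistic(m))
print(plausibility_transform(m))
```

Both transforms return a probability distribution over singletons, but they weight the non-singleton focal sets differently, which is exactly why the combination rule can change the decision under one transform and not the other.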
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model, as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
Assawaroongruengchot, Monchai
Perturbation theory is a technique used for the estimation of changes in performance functionals, such as the linear reaction rate ratio and the eigenvalue, caused by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for the multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnecting neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into the integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function by the CP method is mathematically equivalent to the adjoint function by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented. A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
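The POD model treats detection probability as a continuous function of concentration. A minimal sketch with a logistic curve in log-concentration, one common parametric choice; the parameters `c50` (concentration giving POD = 0.5) and `slope` are hypothetical, not taken from the paper:

```python
import math

def pod(conc, c50, slope):
    """Probability of detection as a continuous function of concentration:
    a logistic curve in log-concentration (hypothetical parametrization)."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log(conc) - math.log(c50))))

def conc_at_pod(target, c50, slope):
    """Invert the curve, e.g. to find the concentration where POD reaches 0.95."""
    return c50 * math.exp(math.log(target / (1.0 - target)) / slope)

# Hypothetical method: POD = 0.5 at 2.0 CFU/g, fairly steep response curve
c95 = conc_at_pod(0.95, c50=2.0, slope=3.0)
```

Fitting such a curve for a candidate and a reference method puts both on a common scale, which is what enables the cross-method comparisons and the repeatability/reproducibility calculations the abstract describes.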
Directory of Open Access Journals (Sweden)
Reza Mohammadyari
2015-08-01
Full Text Available The settling of a solid particle is a well-known problem in fluid mechanics. The parameterized perturbation method (PPM) is applied to analytically solve the unsteady motion of a spherical particle falling in a Newtonian fluid, using the drag of the form given by Oseen/Ferreira, for a range of Reynolds numbers. The particle equation of motion includes the added-mass term and neglects the Basset term. Using this kind of perturbation method, analytical expressions for the instantaneous velocity, acceleration and position of the particle were derived. The presented results show the effectiveness of PPM and the high convergence rate of the method toward acceptable answers.
A method for the estimation of the probability of damage due to earthquakes
International Nuclear Information System (INIS)
Alderson, M.A.H.G.
1979-07-01
The available information on seismicity within the United Kingdom has been combined with building damage data from the United States to produce a method of estimating the probability of damage to structures due to the occurrence of earthquakes. The analysis has been based on the use of site intensity as the major damage producing parameter. Data for structural, pipework and equipment items have been assumed and the overall probability of damage calculated as a function of the design level. Due account is taken of the uncertainties of the seismic data. (author)
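The combination the abstract describes is an application of the law of total probability over site intensity. A minimal sketch with hypothetical numbers (the actual UK seismicity and US damage data are not reproduced here):

```python
# Hypothetical annual probabilities of the site experiencing each intensity class
p_intensity = {5: 1e-2, 6: 3e-3, 7: 8e-4, 8: 1e-4}
# Hypothetical conditional probabilities of damage to one structural item
p_damage_given = {5: 0.001, 6: 0.01, 7: 0.08, 8: 0.30}

# Total annual probability of damage: sum over intensities of P(I) * P(damage | I)
p_damage = sum(p_intensity[i] * p_damage_given[i] for i in p_intensity)
```

Computing this sum for a family of fragility curves, one per design level, gives the probability of damage as a function of the design level, which is the quantity the method reports.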
Calculating method on human error probabilities considering influence of management and organization
International Nuclear Information System (INIS)
Gao Jia; Huang Xiangrui; Shen Zupei
1996-01-01
This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level influence diagram (ID), originally a tool for constructing and representing models of decision-making trees or event trees. An analytical model of human error causation with three influence levels has been set up, introducing a method for quantitative assessment of the ID that can be applied to quantifying human error probabilities in risk assessments, especially in the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach
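Quantifying a multi-level influence diagram amounts to marginalizing the conditional probabilities down the levels. A minimal sketch of a three-level rollup with entirely hypothetical states and numbers (the paper's actual levels and values are not reproduced):

```python
# Hypothetical three-level influence diagram:
# organizational climate -> management quality -> basic human error probability
p_org = {"good": 0.7, "poor": 0.3}
p_mgmt_given_org = {"good": {"good": 0.8, "poor": 0.2},
                    "poor": {"good": 0.3, "poor": 0.7}}
hep_given_mgmt = {"good": 1e-3, "poor": 5e-3}

# Marginalize over the upper two levels to get the overall error probability
hep = sum(p_org[o] * p_mgmt_given_org[o][m] * hep_given_mgmt[m]
          for o in p_org for m in hep_given_mgmt)
```

The resulting `hep` is the quantity that would feed an event tree branch; changing the upper-level distributions shows how organizational factors propagate into the final human error probability.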
Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements
Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)
2016-01-01
The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. This is achieved by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
International Nuclear Information System (INIS)
Zio, E.; Pedroni, N.
2010-01-01
The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
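The one-dimensional reduction at the heart of Line Sampling can be sketched on a hypothetical linear limit state in standard normal space (a stand-in for the T-H code, which of course is not available here). Each sample is decomposed into its component along the assumed important direction and the rest; a 1D root search along the direction yields a conditional failure probability per sample:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical linear limit state: failure iff G(x) < 0, exact P_f = Phi(-beta)
d, beta = 10, 3.0
alpha = np.ones(d) / np.sqrt(d)          # assumed 'important direction' (unit vector)
def G(x):
    return beta - alpha @ x

def root_along_alpha(x_perp, lo=0.0, hi=20.0, iters=60):
    """Bisection for the distance c along alpha at which G crosses zero."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if G(x_perp + mid * alpha) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
N = 200
p_terms = []
for x in rng.standard_normal((N, d)):
    x_perp = x - (alpha @ x) * alpha     # component orthogonal to alpha
    c = root_along_alpha(x_perp)         # distance to the failure boundary
    p_terms.append(Phi(-c))              # exact 1D conditional failure probability
p_ls = float(np.mean(p_terms))
```

For this linear case every line hits the boundary at the same distance, so the estimator reproduces Phi(-beta) ≈ 1.35e-3 with essentially zero variance, whereas crude Monte Carlo with 200 samples would typically observe no failures at all. That contrast is the variance reduction the abstract refers to; for a nonlinear G the per-line root search (here bisection) is where the extra code runs go.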
Energy Technology Data Exchange (ETDEWEB)
Takac, S M [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)
1972-07-01
The method is based on perturbation of the reactor cell by from a few up to a few tens of percent. Measurements were performed for square lattice cells of the zero-power reactors ANNA, NORA and RB, with metal uranium and uranium oxide fuel elements, and water, heavy water and graphite moderators. The character and functional dependence of the perturbations were obtained from the experimental results. Zero perturbation was determined by extrapolation, thus obtaining the real physical neutron flux distribution in the reactor cell. A simple diffusion theory for partial plate cell perturbation was developed for verification of the perturbation method. The results of these calculations showed that introducing the perturbation sample in the fuel flattens the thermal neutron density in proportion to the amplitude of the applied perturbation. The extrapolation applied to the perturbed distributions was found to be justified.
Application of He's homotopy perturbation method to conservative truly nonlinear oscillators
International Nuclear Information System (INIS)
Belendez, A.; Belendez, T.; Marquez, A.; Neipp, C.
2008-01-01
We apply He's homotopy perturbation method to find improved approximate solutions to conservative truly nonlinear oscillators. This approach gives us not only a truly periodic solution but also the period of the motion as a function of the amplitude of oscillation. We find that this method works very well for the whole range of parameters in the case of the cubic oscillator, and excellent agreement of the approximate frequencies with the exact one has been demonstrated and discussed. For the second order approximation we have shown that the relative error in the analytical approximate frequency is approximately 0.03% for any parameter values involved. We also compared the analytical approximate solutions and the Fourier series expansion of the exact solution. This has allowed us to compare the coefficients for the different harmonic terms in these solutions. The most significant features of this method are its simplicity and its excellent accuracy for the whole range of oscillation amplitude values and the results reveal that this technique is very effective and convenient for solving conservative truly nonlinear oscillatory systems
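For the cubic ("truly nonlinear") oscillator u'' + u³ = 0 the exact frequency follows from energy conservation, which makes the accuracy claims easy to check numerically. The sketch below compares the exact frequency with the standard first-approximation value ω ≈ √(3/4)·A (the leading HPM/harmonic-balance term; the paper's second-order formula, with its quoted 0.03% error, is not reproduced here):

```python
import math

def exact_frequency(A, n=2000):
    """Exact angular frequency of u'' + u^3 = 0 with amplitude A.
    Energy conservation gives T/4 = (sqrt(2)/A) * ∫_0^1 dt / sqrt(1 - t^4);
    the substitution t = sin(φ) removes the endpoint singularity, leaving
    ∫_0^{π/2} dφ / sqrt(1 + sin²φ), evaluated by composite Simpson."""
    h = (math.pi / 2.0) / n
    f = lambda t: 1.0 / math.sqrt(1.0 + math.sin(t) ** 2)
    s = f(0.0) + f(math.pi / 2.0)
    s += 4.0 * sum(f(i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(i * h) for i in range(2, n, 2))
    T = (4.0 * math.sqrt(2.0) / A) * (s * h / 3.0)
    return 2.0 * math.pi / T

A = 1.0
omega_exact = exact_frequency(A)         # ≈ 0.8472 for A = 1
omega_first = math.sqrt(3.0 / 4.0) * A   # first-order approximate frequency
rel_error = abs(omega_first - omega_exact) / omega_exact
```

The first-order estimate is off by roughly 2%, independent of A (both frequencies scale linearly with amplitude), which illustrates why the higher-order correction discussed in the abstract is needed to reach the 0.03% level.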
DEFF Research Database (Denmark)
Nielsen, Søren R. K.; Peng, Yongbo; Sichani, Mahdi Teimouri
2016-01-01
The paper deals with the response and reliability analysis of hysteretic or geometrically nonlinear uncertain dynamical systems of arbitrary dimensionality driven by stochastic processes. The approach is based on the probability density evolution method proposed by Li and Chen (Stochastic dynamics of structures, 1st edn. Wiley, London, 2009; Probab Eng Mech 20(1):33–44, 2005), which circumvents the dimensional curse of traditional methods for the determination of non-stationary probability densities based on Markov process assumptions and the numerical solution of the related Fokker–Planck and Kolmogorov–Feller equations. The main obstacle of the method is that a multi-dimensional convolution integral needs to be carried out over the sample space of a set of basic random variables, for which reason the number of these needs to be relatively low. In order to handle this problem an approach is suggested.
International Nuclear Information System (INIS)
Shigeru Aoki
2005-01-01
Secondary systems such as piping, tanks and other mechanical equipment are installed in primary systems such as buildings. Important secondary systems should be designed to maintain their function even when subjected to destructive earthquake excitations. Secondary systems have many nonlinear characteristics. Impact and friction, observed in mechanical supports and joints, are common nonlinear characteristics, and as impact dampers and friction dampers they are used for the reduction of seismic response. In this paper, analytical methods for the first excursion probability of a secondary system with impact and friction, subjected to earthquake excitation, are proposed. Using these methods, the effects of impact force, gap size and friction force on the first excursion probability are examined. When the tolerance level is normalized by the maximum response of the secondary system without impact or friction characteristics, the variation of the first excursion probability is very small for various values of the natural period. In order to examine the effectiveness of the proposed method, the obtained results are compared with those obtained by the simulation method. Some estimation methods for the maximum response of the secondary system with nonlinear characteristics have been developed. (author)
International Nuclear Information System (INIS)
Esmaeilpour, M.; Ganji, D.D.
2007-01-01
In this Letter, the problem of forced convection over a horizontal flat plate is presented, and the homotopy perturbation method (HPM) is employed to compute an approximation to the solution of the system of nonlinear differential equations governing the problem. It has been attempted to show the capabilities and wide-range applications of the homotopy perturbation method, in comparison with previous methods, in solving heat transfer problems. The obtained solutions, in comparison with the exact solutions, admit a remarkable accuracy. A clear conclusion can be drawn from the numerical results that the HPM provides highly accurate numerical solutions for nonlinear differential equations
DEFF Research Database (Denmark)
Ganji, D.D; Miansari, Mo; B, Ganjavi
2008-01-01
In this paper, the homotopy-perturbation method (HPM) is introduced to solve nonlinear equations of ozone decomposition in aqueous solutions. HPM deforms a difficult problem into a simple problem which can be easily solved. The effects of some parameters, such as temperature, on the solutions are considered.
Applied probability and stochastic processes
Sumita, Ushio
1999-01-01
Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...
Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z
2016-01-01
Modified homotopy perturbation method (HPM) was used to solve the hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. The modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between the modified HPM, the standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011) and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM dominates the standard HPM and the others. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weights and polynomial functions. For rational solutions the absolute error decreases very fast with an increasing number of collocation points.
Perturbation theory corrections to the two-particle reduced density matrix variational method.
Juhasz, Tamas; Mazziotti, David A
2004-07-15
In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.
International Nuclear Information System (INIS)
Etter, S.
1982-01-01
With current ultrasonic flow measuring equipment (UFME), the mean velocity is measured along one or two measuring paths. This mean velocity is not equal to the velocity averaged over the flow cross-section, by means of which the flow rate is calculated. This difference is found already for axially symmetric, fully developed velocity profiles and, to a larger extent, for disturbed profiles varying in the flow direction and for nonsteady flow. Corrective factors are defined for steady and nonsteady flows. These factors can be derived from the flow profiles within the UFME. By mathematical simulation of the entrainment effect, the influence of cross and swirl flows on various ultrasonic measuring methods is studied. The applied UFME with crossed measuring paths is shown to be largely independent of cross and swirl flows. For computer evaluation of velocity network measurements in circular cross-sections, the equations for interpolation and integration are derived. Results of the mathematical method are the isotach profile, the flow rate and, for fully developed flow, directly the corrective factor. In the experimental part, corrective factors are determined in nonsteady flow in a measuring plane before and in four measuring planes behind a perturbation. (orig./RW)
A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography
Sun, S.; Chen, C.; WANG, H.; Wang, Q.
2014-12-01
The self-constrained inversion method of potential fields uses a priori information self-extracted from potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography doesn't need any a priori information or large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, and this characteristic is also present in their probability tomography results. So we use some rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result from which a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Magnetic synthetic examples with and without a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well.
References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M
Concise method for evaluating the probability distribution of the marginal cost of power generation
International Nuclear Information System (INIS)
Zhang, S.H.; Li, Y.Z.
2000-01-01
In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)
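The core of probabilistic production costing here is that, in each random availability state of the generating units, the marginal unit is the last one needed (in merit order) to cover the load. A minimal enumeration sketch with a hypothetical three-unit system (the paper's convolution-based formulation is replaced by brute-force state enumeration, which is only feasible for small systems):

```python
from itertools import product

# Hypothetical units in merit order: (capacity MW, marginal cost $/MWh, forced-outage rate)
units = [(400.0, 10.0, 0.05), (300.0, 25.0, 0.08), (200.0, 60.0, 0.10)]
load = 550.0

def marginal_cost_distribution(units, load):
    """Enumerate all outage states; in each state the marginal unit is the
    last one (in merit order) needed to cover the load.  Returns {cost: prob};
    the key None collects loss-of-load states."""
    dist = {}
    for state in product([True, False], repeat=len(units)):
        prob = 1.0
        for (cap, cost, q), up in zip(units, state):
            prob *= (1.0 - q) if up else q
        served, mc = 0.0, None
        for (cap, cost, q), up in zip(units, state):
            if up:
                served += cap
                if served >= load:
                    mc = cost
                    break
        dist[mc] = dist.get(mc, 0.0) + prob
    return dist

dist = marginal_cost_distribution(units, load)
p_lol = dist.get(None, 0.0)   # probability that load cannot be served
expected_mc = sum(c * p for c, p in dist.items() if c is not None) / (1.0 - p_lol)
```

The returned dictionary is precisely the probability distribution of the short-run marginal cost at the given load; its conditional mean (`expected_mc`) is the expected value used in pricing and forward-contract risk modelling.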
LAMP-B: a Fortran program set for the lattice cell analysis by collision probability method
International Nuclear Information System (INIS)
Tsuchihashi, Keiichiro
1979-02-01
Nature of physical problem solved: LAMP-B solves an integral transport equation by the collision probability method for a wide variety of lattice cell geometries: spherical, plane and cylindrical lattice cells; square and hexagonal arrays of pin rods; annular clusters and square clusters. LAMP-B produces homogenized constants for multi- and/or few-group diffusion theory programs. Method of solution: LAMP-B performs an exact numerical integration to obtain the collision probabilities. Restrictions on the complexity of the problem: not more than 68 groups in the fast group calculation, and not more than 20 regions in the resonance integral calculation. Typical running time: it varies with the number of energy groups and the selection of the geometry. Unusual features of the program: any constituent subprogram, or any combination of them, can be used, so partial use of the program is possible. (author)
Energy Technology Data Exchange (ETDEWEB)
Zolfaghari, M; Ghaderi, R; Sheikhol Eslami, A; Hosseinnia, S H; Sadati, J [Intelligent System Research Group, Faculty of Electrical and Computer Engineering, Babol, Noushirvani University of Technology, PO Box 47135-484, Babol (Iran, Islamic Republic of); Ranjbar, A [Golestan University, Gorgan (Iran, Islamic Republic of); Momani, S [Department of Mathematics, Mutah University, PO Box 7, Al-Karak (Jordan)], E-mail: h.hoseinnia@stu.nit.ac.ir, E-mail: a.ranjbar@nit.ac.ir, E-mail: shahermm@yahoo.com
2009-10-15
The enhanced homotopy perturbation method (EHPM) is applied for finding improved approximate solutions of the well-known Bagley-Torvik equation for three different cases. The main characteristic of the EHPM is using a stabilized linear part, which guarantees the stability and convergence of the overall solution. The results are finally compared with the Adams-Bashforth-Moulton numerical method, the Adomian decomposition method (ADM) and the fractional differential transform method (FDTM) to verify the performance of the EHPM.
International Nuclear Information System (INIS)
Doyon, L.R.; CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette
1975-01-01
A simple method is presented for solving by computer any system model (availability, reliability, maintenance) in which the intervals between failures and the repair durations are distributed according to arbitrary probability laws, for any maintenance policy. A matrix equation is obtained using Markov diagrams. An example is given with its solution by the APAFS program (Algorithme Pour l'Analyse de la Fiabilite des Systemes). [fr]
International Nuclear Information System (INIS)
Lyman, J.T.; Wolbarst, A.B.
1987-01-01
To predict the likelihood of success of a therapeutic strategy, one must be able to assess the effects of the treatment upon both diseased and healthy tissues. This paper proposes a method for determining the probability that a healthy organ that receives a non-uniform distribution of X-irradiation, heat, chemotherapy, or other agent will escape complications. Starting with any given dose distribution, a dose-cumulative-volume histogram for the organ is generated. This is then reduced by an interpolation scheme (involving the volume-weighting of complication probabilities) to a slightly different histogram that corresponds to the same overall likelihood of complications, but which contains one less step. The procedure is repeated, one step at a time, until there remains a final, single-step histogram, for which the complication probability can be determined. The formalism makes use of a complication response function C(D, V) which, for the given treatment schedule, represents the probability of complications arising when the fraction V of the organ receives dose D and the rest of the organ gets none. Although the data required to generate this function are sparse at present, it should be possible to obtain the necessary information from in vivo and clinical studies. Volume effects are taken explicitly into account in two ways: the precise shape of the patient's histogram is employed in the calculation, and the complication response function is a function of the volume
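The step-by-step reduction can be sketched as follows (the sigmoid response c(D) and the volume weighting C(D, V) = 1 − (1 − c(D))^V are illustrative assumptions, not the paper's clinical model):

```python
import math

def c_whole(D, D50=45.0, k=0.2):
    """Assumed sigmoid whole-organ complication probability c(D)."""
    return 1.0 / (1.0 + math.exp(-k * (D - D50)))

def C(D, V):
    """Assumed volume weighting: fraction V at dose D, rest of organ spared."""
    return 1.0 - (1.0 - c_whole(D)) ** V

def reduce_step(D1, V1, D2, V2):
    """Merge the two uppermost histogram steps (D1 < D2, V1 > V2) into a
    single step at dose D2 whose volume V' keeps the overall complication
    probability unchanged (independent sub-volume responses assumed)."""
    target = 1.0 - (1.0 - C(D1, V1 - V2)) * (1.0 - C(D2, V2))
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection: C(D2, V) is increasing in V
        mid = 0.5 * (lo + hi)
        if C(D2, mid) < target:
            lo = mid
        else:
            hi = mid
    return D2, 0.5 * (lo + hi)

# two-step histogram: 80% of the organ receives >= 30 Gy, 30% receives >= 50 Gy
D_eq, V_eq = reduce_step(30.0, 0.8, 50.0, 0.3)
```

Repeating `reduce_step` over a many-step histogram yields the final single-step histogram from which the complication probability is read off directly.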
International Nuclear Information System (INIS)
Mickael, M.; Gardner, R.P.; Verghese, K.
1988-01-01
An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude
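For the simplest of the four cases, isotropic scattering in the Lab system, the scattering probability into a cone about the detector direction is just the solid-angle fraction, which gives a quick consistency check (a toy sketch, not the paper's general transformation):

```python
import math
import random

def cone_probability(theta0):
    """Isotropic Lab scattering: probability of scattering into a cone of
    half-angle theta0 about the detector direction = solid-angle fraction."""
    return 0.5 * (1.0 - math.cos(theta0))

def mc_cone_probability(theta0, n=200_000, seed=1):
    """Monte Carlo check: for isotropic scattering the cosine of the angle
    to any fixed axis is uniform on [-1, 1]."""
    rng = random.Random(seed)
    cos0 = math.cos(theta0)
    hits = sum(1 for _ in range(n) if rng.uniform(-1.0, 1.0) > cos0)
    return hits / n

theta0 = math.radians(30.0)
p_exact = cone_probability(theta0)
p_mc = mc_cone_probability(theta0)
```

Because the isotropic pdf is invariant under the change of reference direction, measuring the limiting angles from the detector axis makes the integral trivial; the anisotropic pdfs in the abstract require the transformed pdf instead.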
Deriving the probability of a linear opinion pooling method being superior to a set of alternatives
International Nuclear Information System (INIS)
Bolger, Donnacha; Houlding, Brett
2017-01-01
Linear opinion pools are a common method for combining a set of distinct opinions into a single succinct opinion, often to be used in a decision making task. In this paper we consider a method, termed the Plug-in approach, for determining the weights to be assigned in this linear pool, in a manner that can be deemed as rational in some sense, while incorporating multiple forms of learning over time into its process. The environment that we consider is one in which every source in the pool is herself a decision maker (DM), in contrast to the more common setting in which expert judgments are amalgamated for use by a single DM. We discuss a simulation study that was conducted to show the merits of our technique, and demonstrate how theoretical probabilistic arguments can be used to exactly quantify the probability of this technique being superior (in terms of a probability density metric) to a set of alternatives. Illustrations are given of simulated proportions converging to these true probabilities in a range of commonly used distributional cases. - Highlights: • A novel context for combination of expert opinion is provided. • A dynamic reliability assessment method is stated, justified by properties and a data study. • The theoretical grounding underlying the data-driven justification is explored. • We conclude with areas for expansion and further relevant research.
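The linear pool itself is a one-line computation; a minimal sketch with illustrative weights (not the Plug-in weights derived in the paper):

```python
def linear_pool(pmfs, weights):
    """Weighted average of probability mass functions over shared outcomes."""
    return {x: sum(w * pmf[x] for w, pmf in zip(weights, pmfs))
            for x in pmfs[0]}

# two decision makers' opinions over the same outcome set (made-up numbers)
expert_a = {'low': 0.7, 'mid': 0.2, 'high': 0.1}
expert_b = {'low': 0.1, 'mid': 0.3, 'high': 0.6}
pooled = linear_pool([expert_a, expert_b], [0.6, 0.4])
```

The pooled opinion remains a valid distribution for any weights summing to one; the paper's contribution lies in choosing and updating those weights rationally over time.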
Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A
2011-01-01
Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
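The best-performing variant, fitting the cumulative distribution, can be sketched as follows (lognormal case; the grid-search least-squares fit and all sampling settings are illustrative choices, not the study's exact procedure):

```python
import math
import random

def lognorm_cdf(t, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

# simulate retention times with known parameters, then discretize
rng = random.Random(42)
sample = [rng.lognormvariate(2.0, 0.5) for _ in range(2000)]
bounds = [2.0 * (i + 1) for i in range(15)]            # sampling times 2, 4, ..., 30
cum = [sum(t <= b for t in sample) / len(sample) for b in bounds]

# least-squares fit of (mu, sigma) to the cumulative proportions over a grid
best = min(
    ((m / 100.0, s / 100.0) for m in range(150, 251, 2) for s in range(30, 71, 2)),
    key=lambda p: sum((lognorm_cdf(b, *p) - c) ** 2 for b, c in zip(bounds, cum)),
)
```

Fitting the cumulative proportions uses every observation at each sampling time, which is why it is less sensitive than fitting the interval bounds to the choice of sampling time-interval.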
Implementation of the probability table method in a continuous-energy Monte Carlo code system
International Nuclear Information System (INIS)
Sutton, T.M.; Brown, F.B.
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5
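The core idea of a probability table can be sketched in a few lines (band probabilities and cross sections below are invented for illustration, not ENDF/B values):

```python
import random

# one energy point of a made-up probability table: (band probability, sigma_t in barns)
table = [(0.5, 2.0), (0.3, 10.0), (0.2, 60.0)]

def sample_xs(rng):
    """Sample a cross-section band from the table's cumulative probabilities."""
    xi, acc = rng.random(), 0.0
    for p, xs in table:
        acc += p
        if xi < acc:
            return xs
    return table[-1][1]

dilute_average = sum(p * xs for p, xs in table)         # smooth representation
rng = random.Random(0)
n = 100_000
table_mean = sum(sample_xs(rng) for _ in range(n)) / n  # table sampling mean
```

Sampling the table reproduces the same mean as the dilute-average value but also the band-to-band fluctuations, which is what allows self-shielding effects in the URR to be captured.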
Directory of Open Access Journals (Sweden)
U. Filobello-Nino
2015-01-01
Full Text Available We propose an approximate solution of the T-F equation, obtained by using the nonlinearities distribution homotopy perturbation method (NDHPM). Besides, we show a comparison table between this proposed approximate solution and a numerical solution of the T-F equation, establishing the accuracy of the results.
A computational chemistry analysis of six unique tautomers of cyromazine, a pesticide used for fly control, was performed with density functional theory (DFT) and canonical second order Møller–Plesset perturbation theory (MP2) methods to gain insight into the contributions of molecular structure to ...
New method for extracting tumors in PET/CT images based on the probability distribution
International Nuclear Information System (INIS)
Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori
2006-01-01
In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)
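The adaptive-threshold idea can be sketched as follows (the mean-plus-3-sigma rule and the pixel values are assumptions for illustration, not the paper's algorithm):

```python
import statistics

def region_threshold(pixels, k=3.0):
    """Per-region tumor threshold: mean + k standard deviations of the
    region's pixel-value distribution (k = 3 is an arbitrary choice)."""
    return statistics.mean(pixels) + k * statistics.pstdev(pixels)

def tumor_pixels(pixels, threshold):
    return [v for v in pixels if v > threshold]

liver = [1.0, 1.2, 1.1, 0.9, 1.3, 1.0, 1.1]     # organ with high normal FDG uptake
lung = [0.2, 0.3, 0.25, 0.2, 0.35, 0.3, 0.25]   # region with low normal uptake
```

Because each region supplies its own distribution, a pixel value that is unremarkable in the liver can still stand out as a candidate tumor in the lung.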
Directory of Open Access Journals (Sweden)
Madeiro Francisco
2010-01-01
Full Text Available Abstract This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative distribution function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
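For the special case m = 1 (Rayleigh fading) the closed-form BEP of BPSK is well known, which allows a quick numerical sanity check (a direct channel simulation; the paper instead derives such expressions from the CDF of the equivalent Gaussian/Nakagami-m noise ratio):

```python
import math
import random

def bep_bpsk_rayleigh(g):
    """Closed-form average BEP of BPSK over Rayleigh fading, average SNR g."""
    return 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))

def bep_bpsk_rayleigh_mc(g, n=200_000, seed=7):
    """Direct simulation: Rayleigh amplitude h with E[h^2] = 1, unit-variance
    Gaussian noise; a bit error occurs when the decision variable goes negative."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        h = math.sqrt(rng.expovariate(1.0))  # h^2 ~ Exponential(1)
        if math.sqrt(2.0 * g) * h + rng.gauss(0.0, 1.0) < 0.0:
            errors += 1
    return errors / n

g = 10.0  # average SNR (linear scale)
p_exact = bep_bpsk_rayleigh(g)
p_mc = bep_bpsk_rayleigh_mc(g)
```

The closed form evaluates in constant time, while the simulation needs hundreds of thousands of trials for three-digit accuracy, which illustrates the computational saving the paper aims at.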
International Nuclear Information System (INIS)
Esik, Olga; Tusnady, Gabor; Daubner, Kornel; Nemeth, Gyoergy; Fuezy, Marton; Szentirmay, Zoltan
1997-01-01
Purpose: The typically benign, but occasionally rapidly fatal clinical course of papillary thyroid cancer has raised the need for individual survival probability estimation, to tailor the treatment strategy exclusively to a given patient. Materials and methods: A retrospective study was performed on 400 papillary thyroid cancer patients with a median follow-up time of 7.1 years to establish a clinical database for uni- and multivariate analysis of the prognostic factors related to survival (Kaplan-Meier product limit method and Cox regression). For a more precise prognosis estimation, the effect of the most important clinical events were then investigated on the basis of a Markov renewal model. The basic concept of this approach is that each patient has an individual disease course which (besides the initial clinical categories) is affected by special events, e.g. internal covariates (local/regional/distant relapses). On the supposition that these events and the cause-specific death are influenced by the same biological processes, the parameters of transient survival probability characterizing the speed of the course of the disease for each clinical event and their sequence were determined. The individual survival curves for each patient were calculated by using these parameters and the independent significant clinical variables selected from multivariate studies, summation of which resulted in a mean cause-specific survival function valid for the entire group. On the basis of this Markov model, prediction of the cause-specific survival probability is possible for extrastudy cases, if it is supposed that the clinical events occur within new patients in the same manner and with the similar probability as within the study population. Results: The patient's age, a distant metastasis at presentation, the extent of the surgical intervention, the primary tumor size and extent (pT), the external irradiation dosage and the degree of TSH suppression proved to be
International Nuclear Information System (INIS)
Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun
2015-01-01
Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort with only a minor loss of accuracy. • An analysis model is used to justify the choice of the optimal averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann transport equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce storage and computation, while the capability of dealing with complicated geometries is preserved, since the same ray-tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angle-dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy
International Nuclear Information System (INIS)
Bosevski, T.
1986-01-01
An improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell has been developed. Expanding the neutron flux and source into a series of even powers of the radius, one gets a convenient method for integrating the one-energy-group integral transport equation. It is shown that it is possible to perform an analytical integration in the x-y plane in one variable and to use effective Gaussian integration over the other. By choosing a convenient distribution of space points in the fuel and moderator, the transport matrix calculation and the cell reaction rate integration are condensed. On the basis of the proposed method, the computer program DISKRET for the ZUSE-Z 23 K computer has been written. The suitability of the proposed method for calculating the thermal-neutron-flux distribution in a reactor cell can be seen from the test results obtained. Compared with other collision probability methods, the proposed treatment excels in mathematical simplicity and faster convergence. (author)
Directory of Open Access Journals (Sweden)
Aboozar Heydari
2017-09-01
Full Text Available In this paper, the effects of the nonlinear forces arising from the electromagnetic field of the bearings and of the unbalance force on the nonlinear vibration behavior of a rotor are investigated. The rotor is modeled as a rigid body supported by two magnetic bearings with eight-pole structures. The governing dynamic equations of the system, which are coupled nonlinear second-order ordinary differential equations (ODEs), are derived and solved by the homotopy perturbation method (HPM). Applying the HPM makes a harmonic semi-analytical solution possible: by equating coefficients of like powers of the embedding parameter p, a system of coupled, non-homogeneous, second-order differential equations that includes the unbalance effects is obtained. Free vibration analysis is carried out for given initial conditions on displacement and velocity in the horizontal and vertical directions, and forced vibration under harmonic forces is then investigated, together with the influence of various parameters on the vibration behavior of the rotor. Changes in amplitude and response phase with excitation frequency are examined. The results show that the motion amplitude increases with excitation frequency and decreases once the critical speed is passed, and that the magnetic bearing system maintains stable operation of the rotor. Comparison with other references shows good precision up to second order in the embedding parameter, confirming the accuracy of the method in the present work.
A prototype method for diagnosing high ice water content probability using satellite imager data
Yost, Christopher R.; Bedka, Kristopher M.; Minnis, Patrick; Nguyen, Louis; Strapp, J. Walter; Palikonda, Rabindra; Khlopenkov, Konstantin; Spangenberg, Douglas; Smith, William L., Jr.; Protat, Alain; Delanoe, Julien
2018-03-01
Recent studies have found that ingestion of high mass concentrations of ice particles in regions of deep convective storms, with radar reflectivity considered safe for aircraft penetration, can adversely impact aircraft engine performance. Previous aviation industry studies have used the term high ice water content (HIWC) to define such conditions. Three airborne field campaigns were conducted in 2014 and 2015 to better understand how HIWC is distributed in deep convection, both as a function of altitude and proximity to convective updraft regions, and to facilitate development of new methods for detecting HIWC conditions, in addition to many other research and regulatory goals. This paper describes a prototype method for detecting HIWC conditions using geostationary (GEO) satellite imager data coupled with in situ total water content (TWC) observations collected during the flight campaigns. Three satellite-derived parameters were determined to be most useful for determining HIWC probability: (1) the horizontal proximity of the aircraft to the nearest overshooting convective updraft or textured anvil cloud, (2) tropopause-relative infrared brightness temperature, and (3) daytime-only cloud optical depth. Statistical fits between collocated TWC and GEO satellite parameters were used to determine the membership functions for the fuzzy logic derivation of HIWC probability. The products were demonstrated using data from several campaign flights and validated using a subset of the satellite-aircraft collocation database. The daytime HIWC probability was found to agree quite well with TWC time trends and identified extreme TWC events with high probability. Discrimination of HIWC was more challenging at night with IR-only information. The products show the greatest capability for discriminating TWC ≥ 0.5 g m-3. Product validation remains challenging due to vertical TWC uncertainties and the typically coarse spatio-temporal resolution of the GEO data.
Developments in perturbation theory
International Nuclear Information System (INIS)
Greenspan, E.
1976-01-01
Included are sections dealing with perturbation expressions for reactivity, methods for the calculation of perturbed fluxes, integral transport theory formulations for reactivity, generalized perturbation theory, sensitivity and optimization studies, multigroup calculations of bilinear functionals, and solution of inhomogeneous Boltzmann equations with singular operators
Evaluation and comparison of estimation methods for failure rates and probabilities
Energy Technology Data Exchange (ETDEWEB)
Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)
2006-02-01
An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma prior are used for failure rate estimation, and Binomial data with beta prior are used for failure probability per demand estimation. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well if not better than the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
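The prior-moment matching at the heart of such empirical Bayes estimation can be sketched as follows (a bare-bones gamma-Poisson version with invented data; the actual PREB method adds robustness refinements for identical data and zero failures that are not reproduced here):

```python
data = [(1, 1000.0), (0, 800.0), (4, 1200.0), (2, 950.0)]  # (failures, exposure hours)

def eb_rates(data):
    """Match the first two moments of the plant-to-plant rate spread to a gamma
    prior, then return the gamma-Poisson posterior mean rate for each plant."""
    raw = [k / t for k, t in data]
    m = sum(raw) / len(raw)
    s2 = sum((r - m) ** 2 for r in raw) / (len(raw) - 1)
    # remove the average Poisson sampling variance (~ k / t^2) from the spread
    v = max(s2 - sum(k / t ** 2 for k, t in data) / len(data), 1e-12)
    alpha, beta = m * m / v, m / v
    return [(alpha + k) / (beta + t) for k, t in data]

posterior = eb_rates(data)
```

Each posterior rate lies between the plant's raw estimate and the pooled prior mean, so a zero-failure plant gets a plausible positive rate instead of zero, which is one of the motivations cited in the abstract.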
Energy Technology Data Exchange (ETDEWEB)
Liu Guoming [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)], E-mail: gmliusy@gmail.com; Wu Hongchun; Cao Liangzhi [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)
2008-09-15
This paper presents a transmission probability method (TPM) to solve the neutron transport equation in three-dimensional triangular-z geometry. The source within the mesh is assumed to be spatially uniform and isotropic. At the mesh surface, the constant and the simplified P{sub 1} approximation are invoked for the anisotropic angular flux distribution. Based on this model, a code TPMTDT is encoded. It was verified by three 3D Takeda benchmark problems, in which the first two problems are in XYZ geometry and the last one is in hexagonal-z geometry, and an unstructured geometry problem. The results of the present method agree well with those of Monte-Carlo calculation method and Spherical Harmonics (P{sub N}) method.
Directory of Open Access Journals (Sweden)
Abdalla Ahmed Abdel-Ghaly
2016-06-01
Full Text Available This paper suggests the use of the conditional probability integral transformation (CPIT method as a goodness of fit (GOF technique in the field of accelerated life testing (ALT, specifically for validating the underlying distributional assumption in accelerated failure time (AFT model. The method is based on transforming the data into independent and identically distributed (i.i.d Uniform (0, 1 random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributions' assumptions in AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that this method performs well in case of exponential and lognormal distributions. Finally, a real life example is provided to illustrate the application of the proposed procedure.
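The transformation step can be sketched as follows (for brevity a plain Kolmogorov-Smirnov distance against the Uniform(0, 1) CDF stands in for the modified Watson statistic used in the paper):

```python
import math
import random

def cpit_exponential(xs, scale):
    """Probability integral transform under an assumed Exponential(scale)."""
    return [1.0 - math.exp(-x / scale) for x in xs]

def ks_uniform(us):
    """Kolmogorov-Smirnov distance of the transformed data from Uniform(0, 1)."""
    us = sorted(us)
    n = len(us)
    return max(max((i + 1) / n - u, u - i / n) for i, u in enumerate(us))

rng = random.Random(3)
xs = [rng.expovariate(1.0 / 5.0) for _ in range(500)]  # true mean 5
good = cpit_exponential(xs, 5.0)   # correct distributional assumption
bad = cpit_exponential(xs, 1.0)    # wrong scale: transform is far from uniform
```

When the assumed distribution is correct the transformed values look uniform and the distance is small; a wrong assumption shows up as a large departure from uniformity.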
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
Du, Yuanwei; Guo, Yubin
2015-01-01
The intrinsic mechanism of multimorbidity is difficult to recognize, and prediction and diagnosis are accordingly difficult to carry out. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability table (CPT) because of the lack of clinical statistical data. Today, expert knowledge and experience are increasingly used to train Bayesian networks to help predict or diagnose diseases, but the resulting CPTs are often irrational or ineffective because realistic constraints are ignored, especially in multimorbidity. To solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and a recursive ER algorithm, on the basis of which an evidence reasoning method for constructing CPTs in Bayesian networks of multimorbidity is presented step by step. A numerical multimorbidity example is used to demonstrate the method and prove its feasibility and applicability. The Bayesian network can be determined as long as the inference assessment is provided by each expert according to his or her knowledge or experience. Our method extracts expert inference data more accurately than existing methods and fuses them effectively for constructing CPTs in a Bayesian network of multimorbidity.
PREDICTION OF RESERVOIR FLOW RATE OF DEZ DAM BY THE PROBABILITY MATRIX METHOD
Directory of Open Access Journals (Sweden)
Mohammad Hashem Kanani
2012-12-01
Full Text Available The data collected from the operation of existing storage reservoirs can offer valuable information for better allocation and management of fresh water for future use and for mitigating the effects of droughts. In this paper the long-term water rate prediction for the Dez reservoir (Iran) is presented using the probability matrix method. Data are analyzed to find the probability matrix of water rates in the Dez reservoir, based on the history of annual water entrance during the past 40 years. The algorithm developed covers both the overflow and non-overflow conditions in the reservoir. Results of this study show that in non-overflow conditions the most exigent case is equal to 75%. This means that if the reservoir is empty (the stored water is less than 100 MCM) this year, it would also be empty with 75% probability next year. The stored water in the reservoir would be less than 300 MCM with 85% probability next year if the reservoir is empty this year. This percentage decreases to 70% next year if the water of the reservoir is less than 300 MCM this year, and to 5% next year if the reservoir is full this year. In overflow conditions the most exigent case is again equal to 75%. The reservoir volume would be less than 150 MCM with 90% probability next year if the reservoir is empty this year. This percentage decreases to 70% if its water volume is less than 300 MCM and to 55% if the water volume is less than 500 MCM this year. Results also show that if the probability matrix of water rates to a reservoir is multiplied by itself repeatedly, it converges to a constant probability matrix, which can be used to predict the long-term water rate of the reservoir. In other words, the probability matrix of the series of water rates approaches a steady probability matrix in the course of time, which reflects the hydrological behavior of the watershed and can easily be used for the long-term prediction of water storage in the downstream reservoirs.
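The convergence claim in the final sentences is easy to reproduce with a made-up 3-state transition matrix (the states and probabilities below are illustrative, not the Dez reservoir data):

```python
# made-up annual transition matrix between storage states (low, medium, high)
P = [
    [0.75, 0.20, 0.05],
    [0.30, 0.50, 0.20],
    [0.10, 0.35, 0.55],
]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Pn = P
for _ in range(40):                 # P, P^2, ..., P^41
    Pn = matmul(Pn, P)
# all rows of Pn are now numerically identical: the steady-state distribution
steady = Pn[0]
```

After repeated self-multiplication every row of the matrix approaches the same vector, so the long-term storage probabilities no longer depend on this year's state, which is exactly the steady matrix the paper uses for long-term prediction.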
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplifications involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which significantly reduces the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under Contract Numbers ERKJ320 and ERAT377.
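The Feynman-Kac link between a terminal-value problem and an expectation over stochastic paths, which the BSDE method above exploits, can be illustrated with a toy forward estimate of an exceedance probability. All model parameters here are invented placeholders, and the sketch runs forward rather than backward as the paper's algorithm does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: dX = -a*X dt + sigma dW.  The "runaway probability" analogue is
# u(0, x0) = E[ 1{X_T > b} | X_0 = x0 ], i.e. the Feynman-Kac representation
# of the terminal-value problem.  a, sigma, b, x0 are made-up numbers.
a, sigma, b, x0 = 0.5, 1.0, 1.5, 1.0
T, n_steps, n_paths = 1.0, 200, 20000
dt = T / n_steps

# Euler-Maruyama paths started from x0.
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X += -a * X * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# MC estimate of the exceedance ("runaway") probability.
prob = np.mean(X > b)
```

For this Ornstein-Uhlenbeck toy the terminal law is Gaussian, so the estimate can be checked analytically; the backward formulation pays off precisely when such closed forms are unavailable.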
Linear perturbation renormalization group method for Ising-like spin systems
Directory of Open Access Journals (Sweden)
J. Sznajd
2013-03-01
Full Text Available The linear perturbation renormalization group transformation (LPRG) is used to study the thermodynamics of the axial next-nearest-neighbor Ising model with four-spin interactions (extended ANNNI) in a field. The LPRG for weakly interacting Ising chains is presented. The method is used to study the finite-field para-ferrimagnetic phase transitions observed in layered uranium compounds such as UAs1-xSex, UPd2Si2 or UNi2Si2. These systems are made of ferromagnetic layers, and the spins from nearest-neighbor and next-nearest-neighbor layers are coupled by antiferromagnetic interactions. While in UAs1-xSex the para-ferri phase transition is of first order, as expected on symmetry grounds, in UT2Si2 (T = Pd, Ni) this transition seems to be continuous, at least in the vicinity of the multicritical point. Within the MFA, the critical character of the finite-field para-ferrimagnetic transition, at least at one isolated point, can be described by the ANNNI model supplemented by an additional, e.g., four-spin interaction. However, in the LPRG approximation, for the ratio κ = J2/J1 around 0.5 there is a critical value of the field for which an isolated critical point also exists in the original ANNNI model. The positive four-spin interaction shifts the critical point towards higher fields and changes the shape of the specific heat curve. In the latter case, for small enough fields, the specific heat exhibits a two-peak structure in the paramagnetic phase.
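The LPRG itself is involved, but the renormalization-group idea behind it can be conveyed by the exact decimation transformation of the zero-field 1D Ising chain, a standard textbook sketch rather than the authors' method: summing out every other spin renormalizes the coupling, and iterating the map shows the flow to the disordered fixed point.

```python
import math

def decimate(K):
    # Exact decimation RG for the zero-field 1D Ising chain: tracing out
    # alternate spins gives the renormalized coupling K' = (1/2) ln cosh(2K).
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0          # strong initial coupling (in units of 1/kT)
flow = [K]
for _ in range(10):
    K = decimate(K)
    flow.append(K)
```

The coupling flows monotonically to K = 0, reflecting the absence of a finite-temperature transition in one dimension; the LPRG generalizes this style of transformation to weakly coupled chains.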
On the partitioning method and the perturbation quantum theory - discrete spectra
International Nuclear Information System (INIS)
Logrado, P.G.
1982-05-01
Lower and upper bounds to eigenvalues of the Schroedinger equation HΨ = EΨ (H = H 0 + V), and the convergence condition in Schonberg's perturbation theory, are presented. These results are obtained using the partitioning technique. A perturbation treatment is presented, for the first time, in which the reference function in the partitioning technique is chosen to be a true eigenfunction Ψ. The convergence condition and upper and lower bounds for the true eigenvalues E are derived in this formulation. The concepts of the reaction and wave operators are also discussed. (author)
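The partitioning technique can be sketched numerically. Splitting a Hermitian H into a model block and its complement, an eigenvalue satisfies the implicit relation E = Haa + Hab (E*I - Hbb)^(-1) Hba, which can be iterated to self-consistency. The 4x4 matrix below is invented for illustration and is not from the paper.

```python
import numpy as np

# Hypothetical 4x4 Hermitian H split into a 1x1 "model" block and its 3x3 complement.
H = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 2.0, 0.1, 0.0],
              [0.2, 0.1, 3.0, 0.2],
              [0.1, 0.0, 0.2, 4.0]])
Haa = H[0, 0]
Hab = H[0:1, 1:]
Hba = H[1:, 0:1]
Hbb = H[1:, 1:]

# Partitioning eigenvalue condition E = Haa + Hab (E*I - Hbb)^(-1) Hba,
# iterated to a fixed point starting from the unperturbed value E = Haa.
E = Haa
for _ in range(100):
    E = Haa + (Hab @ np.linalg.solve(E * np.eye(3) - Hbb, Hba))[0, 0]

lowest = np.linalg.eigvalsh(H)[0]   # direct diagonalization for comparison
```

The fixed point reproduces the lowest eigenvalue of the full matrix; the bounds discussed in the abstract come from controlling how such iterates bracket the true E.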
Energy Technology Data Exchange (ETDEWEB)
Wampler, William R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Myers, Samuel M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Modine, Normand A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, whose computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
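A minimal sketch of a scattering-style calculation, assuming hbar^2/2m = 1 units and a single rectangular barrier; the paper's arbitrary profiles and device physics are far richer, but a piecewise-constant transfer matrix shows the mechanics.

```python
import numpy as np

def propagator(k, L):
    # Maps (psi, psi') across a constant-potential region of width L.
    # Works for complex k (evanescent regions) since numpy's cos/sin
    # accept complex arguments; hbar^2/2m = 1 units throughout.
    k = complex(k)
    if abs(k) < 1e-12:
        return np.array([[1.0, L], [0.0, 1.0]], dtype=complex)
    return np.array([[np.cos(k * L), np.sin(k * L) / k],
                     [-k * np.sin(k * L), np.cos(k * L)]], dtype=complex)

def transmission(E, segments):
    # segments: list of (V, width) pairs between outer regions at V = 0.
    k0 = np.sqrt(complex(E))
    M = np.eye(2, dtype=complex)
    for V, L in segments:
        k = np.sqrt(complex(E - V))
        M = propagator(k, L) @ M
    A, B = M[0]
    C, D = M[1]
    # Match plane waves: incident + reflected on the left, transmitted on the right.
    u = 1j * k0 * A - C
    v = 1j * k0 * D + k0**2 * B
    r = (v - u) / (v + u)
    t = A * (1 + r) + 1j * k0 * B * (1 - r)
    return abs(t) ** 2

# Transmission through a single barrier of height 2 and width 1 at E = 1.
T = transmission(1.0, [(2.0, 1.0)])
```

For this single-barrier case the result can be checked against the closed-form sinh expression; for arbitrary profiles one simply chains more segments.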
Dinesh Kumar, S.; Nageshwar Rao, R.; Pramod Chakravarthy, P.
2017-11-01
In this paper, we consider a boundary value problem for a singularly perturbed delay differential equation of reaction-diffusion type. We construct an exponentially fitted numerical method using Numerov finite difference scheme, which resolves not only the boundary layers but also the interior layers arising from the delay term. An extensive amount of computational work has been carried out to demonstrate the applicability of the proposed method.
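A simpler cousin of the scheme described above can be sketched for the model reaction-diffusion problem -eps*u'' + u = 1, u(0) = u(1) = 0, whose solution has boundary layers of width O(sqrt(eps)) at both ends. This uses a plain central-difference discretization, not the paper's Numerov-based fitted scheme, and omits the delay term.

```python
import numpy as np

eps, N = 1e-3, 400
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

# Tridiagonal system for the interior nodes of -eps*u'' + u = 1.
main = 2.0 * eps / h**2 + 1.0
off = -eps / h**2
A = (np.diag(np.full(N - 1, main))
     + np.diag(np.full(N - 2, off), 1)
     + np.diag(np.full(N - 2, off), -1))
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, np.ones(N - 1))

# Exact solution for this constant-coefficient model problem.
exact = 1.0 - np.cosh((x - 0.5) / np.sqrt(eps)) / np.cosh(0.5 / np.sqrt(eps))
err = np.max(np.abs(u - exact))
```

On a mesh that resolves the layers this already does well; fitted schemes such as the one in the paper are designed to stay accurate even when the mesh does not resolve the layers.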
International Nuclear Information System (INIS)
Shafii, Mohammad Ali; Meidianti, Rahma; Wildian,; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto
2014-01-01
Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation using the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with a spatial mesh based on a non-flat flux approach: the neutron flux at any point in the nuclear fuel cell is considered to differ from point to point, following the distribution pattern of a quadratic flux. The result, presented here in the form of a quadratic flux, gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational calculation
Energy Technology Data Exchange (ETDEWEB)
Shafii, Mohammad Ali, E-mail: mashafii@fmipa.unand.ac.id; Meidianti, Rahma, E-mail: mashafii@fmipa.unand.ac.id; Wildian,, E-mail: mashafii@fmipa.unand.ac.id; Fitriyani, Dian, E-mail: mashafii@fmipa.unand.ac.id [Department of Physics, Andalas University Padang West Sumatera Indonesia (Indonesia); Tongkukut, Seni H. J. [Department of Physics, Sam Ratulangi University Manado North Sulawesi Indonesia (Indonesia); Arkundato, Artoto [Department of Physics, Jember University Jember East Java Indonesia (Indonesia)
2014-09-30
Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation using the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with a spatial mesh based on a non-flat flux approach: the neutron flux at any point in the nuclear fuel cell is considered to differ from point to point, following the distribution pattern of a quadratic flux. The result, presented here in the form of a quadratic flux, gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational calculation.
Energy Technology Data Exchange (ETDEWEB)
Wu Hongchun [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)]. E-mail: hongchun@mail.xjtu.edu.cn; Liu Pingping [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Zhou Yongqiang [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Cao Liangzhi [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)
2007-01-15
In advanced reactors, fuel assemblies or cores with unstructured geometry are frequently used, and for calculating such fuel assemblies the transmission probability method (TPM) has been widely used. However, rectangular or hexagonal meshes are mainly used in TPM codes for the normal core structure, while triangular meshes are most useful for expressing complicated unstructured geometry. Even though the finite element method and the Monte Carlo method are very good at solving unstructured geometry problems, they are very time consuming. We therefore developed a TPM code based on triangular meshes. The code was applied to a hybrid fuel geometry and compared with the results of the MCNP code and other codes; the results were consistent with each other. The TPM with triangular meshes is thus expected to be applicable to two-dimensional arbitrary fuel assemblies.
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
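Wald's SPRT, the decision rule named above, reduces to accumulating a log-likelihood ratio until it crosses one of two thresholds set by the target error rates. A minimal Bernoulli version, with illustrative hypotheses and data rather than EEG features:

```python
import math

def sprt_bernoulli(samples, p0, p1, alpha=0.05, beta=0.05):
    # Wald's sequential probability ratio test for H0: p = p0 vs H1: p = p1.
    # Accumulate the log-likelihood ratio until it crosses a threshold.
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# A run of successful detections: the test stops as soon as the evidence suffices.
data = [1] * 30
decision, n_used = sprt_bernoulli(data, p0=0.5, p1=0.8)
```

The explicit thresholds are what give SPRT the stopping-time/error trade-off the abstract highlights: tightening alpha and beta raises the thresholds and delays the decision.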
On the Perturb-and-Observe and Incremental Conductance MPPT methods for PV systems
DEFF Research Database (Denmark)
Sera, Dezso; Mathe, Laszlo; Kerekes, Tamas
2013-01-01
This paper presents a detailed analysis of the two most well-known hill-climbing MPPT algorithms, the Perturb-and-Observe (P&O) and Incremental Conductance (INC). The purpose of the analysis is to clarify some common misconceptions in the literature regarding these two trackers, therefore helping...
A comparison of Probability Of Detection (POD) data determined using different statistical methods
Fahr, A.; Forsyth, D.; Bullock, M.
1993-12-01
Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using the results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service-induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size, as well as the 90/95 percent crack length, vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that does not come from the inspection data. The maximum likelihood estimation (MLE) method does not require such information, and its POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented in a common spreadsheet program.
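The MLE approach favored above can be sketched for hit/miss data with a log-logistic POD model fitted by Newton-Raphson. The crack lengths and true parameters below are synthetic, and the sketch omits the confidence-bound calculation that the 90/95 quantity requires.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic hit/miss inspection data from a log-logistic POD curve:
# POD(a) = 1 / (1 + exp(-(b0 + b1*ln a))).  True parameters are made up.
b_true = np.array([2.0, 3.0])
a = rng.uniform(0.1, 2.0, 2000)              # crack lengths
X = np.column_stack([np.ones_like(a), np.log(a)])
p = 1.0 / (1.0 + np.exp(-X @ b_true))
y = (rng.random(2000) < p).astype(float)     # 1 = hit, 0 = miss

# Maximum likelihood fit by Newton-Raphson (logistic regression on ln a).
b = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ b))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1 - mu))[:, None])
    b = b + np.linalg.solve(hess, grad)

# Crack length detected with 90% probability (the "a90" of POD practice).
a90 = np.exp((np.log(9.0) - b[0]) / b[1])
```

With hit/miss data the likelihood uses only the binary outcomes, which is why no extraneous information (as in the range interval method) is needed.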
Collision probability method for discrete presentation of space in cylindrical cell
International Nuclear Information System (INIS)
Bosevski, T.
1969-08-01
A suitable numerical method for integration of the one-group integral transport equation is obtained by series expansion of the flux and neutron source in radius squared when calculating the parameters of a cylindrically symmetric reactor cell. Separation of variables in the (x,y) plane enables analytical integration in one direction and an efficient Gauss quadrature formula in the second direction. A white boundary condition is used for determining the neutron balance. A suitable choice of spatial point distribution in the fuel and moderator condenses the procedure for determining the transport matrix and accelerates convergence when calculating absorption in the reactor cell. In comparison to other collision probability methods, the proposed procedure is a simple mathematical model which demands smaller computer capacity and shorter computing time
Energy Technology Data Exchange (ETDEWEB)
Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error are used, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities and can be applied to any kind of operator action, including severe accident management strategies.
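The core of such a time-reliability calculation is the probability that the time required to complete the action exceeds the time available. A sketch with invented placeholder distributions (the study used MAAP-derived distributions and LHS sampling, not these):

```python
import numpy as np

rng = np.random.default_rng(3)

# Human error probability (HEP) as P(required time > available time).
# Both distributions below are invented placeholders, not MAAP output.
n = 200_000
required = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=n)   # minutes
available = rng.normal(loc=45.0, scale=5.0, size=n)              # minutes

hep = np.mean(required > available)
```

Sampling both distributions jointly is what makes the method "dynamic": the HEP responds directly to scenario timing rather than to a fixed lookup table.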
Energy Technology Data Exchange (ETDEWEB)
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
2017-02-01
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
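The flavor of a perturbation-of-operator expansion like HOPS can be conveyed by a toy linear-algebra analogue: solving (A + eps*B) x = b by a recursive series in which every correction reuses a solve against the unperturbed operator A. This is only an analogy for the recursive structure, not the HOPS algorithm itself; all matrices below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Expand x = sum_n eps^n x_n for (A + eps*B) x = b, with
#   A x_0 = b  and  A x_n = -B x_{n-1}.
n, eps = 6, 0.1
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.zeros(n)
term = np.linalg.solve(A, b)                 # eps^0 * x_0
for _ in range(30):
    x = x + term
    term = eps * np.linalg.solve(A, -B @ term)   # next order, reusing A-solves

direct = np.linalg.solve(A + eps * B, b)
err = np.linalg.norm(x - direct)
```

Each order costs only a solve against the fixed operator A, which is why such recursions are cheap; as with HOPS, convergence depends on the perturbation being small enough.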
Instantons and large N an introduction to non-perturbative methods in quantum field theory
Marino, Marcos
2015-01-01
This highly pedagogical textbook for graduate students in particle, theoretical and mathematical physics explores advanced topics of quantum field theory. Clearly divided into two parts, the first focuses on instantons, with a detailed exposition of instantons in quantum mechanics, supersymmetric quantum mechanics, the large-order behavior of perturbation theory, and Yang-Mills theories, before moving on to examine the large N expansion in quantum field theory. The organised presentation style, in addition to detailed mathematical derivations, worked examples and applications throughout, enables students to gain practical experience with the tools necessary to start research. The author includes recent developments on the large-order behaviour of perturbation theory and on large N instantons, and updates existing treatments of classic topics, to ensure that this is a practical and contemporary guide for students developing their understanding of the intricacies of quantum field theory.
Forner-Cordero, Arturo; Ackermann, Marko; de Lima Freitas, Mateus
2011-01-01
Perturbations during human gait, such as a trip or a slip, can result in a fall, especially among frail populations such as the elderly. In order to recover from a trip or a stumble during gait, humans perform different types of recovery strategies. It is very useful to uncover the mechanisms of recovery in order to improve training methods for populations at risk of falling. Moreover, human recovery strategies could be applied to implement controllers for bipedal walking robots, as an application of biomimetic design. A biomechanical model of the response to a trip during gait might uncover the control mechanisms underlying the different recovery strategies and the adaptation of the responses found during the execution of successive perturbation trials. This paper introduces a model of stumble in the multibody system framework. The model is used to assess different feedforward strategies for recovering from a trip. First, normal gait patterns for the musculoskeletal system model are obtained by solving an optimal control problem. Second, the reference gait is perturbed by the application of forces on the swinging foot in different ways: as an instantaneous inelastic collision of the foot with an obstacle, as an impulsive horizontal force, or using a force curve measured experimentally during gait perturbation experiments. The influence of the type of perturbation, the timing of the collision with respect to the gait cycle, and the coefficient of restitution was investigated previously. Finally, in order to test the effects of different muscle excitation levels on the initial phases of the recovery response, several muscle excitations were added to selected muscles of the legs, thus providing a simulation of the recovery reactions. These results pave the way for future analysis and modeling of the control mechanisms of gait.
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set-up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participators were compared to non-participators following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
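A minimal IPW sketch with one binary confounder and known propensities; the study itself estimated the weights from data and additionally corrected for loss to follow-up, neither of which is reproduced here. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Treatment A is more likely when Z = 1, and Z also raises the outcome Y,
# confounding the naive comparison.  The true causal effect of A on Y is +1.
n = 100_000
Z = rng.random(n) < 0.5
pA = np.where(Z, 0.8, 0.2)            # known propensity P(A = 1 | Z)
A = rng.random(n) < pA
Y = 1.0 * A + 2.0 * Z + rng.standard_normal(n)

naive = Y[A].mean() - Y[~A].mean()    # biased upward by confounding via Z

# Weight each subject by the inverse probability of the treatment received,
# then compare weighted means: this recovers the causal effect.
w = np.where(A, 1.0 / pA, 1.0 / (1.0 - pA))
ipw = (np.sum(w * A * Y) / np.sum(w * A)
       - np.sum(w * ~A * Y) / np.sum(w * ~A))
```

The weights create a pseudo-population in which treatment is independent of Z, which is exactly the "correct for differences in covariate patterns" step described in the abstract.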
Tracer diffusion in an ordered alloy: application of the path probability and Monte Carlo methods
International Nuclear Information System (INIS)
Sato, Hiroshi; Akbar, S.A.; Murch, G.E.
1984-01-01
The tracer diffusion technique has been extensively utilized to investigate diffusion phenomena and has contributed a great deal to the understanding of these phenomena. However, except for self-diffusion and impurity diffusion, the meaning of tracer diffusion is not yet satisfactorily understood. Here we try to extend the understanding to concentrated alloys. Our major interest is directed towards understanding the physical factors which control diffusion, through the comparison of results obtained by the Path Probability Method (PPM) and those obtained by the Monte Carlo simulation method (MCSM). Both the PPM and the MCSM belong to the same category of statistical mechanical approaches applicable to random processes. The advantage of the Path Probability Method in dealing with phenomena which occur in crystalline systems is well established. However, the approximations which are inevitably introduced to make the analytical treatment tractable, although their meaning may be well established in equilibrium statistical mechanics, sometimes introduce unwarranted consequences whose origin is often hard to trace. On the other hand, the MCSM, which can be carried out in a fashion parallel to the PPM, provides, with care, numerically exact results. Thus a side-by-side comparison can give insight into the effect of the approximations in the PPM. It was found that in the pair approximation of the CVM, the distribution in the completely random state is regarded as homogeneous (without fluctuations), and hence the fluctuation in distribution is not well represented in the PPM. These examples show clearly how the comparison of analytical results with carefully performed MCSM calculations guides the progress of theoretical treatments and gives insight into the mechanism of diffusion
Unification of field theory and maximum entropy methods for learning probability densities
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
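The discrete, finite-grid analogue of a maximum entropy estimate under a single moment constraint can be computed directly: the solution is an exponential family whose Lagrange multiplier is found by a one-dimensional search. This sketch is far simpler than the field-theoretic machinery discussed above, but it exhibits the maxent endpoint that the infinite-smoothness limit recovers.

```python
import numpy as np

# Maximum entropy distribution on a grid subject to E[x] = m.
# The solution is p(x) proportional to exp(lam * x); find lam by bisection.
x = np.linspace(0.0, 1.0, 201)
m = 0.3   # target mean, anything strictly inside (0, 1)

def mean_for(lam):
    w = np.exp(lam * (x - x.mean()))   # shift the exponent for stability
    p = w / w.sum()
    return p @ x

# mean_for is monotonically increasing in lam, so bisection converges.
lo, hi = -200.0, 200.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < m:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
w = np.exp(lam * (x - x.mean()))
p = w / w.sum()
```

Adding more moment constraints just adds more multipliers; the Bayesian field theory of the abstract replaces the hard constraints with a smoothness penalty whose strength interpolates away from this limit.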
Jung, Minsoo
2015-01-01
When there is no sampling frame within a certain group, or the group is concerned that making its membership public would bring social stigma, we say the population is hidden. Such populations are difficult to approach with standard survey methodology because the response rate is low and members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems of previous methods such as snowball sampling is respondent-driven sampling (RDS), developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the bias of RDS chain-referral sampling tends to diminish as the sample gets bigger, and the process stabilizes as the waves progress. This shows that the final sample can be completely independent of the initial seeds if a sufficient sample size is secured, even if the initial seeds were selected by convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it deserves to be utilized in a variety of domestic settings.
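The Markov-chain rationale behind RDS, namely that the referral walk's long-run behavior forgets the initial seeds, can be sketched on a toy network: a random walk on an undirected graph has stationary probabilities proportional to node degree, regardless of where it starts.

```python
import numpy as np

# Toy 5-person referral network (symmetric adjacency matrix, made up).
adj = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)
deg = adj.sum(axis=1)
P = adj / deg[:, None]               # transition matrix of the referral walk

# Two different "seeds" (point distributions) iterated through many waves.
d1 = np.zeros(5); d1[0] = 1.0
d2 = np.zeros(5); d2[4] = 1.0
for _ in range(200):
    d1 = d1 @ P
    d2 = d2 @ P

# Stationary distribution of a walk on an undirected graph: degree-proportional.
stationary = deg / deg.sum()
```

Both seed distributions converge to the same degree-proportional limit, which is why RDS can weight respondents by reported network size to undo the walk's bias.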
Zeng, Sen; Huang, Shuangxi; Liu, Yang
Cooperative business process (CBP)-based service-oriented enterprise networks (SOEN) are emerging with the significant advances of enterprise integration and service-oriented architecture. Performance prediction and optimization for CBP-based SOEN is very complex. To meet these challenges, one key point is to reduce an abstract service's waiting number of physical services. This paper introduces a probability-based determination method (PBDM) for an abstract service's waiting number, M_i, and time span, τ_i, for its physical services. The determination of M_i and τ_i follows the physical services' arrival rule and the distribution functions of their overall performance. In PBDM, the arrival probability of the physical services with the best overall performance value is set to a pre-defined reliability. PBDM makes thorough use of the information in the physical services' arrival rule and performance distribution functions, which improves the computational efficiency of scheme design and performance optimization for collaborative business processes in service-oriented computing environments.
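One simplified reading of the reliability condition above: if each arriving physical service is independently "best" with probability p, the smallest waiting number M for which P(at least one best service among M arrivals) >= R has a closed form. This illustrates the idea only and is not the paper's exact rule.

```python
import math

def waiting_number(p, reliability):
    # Smallest M with 1 - (1 - p)^M >= reliability, assuming independent
    # arrivals each "best" with probability p (a simplifying assumption).
    return math.ceil(math.log(1.0 - reliability) / math.log(1.0 - p))

M = waiting_number(p=0.2, reliability=0.95)
```

With p = 0.2 and a 95% reliability target this gives M = 14: waiting for fewer arrivals would leave the chance of seeing a best-performing service below the required level.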
Fundamental parameters of QCD from non-perturbative methods for two and four flavors
International Nuclear Information System (INIS)
Marinkovic, Marina
2013-01-01
The non-perturbative formulation of Quantum Chromodynamics (QCD) on a four-dimensional Euclidean space-time lattice, together with finite-size techniques, enables us to renormalize the QCD parameters non-perturbatively. In order to obtain precise predictions from lattice QCD, one needs to include dynamical fermions in lattice QCD simulations. We consider QCD with two and four mass-degenerate flavors of O(a) improved Wilson quarks. In this thesis, we improve the existing determinations of the fundamental parameters of two- and four-flavor QCD. In the four-flavor theory, we compute a precise value of the Λ parameter in units of the scale Lmax defined in the hadronic regime. We also give a precise determination of the Schroedinger functional running coupling in the four-flavour theory and compare it to perturbative results. The Monte Carlo simulations of lattice QCD within the Schroedinger functional framework were performed with a platform-independent program package, Schroedinger Funktional Mass Preconditioned Hybrid Monte Carlo (SF-MP-HMC), developed as part of this project. Finally, we compute the strange quark mass and the Λ parameter in the two-flavour theory, performing a well-controlled continuum limit and chiral extrapolation. To achieve this, we developed a universal program package for simulating two flavours of Wilson fermions, Mass Preconditioned Hybrid Monte Carlo (MP-HMC), which we used to run large-scale simulations at small lattice spacings and at pion masses close to the physical value.
A fast and accurate method for perturbative resummation of transverse momentum-dependent observables
Kang, Daekyoung; Lee, Christopher; Vaidya, Varun
2018-04-01
We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform from the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.
Performances improvement of maximum power point tracking perturb and observe method
Energy Technology Data Exchange (ETDEWEB)
Egiziano, L.; Femia, N.; Granozio, D.; Petrone, G.; Spagnuolo, G. [Salerno Univ., Salerno (Italy); Vitelli, M. [Seconda Univ. di Napoli, Napoli (Italy)
2006-07-01
Perturb and observe best operating conditions were investigated in order to identify the edge efficiency performance capabilities of a maximum power point (MPP) tracking technique for photovoltaic (PV) applications. The strategy was developed to ensure a three-point behavior across the MPP under a fixed irradiation level, with a central point blocked on the MPP and two operating points at voltage values that guarantee the same power levels. The system was also devised to quickly detect MPP movement in the presence of varying atmospheric conditions by increasing the perturbation so that the MPP is recovered within a few sampling periods. A perturbation equation was selected in which the amplitude is a function of the actual power drawn from the PV field, together with the adoption of a parabolic interpolation of the sequence of the final three acquired voltage-power couples corresponding to as many operating points. The technique was developed to ensure that the power difference between two consecutive operating points is higher than the power quantization error. Simulations demonstrated that the proposed technique arranges the operating points symmetrically around the MPP. The average power of the three-point set was achieved by means of the parabolic prediction. Experiments conducted to validate the simulations showed reduced power oscillation below the MPP and a real power gain. 2 refs., 8 figs.
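The basic P&O loop whose operating conditions are tuned in this work can be sketched against a made-up single-peak P-V curve; a real tracker would sample voltage and current from the converter, and the paper's refinements (adaptive step, parabolic interpolation) are omitted.

```python
# Hypothetical single-peak P-V curve with its maximum at v = 17 V.
def pv_power(v):
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

v_prev, dv = 10.0, 0.5                 # start far from the MPP; fixed step
p_prev = pv_power(v_prev)
v = v_prev + dv
for _ in range(100):
    p = pv_power(v)
    # Classic P&O rule: keep perturbing in the same direction while power
    # rises (dP * dV > 0), reverse otherwise.
    step = dv if (p - p_prev) * (v - v_prev) > 0 else -dv
    v_prev, p_prev = v, p
    v += step
```

At steady state the operating point cycles over three voltages straddling the MPP, which is the three-point behavior the abstract describes; shrinking dv narrows the oscillation at the cost of slower tracking.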
Satake, Eiki; Vashlishan Murray, Amy
2015-01-01
This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…
Festa, Roberto
1992-01-01
According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes' theorem. One of the most important
International Nuclear Information System (INIS)
Gocmen, C.
2007-01-01
When the total solar eclipse came into question, people connected the eclipse with the earthquake of 17.08.1999. We reasoned that if any physical parameter changes on the Earth during a total solar eclipse, we could measure that change, so we carried out the project 'To Measure Probable Physical Changes On The Earth During Total Solar Eclipse Using Geophysical Methods'. We made gravity, magnetic and self-potential measurements at Konya and Ankara during the total solar eclipse (29 March 2006), as well as on the day before and the day after the eclipse. The measurements ran continuously for three days, twenty-four hours a day, at Konya and during daytime in Ankara. Bogazici University Kandilli Observatory provided magnetic values recorded in Istanbul, which we compared with our own magnetic values. The Turkish State Meteorological Service sent us temperature and air pressure observations for the three days in Konya and Ankara. We interpreted all of these data together.
Use of probabilistic methods for estimating failure probabilities and directing ISI-efforts
Energy Technology Data Exchange (ETDEWEB)
Nilsson, F; Brickstad, B [University of Uppsala, (Sweden)
1988-12-31
Some general aspects of the role of Non-Destructive Testing (NDT) efforts in the resulting probability of core damage are discussed. A simple model for the estimation of the pipe break probability due to IGSCC is presented, based partly on analytical procedures and partly on service experience from the Swedish BWR program. Estimates of the break probabilities indicate that further studies are urgently needed. The uncertainties about the initial crack configuration are found to be large contributors to the total uncertainty. Some effects of in-service inspection are studied, and it is found that the detection probabilities influence the failure probabilities. (authors).
Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.
Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D
2015-01-01
Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was paired with a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. On average, participants showed much greater consistency in the notes sung following AC vs. NC melodies. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in
Directory of Open Access Journals (Sweden)
Pranab Kanti Roy
2015-09-01
This work studied the effects of environmental temperature and the surface emissivity parameter on the temperature distribution, efficiency and heat transfer rate of a conductive-radiative fin. The Homotopy Perturbation Method (HPM) being one of the semi-numerical methods for highly nonlinear and inhomogeneous equations, the local temperature distributions, efficiencies and heat transfer rates are obtained using HPM, in which the Newton-Raphson method is used for the insulated boundary condition. The results of the present work are found to be in good agreement with results available in the literature.
Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.
2017-12-01
We present a new method for solving the multiple-revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton shooting method in that integration of the state transition matrix (36 additional differential equations) is not required; instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable-fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low-fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster than the classical shooting method with a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique; to ensure that all possible solutions are considered, we make use of a reliable preexisting Keplerian Lambert solver to warm-start our perturbed algorithm.
Method for comparison of tokamak divertor strike point data with magnetic perturbation models
Czech Academy of Sciences Publication Activity Database
Cahyna, Pavel; Peterka, Matěj; Nardon, E.; Frerichs, H.; Pánek, Radomír
2014-01-01
Roč. 54, č. 6 (2014), 064002-064002 ISSN 0029-5515. [International Workshop on Stochasticity in Fusion Plasmas /6./. Jülich, 18.03.2013-20.03.2013] R&D Projects: GA ČR GAP205/11/2341 Institutional support: RVO:61389021 Keywords : divertor * resonant magnetic perturbation Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 3.062, year: 2014 http://iopscience.iop.org/0029-5515/54/6/064002/pdf/0029-5515_54_6_064002.pdf
Energy Technology Data Exchange (ETDEWEB)
Casoli, Pierre; Authier, Nicolas [Commissariat a l' Energie Atomique, Centre d' Etudes de Valduc, 21120 Is-Sur-Tille (France)
2008-07-01
Reactivity worth measurements of material samples placed in the central cavities of nuclear reactors make it possible to test cross-section nuclear databases or to extract information about the critical masses of fissile elements. Such experiments have already been completed on the Caliban and Silene experimental reactors operated by the Criticality and Neutronics Research Laboratory of Valduc (CEA, France) using the perturbation measurement technique. Calculations have been performed to prepare future experiments on new materials, such as light elements, structure materials, fission products or actinides. (authors)
Stability under persistent perturbation by white noise
International Nuclear Information System (INIS)
Kalyakin, L
2014-01-01
A deterministic dynamical system with an asymptotically stable equilibrium is considered under persistent perturbation by white noise. It is well known that if the perturbation does not vanish at the equilibrium position then there is no Lyapunov stability: the trajectories of the perturbed system diverge from the equilibrium to arbitrarily large distances, with probability 1, in finite time. A new concept of stability on a large time interval is discussed, where the length of the interval scales as the reciprocal of the perturbation parameter. The measure of stability is the expectation of the squared distance from the trajectory to the equilibrium position. The method of parabolic equations is applied both to estimate this expectation and to prove such stability. The main breakthrough is the barrier function derived for the parabolic equation; the barrier is constructed using the Lyapunov function of the unperturbed system
Refinement of a Method for Identifying Probable Archaeological Sites from Remotely Sensed Data
Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel; Chen, Li
2012-01-01
To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches for generating maps of probable archaeological sites by detecting subtle anomalies in vegetative cover, soil chemistry, and soil moisture through analysis of remotely sensed data from multiple sources. We previously reported some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap and NDVI transforms) using Student's t-test. We report here on new developments in our work, performing an analysis of 8-band multispectral WorldView-2 data. The WorldView-2 analysis begins by computing medians and median absolute deviations for the pixels in various annuli around each site of interest on the 28 band-difference ratios. We then use principal component analysis followed by linear discriminant analysis to train a classifier which assigns a posterior probability that a location is an archaeological site. We tested the procedure using leave-one-out cross validation, with a second leave-one-out step to choose parameters, on a 9,859x23,000 subset of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and trained one classifier for lithic sites (n=33) and one for habitation sites (n=16). We then analyzed convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that the combined scores had a higher area under the ROC curve than either individual method, indicating that including WorldView-2 data in the analysis improved the predictive power of the provided APM.
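The classification stage (principal component analysis followed by a linear discriminant yielding a posterior site probability) can be sketched as below. The synthetic features stand in for the WorldView-2 annulus statistics; the class separation, priors, and the number of retained components are assumptions of this sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the per-location statistics (the real
# features are medians/MADs of WorldView-2 band-difference ratios).
n_site, n_bg = 40, 100
X_site = rng.normal(0.6, 1.0, (n_site, 8))  # "site" class, shifted mean
X_bg = rng.normal(0.0, 1.0, (n_bg, 8))      # "non-site" background class
X = np.vstack([X_site, X_bg])
y = np.array([1] * n_site + [0] * n_bg)

# PCA: center the data, project onto the leading principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T  # keep 3 components (the number kept is an assumption)

# Linear discriminant with a shared covariance estimate
mu1, mu0 = Z[y == 1].mean(axis=0), Z[y == 0].mean(axis=0)
Sinv = np.linalg.inv(np.cov(Z.T))
pi1 = y.mean()  # class prior from the training sample

def posterior_site(z):
    """Posterior probability that location z is an archaeological site."""
    d1 = z @ Sinv @ mu1 - 0.5 * mu1 @ Sinv @ mu1 + np.log(pi1)
    d0 = z @ Sinv @ mu0 - 0.5 * mu0 @ Sinv @ mu0 + np.log(1.0 - pi1)
    return 1.0 / (1.0 + np.exp(d0 - d1))

post = np.array([posterior_site(z) for z in Z])
```

In the study's setting the same pipeline would be trained under leave-one-out cross validation; here a single fit suffices to show that the discriminant scores separate the two classes.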
International Nuclear Information System (INIS)
Coleman, J.H.
1980-10-01
A technique is discussed for computing the probability distribution of the accumulated dose received by an arbitrary receptor resulting from several single releases from an intermittent source. The probability density of the accumulated dose is the convolution of the probability densities of the doses from the individual releases. Emissions are not assumed to be constant over the brief release period. The fast Fourier transform is used in the calculation of the convolution
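The convolution-via-FFT computation can be illustrated on two discretized single-release dose densities; the Gaussian shapes and the dose grid below are assumptions of this sketch.

```python
import numpy as np

# Discretized dose densities for two releases on a common grid
# (illustrative Gaussian shapes; bin width dx).
dx = 0.01
x = np.arange(0.0, 4.0, dx)
pdf1 = np.exp(-((x - 1.0) ** 2) / (2 * 0.10 ** 2))
pdf2 = np.exp(-((x - 0.5) ** 2) / (2 * 0.15 ** 2))
pdf1 /= pdf1.sum() * dx  # normalize to unit area
pdf2 /= pdf2.sum() * dx

# Linear convolution via FFT, zero-padded to avoid circular wrap-around
n = 2 * len(x)
pdf_sum = np.fft.irfft(np.fft.rfft(pdf1, n) * np.fft.rfft(pdf2, n), n) * dx

grid = np.arange(n) * dx                 # accumulated-dose grid
mean_sum = (grid * pdf_sum).sum() * dx   # should equal mean1 + mean2
```

The resulting density integrates to one and its mean is the sum of the single-release means, the defining properties of the convolution of independent dose contributions.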
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC), an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive a general sufficient condition for strong consistency of the MoLC estimates, which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples from the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small-sample performance of MoLC, notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast, yet not universally applicable, alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
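A minimal MoLC fit for the gamma family (one of the families the log-cumulant literature covers) can be sketched as follows: match the first two sample log-cumulants to their analytic expressions and solve for the parameters. The shape/scale parameterization and the sample size are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import polygamma

def molc_gamma(x):
    """Method-of-log-cumulants fit of a gamma(shape L, scale t) model.
    For the gamma family: E[ln X] = psi(L) + ln t, Var[ln X] = psi'(L)."""
    lx = np.log(x)
    k1, k2 = lx.mean(), lx.var()  # first two sample log-cumulants
    # psi'(L) is strictly decreasing in L, so the root is unique
    L = brentq(lambda a: float(polygamma(1, a)) - k2, 1e-3, 1e6)
    t = float(np.exp(k1 - polygamma(0, L)))
    return L, t

rng = np.random.default_rng(1)
sample = rng.gamma(shape=3.0, scale=2.0, size=200_000)
L_hat, t_hat = molc_gamma(sample)
```

Unlike MoM, only a one-dimensional root-find is needed here, which is one reason MoLC is computationally fast for such families.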
Exact asymptotics of probabilities of large deviations for Markov chains: the Laplace method
Energy Technology Data Exchange (ETDEWEB)
Fatalov, Vadim R [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)
2011-08-31
We prove results on exact asymptotics as n → ∞ for the expectations E_a exp{−θ Σ_{k=0}^{n−1} g(X_k)} and probabilities P_a{(1/n) Σ_{k=0}^{n−1} g(X_k)
Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra
2014-06-01
In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements contributing to the stability of the tunnel structure are identified in order to capture various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers can find the overall reliability of a tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to this field, and much effort has been made to use it in tunneling while investigating the reliability of the lining support system for the tunnel structure. Reliability analysis for evaluating tunnel support performance is therefore the main idea of this research. Decomposition approaches are used to produce the system block diagram and to determine the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Assuming a linear correlation between safety factors and reliability parameters, the values of the isolated reliabilities are determined for the different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for the different structural subsystems and the results of numerical analyses are obtained in
New results to BDD truncation method for efficient top event probability calculation
International Nuclear Information System (INIS)
Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang
2012-01-01
A Binary Decision Diagram (BDD) is a graph-based data structure that allows an exact top event probability (TEP) to be calculated. It has been a very difficult task to develop an efficient BDD algorithm that can solve large problems, since memory consumption is very high. Recently, in order to solve large reliability problems within limited computational resources, Jung presented an efficient method to maintain a small BDD size by truncation during the BDD calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. A more efficient truncation algorithm is then proposed, which generates truncated BDDs with smaller size and approximate TEPs with smaller truncation error. Empirical results showed that the new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The ideal features in the sense of this paper would be that, as the truncation limits decrease, the size of the truncated BDD converges to the size of the exact BDD but never exceeds it.
International Nuclear Information System (INIS)
Sanchez, Richard
1977-01-01
A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding and water, or homogenized structural material. The cells are divided into zones which are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is made by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems
Roesler, Elizabeth L.; Grabowski, Timothy B.
2018-01-01
Developing effective monitoring methods for elusive, rare, or patchily distributed species requires extra considerations, such as imperfect detection. Although detection is frequently modeled, the opportunity to assess it empirically is rare, particularly for imperiled species. We used Pecos assiminea (Assiminea pecos), an endangered semiaquatic snail, as a case study to test detection and accuracy issues surrounding quadrat searches. Quadrats (9 × 20 cm; n = 12) were placed in suitable Pecos assiminea habitat and randomly assigned a treatment, defined as the number of empty snail shells (0, 3, 6, or 9). Ten observers rotated through each quadrat, conducting 5-min visual searches for shells. The probability of detecting a shell when present was 67.4 ± 3.0%, but it decreased with increasing litter depth and with fewer shells present. The mean (± SE) observer accuracy was 25.5 ± 4.3%. Accuracy was positively correlated with the number of shells in the quadrat and negatively correlated with the number of times a quadrat was searched. The results indicate that quadrat surveys likely underrepresent true abundance but accurately determine presence or absence. Understanding detection and accuracy for elusive, rare, or imperiled species improves density estimates and aids in monitoring and conservation efforts.
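The headline detection statistic (a proportion with a binomial standard error) can be reproduced on synthetic search outcomes; the number of trials and the true detection rate below are assumptions of this sketch, chosen only to mimic the reported 67.4 ± 3.0%.

```python
import numpy as np

# Synthetic search outcomes: True = shell detected when present,
# False = missed.  Trial count and rate are illustrative assumptions.
rng = np.random.default_rng(2)
detections = (rng.random(240) < 0.674).astype(float)

p_hat = detections.mean()  # estimated detection probability
se = np.sqrt(p_hat * (1.0 - p_hat) / detections.size)  # binomial SE
```

Such an estimate, folded into an occupancy or abundance model, is what corrects quadrat counts for imperfect detection.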
Directory of Open Access Journals (Sweden)
Norhasimah Mahiddin
2014-01-01
The modified decomposition method (MDM) and homotopy perturbation method (HPM) are applied to obtain an approximate solution of a nonlinear model of tumour invasion and metastasis. The study highlights the significant features of the employed methods and their ability to handle nonlinear partial differential equations. The methods do not need linearization or weak nonlinearity assumptions. Although the main difference between MDM and the Adomian decomposition method (ADM) is a slight variation in the definition of the initial condition, the modification eliminates massive computational work. The approximate analytical solution obtained by MDM logically contains the solution obtained by HPM, showing that HPM does not involve the Adomian polynomials when dealing with nonlinear problems.
International Nuclear Information System (INIS)
Bakosi, Jozsef; Ristorcelli, Raymond J.
2010-01-01
Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.
Directory of Open Access Journals (Sweden)
D. Olvera
2015-01-01
We expand the application of the enhanced multistage homotopy perturbation method (EMHPM) to solve delay differential equations (DDEs) with constant and variable coefficients. The EMHPM is based on a sequence of subintervals that provide approximate solutions requiring less CPU time than those computed from the dde23 MATLAB numerical integration algorithm. To address the accuracy of our proposed approach, we examine the solutions of several DDEs having constant and variable coefficients, finding predictions in good agreement with the corresponding numerical integration solutions.
International Nuclear Information System (INIS)
Murray, J.J.
1976-07-01
It may be expected that solenoid magnets will be used in many storage ring experiments. Typically an insert would consist of a main solenoid at the interaction point with a symmetrical pair of compensating solenoids located somewhere between the main solenoid and the ends of the interaction region. The magnetic fields of such an insert may significantly affect storage ring performance. We suggest here a simple, systematic method for evaluation of the effects, which together with adequate design supervision and field measurements will help to prevent any serious operational problems that might result if significant perturbations went unnoticed. 5 refs
Directory of Open Access Journals (Sweden)
Saeed Dinarvand
2012-01-01
The steady three-dimensional flow of condensation or spraying on an inclined spinning disk is studied analytically. The governing nonlinear equations and their associated boundary conditions are transformed into a system of nonlinear ordinary differential equations. The series solution of the problem is obtained by utilizing the homotopy perturbation method (HPM). The velocity and temperature profiles are shown, and the influence of the Prandtl number on the heat transfer and Nusselt number is discussed in detail. The validity of our solutions is verified by numerical results. Unlike free surface flows on an incline, this through flow is highly affected by the spray rate and the rotation of the disk.
Energy Technology Data Exchange (ETDEWEB)
Bobodzhanov, A A; Safonov, V F [National Research University " Moscow Power Engineering Institute" , Moscow (Russian Federation)
2013-07-31
The paper deals with extending the Lomov regularization method to classes of singularly perturbed Fredholm-type integro-differential systems, which have not so far been studied. In these the limiting operator is discretely noninvertible. Such systems are commonly known as problems with unstable spectrum. Separating out the essential singularities in the solutions to these problems presents great difficulties. The principal one is to give an adequate description of the singularities induced by 'instability points' of the spectrum. A methodology for separating singularities by using normal forms is developed. It is applied to the above type of systems and is substantiated in these systems. Bibliography: 10 titles.
International Nuclear Information System (INIS)
Shafii, M. Ali; Su'ud, Zaki; Waris, Abdul; Kurniasih, Neny; Ariani, Menik; Yulianti, Yanti
2010-01-01
Nuclear reactor design and analysis of next-generation reactors require comprehensive computations that are best executed on high-performance computing systems. The flat flux (FF) approach is a common approach to solving the integral transport equation with the collision probability (CP) method. In fact, the neutron flux distribution is not flat, even if the neutron cross section is assumed equal in all regions and the neutron source is uniform throughout the nuclear fuel cell. In the non-flat flux (NFF) approach, the distribution of neutrons in each region differs depending on the chosen interpolation model. In this study, linear interpolation using the Finite Element Method (FEM) has been carried out to treat the neutron distribution. The CP method is well suited to solving the neutron transport equation for cylindrical geometry, because the angular integration can be done analytically. The distribution of neutrons in each region can be described by the NFF approach with FEM, and the calculated results are in good agreement with those from the SRAC code. In this study, the effects of the mesh on k_eff and other parameters are also investigated.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
Tripathi, Rajnee; Mishra, Hradyesh Kumar
2016-01-01
In this communication, we describe the Homotopy Perturbation Method with Laplace Transform (LT-HPM), which is used to solve Lane-Emden type differential equations. Lane-Emden type differential equations are very difficult to solve numerically. Here we apply this method to two linear homogeneous, two linear nonhomogeneous, and four nonlinear homogeneous Lane-Emden type differential equations, with appropriate comparisons against exact solutions. For several of the examples, the power-series results are closer to the exact solutions than those of other existing methods. The Laplace transform is used to accelerate the convergence of the power series, and the results, shown in tables and graphs, are in good agreement with the other existing methods in the literature. The results show that LT-HPM is very effective and easy to implement.
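As a sanity check on such comparisons, the index-1 Lane-Emden equation has the closed-form solution θ(ξ) = sin ξ/ξ. The sketch below (a plain numerical integration, not LT-HPM itself) verifies a solver against that exact solution, which is the kind of benchmark the abstract's comparisons rely on.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lane-Emden equation: theta'' + (2/xi) theta' + theta**n = 0,
# theta(0) = 1, theta'(0) = 0.  For index n = 1 the exact solution
# is theta(xi) = sin(xi)/xi, a standard benchmark.
def lane_emden(xi, y, n=1):
    theta, dtheta = y
    return [dtheta, -theta ** n - 2.0 * dtheta / xi]

xi0 = 1e-6  # start just off xi = 0 to avoid the coordinate singularity
sol = solve_ivp(lane_emden, (xi0, 3.0), [1.0, 0.0],
                t_eval=np.linspace(0.5, 3.0, 6), rtol=1e-10, atol=1e-12)
exact = np.sin(sol.t) / sol.t
max_err = float(np.max(np.abs(sol.y[0] - exact)))
```

The tiny offset xi0 introduces an error far below the integration tolerance, so the numerical and exact solutions agree to many digits.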
Energy Technology Data Exchange (ETDEWEB)
Mai, Sebastian; Marquetand, Philipp; González, Leticia [Institute of Theoretical Chemistry, University of Vienna, Währinger Str. 17, 1090 Vienna (Austria); Müller, Thomas, E-mail: th.mueller@fz-juelich.de [Institute for Advanced Simulation, Jülich Supercomputing Centre, Forschungszentrum Jülich, 52425 Jülich (Germany); Plasser, Felix [Interdisciplinary Center for Scientific Computing, University of Heidelberg, Im Neuenheimer Feld 368, 69120 Heidelberg (Germany); Lischka, Hans [Institute of Theoretical Chemistry, University of Vienna, Währinger Str. 17, 1090 Vienna (Austria); Department of Chemistry and Biochemistry, Texas Tech University, Lubbock, Texas 79409-1061 (United States)
2014-08-21
An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximatively size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.
Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
2018-07-01
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The two models had no significant differences in predictive performance, both for the training and test data sets (P value > .05), and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
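A minimal sketch of the selection loop described above (bootstrap-style data, then choosing the model that minimizes the Bayesian information criterion) might look as follows. The greedy forward search stands in for the paper's genetic algorithm, a plain binary logistic fit stands in for ordinal regression, and the synthetic data are hypothetical:

```python
import numpy as np

def fit_logistic(X, y, iters=2000, lr=0.5):
    # Plain binary logistic regression via gradient ascent
    # (a stand-in for the ordinal logistic regression used in the paper).
    Xb = np.hstack([np.ones((len(y), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w += lr * Xb.T @ (y - p) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))), 1e-9, 1 - 1e-9)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))  # log-likelihood

def bic(loglik, n_params, n_obs):
    # Bayesian information criterion: smaller is better.
    return n_params * np.log(n_obs) - 2.0 * loglik

def greedy_bic_selection(X, y):
    # Forward selection minimizing BIC (a simple stand-in for the GA search).
    n, d = X.shape
    chosen = []
    best = bic(fit_logistic(X[:, []], y), 1, n)  # intercept-only model
    while True:
        candidates = [j for j in range(d) if j not in chosen]
        if not candidates:
            break
        scores = [bic(fit_logistic(X[:, chosen + [j]], y), len(chosen) + 2, n)
                  for j in candidates]
        if min(scores) >= best:
            break
        best = min(scores)
        chosen.append(candidates[int(np.argmin(scores))])
    return sorted(chosen)

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 6))
logit = 2.0 * X[:, 0] - 2.0 * X[:, 2]           # only features 0 and 2 matter
y = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(float)
selected = greedy_bic_selection(X, y)
```

In the paper's scheme this search would run on each of 100 bootstrap samples and the most frequently selected model would become the AGM; the sketch shows a single pass.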
Assessment of climate change using methods of mathematic statistics and theory of probability
International Nuclear Information System (INIS)
Trajanoska, Lidija; Kaevski, Ivancho
2004-01-01
In simple terms, 'climate' is the average of 'weather'. The Earth's weather system is a complex machine composed of coupled sub-systems (ocean, air, land, ice and the biosphere) between which energy is exchanged. The understanding and study of climate change does not rely only on the physics of climate change but is linked to the following question: how can we detect change in a system that is changing all the time of its own volition? What is even the meaning of 'change' in such a situation? If the concept of 'change' is transformed into the concept of 'significant and long-term change', this re-phrasing allows for a definition in mathematical terms: significant change in a system becomes a measure of how large an observed change is relative to the variability one would see under 'normal' conditions. An example is the analysis of yearly air temperature and precipitation, as in this paper. A large amount of data is selected to represent the 'before' case and another set of data is selected to represent the 'after' case, and the averages of the two cases are compared. These comparisons take the form of hypothesis tests, in which one tests whether the hypothesis that there has been no change can be rejected. Both parametric and nonparametric methods of mathematical statistics are used. The most indicative variables of global change are the average, the standard deviation and the probability distribution function of the examined time series. The examined meteorological series are treated as random processes so that mathematical statistics can be applied. (Author)
International Nuclear Information System (INIS)
Collins, J.C.
1985-01-01
Progress in quantum chromodynamics in the past year is reviewed in these specific areas: proof of factorization for hadron-hadron collisions, fast calculation of higher order graphs, perturbative Monte Carlo calculations for hadron-hadron scattering, applicability of perturbative methods to heavy quark production, and understanding of the small-x problem. 22 refs
International Nuclear Information System (INIS)
Bartlett, R.; Kirtman, B.; Davidson, E.R.
1978-01-01
After noting some advantages of using perturbation theory some of the various types are related on a chart and described, including many-body nonlinear summations, quartic force-field fit for geometry, fourth-order correlation approximations, and a survey of some recent work. Alternative initial approximations in perturbation theory are also discussed. 25 references
Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei
2018-06-01
In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model demonstrates the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are given in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are applied to solve this equation with success. The convergence analysis is also presented for both methods. Numerical results are given and compared with the exact solutions. The comparison reveals that both methods are competitive, powerful, reliable, simple to use and ready to apply to a wide range of fractional partial differential equations.
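For reference, the Caputo fractional derivative invoked in such formulations is commonly written as (with n - 1 < α < n, n a positive integer):

```latex
{}^{C}\!D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}
\int_0^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,\mathrm{d}\tau,
\qquad n-1 < \alpha < n ,
```

which reduces to the ordinary n-th derivative as α → n and, unlike the Riemann-Liouville form, gives zero for constant functions.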
DEFF Research Database (Denmark)
Hu, Y.; Li, H.; Liao, X
2016-01-01
This study determines the early deterioration condition of critical components for a wind turbine generator system (WTGS). Due to the uncertain nature of the fluctuation and intermittence of wind, early deterioration condition evaluation poses a challenge to the traditional vibration ... method of early deterioration condition for critical components based only on temperature characteristic parameters. First, the dynamic threshold of the deterioration degree function was proposed by analyzing the operational data between temperature and rotor speed. Second, a probability evaluation method of early deterioration condition was presented. Finally, two cases showed the validity of the proposed probability evaluation method in detecting early deterioration condition and in tracking further deterioration of the critical components.
Directory of Open Access Journals (Sweden)
P. J. Irvine
2013-09-01
Full Text Available We present a simple method to generate a perturbed parameter ensemble (PPE) of a fully-coupled atmosphere-ocean general circulation model (AOGCM), HadCM3, without requiring flux adjustment. The aim was to produce an ensemble that samples parametric uncertainty in some key variables and gives a plausible representation of the climate. Six atmospheric parameters, a sea-ice parameter and an ocean parameter were jointly perturbed within a reasonable range to generate an initial group of 200 members. To screen out implausible ensemble members, 20 yr pre-industrial control simulations were run and members whose temperature responses to the parameter perturbations were projected to be outside the range of 13.6 ± 2 °C, i.e. near to the observed pre-industrial global mean, were discarded. Twenty-one members, including the standard unperturbed model, were accepted, covering almost the entire span of the eight parameters, challenging the argument that without flux adjustment parameter ranges would be unduly restricted. This ensemble was used in two experiments: an 800 yr pre-industrial control and a 150 yr quadrupled-CO2 simulation. The behaviour of the PPE for the pre-industrial control compared well to ERA-40 reanalysis data and the CMIP3 ensemble for a number of surface and atmospheric column variables, with the exception of a few members in the Tropics. However, we find that members of the PPE with low values of the entrainment rate coefficient show very large increases in upper tropospheric and stratospheric water vapour concentrations in response to elevated CO2, and one member showed an implausible nonlinear climate response; as such, these members will be excluded from future experiments with this ensemble. The outcome of this study is a PPE of a fully-coupled AOGCM which samples parametric uncertainty and a simple methodology which would be applicable to other GCMs.
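The screening step described above amounts to a simple plausibility filter on the control-run temperature. A toy sketch, with a hypothetical ensemble and a made-up parameter name, might be:

```python
import random

def screen_ensemble(members, target=13.6, tol=2.0):
    """Keep members whose projected pre-industrial global-mean temperature
    lies within target +/- tol (deg C), as in the screening described above."""
    return [m for m in members if abs(m["t_global"] - target) <= tol]

random.seed(1)
# Hypothetical 200-member ensemble: each member carries its (jointly perturbed)
# parameters and the global-mean temperature projected from a short control run.
ensemble = [{"entrainment_coeff": random.uniform(0.6, 9.0),
             "t_global": random.gauss(13.6, 3.0)} for _ in range(200)]
accepted = screen_ensemble(ensemble)  # the surviving, plausible members
```

The real screen used projections from 20 yr control runs rather than a random draw; the point is only that acceptance is a pure threshold test on one diagnosed quantity.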
Perturbative and constructive renormalization
International Nuclear Information System (INIS)
Veiga, P.A. Faria da
2000-01-01
These notes are a survey of the material treated in a series of lectures delivered at the X Summer School Jorge Andre Swieca. They are concerned with renormalization in Quantum Field Theories. At the level of perturbation series, we review classical results such as Feynman graphs, ultraviolet and infrared divergences of Feynman integrals, Weinberg's theorem and Hepp's theorem, the renormalization group and the Callan-Symanzik equation, and the large-order behavior and divergence of most perturbation series. Outside the perturbative regime, as an example of a constructive method, we review Borel summability and point out how it is possible to circumvent the perturbation diseases. These lectures are a preparation for the joint course given by Professor V. Rivasseau at the same school, where more sophisticated non-perturbative analytical methods based on rigorous renormalization group techniques are presented, aiming at furthering our understanding of the subject and bringing field theoretical models to a satisfactory mathematical level. (author)
Energy Technology Data Exchange (ETDEWEB)
Roach, Dennis P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rice, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Paquette, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-07-01
Wind turbine blades pose a unique set of inspection challenges that span from very thick and attenuative spar cap structures to porous bond lines, varying core material and a multitude of manufacturing defects of interest. The need for viable, accurate nondestructive inspection (NDI) technology becomes more important as the cost per blade, and lost revenue from downtime, grows. NDI methods must not only be able to contend with the challenges associated with inspecting extremely thick composite laminates and subsurface bond lines, but must also address new inspection requirements stemming from the growing understanding of blade structural aging phenomena. Under its Blade Reliability Collaborative program, Sandia Labs quantitatively assessed the performance of a wide range of NDI methods that are candidates for wind blade inspections. Custom wind turbine blade test specimens, containing engineered defects, were used to determine critical aspects of NDI performance including sensitivity, accuracy, repeatability, speed of inspection coverage, and ease of equipment deployment. The detection of fabrication defects helps enhance plant reliability and increase blade life, while improved inspection of operating blades can result in efficient blade maintenance, facilitate repairs before critical damage levels are reached and minimize turbine downtime. The Sandia Wind Blade Flaw Detection Experiment was completed to evaluate different NDI methods that have demonstrated promise for interrogating wind blades for manufacturing flaws or in-service damage. These tests provided the Probability of Detection information needed to generate industry-wide performance curves that quantify: 1) how well current inspection techniques are able to reliably find flaws in wind turbine blades (industry baseline) and 2) the degree of improvements possible through integrating more advanced NDI techniques and procedures.
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
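The 90/95 criterion quoted above can be made concrete with a one-sided (Clopper-Pearson-style) lower confidence bound on a binomial hit rate. This is a generic illustration of the criterion, not the DOEPOD algorithm itself:

```python
import math

def binom_tail_ge(n, k, p):
    # P[X >= k] for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def pod_lower_bound(n, hits, conf=0.95):
    """One-sided lower confidence bound on POD given `hits` detections in
    `n` trials, found by bisection on the binomial tail probability."""
    alpha = 1.0 - conf
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if binom_tail_ge(n, hits, mid) < alpha:
            lo = mid   # this p is too small to plausibly explain the hits
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With all hits the bound reduces to alpha**(1/n), which recovers the classic "29 of 29" demonstration: `pod_lower_bound(29, 29)` is about 0.902, just clearing the 90/95 requirement, while 28 of 28 falls just short.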
Zhukovskiy, Yu L.; Korolev, N. A.; Babanova, I. S.; Boikov, A. V.
2017-10-01
This article is devoted to the development of a method for estimating the probability of failure of an asynchronous motor as part of an electric drive with a frequency converter. The proposed method is based on a comprehensive diagnostics of vibration and electrical characteristics that takes into account the quality of the supply network and the operating conditions. The developed diagnostic system makes it possible to increase the accuracy and quality of diagnoses by determining the probability of failure-free operation of the electromechanical equipment when its parameters deviate from the norm. The system uses artificial neural networks (ANNs). The outputs of the system for estimating the technical condition are probability diagrams of the technical state and a quantitative evaluation of the defects of the asynchronous motor and its components.
International Nuclear Information System (INIS)
Zeppenfeld, D.
1984-01-01
The present thesis deals with the construction and analysis of mesonic bound states in SU(N) gauge theories in a two-dimensional space-time. The underlying field theory can be considered a simplified version of QCD, the theory of the strong interactions. After an extensive discussion of quantization in the temporal gauge, and after the Poincare invariance of the theory has been shown, mesonic bound states and the meson spectrum are treated for different ranges of the free parameters of the theory (quark mass, coupling constant, and index N of the gauge group). The spectrum is given by a boundary value problem which in the perturbative limit is solved analytically. For massless quarks, gauge-invariant annihilation operators are constructed which permit an exact solution of the energy eigenvalue equation. The energy eigenstates so found describe massive interacting mesons surrounded by a cloud of massless free particles. (orig.) [de
Directory of Open Access Journals (Sweden)
FAHIM GOHARAWAN
2017-04-01
Full Text Available Cavity techniques for measuring the electrical characteristics of materials are well established using the approximate method, owing to its simplicity in material insertion and fabrication. The exact method, which requires more comprehensive mathematical analysis and poses practical difficulties for material insertion, is used far less often. In this work a comparative analysis of the approximate and exact methods is performed, and the accuracy of the exact method is established by measurements of the non-magnetic material Teflon within the cavity.
International Nuclear Information System (INIS)
Awan, F.G.; Sheikh, N.A.; Qureshi, S.A.; Sheikh, N.M.
2017-01-01
Cavity techniques for measuring the electrical characteristics of materials are well established using the approximate method, owing to its simplicity in material insertion and fabrication. The exact method, which requires more comprehensive mathematical analysis and poses practical difficulties for material insertion, is used far less often. In this work a comparative analysis of the approximate and exact methods is performed, and the accuracy of the exact method is established by measurements of the non-magnetic material Teflon within the cavity. (author)
International Nuclear Information System (INIS)
Saitovitch, H.
1979-01-01
This work is based on our quadrupolar interaction (QI) measurements on intercalated 2H-TaS sub(2) compounds. As intercalating elements we used the alkalis - Li, Na, K, Cs - as well as the NH sub(3) (ammonia) and C sub(5) H sub(5) N (pyridine) molecules. The QI measurements were performed via the differential perturbed angular correlation (DPAC) technique, using Ta sup(181) as the probe isotope, on the hydrated and anhydrous phases of the intercalated systems. Our results are in better agreement with the ionic model, one of the accepted models used to describe the intercalation process, as well as with the transferred charge quantities and their distribution in the intercalated systems. In turn, the measured quantities, the quadrupole interaction frequencies (QIF) and their distributions δ, contributed to support and improve the ionic model. A strong charge dynamics between the 2H-TaS sub(2) sandwiches was observed, and a relation between the QIF changes and the amount of transferred charge (e sup(-)/Ta) was established. The attempt to specify the numerical contributions to the QI changes arising from the different components of the 2H-TaS sub(2) intercalated systems put in evidence the probable orbitals involved in the system bonds. Finally, the kinetics of the intercalation process forming the 2H-TaS sub(2) (Li) sub(x) system was followed continuously by DPAC measurements. (author)
International Nuclear Information System (INIS)
Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.
2013-01-01
Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to account for particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only very few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
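The core idea of combining a design-point shift with sampling can be seen in a bare-bones importance-sampling estimate of a small tail probability. No Kriging here; this sketch shows only the importance-sampling ingredient, on a toy one-dimensional limit state:

```python
import math, random

def failure_prob_is(beta=4.0, n=20000, seed=7):
    """Importance-sampling estimate of p_f = P[X > beta] for X ~ N(0, 1),
    drawing samples from N(beta, 1), i.e. a density recentred at the
    design point (the FORM-like shift)."""
    rng = random.Random(seed)
    def phi(x):  # standard normal density
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)
        if x > beta:                          # indicator of failure
            total += phi(x) / phi(x - beta)   # likelihood-ratio weight
    return total / n

exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # true tail, about 3.2e-5
estimate = failure_prob_is()
```

A crude Monte Carlo estimate of an event this rare would need millions of samples for the same accuracy; the shifted density concentrates samples where failure actually occurs, which is exactly why AK-IS pairs importance sampling with the metamodel.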
Methods for estimating the probability of cancer from occupational radiation exposure
International Nuclear Information System (INIS)
1996-04-01
The aims of this TECDOC are to present the factors which are generally accepted as being responsible for cancer induction, to examine the role of radiation as a carcinogen, to demonstrate how the probability of cancer causation by radiation may be calculated and to inform the reader of the uncertainties that are associated with the use of various risk factors and models in such calculations. 139 refs, 2 tabs
International Nuclear Information System (INIS)
Nieves, Jose F.; Pal, Palash B.
2006-01-01
We consider the calculation of amplitudes for processes that take place in a constant background magnetic field, first using the standard method for the calculation of an amplitude in an external field, and second utilizing the Schwinger propagator for charged particles in a magnetic field. We show that there are processes for which the Schwinger-propagator method does not yield the total amplitude. We explain why the two methods yield equivalent results in some cases and indicate when we can expect the equivalence to hold. We show these results in fairly general terms and illustrate them with specific examples as well
International Nuclear Information System (INIS)
Hojjati, M.H.; Jafari, S.
2008-01-01
In this work, two powerful analytical methods, namely the homotopy perturbation method (HPM) and Adomian's decomposition method (ADM), are introduced to obtain distributions of stresses and displacements in rotating annular elastic disks with uniform and variable thicknesses and densities. The results obtained by these methods are then compared with the verified variational iteration method (VIM) solution. He's homotopy perturbation method, which does not require a 'small parameter', has been used, and a homotopy with an embedding parameter p ∈ [0,1] is constructed. The method takes full advantage of the traditional perturbation methods and the homotopy techniques and yields a very rapid convergence of the solution. Adomian's decomposition method is an iterative method which provides analytical approximate solutions in the form of an infinite power series for nonlinear equations without linearization, perturbation or discretization. The variational iteration method, on the other hand, is based on the incorporation of a general Lagrange multiplier in the construction of a correction functional for the equation. This study demonstrates the ability of these methods to solve complicated rotating disk cases whose fairly exact solutions either do not exist or are difficult to find, without the need for commercial finite element analysis software. The comparison among these methods shows that although the numerical results are almost the same, HPM is much easier, more convenient and efficient than ADM and VIM.
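The homotopy construction mentioned above is conventionally written as follows, where L is the linear part of the operator, A the full nonlinear operator, f(r) the source term, and u_0 an initial guess:

```latex
H(v,p) \;=\; (1-p)\,\bigl[L(v) - L(u_0)\bigr] \;+\; p\,\bigl[A(v) - f(r)\bigr] \;=\; 0,
\qquad p \in [0,1],
```

so that p = 0 gives the trivially solvable problem L(v) = L(u_0) and p = 1 recovers the original equation A(v) = f(r). The solution is expanded as v = v_0 + p v_1 + p^2 v_2 + ... and evaluated at p = 1.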
Delineating social network data anonymization via random edge perturbation
Xue, Mingqiang; Karras, Panagiotis; Raïssi, Chedy; Kalnis, Panos; Pung, Hungkeng
2012-01-01
... study of the probability of success of any structural attack as a function of the perturbation probability. Our analysis provides a powerful tool for delineating the identification risk of perturbed social network data; our extensive experiments ...
International Nuclear Information System (INIS)
Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.
1994-01-01
Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning they can be used in statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction size of 1.5 Gy and of 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and α/β value has also been simulated: Considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.) [de
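The substitution described above (normalized total dose in place of total dose inside the Lyman sigmoid) can be sketched as follows; the numerical parameter values are illustrative only, not the paper's fitted values:

```python
import math

def ntd(total_dose, dose_per_fraction, alpha_beta):
    """Normalized total dose: the iso-effective dose delivered in 2 Gy
    fractions under the linear-quadratic model."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

def lyman_ntcp(dose, td50, m):
    """Lyman model for uniform whole-organ irradiation: NTCP = Phi(t) with
    t = (D - TD50) / (m * TD50), Phi the standard normal CDF."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Example: 60 Gy total in 1.5 Gy vs 3 Gy fractions, alpha/beta = 3 Gy
# (hypothetical tissue parameters td50 = 55 Gy, m = 0.15):
low = lyman_ntcp(ntd(60.0, 1.5, 3.0), td50=55.0, m=0.15)   # 54 Gy equivalent
high = lyman_ntcp(ntd(60.0, 3.0, 3.0), td50=55.0, m=0.15)  # 72 Gy equivalent
```

The example reproduces the fraction-size effect discussed above: the same 60 Gy total dose is iso-effectively smaller when given in 1.5 Gy fractions and larger in 3 Gy fractions, shifting the complication probability accordingly.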
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-12-01
To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate the performance of this new method by comparing it with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF resulting from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined to be a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured
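Step 2 above (grouping cycles by amplitude and period, with a 10% occupancy threshold defining a main pattern) might be sketched as follows; the bin widths are hypothetical, not the paper's tolerances:

```python
from collections import defaultdict

def main_breathing_patterns(cycles, amp_bin=0.5, period_bin=1.0):
    """Group (amplitude, period) cycles into bins; any bin holding more than
    10% of all cycles is a main pattern, represented by its members' average."""
    bins = defaultdict(list)
    for amp, period in cycles:
        bins[(round(amp / amp_bin), round(period / period_bin))].append((amp, period))
    patterns = []
    for members in bins.values():
        if len(members) > 0.1 * len(cycles):
            n = len(members)
            patterns.append((sum(a for a, _ in members) / n,
                             sum(p for _, p in members) / n))
    return patterns

# Nine similar cycles (amplitude ~1 cm, period ~4 s) and one outlier:
# only the dominant group survives the 10% threshold.
cycles = [(1.0, 4.0)] * 8 + [(1.1, 4.1)] + [(2.0, 5.0)]
patterns = main_breathing_patterns(cycles)
```

Each surviving pattern would then seed the reconstruction of its own set of 4D images in step 3.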
International Nuclear Information System (INIS)
Campolina, Daniel de A.M.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2013-01-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using sampling-based methods is recent because of the huge computational effort required. In this work, a sample space of MCNP calculations was used as a black-box model to propagate the uncertainty of system parameters. The efficiency of the method was compared to that of a conservative method. The uncertainties considered in the reactor input parameters were non-neutronic, including geometry dimensions and density. The effect of the uncertainties on the effective multiplication factor of the system was analyzed with respect to the possibility of using many uncertainties in the same input. If the case includes more than 46 parameters with uncertainty in the same input, the sampling-based method proves to be more efficient than the conservative method. (author)
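Sampling-based propagation as described above treats the transport code as a black box: sample the uncertain inputs, run the code, and take statistics of the output. With a toy stand-in for the code (the model and uncertainty values below are made up), the loop looks like this:

```python
import random, statistics

def keff_model(radius_cm, density):
    """Toy stand-in for the black-box MCNP run: any deterministic function
    of the uncertain inputs serves to illustrate the propagation loop."""
    return 0.9 + 0.02 * (radius_cm - 10.0) + 0.5 * (density - 18.7) / 18.7

random.seed(42)
keff_samples = []
for _ in range(1000):
    r = random.gauss(10.0, 0.05)    # geometry dimension, 1-sigma 0.05 cm
    rho = random.gauss(18.7, 0.1)   # material density, 1-sigma 0.1 g/cm3
    keff_samples.append(keff_model(r, rho))

k_mean = statistics.fmean(keff_samples)
k_std = statistics.pstdev(keff_samples)  # propagated uncertainty on k-eff
```

In the real study each sample is a full MCNP run, which is why the computational cost grows so quickly with the number of uncertain parameters and why efficiency relative to the conservative method matters.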
International Nuclear Information System (INIS)
Balino, Jorge L.; Larreteguy, Axel E.; Andrade Lima, Fernando R.
1995-01-01
The differential method was applied to the sensitivity analysis for water hammer problems in hydraulic networks. Starting from the classical water hammer equations in a single-phase liquid with friction, the state vector comprising the piezometric head and the velocity was defined. Applying the differential method, the adjoint operator, the adjoint equations with the general form of their boundary conditions, and the general form of the bilinear concomitant were calculated. The discretized adjoint equations and the corresponding boundary conditions were programmed and solved by using the so-called method of characteristics. As an example, a constant-level tank connected through a pipe to a valve discharging to atmosphere was considered. The bilinear concomitant was calculated for this particular case. The corresponding sensitivity coefficients due to the variation of different parameters were also calculated, using both the differential method and the response surface generated by the computer code WHAT. The results obtained with these methods show excellent agreement. (author). 11 refs, 2 figs, 2 tabs
International Nuclear Information System (INIS)
Passos, E.M.J. de
1976-01-01
The relationship between the Johnson-Baranger time-dependent folded diagram (JBFD) expansion, and the time independent methods of perturbation theory, are investigated. In the nondegenerate case, the JBFD expansion and the Rayleigh-Schroedinger perturbation expansion, for the ground state energy, are identical. On the other hand, in the degenerate case, for the nonhermitian effective interaction considered, the JBFD expansion, of the effective interaction, is equal to the perturbative expansion of the effective interaction of the nonhermitian eigenvalue problem of Bloch and Brandow-Des Cloizeaux. For the two hermitian effective interactions, the JBFD expansion of the effective interaction differs from the perturbation expansion of the effective interaction of the hermitian eigenvalue problem of Des Cloizeaux [pt
International Nuclear Information System (INIS)
Sakai, Shiro; Arita, Ryotaro; Aoki, Hideo
2006-01-01
We propose a new quantum Monte Carlo method especially intended to couple with the dynamical mean-field theory. The algorithm is not only much more efficient than the conventional Hirsch-Fye algorithm, but is applicable to multiorbital systems having an SU(2)-symmetric Hund's coupling as well
Energy Technology Data Exchange (ETDEWEB)
Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Opp, Daniel; Zhang, Geoffrey; Moros, Eduardo; Feygelman, Vladimir, E-mail: vladimir.feygelman@moffitt.org [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States)
2014-06-15
Purpose: In this work, the feasibility of implementing a motion-perturbation approach to accurately estimate volumetric dose in the presence of organ motion, previously demonstrated for VMAT, is studied for static-gantry IMRT. The method's accuracy is improved for the voxels that have very low planned dose but acquire appreciable dose due to motion. The study describes the modified algorithm and its experimental validation and provides an example of a clinical application. Methods: A contoured region-of-interest is propagated according to the predefined motion kernel throughout time-resolved 4D phantom dose grids. This timed series of 3D dose grids is produced by the measurement-guided dose reconstruction algorithm, based on an irradiation of a static ARCCHECK (AC) helical dosimeter array (Sun Nuclear Corp., Melbourne, FL). Each moving voxel collects dose over the dynamic simulation. The difference in dose-to-moving-voxel vs dose-to-static-voxel in-phantom forms the basis of a motion perturbation correction that is applied to the corresponding voxel in the patient dataset. A new method to synchronize the accelerator and dosimeter clocks, applicable to fixed-gantry IMRT, was developed. Refinements to the algorithm account for the excursion of low-dose voxels into high-dose regions, causing an appreciable dose increase due to motion (LDVE correction). For experimental validation, four plans using TG-119 structure sets and objectives were produced using segmented IMRT direct machine parameters optimization in the Pinnacle treatment planning system (v. 9.6, Philips Radiation Oncology Systems, Fitchburg, WI). All beams were delivered with a gantry angle of 0°. Each beam was delivered three times: (1) to the static AC centered on the room lasers; (2) to a static phantom containing a MAPCHECK2 (MC2) planar diode array dosimeter (Sun Nuclear); and (3) to the moving MC2 phantom. The motion trajectory was an ellipse in the IEC XY plane, with 3 and 1.5 cm axes. The period
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China, and the current site selection methods have several defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution over an interval number is uniform, and the neglect of the value of decision makers' (DMs') common opinion in evaluating the criteria information. Second, the differences in DMs' utility functions have failed to receive attention. An innovative method is proposed in this article to overcome these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify interval numbers with a probability distribution. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of this method.
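The baseline notion the authors refine can be sketched simply: the stochastic dominance degree of one interval score over another is the probability that a draw from the first exceeds a draw from the second. A minimal Monte Carlo sketch follows, under the classical uniform-on-the-interval assumption (exactly the assumption the paper argues loses information); the function name and site scores are hypothetical:

```python
import random

def dominance_degree(a, b, n=200_000, seed=1):
    """Estimate P(A > B) where A ~ U[a0, a1] and B ~ U[b0, b1] (Monte Carlo)."""
    rng = random.Random(seed)
    a0, a1 = a
    b0, b1 = b
    wins = sum(rng.uniform(a0, a1) > rng.uniform(b0, b1) for _ in range(n))
    return wins / n

# Two hypothetical interval scores for candidate sites.
separated = dominance_degree((0.6, 0.9), (0.1, 0.4))   # disjoint intervals: full dominance
overlapping = dominance_degree((0.3, 0.7), (0.4, 0.6)) # symmetric overlap: near 0.5
```

The paper's contribution replaces the uniform assumption with a general probability distribution on the interval; the sketch only illustrates what the dominance degree measures.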
International Nuclear Information System (INIS)
Purba, Julwan Hendry; Sony Tjahyani, D.T.; Widodo, Surip; Tjahjono, Hendro
2017-01-01
Highlights: •FPFTA deals with epistemic uncertainty using fuzzy probability. •Criticality analysis is important for reliability improvement. •An α-cut method based importance measure is proposed for criticality analysis in FPFTA. •The α-cut method based importance measure utilises α-cut multiplication, α-cut subtraction, and the area defuzzification technique. •Benchmarking confirms that the proposed method is feasible for criticality analysis in FPFTA. -- Abstract: Fuzzy probability-based fault tree analysis (FPFTA) has recently been developed and proposed to deal with the limitations of conventional fault tree analysis. In FPFTA, the reliabilities of basic events, intermediate events and the top event are characterized by fuzzy probabilities. Furthermore, the quantification of the FPFTA is based on the fuzzy multiplication rule and the fuzzy complementation rule to propagate uncertainties from the basic events to the top event. Since the objective of fault tree analysis is to improve the reliability of the system being evaluated, it is necessary to find the weakest path in the system. For this purpose, criticality analysis can be implemented. Various importance measures, which are based on conventional probabilities, have been developed and proposed for criticality analysis in fault tree analysis. However, none of those importance measures can be applied for criticality analysis in FPFTA, which is based on fuzzy probability. To be fully applicable in nuclear power plant probabilistic safety assessment, FPFTA needs to have its own corresponding importance measure. The objective of this study is to develop an α-cut method based importance measure to evaluate and rank the importance of basic events for criticality analysis in FPFTA. To demonstrate the applicability of the proposed measure, a case study is performed and its results are then benchmarked against the results generated by four well known importance measures in conventional fault tree analysis. The results
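The α-cut propagation the highlights refer to can be illustrated with interval arithmetic on triangular fuzzy probabilities: at each membership level α, every fuzzy probability reduces to an interval, and the fuzzy multiplication and complementation rules act on the interval bounds. A minimal sketch, assuming triangular membership functions and a hypothetical two-event OR gate (this is not the paper's importance measure, only the underlying gate quantification):

```python
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def and_gate(cuts):
    """Fuzzy multiplication rule: interval product of independent event probabilities."""
    lo = hi = 1.0
    for l, u in cuts:
        lo *= l
        hi *= u
    return (lo, hi)

def or_gate(cuts):
    """Fuzzy complementation rule: 1 - prod(1 - p), evaluated on interval bounds."""
    lo = hi = 1.0
    for l, u in cuts:
        lo *= 1.0 - l
        hi *= 1.0 - u
    return (1.0 - lo, 1.0 - hi)

# Two hypothetical basic events feeding an OR-gated top event.
e1 = (0.01, 0.02, 0.03)
e2 = (0.02, 0.04, 0.06)
top_wide = or_gate([alpha_cut(e1, 0.0), alpha_cut(e2, 0.0)])   # widest interval, α = 0
top_core = or_gate([alpha_cut(e1, 1.0), alpha_cut(e2, 1.0)])   # modal values, α = 1
```

At α = 1 the intervals collapse to the modal probabilities and the gate reduces to the crisp formula 1 − (1 − p₁)(1 − p₂).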
A Probability Model for Drought Prediction Using Fusion of Markov Chain and SAX Methods
Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.
2017-09-01
Drought is one of the most destructive natural disasters, affecting many aspects of the environment, and it is often most severe in arid and semi-arid areas. Monitoring and predicting the severity of drought can be useful in managing the natural disasters it causes. Many indices have been used in predicting droughts, such as SPI, VCI, and TVX. In this paper, based on three data sets (rainfall, NDVI, and land surface temperature) acquired from MODIS satellite imagery, time series of SPI, VCI, and TVX covering winter 2000 to summer 2015 were created for the eastern region of Isfahan province. Using these indices and a fusion of symbolic aggregate approximation (SAX) and a hidden Markov chain, drought was predicted for fall 2015. For this purpose, each time series was first transformed into a set of qualitative symbols representing the drought state (5 groups) using the SAX algorithm; then the probability matrix for the future state was created using the hidden Markov chain. The fall drought severity was predicted by fusing the probability matrix with the drought severity state of summer 2015. The prediction assigns a likelihood to each drought state: severe drought, moderate drought, normal, severely wet and moderately wet. The analysis and experimental results show that the output of the proposed algorithm is acceptable and that the algorithm is appropriate and efficient for predicting drought from remote sensing data.
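The SAX-plus-Markov pipeline described above can be sketched in a few lines: z-normalize a series, discretize it into five symbols with the standard Gaussian quintile breakpoints, build a first-order transition matrix, and forecast the most likely next state. This is a simplified sketch (no piecewise aggregate averaging step, synthetic data, hypothetical function names), not the authors' fusion algorithm:

```python
BREAKPOINTS = [-0.84, -0.25, 0.25, 0.84]  # ~N(0,1) quintiles, alphabet size 5

def sax_symbols(series):
    """Z-normalize and map each value to a symbol 0..4 (dry .. wet, say)."""
    mu = sum(series) / len(series)
    sd = (sum((x - mu) ** 2 for x in series) / len(series)) ** 0.5 or 1.0
    return [sum((x - mu) / sd > b for b in BREAKPOINTS) for x in series]

def transition_matrix(sym, k=5):
    """Row-normalized first-order transition counts between symbols."""
    counts = [[0] * k for _ in range(k)]
    for s, t in zip(sym, sym[1:]):
        counts[s][t] += 1
    return [[c / (sum(row) or 1) for c in row] for row in counts]

# Hypothetical declining rainfall-index series.
series = [3.1, 2.9, 2.7, 2.2, 1.8, 1.5, 1.2, 1.0, 0.8, 0.7]
sym = sax_symbols(series)
P = transition_matrix(sym)
forecast = max(range(5), key=lambda j: P[sym[-1]][j])  # most likely next drought state
```

In the paper the transition structure is estimated per index (SPI, VCI, TVX) and the resulting probability matrices are fused; the sketch shows a single chain only.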
International Nuclear Information System (INIS)
Green, T.A.
1978-10-01
For one-electron heteropolar systems, the wave-theoretic Lagrangian of Paper I is simplified in two distinct approximations. The first is semiclassical; the second is quantal, for velocities below those for which the semiclassical treatment is reliable. For each approximation, unitarity and detailed balancing are discussed. Then, the variational method as described by Demkov is used to determine the coupled equations for the radial functions and the Euler-Lagrange equations for the translational factors which are part of the theory. Specific semiclassical formulae for the translational factors are given in a many-state approximation. Low-velocity quantal formulae are obtained in a one-state approximation. The one-state results of both approximations agree with an earlier determination by Riley. 14 references
Physical method to assess a probable maximum precipitation, using CRCM datas
International Nuclear Information System (INIS)
Beauchamp, J.
2009-01-01
'Full text:' For Nordic hydropower facilities, spillways are designed with a peak discharge based on extreme conditions. This peak discharge is generally derived using the concept of a probable maximum flood (PMF), which results from the combined effect of abundant downpours (probable maximum precipitation - PMP) and rapid snowmelt. On a gauged basin, the weather data record allows for the computation of the PMF. However, uncertainty in the future climate raises questions as to the accuracy of current PMP estimates for existing and future hydropower facilities. This project looks at the potential use of the Canadian Regional Climate Model (CRCM) data to compute the PMF in ungauged basins and to assess potential changes to the PMF in a changing climate. Several steps will be needed to accomplish this task. This paper presents the first step that aims at applying/adapting to CRCM data the in situ moisture maximization technique developed by the World Meteorological Organization, in order to compute the PMP at the watershed scale. The CRCM provides output data on a 45km grid at a six hour time step. All of the needed atmospheric data is available at sixteen different pressure levels. The methodology consists in first identifying extreme precipitation events under current climate conditions. Then, a maximum persisting twelve hours dew point is determined at each grid point and pressure level for the storm duration. Afterwards, the maximization ratio is approximated by merging the effective temperature with dew point and relative humidity values. The variables and maximization ratio are four-dimensional (x, y, z, t) values. Consequently, two different approaches are explored: a partial ratio at each step and a global ratio for the storm duration. For every identified extreme precipitation event, a maximized hyetograph is computed from the application of this ratio, either partial or global, on CRCM precipitation rates. Ultimately, the PMP is the depth of the
The Most Probable Limit of Detection (MPL) for rapid microbiological methods
Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; van den Heuvel, E.R.
2010-01-01
Classical microbiological methods nowadays have unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied within the clinical and food industry, but the implementation in pharmaceutical industry is hampered by for instance stringent regulations on
International Nuclear Information System (INIS)
Tsuchihashi, Keichiro
1985-03-01
A series of formulations to evaluate the collision probability for multi-region cells expressed in any of three one-dimensional coordinate systems (plane, sphere and cylinder) or in the general two-dimensional cylindrical coordinate system is presented. They are expressed in a form suited to a common numerical process known as the ''Ray-Trace'' method. Applications of the collision probability method to two optional treatments of resonance absorption are presented. One is a modified table-look-up method based on the intermediate resonance approximation, and the other is a rigorous method to calculate the resonance absorption in a multi-region cell, in which nearly continuous energy spectra in the resonance neutron range can be solved and the interaction effect between different resonance nuclides can be evaluated. Two works on resonance absorption in a doubly heterogeneous system with grain structure are presented. First, the effect of a random distribution of particles embedded in graphite diluent on the resonance integral is studied. Next, the ''Accretion'' method proposed by Leslie and Jonsson to define the collision probability in a doubly heterogeneous system is applied to evaluate the resonance absorption in coated particles dispersed in the fuel pellet of the HTGR. Several optional models are proposed to define the collision rates in a medium with microscopic heterogeneity. By making use of the collision probability method developed in the present study, the JAERI thermal reactor standard nuclear design code system SRAC has been developed. Results of several benchmark tests of the SRAC are presented. The analyses of critical experiments of the SHE, DCA, and FNR show good agreement of critical masses with their experimental values. (J.P.N.)
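For a flavour of what such collision probability formulae look like in the simplest geometry, the first-flight collision probability of a uniform isotropic source in a purely absorbing slab of optical thickness τ can be written with the exponential integral E3 as P_c = 1 − [1 − 2E3(τ)]/(2τ). This is a textbook special case, not the Ray-Trace method of the paper; the sketch evaluates E3 by simple quadrature:

```python
import math

def E3(x, n=2000):
    """E3(x) = ∫_0^1 μ exp(-x/μ) dμ, composite trapezoid rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        mu = i * h
        f = 0.0 if mu == 0.0 else mu * math.exp(-x / mu)
        total += (0.5 if i in (0, n) else 1.0) * f
    return total * h

def first_flight_collision_probability(tau):
    """P_c for a flat isotropic source in a purely absorbing slab, optical thickness tau."""
    return 1.0 - (1.0 - 2.0 * E3(tau)) / (2.0 * tau)
```

As expected, P_c grows monotonically from 0 in the optically thin limit toward 1 for a thick slab.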
International Nuclear Information System (INIS)
Du Zeng-Ji; Lin Wan-Tao; Mo Jia-Qi
2012-01-01
The El Niño-Southern Oscillation (ENSO) is an interannual phenomenon involved in tropical Pacific ocean-atmosphere interactions. In this paper, we develop an asymptotic method for solving the nonlinear equations of the ENSO model. Based on a class of oscillators of the ENSO model, an approximate solution of the corresponding problem is obtained using the perturbation method.
Method for Evaluation of Outage Probability on Random Access Channel in Mobile Communication Systems
Kollár, Martin
2012-05-01
In order to access the cell in all mobile communication technologies, a so-called random-access procedure is used. For example, in GSM this is represented by sending the CHANNEL REQUEST message from the Mobile Station (MS) to the Base Transceiver Station (BTS), which is consequently forwarded as a CHANNEL REQUIRED message to the Base Station Controller (BSC). If the BTS decodes some noise on the Random Access Channel (RACH) as a random access by mistake (a so-called 'phantom RACH'), then it is a question of pure coincidence which 'establishment cause' the BTS thinks it has recognized. A typical invalid channel access request, or phantom RACH, is characterized by an IMMEDIATE ASSIGNMENT procedure (assignment of an SDCCH or TCH) which is not followed by an ESTABLISH INDICATION sent from MS to BTS. In this paper, a mathematical model for evaluating the Power RACH Busy Threshold (RACHBT) so as to guarantee a predetermined outage probability on the RACH is described and discussed. It focuses on the Global System for Mobile Communications (GSM), but the obtained results can be generalized to other mobile technologies (i.e., WCDMA and LTE).
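The threshold-setting idea can be sketched as an inverse tail-probability problem: pick the smallest busy threshold such that noise alone exceeds it (producing a phantom RACH) with at most the target probability. The sketch below assumes the measured noise level is Gaussian in dB, which is an idealisation rather than the paper's model, and the noise-floor numbers are hypothetical:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def threshold_for_outage(mean_db, sigma_db, target):
    """Smallest threshold (dB) with false-detection probability <= target,
    found by bisection on the monotone Gaussian tail."""
    lo, hi = mean_db - 10.0 * sigma_db, mean_db + 10.0 * sigma_db
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if q_function((mid - mean_db) / sigma_db) > target:
            lo = mid
        else:
            hi = mid
    return hi

thr = threshold_for_outage(-110.0, 3.0, 1e-3)  # hypothetical noise floor stats
```

For a 0.1% target this lands about 3.09 standard deviations above the mean noise level, as expected from the inverse Q-function.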
Burst suppression probability algorithms: state-space methods for tracking EEG burst suppression
Chemali, Jessica; Ching, ShiNung; Purdon, Patrick L.; Solt, Ken; Brown, Emery N.
2013-10-01
Objective. Burst suppression is an electroencephalogram pattern in which bursts of electrical activity alternate with an isoelectric state. This pattern is commonly seen in states of severely reduced brain activity such as profound general anesthesia, anoxic brain injuries, hypothermia and certain developmental disorders. Devising accurate, reliable ways to quantify burst suppression is an important clinical and research problem. Although thresholding and segmentation algorithms readily identify burst suppression periods, analysis algorithms require long intervals of data to characterize burst suppression at a given time and provide no framework for statistical inference. Approach. We introduce the concept of the burst suppression probability (BSP) to define the brain's instantaneous propensity of being in the suppressed state. To conduct dynamic analyses of burst suppression we propose a state-space model in which the observation process is a binomial model and the state equation is a Gaussian random walk. We estimate the model using an approximate expectation maximization algorithm and illustrate its application in the analysis of rodent burst suppression recordings under general anesthesia and a patient during induction of controlled hypothermia. Main result. The BSP algorithms track burst suppression on a second-to-second time scale, and make possible formal statistical comparisons of burst suppression at different times. Significance. The state-space approach suggests a principled and informative way to analyze burst suppression that can be used to monitor, and eventually to control, the brain states of patients in the operating room and in the intensive care unit.
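The state-space idea can be sketched with a one-step approximate filter: a Gaussian random-walk state drives the suppression probability through a link function, and each binary observation (suppressed = 1) nudges the state by its innovation. This is a simplification of the paper's model (logistic link, fixed process variance, no EM fitting of parameters), with hypothetical names:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def bsp_filter(binary_obs, process_var=0.01):
    """Approximate posterior-mode filter for the burst suppression probability.
    State: Gaussian random walk; observation: Bernoulli via a logistic link."""
    x, v = 0.0, 1.0
    bsp = []
    for b in binary_obs:
        v += process_var                    # predict: random-walk variance grows
        p = logistic(x)
        x = x + v * (b - p)                 # update: gradient step on Bernoulli likelihood
        v = 1.0 / (1.0 / v + p * (1.0 - p)) # posterior variance (observed information)
        bsp.append(logistic(x))
    return bsp
```

Tracking the filtered probability (and its variance) on a sample-by-sample basis is what enables the second-to-second statistical comparisons the abstract describes.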
C. Colloca TS/FM
2004-01-01
The TS/FM group informs you that, due to the progress of the works at the Prévessin site entrance, some brief disruptions of traffic may occur during the week of 14 to 18 June. Access will be maintained at all times. For more information, please contact 160239. C. Colloca TS/FM
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H
2016-01-01
Objective: To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods: We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3), undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European...
International Nuclear Information System (INIS)
Queiroz Bogado Leite, S. de.
1989-10-01
A widely used but physically incorrect assumption in unit-cell calculations by the method of interface currents in cylindrical or spherical geometries is that of isotropic fluxes at the surfaces of the cell annular regions when computing transmission probabilities. In this work, new interface-current relations are developed without making use of this assumption, and the effects on calculated integral parameters are shown for an idealized unit-cell example. (author)
International Nuclear Information System (INIS)
Choi, Sooyoung; Choe, Jiwon; Lee, Deokjung
2016-01-01
STREAM uses a pin-based slowing-down method (PSM) which solves pointwise energy slowing-down problems with a sub-divided fuel pellet, and shows great performance in calculating effective cross-sections (XS). Various issues in the conventional resonance treatment methods (i.e., approximations on the resonance scattering source, the resonance interference effect, and the intra-pellet self-shielding effect) were successfully resolved by PSM. However, PSM assumes that a fuel rod has a uniform material composition and temperature, even though it calculates spatially dependent effective XSs of fuel subregions. When depletion calculations or thermal/hydraulic (T/H) coupling are performed with sub-divided material meshes, each subregion has its own material condition depending on position. It has been reported that the treatment of a distributed temperature is important for calculating an accurate fuel temperature coefficient (FTC). In order to avoid this approximation in PSM, the collision probability method (CPM) has been incorporated as a calculation option. The resonance treatment method PSM, used in the transport code STREAM, has thus been enhanced to accurately treat a non-uniform material condition. The method incorporates CPM in computing the collision probability of an isolated fuel pin. In numerical tests with pin-cell problems, STREAM with this method produced very accurate multiplication factors and FTCs, with differences from the references of less than 83 pcm and 1.43 %, respectively. The original PSM showed larger differences than the proposed method but still has high accuracy
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
Energy Technology Data Exchange (ETDEWEB)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
2018-01-01
We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$ dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
Edmonds, L. D.
2016-01-01
Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.
A qualitative botanical identification method (BIM) is an analytical procedure which returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) mate...
Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick
2018-01-01
When studying a complex research phenomenon, a mixed methods design allows researchers to answer a broader set of research questions and to tap into different aspects of this phenomenon, compared to a monomethod design. This paper reports on how a sequential equal-status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…
International Nuclear Information System (INIS)
Zegong, Zhou; Changhong, Liu
1995-01-01
Taking the original distribution function, shifted by an appropriate distance, as the importance function, this paper uses the variation of the similarity ratio of the original function to the importance function as the objective function, and the optimum shifting distance is obtained by an optimization method. The optimum importance function resulting from the optimization ensures that the number of Monte Carlo simulations is decreased while good estimates of the yearly failure probabilities are still obtained
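The shifted-distribution idea can be sketched for a Gaussian tail probability: sample from the original density translated toward the failure region, and reweight each sample by the likelihood ratio of the original to the shifted density. In this sketch the shift distance is fixed rather than optimized as in the paper, and all names are illustrative:

```python
import math
import random

def shifted_is_tail_prob(a, shift, n=100_000, seed=7):
    """Estimate P(X > a) for X ~ N(0,1) by sampling Y ~ N(shift, 1) and
    reweighting with the likelihood ratio φ(y)/φ(y - shift) = exp(-y·s + s²/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)
        if y > a:
            total += math.exp(-y * shift + 0.5 * shift * shift)
    return total / n
```

Shifting by roughly the threshold itself places most samples in the rare region, which is why far fewer simulations are needed than with crude Monte Carlo.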
The General Necessary Condition for the Validity of Dirac's Transition Perturbation Theory
Quang, Nguyen Vinh
1996-01-01
For the first time, from the natural requirements for the successive approximation, the general necessary condition for the validity of Dirac's method is explicitly established. It is proved that the conception of 'the transition probability per unit time' is not valid. The 'super-platinum rules' for calculating the transition probability are derived for the case of an arbitrarily strong time-independent perturbation.
A Modified Generalized Fisher Method for Combining Probabilities from Dependent Tests
Directory of Open Access Journals (Sweden)
Hongying (Daisy) Dai
2014-02-01
Rapid developments in molecular technology have yielded a large amount of high-throughput genetic data for understanding the mechanisms of complex traits. The increase in genetic variants requires hundreds or thousands of statistical tests to be performed simultaneously, which poses a challenge for controlling the overall Type I error rate. Combining p-values from multiple hypothesis tests has shown promise for aggregating effects in high-dimensional genetic data analysis. Several p-value combining methods have been developed and applied to genetic data; see Dai et al. (2012b) for a comprehensive review. However, there is a lack of investigation for dependent genetic data, especially for weighted p-value combining methods. Single nucleotide polymorphisms (SNPs) are often correlated due to linkage disequilibrium. Other genetic data, including variants from next generation sequencing, gene expression levels measured by microarray, protein and DNA methylation data, etc., also contain complex correlation structures. Ignoring correlation structures among genetic variants may lead to severe inflation of Type I error rates in omnibus testing of p-values. In this work, we propose modifications to the Lancaster procedure that take the correlation structure among p-values into account. The weight function in the Lancaster procedure allows meaningful biological information to be incorporated into the statistical analysis, which can increase the power of the testing and/or remove bias in the process. Extensive empirical assessments demonstrate that the modified Lancaster procedure largely reduces the Type I error rates due to correlation among p-values, and retains considerable power to detect signals among p-values. We applied our method to reassess published renal transplant data, and identified a novel association between B cell pathways and allograft tolerance.
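The unweighted ancestor of the Lancaster procedure, Fisher's method, is easy to sketch: the statistic −2Σln pᵢ is chi-square with 2k degrees of freedom for independent tests, and for even degrees of freedom the chi-square survival function has a closed form. The correlation adjustment below is Brown-style moment matching using the Kost–McDermott polynomial for the pairwise covariances; this is a simplified sketch, not the paper's modified Lancaster procedure:

```python
import math

def fisher_statistic(pvals):
    return -2.0 * sum(math.log(p) for p in pvals)

def fisher_pvalue(pvals):
    """Combined p-value via Fisher's method (independent tests).
    T ~ chi-square(2k); for even df the survival function is a finite sum."""
    k = len(pvals)
    t = fisher_statistic(pvals)
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= (t / 2.0) / j
        total += term
    return math.exp(-t / 2.0) * total

def brown_moments(k, pairwise_corr):
    """Moment matching for correlated tests: return (c, f) such that
    T is approximated by c · chi-square(f). Covariance of each pair uses the
    Kost-McDermott polynomial approximation."""
    mean = 2.0 * k
    var = 4.0 * k
    for r in pairwise_corr:  # one correlation per pair i < j
        var += 2.0 * (3.263 * r + 0.710 * r ** 2 + 0.027 * r ** 3)
    return var / (2.0 * mean), 2.0 * mean ** 2 / var
```

With all pairwise correlations zero, the moment matching reduces to the plain Fisher null distribution (c = 1, f = 2k).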
On the Possibility of Studying the NN Ser Spectrum by the Model Atmospheres Method
Sakhibullin, N. A.; Shimansky, V. V.
2017-01-01
The spectrum of the close binary system NN Ser is investigated by the model atmospheres method. It is shown that the atmosphere near the centre of the hot spot on the surface of the red dwarf has a powerful chromosphere, arising from heating in the Lyman continuum. Four models of the binary system with various parameters are constructed and their theoretical spectra are obtained. A white dwarf temperature Tef = 62000 K, a red dwarf radius RT = 0.20139 and a system inclination i = 82° are determined. ...
On the application of probability representations for estimation of the argon method resolution
International Nuclear Information System (INIS)
Kol'tsova, T.V.
1976-01-01
By considering the dating of amphiboles and biotites by the argon method, it is shown that the common F and t criteria can be used to reveal any significant difference in their ages. The dependence of the alternative inference on possible variations of the governing parameters is considered, and a graphical procedure for selecting the optimum number of determinations for a given accuracy of analysis is suggested. The significant difference in the ages of amphiboles and biotites from the Northern Ladoga Lake region permits interesting conclusions on the paleothermal history of the investigated rocks
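The F and t criteria mentioned above compare, respectively, the variances and the means of two sets of age determinations. A minimal sketch with the variance-ratio F statistic and the Welch t statistic follows; the amphibole/biotite ages and uncertainties are purely illustrative, and the statistics would be judged against tabulated critical values for the relevant degrees of freedom:

```python
import math

def f_statistic(s1, s2):
    """Variance ratio (larger over smaller) for the F criterion."""
    a, b = max(s1, s2) ** 2, min(s1, s2) ** 2
    return a / b

def t_statistic(mean1, s1, n1, mean2, s2, n2):
    """Welch t statistic for the difference between two mean ages."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return abs(mean1 - mean2) / se

# Hypothetical K-Ar ages (Ma): amphibole 1850 ± 20 (n = 6), biotite 1790 ± 25 (n = 6).
t = t_statistic(1850.0, 20.0, 6, 1790.0, 25.0, 6)  # well above ~2, so significant
```

A t value this large would indicate a genuinely younger biotite age, which is the kind of difference the abstract interprets thermally.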
International Nuclear Information System (INIS)
Zhu Shengyun; Li Anli; Gou Zhenghui; Zheng Shengnan; Li Guangsheng
1994-01-01
The g-factor, and hence the magnetic moment, of the isomeric state 43Sc(19/2−, 3.1232 MeV) has been measured by the time-differential perturbed angular distribution method. The measured values are g = 0.3279(19) and μ/μN = 3.108(18)
Studies of radioactive decay after-effects by the method of perturbed angular γγ-correlations
International Nuclear Information System (INIS)
Shpinkova, L.G.
2002-01-01
One of the methods applied to study electron capture (EC) after-effects is the time-differential perturbed angular γγ-correlation (TDPAC) technique, which allows investigating hyperfine interactions of the electromagnetic moments of nuclei with the extranuclear fields created by electrons and ions around the probe atom in the studied matrix. After-effects can differentially affect the observed angular correlation and can thus be studied by this method. The experiments performed so far with different nuclei in different matrices showed that after-effects are not important in TDPAC studies of metallic systems, because of the considerable lag caused by the finite lifetime of the initial state of the γγ-cascade and the fast relaxation due to conduction electrons. In insulators and oxides, after-effects should be taken into account when interpreting experimental data. The problem of molecular dynamics studies in liquids being obscured by after-effects has also been mentioned in the literature. The possibility of molecule disintegration caused by EC after-effects, initiated by the Auger process, was studied for 111In complexes with diethylenetriaminepentaacetic acid in neutral aqueous solutions. The results of the work showed directly that EC after-effects can cause disintegration of metal-ligand complexes. The observation of a non-equilibrium fraction with presumably high transient gradients, caused both by relaxation from the highly ionised state of 111Cd (the daughter nucleus in the EC decay of 111In) and by rearrangement of the chemical bonds, allowed assessing the time required for these transient processes (before complex disintegration or complex relaxation to the equilibrium state)
Analysis of Cleaning Process for Several Kinds of Soil by Probability Density Functional Method.
Fujimoto, Akihiro; Tanaka, Terumasa; Oya, Masaru
2017-10-01
A method of analyzing the detergency of various soils by assuming normal distributions for the soil adhesion and soil removal forces was developed by considering the relationship between the soil type and the distribution profile of the soil removal force. The effect of the agitation speed on soil removal was also analyzed by this method. Washing test samples were prepared by soiling fabrics with individual soils such as particulate soils, oily dyes, and water-soluble dyes. Washing tests were conducted using a Terg-O-Tometer and four repetitive washing cycles of 5 min each. The transition of the removal efficiencies was recorded in order to calculate the mean value (μ_rl) and the standard deviation (σ_rl) of the removal strength distribution. The level of detergency and the temporal alteration in the detergency can be represented by μ_rl and σ_rl, respectively. A smaller σ_rl indicates a smaller increase in the detergency with time, which also indicates the existence of a certain amount of soil with a strong adhesion force. As a general trend, the values of σ_rl were greatest for the oily soils, followed by the water-soluble soils and the particulate soils. The relationship between the soil removal processes and the soil adhesion force was expressed on the basis of the transition of the distribution of residual soil. Evaluation of the effects of the agitation speed on μ_rl and σ_rl showed that σ_rl was not affected by the agitation speed; the value of μ_rl for solid soil and oily soil increased with increasing agitation, while the μ_rl of water-soluble soil was not specifically affected by the agitation speed. It can be assumed that the parameter σ_rl is related to the characteristics of the soil and the adhesion condition, and can be applied to estimating the soil removal mechanism.
International Nuclear Information System (INIS)
Turati, Pietro; Pedroni, Nicola; Zio, Enrico
2016-01-01
The efficient estimation of system reliability characteristics is of paramount importance for many engineering applications. Real-world system reliability modeling calls for the capability of treating systems that are: i) dynamic, ii) complex, iii) hybrid and iv) highly reliable. Advanced Monte Carlo (MC) methods offer a way to solve these types of problems, provided that their potentially high computational costs remain affordable. In this paper, the REpetitive Simulation Trials After Reaching Thresholds (RESTART) method is employed, extending it to hybrid systems for the first time (to the authors’ knowledge). The estimation accuracy and precision of RESTART depend strongly on the choice of the Importance Function (IF) indicating how close the system is to failure: in this respect, proper IFs are here originally proposed to improve the performance of RESTART for the analysis of hybrid systems. The resulting overall simulation approach is applied to estimate the probability of failure of the control system of a liquid hold-up tank and of a pump-valve subsystem subject to degradation induced by fatigue. The results are compared to those obtained by standard MC simulation and by RESTART with classical IFs available in the literature. The comparison shows the improvement in performance obtained by our approach. - Highlights: • We consider the issue of estimating small failure probabilities in dynamic systems. • We employ the RESTART method to estimate the failure probabilities. • New Importance Functions (IFs) are introduced to increase the method performance. • We adopt two dynamic, hybrid, highly reliable systems as case studies. • A comparison with literature IFs proves the effectiveness of the new IFs.
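The splitting idea behind RESTART can be illustrated on a toy rare-event problem (not the hybrid systems of the paper): a biased ±1 random walk started at 1 must reach level 10 before 0. Taking the walk's position itself as the importance function, trajectories are restarted from each threshold level, and the rare probability is the product of one-level conditional probabilities. For this birth-death chain the entrance state at level k is exactly k, so no state bookkeeping is needed; this sketch is a simplification of RESTART, which in general must store the entrance states.

```python
import random

def level_prob(start: int, target: int, p_up: float, n_trials: int,
               rng: random.Random) -> float:
    """Estimate P(reach `target` before 0 | start), for a +1/-1 walk
    that steps up with probability p_up."""
    hits = 0
    for _ in range(n_trials):
        x = start
        while 0 < x < target:
            x += 1 if rng.random() < p_up else -1
        hits += x == target
    return hits / n_trials

def splitting_estimate(levels: int, p_up: float, n_trials: int,
                       seed: int = 0) -> float:
    """RESTART-style estimate of P(reach `levels` before 0 | start at 1):
    the product of one-level conditional probabilities, each estimated
    from trajectories restarted at the previous threshold."""
    rng = random.Random(seed)
    est = 1.0
    for k in range(1, levels):
        est *= level_prob(k, k + 1, p_up, n_trials, rng)
    return est

# Exact gambler's-ruin answer for comparison: with q/p = 1.5 and M = 10,
# P = (1.5 - 1) / (1.5**10 - 1), about 8.8e-3.
print(splitting_estimate(levels=10, p_up=0.4, n_trials=2000))
```

Each conditional probability is moderate (0.4 to 0.67 here), so every stage is estimated with good relative accuracy, whereas crude Monte Carlo would need many thousands of full trajectories to see even a handful of successes.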
Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei
2016-03-01
Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Eventually, the Newmark-β integration method and double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.
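The stochastic-harmonic-function representation used above for the rail irregularity can be sketched as follows. The one-sided PSD here is a simple illustrative rational spectrum, not an actual rail-irregularity spectrum, and stratified random frequencies stand in for the paper's number-theory point selection; the check is that the spatial variance of a long realization matches the integral of the PSD over the band.

```python
import math
import random

def psd(omega: float) -> float:
    """Illustrative one-sided PSD S(omega); not a real rail spectrum."""
    return 0.25 / (omega ** 2 + 0.25)

def shf_sample(n_terms: int, w_lo: float, w_hi: float, xs, seed: int = 0):
    """One stochastic-harmonic-function realization:
    r(x) = sum_k sqrt(2 S(w_k) dw) cos(w_k x + phi_k),
    with w_k drawn uniformly inside each frequency stratum and
    phi_k uniform on [0, 2*pi)."""
    rng = random.Random(seed)
    dw = (w_hi - w_lo) / n_terms
    terms = []
    for k in range(n_terms):
        w = w_lo + (k + rng.random()) * dw
        phi = rng.uniform(0.0, 2.0 * math.pi)
        terms.append((math.sqrt(2.0 * psd(w) * dw), w, phi))
    return [sum(a * math.cos(w * x + phi) for a, w, phi in terms)
            for x in xs]

xs = [i * 0.5 for i in range(12000)]
r = shf_sample(n_terms=64, w_lo=0.01, w_hi=4.0, xs=xs)
var_emp = sum(v * v for v in r) / len(r)
# Target variance is the band-limited integral of S, about 0.71 here.
```

Because each term carries amplitude sqrt(2 S Δω), the expected variance is the Riemann sum of the PSD, which is the property the stochastic harmonic function construction guarantees with far fewer terms than a plain spectral-representation series.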
Directory of Open Access Journals (Sweden)
Chih-Ta Yen
2015-01-01
This study proposes novel three-dimensional (3D) matrices of wavelength/time/spatial codes for optical code-division multiple-access (OCDMA) networks, with a double balanced detection mechanism. We construct 3D carrier-hopping prime/modified prime (CHP/MP) codes by extending a two-dimensional (2D) CHP code integrated with a one-dimensional (1D) MP code. The corresponding coder/decoder pairs are based on fiber Bragg gratings (FBGs) and tunable optical delay lines integrated with splitters/combiners. System performance was enhanced by the low cross-correlation properties of the 3D code, designed to avoid the beat noise phenomenon. The CHP/MP code cardinality increased significantly compared to the CHP code under the same bit error rate (BER). The results indicate that the 3D code method can enhance system performance because both the beating terms and multiple-access interference (MAI) were reduced by the double balanced detection mechanism. Additionally, the optical component requirements can be relaxed for high-transmission scenarios.
International Nuclear Information System (INIS)
Zheng, S.H.
1994-01-01
It is indispensable to know the fluence on the nuclear reactor pressure vessel. The cross sections and their treatment play an important role in this problem. In this study, two benchmarks have been interpreted with the Monte Carlo transport program TRIPOLI to qualify the calculational method and the cross sections used in the calculations. For the treatment of the cross sections, the multigroup method is usually used, but it presents some problems, such as the difficulty of choosing the weighting function and the need for a great number of energy groups to represent the fluctuations of the cross sections well. In this thesis, we propose a new method, called the ''Probability Table Method'', to treat the neutron cross sections. For its qualification, a program simulating neutron transport by the Monte Carlo method in one dimension has been written; the comparison of the multigroup results with the probability table results shows the advantages of this new method. The probability table has also been introduced into the TRIPOLI program; the calculational results for the iron deep-penetration benchmark are improved by comparison with the experimental results. It is therefore of interest to use this new method in shielding and neutronics calculations. (author). 42 refs., 109 figs., 36 tabs
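The advantage of a probability table over a single multigroup average can be seen in a small self-shielding calculation: the transmission through a slab is the table average of exp(-σt), which by Jensen's inequality exceeds exp(-⟨σ⟩t), and it is the former that Monte Carlo sampling from the table reproduces. The two-band table below is hypothetical, standing in for a resonance background plus a resonance peak within one energy group.

```python
import math
import random

# Hypothetical two-band probability table for one energy group:
# (band probability, cross section in arbitrary inverse-length units).
TABLE = [(0.8, 0.5), (0.2, 10.0)]  # background band + resonance band

def sample_xs(table, rng: random.Random) -> float:
    """Sample a cross-section value from the probability table."""
    u, cum = rng.random(), 0.0
    for p, sigma in table:
        cum += p
        if u <= cum:
            return sigma
    return table[-1][1]

def transmission(table, thickness: float, n: int, seed: int = 0) -> float:
    """Monte Carlo estimate of <exp(-sigma * t)> over the table."""
    rng = random.Random(seed)
    return sum(math.exp(-sample_xs(table, rng) * thickness)
               for _ in range(n)) / n

t = 2.0
exact_table = sum(p * math.exp(-s * t) for p, s in TABLE)  # about 0.294
sigma_avg = sum(p * s for p, s in TABLE)                   # 2.4
multigroup = math.exp(-sigma_avg * t)                      # about 0.0082
mc = transmission(TABLE, t, n=100_000)
```

The multigroup average underestimates deep-penetration transmission by more than an order of magnitude in this contrived example, which is the qualitative effect the probability table method corrects in the iron benchmark.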
Perturbed effects at radiation physics
International Nuclear Information System (INIS)
Külahcı, Fatih; Şen, Zekâi
2013-01-01
Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section with random components in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer–Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but additionally the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables. - Highlights: • Perturbation methodology is applied to radiation physics. • Layer attenuation coefficient (LAC) and perturbed LAC are proposed for contact materials. • Perturbed linear attenuation coefficient is proposed. • Perturbed mass attenuation coefficient (PMAC) is proposed. • Perturbed cross-section is proposed
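The flavour of such perturbed results can be reproduced with a small Monte Carlo check of the Beer–Lambert law I = I0 exp(-μx) when the linear attenuation coefficient carries a random component. The numbers below are illustrative, not the paper's; with μ normal, I is lognormal, so the sample mean and standard deviation can be compared against the closed-form lognormal moments and the first-order propagation formula.

```python
import math
import random

def perturbed_intensity(i0: float, mu0: float, dmu: float, x: float,
                        n: int, seed: int = 0):
    """Mean and standard deviation of I = I0 exp(-mu x) when
    mu ~ N(mu0, dmu^2); all parameters are illustrative."""
    rng = random.Random(seed)
    samples = [i0 * math.exp(-rng.gauss(mu0, dmu) * x) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)

mean, std = perturbed_intensity(1.0, mu0=0.5, dmu=0.05, x=2.0, n=50_000)
# Lognormal check: E[I] = I0 exp(-mu0 x + (dmu x)^2 / 2), about 0.3697;
# first-order std is I0 x exp(-mu0 x) dmu, about 0.0368.
```

Note that the perturbed mean exceeds the unperturbed intensity exp(-μ0 x): random variability in the attenuation coefficient shifts the average transmitted intensity upward, one of the deviations from average behavior that the perturbation expressions quantify.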
PDE-Foam - a probability-density estimation method using self-adapting phase-space binning
Dannheim, Dominik; Voigt, Alexander; Grahn, Karl-Johan; Speckmayer, Peter
2009-01-01
Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. To efficiently use large event samples to estimate the probability density, a binary search tree (range searching) is used in the PDE-RS implementation. It is a generalisation of standard likelihood methods and a powerful classification tool for problems with highly non-linearly correlated observables. In this paper, we present an innovative improvement of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space in a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multidimensional phase space, minimizing the variance of the signal and background densities inside the cells. The binned density information is stored in binary trees, allowing for a very ...
Energy Technology Data Exchange (ETDEWEB)
Wang Jun; Liu Haiyan [University of Science and Technology of China, Hefei National Laboratory for Physical Sciences at the Microscale, and Key Laboratory of Structural Biology, School of Life Sciences (China)], E-mail: hyliu@ustc.edu.cn
2007-01-15
Chemical shifts contain substantial information about protein local conformations. We present a method to assign individual protein backbone dihedral angles into specific regions on the Ramachandran map based on the amino acid sequences and the chemical shifts of backbone atoms of tripeptide segments. The method uses a scoring function derived from the Bayesian probability for the central residue of a query tripeptide segment to have a particular conformation. The Ramachandran map is partitioned into representative regions at two levels of resolution. The lower resolution partitioning is equivalent to the conventional definitions of different secondary structure regions on the map. At the higher resolution level, the α and β regions are further divided into subregions. Predictions are attempted at both levels of resolution. We compared our method with TALOS using the original TALOS database, and obtained comparable results. Although TALOS may produce the best results with currently available databases which are much enlarged, the Bayesian-probability-based approach can provide a quantitative measure for the reliability of predictions.
Munoz, E. F.; Silverman, M. P.
1979-01-01
A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976) and consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium deoxycholate per liter is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 deg C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most probable number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
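The most-probable-number values behind such tables are maximum-likelihood estimates under a Poisson dilution model, where each tube is positive with probability 1 - exp(-λv). A simple grid search recovers the estimate; the dilution scheme below is generic, not the exact protocol of the paper, and in practice one would read standard MPN tables rather than re-derive them.

```python
import math

def mpn_log_likelihood(lam: float, series) -> float:
    """Log-likelihood of density `lam` (organisms per ml) for a dilution
    series of (n_tubes, n_positive, volume_ml) triples."""
    ll = 0.0
    for n, pos, v in series:
        p = 1.0 - math.exp(-lam * v)
        if pos > 0:
            ll += pos * math.log(p)
        ll += (n - pos) * (-lam * v)   # log P(negative) = -lam * v
    return ll

def mpn_estimate(series, lo=1e-4, hi=100.0, steps=100_000) -> float:
    """Grid-search maximum-likelihood MPN estimate."""
    best_lam, best_ll = lo, -math.inf
    for i in range(steps):
        lam = lo + (hi - lo) * i / (steps - 1)
        ll = mpn_log_likelihood(lam, series)
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam

# Single-dilution sanity check: 10 tubes of 1 ml, 5 positive, so the
# MLE solves 1 - exp(-lam) = 0.5, i.e. lam = ln 2, about 0.693 per ml.
print(mpn_estimate([(10, 5, 1.0)]))
```

With several dilutions the same likelihood combines all tube counts, which is exactly the matching of dilution factors to MPN tables that the abstract proposes to automate.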
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanisms, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region of the Taihu Lake basin, and the perturbation method was used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, of the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For the soil parameters, K was quite sensitive to all the results except the runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with a deviation of less than 10% during 2005-2010. These results have direct reference value for AnnAGNPS parameter selection and calibration adjustment. The runoff simulation results for the study area also proved that the sensitivity analysis is practicable for parameter adjustment, showed its adaptability to hydrology simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
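The perturbation method used for this kind of screening reduces to a one-at-a-time relative sensitivity index: each parameter is perturbed by a small fraction and the normalized response change is recorded. The sketch below runs against a toy response function standing in for a model output, not AnnAGNPS itself; parameter names echo the abstract but the functional form is invented for illustration.

```python
def relative_sensitivity(model, params: dict, name: str,
                         delta: float = 0.1) -> float:
    """Central-difference relative sensitivity
    S = (dY / Y0) / (dX / X0) for one parameter."""
    y0 = model(params)
    up = dict(params); up[name] *= 1.0 + delta
    dn = dict(params); dn[name] *= 1.0 - delta
    return (model(up) - model(dn)) / (2.0 * delta * y0)

# Toy response standing in for a runoff output: Y = a * LS**2 * CN.
toy = lambda p: p["a"] * p["LS"] ** 2 * p["CN"]
base = {"a": 3.0, "LS": 1.5, "CN": 70.0}
print(relative_sensitivity(toy, base, "LS"))  # quadratic dependence: ~2
print(relative_sensitivity(toy, base, "CN"))  # linear dependence: ~1
```

Ranking parameters by |S| gives exactly the sensitive/less-sensitive/insensitive classification reported above, with the index independent of the parameter's units.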
Directory of Open Access Journals (Sweden)
D. S. Vakhlyarskiy
2016-01-01
This paper proposes a method to calculate the splitting of the natural frequency of the shell of a hemispherical resonator gyro (HRG). The paper considers splitting that arises from a small defect of the middle surface, which makes the resonator differ from a shell of revolution. The presented method is a combination of the perturbation method and the finite element method. It allows us to find the frequency splitting caused by defects in shape arbitrarily distributed in the circumferential direction. This is achieved by calculating the perturbations of multiple natural frequencies of the second and higher orders. The proposed method allows us to calculate the splitting of multiple frequencies for a shell with a meridian of arbitrary shape. The developed finite element is an annular shell element with two nodes. Projections of the displacements on the axes of the global cylindrical coordinate system are used as the unknowns. Polynomials of the second degree are used to approximate the displacements. Within the finite element, the geometric characteristics are expanded in a series in the small parameter of the perturbations of the middle-surface geometry. The displacements on the finite element are expanded in a series in the small parameter, and in a series in the circumferential angle. In the computer implementation of the method, three-dimensional arrays are used to store the perturbed quantities. This allows the use of regular expressions for the mass and stiffness matrices when building the finite element, instead of analytic dependencies for each perturbation of these matrices of the required order, with the necessary mathematical operations redefined in accordance with the perturbation method. As a test task, the frequency splitting of a non-circular cylindrical resonator with Navier boundary conditions is calculated. The discrepancy between the results and the semi-analytic solution to this problem is less than 1%. For a cylindrical shell is
Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H
2016-01-01
Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869
Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan
2018-01-01
The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans for a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment plan verification software that we developed. The quality factor (QF) of our support software for treatment decisions was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) and the similar group (average QF = 1.058, 33% match rate, n = 12). Therefore, we propose a normal tissue complication probability (NTCP)-based plan-scoring index for verification of different plans for personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based QF scoring method was adequate for obtaining biological verification quality and organ-risk saving using the treatment-planning decision-support software we developed for prostate cancer.
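The NTCP values entering such a score are typically computed from the dose distribution with a model such as Lyman-Kutcher-Burman (LKB); the abstract does not specify which model was used, so the sketch below, with hypothetical organ parameters and a two-bin dose-volume histogram, shows only one common choice.

```python
import math

def geud(dose_bins, a: float) -> float:
    """Generalized equivalent uniform dose from (fraction_volume, dose_Gy)
    bins of a differential DVH."""
    return sum(v * d ** a for v, d in dose_bins) ** (1.0 / a)

def lkb_ntcp(dose_bins, td50: float, m: float, n: float) -> float:
    """LKB normal tissue complication probability:
    NTCP = Phi((gEUD - TD50) / (m * TD50)), with volume exponent a = 1/n."""
    eud = geud(dose_bins, 1.0 / n)
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical serial-organ parameters and a two-bin DVH (60% of the
# volume at 40 Gy, 40% at 70 Gy); values are illustrative only.
dvh = [(0.6, 40.0), (0.4, 70.0)]
print(lkb_ntcp(dvh, td50=76.9, m=0.13, n=0.09))
```

By construction a uniform dose equal to TD50 yields NTCP = 0.5; a plan-scoring QF can then weight such per-organ NTCP values against target-coverage indexes, as the abstract's software does.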
Dobramysl, U; Holcman, D
2018-02-15
Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
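A one-dimensional analogue conveys the splitting-probability idea in its simplest form: a Brownian particle on [0, 1] started at x is absorbed at the left end before the right with probability 1 - x. A discretized simulation reproduces this harmonic-measure result, though the paper's matched asymptotics concern small windows in higher dimensions, where no such elementary formula exists.

```python
import math
import random

def splitting_probability(x0: float, n_paths: int, dt: float = 1e-3,
                          seed: int = 0) -> float:
    """Fraction of discretized Brownian paths from x0 that are absorbed
    at 0 before 1 (Euler-Maruyama steps, unit step variance per unit time)."""
    rng = random.Random(seed)
    left = 0
    for _ in range(n_paths):
        x = x0
        while 0.0 < x < 1.0:
            x += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        left += x <= 0.0
    return left / n_paths

est = splitting_probability(0.3, n_paths=4000)
# Exact answer for the interval is 1 - x0 = 0.7 (up to time-step bias).
```

Inverting the map from source position to the vector of window fluxes, rather than evaluating it, is the reconstruction problem the paper addresses.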
Singular perturbation of simple eigenvalues
International Nuclear Information System (INIS)
Greenlee, W.M.
1976-01-01
Two operator theoretic theorems which generalize those of asymptotic regular perturbation theory and which apply to singular perturbation problems are proved. Application of these theorems to concrete problems is involved, but the perturbation expansions for eigenvalues and eigenvectors are developed in terms of solutions of linear operator equations. The method of correctors, as well as traditional boundary layer techniques, can be used to apply these theorems. The current formulation should be applicable to highly singular ''hard core'' potential perturbations of the radial equation of quantum mechanics. The theorems are applied to a comparatively simple model problem whose analysis is basic to that of the quantum mechanical problem
Energy Technology Data Exchange (ETDEWEB)
Freire, Fernando S.; Silva, Fernando C.; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: ffreire@con.ufrj.br; fernando@con.ufrj.br; aquilino@.con.ufrj.br
2005-07-01
Frequently it is necessary to compute the change in core multiplication caused by a change in the core temperature or composition. Even when this perturbation is localized, such as a control rod inserted into the core, one does not have to repeat the original criticality calculation; instead, we can use the well-known pseudo-harmonics perturbation method to express the corresponding change in the multiplication factor in terms of the neutron flux expanded in the basis vectors characterizing the unperturbed core. We may therefore compute control rod worths to find the most reactive control rod and calculate the fast shutdown margin. In this thesis we propose a simple and precise method to identify the most reactive control rod. (author)
Renormalized Lie perturbation theory
International Nuclear Information System (INIS)
Rosengaus, E.; Dewar, R.L.
1981-07-01
A Lie operator method for constructing action-angle transformations continuously connected to the identity is developed for area preserving mappings. By a simple change of variable from action to angular frequency a perturbation expansion is obtained in which the small denominators have been renormalized. The method is shown to lead to the same series as the Lagrangian perturbation method of Greene and Percival, which converges on KAM surfaces. The method is not superconvergent, but yields simple recursion relations which allow automatic algebraic manipulation techniques to be used to develop the series to high order. It is argued that the operator method can be justified by analytically continuing from the complex angular frequency plane onto the real line. The resulting picture is one where preserved primary KAM surfaces are continuously connected to one another
Penkov, V. B.; Ivanychev, D. A.; Novikova, O. S.; Levina, L. V.
2018-03-01
The article substantiates the possibility of building full parametric analytical solutions of mathematical physics problems in arbitrary regions by means of computer systems. The suggested effective means for such solutions is the method of boundary states with perturbations, which aptly incorporates all parameters of an orthotropic medium in a general solution. We performed check calculations of elastic fields of an anisotropic rectangular region (test and calculation problems) for a generalized plane stress state.
International Nuclear Information System (INIS)
Ball, G.
1990-01-01
The development and analysis of methods for generating first-flight collision probabilities in two-dimensional geometries consistent with light-water-moderated (LWR) fuel assemblies are examined. A new ray-tracing algorithm is discussed. A number of numerical results are given, demonstrating the feasibility of this algorithm and the effects of the moderator (and fuel) sectorizations on the resulting flux distributions. The collision probabilities are introduced and their subsequent utilization in the flux calculation procedures illustrated. A brief description of the Coxy-1 and Coxy-2 programs (which were developed in the Reactor Theory Division of the Atomic Energy Agency of South Africa Ltd) has also been added. 41 figs., 9 tabs., 18 refs
DEFF Research Database (Denmark)
Asmussen, Søren; Albrecher, Hansjörg
The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle, and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Lévy processes, Gerber–Shiu functions and dependence.
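For the classical compound Poisson model, the book's central quantity can be checked by simulation: with Exp(β) claims, Poisson(λ) arrivals and premium rate c, the infinite-horizon ruin probability has the closed form ψ(u) = (λ/(cβ)) exp(-(β - λ/c)u), and a long finite-horizon simulation approaches it. The parameters below (25% safety loading) are illustrative.

```python
import math
import random

def ruin_probability(u: float, c: float, lam: float, beta: float,
                     horizon: float, n_paths: int, seed: int = 0) -> float:
    """Monte Carlo finite-horizon ruin probability for the compound
    Poisson model: claims ~ Exp(beta), arrivals ~ Poisson(lam),
    premium rate c, initial surplus u."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            w = rng.expovariate(lam)               # inter-arrival time
            t += w
            if t > horizon:
                break                              # survived the horizon
            surplus += c * w - rng.expovariate(beta)  # premium in, claim out
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths

est = ruin_probability(u=3.0, c=1.25, lam=1.0, beta=1.0,
                       horizon=400.0, n_paths=3000)
# Exact infinite-horizon value: (1/1.25) * exp(-0.2 * 3), about 0.439.
```

Since ruin, if it happens, tends to happen early under a positive loading, the finite-horizon estimate sits just below the infinite-horizon formula; heavy-tailed claim distributions, covered in the book, behave very differently.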
On summation of perturbation expansions
International Nuclear Information System (INIS)
Horzela, A.
1985-04-01
The problem of the restoration of physical quantities defined by divergent perturbation expansions is analysed. Padé and Borel summability are proved for alternating perturbation expansions with factorially growing coefficients. The proof is based on the methods of the classical moment theory. 17 refs. (author)
Generalized Probability-Probability Plots
Mushkudiani, N.A.; Einmahl, J.H.J.
2004-01-01
We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real-valued data. These plots, which are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P plots.
International Nuclear Information System (INIS)
Corana, A.; Bortolan, G.; Casaleggio, A.
2004-01-01
We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
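The most-flat-intervals (MFI) idea can be sketched on a synthetic set of known dimension: points uniform on a line segment embedded in the plane have correlation dimension 1, and the flattest window of the local slope of the log-log plot of the correlation integral recovers it. This is a bare sketch of the idea only, without the reliability indices or noise handling of the methods described above.

```python
import math
import random

def correlation_integral(pts, r: float) -> float:
    """Fraction of distinct point pairs closer than r."""
    n, count = len(pts), 0
    for i in range(n):
        xi, yi = pts[i]
        for j in range(i + 1, n):
            if math.hypot(xi - pts[j][0], yi - pts[j][1]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def mfi_dimension(pts, radii, window: int = 4) -> float:
    """Estimate the correlation dimension as the mean local slope of
    log C(r) vs log r over the flattest (lowest-spread) slope window."""
    logs = [(math.log(r), math.log(correlation_integral(pts, r)))
            for r in radii]
    slopes = [(logs[i + 1][1] - logs[i][1]) / (logs[i + 1][0] - logs[i][0])
              for i in range(len(logs) - 1)]
    best = min(range(len(slopes) - window + 1),
               key=lambda i: max(slopes[i:i + window]) - min(slopes[i:i + window]))
    w = slopes[best:best + window]
    return sum(w) / len(w)

rng = random.Random(1)
segment = [(t, 0.5 * t) for t in (rng.random() for _ in range(400))]
radii = [0.02 * 1.4 ** k for k in range(10)]  # log-spaced correlation lengths
dim_est = mfi_dimension(segment, radii)
print(dim_est)  # a one-dimensional set: expect a value near 1
```

Choosing the window by slope spread, rather than by eye, is what makes the estimate automatic; the MPDV variant described above instead histograms the slope values and picks the most probable one.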
Shiryaev, Albert N
2016-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.
Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...
International Nuclear Information System (INIS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Shen, Aiguo; Hu, Jiming; Jia, Jun
2013-01-01
The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, the classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is proved that the utilization of HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory. (paper)
Madigan, Michael L; Aviles, Jessica; Allin, Leigh J; Nussbaum, Maury A; Alexander, Neil B
2018-04-16
A growing number of studies are using modified treadmills to train reactive balance after trip-like perturbations that require multiple steps to recover balance. The goal of this study was thus to develop and validate a low-tech reactive balance rating method in the context of trip-like treadmill perturbations to facilitate the implementation of this training outside the research setting. Thirty-five residents of five senior congregate housing facilities participated in the study. Subjects completed a series of reactive balance tests on a modified treadmill from which the reactive balance rating was determined, along with a battery of standard clinical balance and mobility tests that predict fall risk. We investigated the strength of correlation between the reactive balance rating and reactive balance kinematics. We compared the strength of correlation between the reactive balance rating and clinical tests predictive of fall risk, with the strength of correlation between reactive balance kinematics and the same clinical tests. We also compared the reactive balance rating between subjects predicted to be at a high or low risk of falling. The reactive balance rating was correlated with reactive balance kinematics (Spearman's rho squared = .04 - .30), exhibited stronger correlations with clinical tests than most kinematic measures (Spearman's rho squared = .00 - .23), and was 42-60% lower among subjects predicted to be at a high risk for falling. The reactive balance rating method may provide a low-tech, valid measure of reactive balance kinematics, and an indicator of fall risk, after trip-like postural perturbations.
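The reported effect sizes are squared Spearman rank correlations; the computation is simple enough to sketch in a few lines. The rating/kinematic pairs below are invented for illustration, not the study's data.

```python
# Spearman's rho: Pearson correlation of the ranks (with average ranks for ties).
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

rating = [1, 2, 2, 3, 4, 5, 6, 7]                             # hypothetical ratings
kinematic = [0.21, 0.30, 0.28, 0.33, 0.41, 0.39, 0.55, 0.60]  # hypothetical kinematics
rho = spearman_rho(rating, kinematic)
print(round(rho ** 2, 2))   # -> 0.94
```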
Institute of Scientific and Technical Information of China (English)
朱卫平; 黄黔
2002-01-01
In order to analyze bellows effectively and practically, the finite-element-displacement-perturbation method (FEDPM) is proposed for the geometric nonlinear behaviors of shells of revolution subjected to pure bending moments or lateral forces in one of their meridional planes. The formulations are mainly based upon the idea of perturbation: the nodal displacement vector and the nodal force vector of each finite element are expanded by taking the root-mean-square value of the circumferential strains of the shell as a perturbation parameter. The load steps and the iteration times are not as arbitrary and unpredictable as in the usual nonlinear analysis. Instead, there are definite relations between the load steps and the displacement increments, and no iteration is needed for each load step. Besides, in the formulations the shell is idealized into a series of conical frusta for convenience in practice, Sanders' nonlinear geometric equations of moderate small rotation are used, and shells made of more than one material ply are also considered.
Energy Technology Data Exchange (ETDEWEB)
Bruna, J G; Brunet, J P; Clouet D'Orval, Ch; Caizergues, R; Verriere, Ph [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires
1964-07-01
The {alpha} decay constant of prompt neutrons has been studied in the homogeneous plutonium-fueled, light-water-moderated reactor Alecto, by the probability method. In this method, the probability to count one, two,.... neutrons during a given time is measured. The value of {alpha} can be deduced from this measurement, for various subcritical states of the reactor. The experimental results were then compared with values obtained, for the same reactivities, by the pulsed neutron technique. (authors) [French] On a etudie sur Alecto, reacteur homogene au plutonium, modere a l'eau legere, la constante de decroissance {alpha} des neutrons prompts par la methode des probabilites. Celle-ci consiste a mesurer la probabilite de compter un, deux, etc..., neutrons pendant un intervalle de temps donne. On a pu en deduire la valeur de {alpha}, dans divers etats sous-critiques du reacteur. On a compare les resultats experimentaux a d'autres valeurs obtenues, aux memes reactivites, par la methode des neutrons pulses. (auteurs)
Yaşar, Elif; Yıldırım, Yakup; Yaşar, Emrullah
2018-06-01
This paper is devoted to the conformable fractional space-time perturbed Gerdjikov-Ivanov (GI) equation, which appears in nonlinear fiber optics and photonic crystal fibers (PCF). We consider the model with full nonlinearity in order to give a generalized flavor. The sine-Gordon equation approach is applied to the model equation to retrieve the dark, bright, dark-bright, singular and combined singular optical solitons. The constraint conditions guaranteeing the existence of these solitons are also reported. We also present some graphical simulations of the solutions for a better understanding of the physical phenomena behind the considered model.
International Nuclear Information System (INIS)
Suck Salk, S.H.
1985-01-01
With the use of projection operators, the formal expressions of distorted-wave and coupled-channel-wave transition amplitudes for rearrangement collisions are derived. Use of projection operators (for the transition amplitudes) sharpens our understanding of the structural differences between the two transition amplitudes. The merit of each representation of the transition amplitudes is discussed. Derived perturbation potentials are found to have different structures. The rigorously derived distorted-wave Born-approximation (DWBA) transition amplitude is shown to be a generalization of the earlier DWBA expression obtained from the assumption of the dominance of elastic scattering in rearrangement collisions
Dynamically constrained ensemble perturbations – application to tides on the West Florida Shelf
Directory of Open Access Journals (Sweden)
F. Lenartz
2009-07-01
Full Text Available A method is presented to create an ensemble of perturbations that satisfies linear dynamical constraints. A cost function is formulated defining the probability of each perturbation. It is shown that the perturbations created with this approach take the land-sea mask into account in a similar way as variational analysis techniques. The impact of the land-sea mask is illustrated with an idealized configuration of a barrier island. Perturbations with a spatially variable correlation length can also be created by this approach. The method is applied to a realistic configuration of the West Florida Shelf to create perturbations of the M2 tidal parameters for elevation and depth-averaged currents. The perturbations are weakly constrained to satisfy the linear shallow-water equations. Although the constraint is derived from an idealized assumption, it is shown that this approach is applicable to a non-linear and baroclinic model. The amplitude of spurious transient motions created by constrained perturbations of initial and boundary conditions is significantly lower than when the variables are perturbed independently, or when only the momentum equation is used to compute the velocity perturbations from the elevation.
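The core construction, an ensemble whose probability is defined by a Gaussian cost function with spatial correlation, can be sketched in a few lines. This toy version omits the land-sea mask and the weak dynamical constraint of the paper; the grid size and correlation length are invented, and NumPy is assumed.

```python
import numpy as np

# Draw an ensemble of smooth 1-D perturbations whose probability is defined by
# a Gaussian cost function J(x) = x^T C^{-1} x / 2 with a squared-exponential
# covariance C, i.e. sample x ~ N(0, C) through a Cholesky factor of C.
rng = np.random.default_rng(0)
n, L = 50, 5.0                                   # grid points, correlation length
s = np.arange(n)
C = np.exp(-((s[:, None] - s[None, :]) / L) ** 2)
C += 1e-8 * np.eye(n)                            # jitter for numerical stability
A = np.linalg.cholesky(C)
ensemble = (A @ rng.standard_normal((n, 100))).T  # 100 correlated perturbations
print(ensemble.shape)
```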
International Nuclear Information System (INIS)
Mathiak, E.; Schuetz, B.
1980-01-01
The authors explain purpose, latest developments and application of probabilistic methods in safety assessments of nuclear facilities, and of non-nuclear installations. Their findings show that the methods of probabilistic systems analysis and of structural reliability analysis proved to be successful, above all with regard to systematics and reproducibility. Above all probabilistic systems analyses have been applied to a large extent in the Rasmussen study. Although this study has been intended to present objective information on the risks to be expected from nuclear power plant operation, the results of the study have not been accepted by the public as an unbiased presentation. It is worth mentioning that in the opinion of a number of social scientists, solutions accepted by the whole of society cannot be reached by defining and adhering to risk standards, but rather by entering into discussions with those groups directly affected, working out compromises meeting all interests. Risk analyses supply information that facilitates practical planning of emergency measures. A description of probable accidents allows conclusions to be drawn in terms of quality and quantity as to how and to what extent appropriate precautionary measures can be taken and planned. Risk analyses offer the possibility of preventing damage hitherto known only by experience (e.g. through accident analyses) by precalculating possible events, and then initiating the required improvements. It is these positive effects that make up the importance of such analyses. (orig./HSCH) [de
Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C
2012-02-15
The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10(2) CFU/10 cm(2)). Two kits (VIP™ and Petrifilm™) failed to detect 10(4) CFU/10 cm(2). The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.
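The MPN computation itself is a small maximum-likelihood problem: with n_i tubes of inoculum volume v_i and x_i positive tubes at each dilution, the concentration lambda maximizes a Bernoulli likelihood with P(positive) = 1 - exp(-lambda*v_i). The study used the MPN-BAM spreadsheet; the sketch below solves the same likelihood equation by bisection, with hypothetical example counts.

```python
import math

# MPN maximum-likelihood estimate for a dilution series.  The score function
# (derivative of the log-likelihood in lambda) is positive below the MLE and
# negative above it, so a bisection on lambda finds the root.
def mpn(volumes, tubes, positives):
    def score(lam):
        s = 0.0
        for v, n, x in zip(volumes, tubes, positives):
            p = 1 - math.exp(-lam * v)
            s += x * v * math.exp(-lam * v) / p - (n - x) * v
        return s
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        mid = math.sqrt(lo * hi)        # geometric midpoint, lambda spans decades
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Classic 3-tube series at 0.1, 0.01, 0.001 mL with 3, 1, 0 positive tubes.
est = mpn([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0])
print(round(est, 1))   # MPN per mL
```

For this 3-1-0 pattern the estimate lands close to the tabulated BAM value of about 43 per mL.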
Supersingular quantum perturbations
International Nuclear Information System (INIS)
Detwiler, L.C.; Klauder, J.R.
1975-01-01
A perturbation potential is called supersingular whenever generally every matrix element of the perturbation in the unperturbed eigenstates is infinite. It follows that supersingular perturbations do not have conventional perturbation expansions, say for energy eigenvalues. By invoking variational arguments, we determine the asymptotic behavior of the energy eigenvalues for asymptotically small values of the coupling constant of the supersingular perturbation
International Nuclear Information System (INIS)
Suslov, I.M.
2005-01-01
Various perturbation series are factorially divergent. The behavior of their high-order terms can be determined by Lipatov's method, which involves the use of instanton configurations of appropriate functional integrals. When the Lipatov asymptotic form is known and several lowest-order terms of the perturbation series are found by direct calculation of diagrams, one can gain insight into the behavior of the remaining terms of the series, which can be resummed to solve various strong-coupling problems in a certain approximation. This approach is demonstrated by determining the Gell-Mann-Low functions in φ^4 theory, QED, and QCD with arbitrary coupling constants. An overview of the mathematical theory of divergent series is presented, and interpretation of perturbation series is discussed. Explicit derivations of the Lipatov asymptotic form are presented for some basic problems in theoretical physics. A solution is proposed to the problem of renormalon contributions, which hampered progress in this field in the late 1970s. Practical perturbation-series summation schemes are described both for a coupling constant of order unity and in the strong-coupling limit. An interpretation of the Borel integral is given for 'non-Borel-summable' series. Higher-order corrections to the Lipatov asymptotic form are discussed.
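The resummation idea can be made concrete with the textbook example of a factorially divergent series (not taken from the review itself): Euler's series sum_n n!(-g)^n, whose Borel sum is the integral of exp(-t)/(1+gt) over t from 0 to infinity. The sketch below compares the Borel integral with the optimally truncated partial sum for a small coupling g.

```python
import math

# Borel summation of Euler's divergent series sum_n n! (-g)^n.
g = 0.1

def borel_sum(g, tmax=60.0, steps=200000):
    # Midpoint-rule quadrature of the Borel integral exp(-t)/(1 + g t).
    h = tmax / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.exp(-t) / (1 + g * t) * h
    return total

def truncated(g, N):
    # Partial sum of the divergent series up to order N.
    s, term = 0.0, 1.0
    for n in range(N + 1):
        s += term
        term *= -(n + 1) * g
    return s

best = truncated(g, int(1 / g))   # optimal truncation occurs near n ~ 1/g
print(borel_sum(g), best)         # the two agree to roughly exp(-1/g)
```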
Quantum Probabilities as Behavioral Probabilities
Directory of Open Access Journals (Sweden)
Vyacheslav I. Yukalov
2017-03-01
Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.
DEFF Research Database (Denmark)
Rojas-Nandayapa, Leonardo
Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators...
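As a concrete illustration of such Monte Carlo estimators (an example from this literature, not necessarily the dissertation's own construction), the Asmussen-Kroese conditional estimator for P(S_n > u) with i.i.d. summands reads n*E[Fbar(max(M_{n-1}, u - S_{n-1}))], where S_{n-1} and M_{n-1} are the sum and maximum of n-1 draws. A sketch with Pareto tails:

```python
import random

# Asmussen-Kroese conditional Monte Carlo for the tail probability P(S_n > u)
# of a sum of i.i.d. Pareto(alpha) variables, Fbar(t) = t^(-alpha) for t >= 1.
random.seed(1)
alpha, n, u = 2.0, 4, 100.0

def fbar(t):
    return min(1.0, t ** -alpha) if t > 0 else 1.0

def pareto():
    return random.random() ** (-1.0 / alpha)   # inverse-CDF sampling

R = 20000
acc = 0.0
for _ in range(R):
    xs = [pareto() for _ in range(n - 1)]
    acc += n * fbar(max(max(xs), u - sum(xs)))
est = acc / R
print(est)   # close to the subexponential asymptotic n * u**(-alpha) = 4e-4
```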
On dark energy isocurvature perturbation
International Nuclear Information System (INIS)
Liu, Jie; Zhang, Xinmin; Li, Mingzhe
2011-01-01
Determining the equation of state of dark energy with astronomical observations is crucially important to understand the nature of dark energy. In performing a likelihood analysis of the data, especially of the cosmic microwave background and large scale structure data the dark energy perturbations have to be taken into account both for theoretical consistency and for numerical accuracy. Usually, one assumes in the global fitting analysis that the dark energy perturbations are adiabatic. In this paper, we study the dark energy isocurvature perturbation analytically and discuss its implications for the cosmic microwave background radiation and large scale structure. Furthermore, with the current astronomical observational data and by employing Markov Chain Monte Carlo method, we perform a global analysis of cosmological parameters assuming general initial conditions for the dark energy perturbations. The results show that the dark energy isocurvature perturbations are very weakly constrained and that purely adiabatic initial conditions are consistent with the data
International Nuclear Information System (INIS)
Jin, Jianghong; Pang, Lei; Zhao, Shoutang; Hu, Bin
2015-01-01
Highlights: • Models of PFS for SIS were established by using the reliability block diagram. • A more accurate calculation of PFS for SIS can be acquired by using the SL. • Degraded operation of a complex SIS does not affect the availability of the SIS. • The safe undetected failure is the largest contributor to the PFS of the SIS. - Abstract: The spurious trip of a safety instrumented system (SIS) brings great economic losses to production, so ensuring that the SIS is both reliable and available has become a pressing issue. But the existing models of the spurious trip rate (STR) or the probability of failing safely (PFS) are too simplified and not accurate; in-depth studies of availability are required to obtain a more accurate PFS for the SIS. Based on an analysis of the factors that influence the PFS of the SIS, a quantitative study of the PFS is carried out using the reliability block diagram (RBD) method, and some application examples are given. The results show that common cause failure increases the PFS; degraded operation does not affect the availability of the SIS; if the equipment is tested and repaired one by one, the unavailability of the SIS can be ignored; the occurrence time of an independent safe undetected failure should be the system lifecycle (SL) rather than the proof test interval; and the independent safe undetected failure is the largest contributor to the PFS of the SIS.
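The RBD arithmetic behind such quantitative studies is simple: series blocks multiply availabilities, parallel (redundant) blocks multiply unavailabilities. The sketch below is generic; the block values are invented, not the paper's SIS failure data.

```python
# Generic reliability-block-diagram (RBD) evaluation: in a series structure all
# blocks must function, in a parallel structure any one block suffices.
def series(*ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

def parallel(*ps):
    fail = 1.0
    for p in ps:
        fail *= (1.0 - p)          # all redundant blocks must fail together
    return 1.0 - fail

sensors = parallel(0.98, 0.98)     # redundant sensor pair (hypothetical values)
logic = 0.999                      # logic solver (hypothetical)
valve = 0.97                       # final element (hypothetical)
availability = series(sensors, logic, valve)
print(round(availability, 4))      # -> 0.9686
```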
International Nuclear Information System (INIS)
Varella, Marcio Teixeira do Nascimento
2001-12-01
We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H2 molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10^-2 eV) the APD spread over a considerable extension, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e+-H2 collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Z_eff), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the Hamiltonian of the collision) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes, in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e--H2O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds. (author)
International Nuclear Information System (INIS)
Kemshell, P.B.; Wright, W.V.; Sanders, L.G.
1984-01-01
DUCKPOND, the sensitivity option of the Monte Carlo code McBEND, is being used to study the effect of environmental perturbations on the response of a dual detector neutron porosity logging tool. Using a detailed model of an actual tool, calculations have been performed for a 19% porosity limestone rock sample in the API Test Pit. Within a single computer run, the tool response, or near-to-far detector count ratio, and the sensitivity of this response to the concentration of each isotope present in the formation have been estimated. The calculated tool response underestimates the measured value by about 10%, which is equal to 1.5 ''standard errors'', but this apparent discrepancy is shown to be within the spread of calculated values arising from uncertainties on the rock composition
The computation of stationary distributions of Markov chains through perturbations
Directory of Open Access Journals (Sweden)
Jeffery J. Hunter
1991-01-01
Full Text Available An algorithmic procedure for the determination of the stationary distribution of a finite, m-state, irreducible Markov chain, that does not require the use of methods for solving systems of linear equations, is presented. The technique is based upon a succession of m, rank one, perturbations of the trivial doubly stochastic matrix whose known steady state vector is updated at each stage to yield the required stationary probability vector.
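Hunter's exact recursion is not reproduced here, but the idea of reaching the stationary vector through a succession of rank-one perturbations of the trivial doubly stochastic matrix can be sketched with a Sherman-Morrison update of the inverse of I - P^T + 11^T, whose solution of A*pi = 1 is the stationary vector. This variant is an illustrative assumption, not the paper's algorithm; NumPy is assumed.

```python
import numpy as np

# Start from W = J/m (trivial doubly stochastic, uniform steady state), whose
# system matrix A0 = I + ((m-1)/m) J has a closed-form inverse.  Replace one
# row of W at a time by the target row of P, updating A^{-1} with the
# Sherman-Morrison formula, so no linear system is ever solved.
m = 3
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
ones = np.ones(m)
J = np.outer(ones, ones)
c = (m - 1) / m
Ainv = np.eye(m) - (c / (1 + c * m)) * J   # closed-form inverse of I + c*J
W = J / m
for k in range(m):                         # m rank-one row replacements
    u = -(P[k] - W[k])                     # A changes by u e_k^T
    v = np.zeros(m)
    v[k] = 1.0
    Ainv -= np.outer(Ainv @ u, v @ Ainv) / (1 + v @ Ainv @ u)
pi = Ainv @ ones                           # stationary vector of P
print(pi.round(4), (pi @ P).round(4))      # pi satisfies pi P = pi
```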
International Nuclear Information System (INIS)
Arab, M.N.; Ayaz, M.
2004-01-01
The performance of transmission line insulators is greatly affected by dust, fumes from industrial areas, and saline deposits near the coast. Such pollutants, in the presence of moisture, form a coating on the surface of the insulator, which in turn allows the passage of leakage current. This leakage builds up to the point where flashover develops. The flashover is often followed by permanent failure of the insulation, resulting in prolonged outages. With the increase in system voltage owing to the greater demand for electrical energy over the past few decades, flashover due to pollution has received special attention. The objective of the present work was to study the performance of overhead line insulators in the presence of contaminants such as induced salts. A detailed review of the literature and of the mechanisms of insulator flashover due to pollution is presented. Experimental investigations of the behavior of overhead line insulators under industrial salt contamination were carried out in a specially designed fog chamber. Flashover behavior was studied at various degrees of contamination with the most common industrial fume components, nitrate and sulphate compounds. A statistical method is developed by substituting maximum-likelihood estimates of the normal distribution parameters into the probability distribution function. The method gives high accuracy in the estimation of the 50% flashover voltage, which is then used to evaluate the critical flashover index at various contamination levels. The critical flashover index is a valuable parameter in insulation design for numerous applications. (author)
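The statistical step, maximum-likelihood fitting of a normal distribution to flashover outcomes to estimate the 50% flashover voltage, can be sketched as follows. The voltages and outcomes are invented, and the grid-search MLE is an illustrative choice, not necessarily the authors' numerical procedure.

```python
import math

# Assume flashover voltage is normally distributed, so the probability of
# flashover at applied voltage v is Phi((v - mu) / sigma).  The maximum-
# likelihood (mu, sigma) is found by grid search; mu is then the 50% flashover
# voltage V50.  Data: (voltage in kV, flashover 0/1), hypothetical.
data = [(30, 0), (32, 0), (34, 0), (34, 1), (36, 0), (36, 1),
        (38, 1), (38, 0), (40, 1), (42, 1), (44, 1)]

def phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def loglik(mu, sigma):
    ll = 0.0
    for v, y in data:
        p = min(max(phi((v - mu) / sigma), 1e-12), 1 - 1e-12)
        ll += math.log(p) if y else math.log(1 - p)
    return ll

best = max((loglik(mu, s), mu, s)
           for mu in [30 + 0.1 * i for i in range(150)]
           for s in [0.5 + 0.1 * j for j in range(60)])
print("V50 ~", round(best[1], 1), "kV")
```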
Fan, Zhichao; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui
2018-02-01
Mechanically-guided 3D assembly based on controlled, compressive buckling represents a promising, emerging approach for forming complex 3D mesostructures in advanced materials. Due to the versatile applicability to a broad set of material types (including device-grade single-crystal silicon) over length scales from nanometers to centimeters, a wide range of novel applications have been demonstrated in soft electronic systems, interactive bio-interfaces as well as tunable electromagnetic devices. Previously reported 3D designs relied mainly on finite element analyses (FEA) as a guide, but the massive numerical simulations and computational efforts necessary to obtain the assembly parameters for a targeted 3D geometry prevent rapid exploration of engineering options. A systematic understanding of the relationship between a 3D shape and the associated parameters for assembly requires the development of a general theory for the postbuckling process. In this paper, a double perturbation method is established for the postbuckling analyses of planar curved beams, of direct relevance to the assembly of ribbon-shaped 3D mesostructures. By introducing two perturbation parameters related to the initial configuration and the deformation, the highly nonlinear governing equations can be transformed into a series of solvable, linear equations that give analytic solutions to the displacements and curvatures during postbuckling. Systematic analyses of postbuckling in three representative ribbon shapes (sinusoidal, polynomial and arc configurations) illustrate the validity of theoretical method, through comparisons to the results of experiment and FEA. These results shed light on the relationship between the important deformation quantities (e.g., mode ratio and maximum strain) and the assembly parameters (e.g., initial configuration and the applied strain). This double perturbation method provides an attractive route to the inverse design of ribbon-shaped 3D geometries, as
Grinstead, Charles M; Snell, J Laurie
2011-01-01
This book explores four real-world topics through the lens of probability theory. It can be used to supplement a standard text in probability or statistics. Most elementary textbooks present the basic theory and then illustrate the ideas with some neatly packaged examples. Here the authors assume that the reader has seen, or is learning, the basic theory from another book and concentrate in some depth on the following topics: streaks, the stock market, lotteries, and fingerprints. This extended format allows the authors to present multiple approaches to problems and to pursue promising side discussions in ways that would not be possible in a book constrained to cover a fixed set of topics. To keep the main narrative accessible, the authors have placed the more technical mathematical details in appendices. The appendices can be understood by someone who has taken one or two semesters of calculus.
Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.
2010-01-01
A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
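The integral of the bivariate Gaussian over a circle centered on the facility has no elementary closed form, but it is easy to approximate. A Monte Carlo sketch with invented ellipse parameters follows; the operational technique integrates the density directly rather than sampling.

```python
import math
import random

# Probability that a lightning stroke, located with a bivariate Gaussian error
# ellipse, fell within radius R of a facility offset (dx, dy) from the most
# likely stroke location.  All numbers below are hypothetical.
random.seed(0)
sx, sy = 0.4, 0.25        # error-ellipse standard deviations, km
dx, dy = 0.3, 0.1         # facility offset from the most likely location, km
R = 1.0                   # key distance of interest, km
N = 200000
hits = sum(1 for _ in range(N)
           if math.hypot(random.gauss(0, sx) - dx,
                         random.gauss(0, sy) - dy) <= R)
print(round(hits / N, 3))
```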
Dorogovtsev, A Ya; Skorokhod, A V; Silvestrov, D S; Skorokhod, A V
1997-01-01
This book of problems is intended for students in pure and applied mathematics. There are problems in traditional areas of probability theory and problems in the theory of stochastic processes, which has wide applications in the theory of automatic control, queuing and reliability theories, and in many other modern science and engineering fields. Answers to most of the problems are given, and the book provides hints and solutions for more complicated problems.
Non-perturbative treatment of excitation and ionization in U92+ + U91+ collisions at 1 GeV/amu
International Nuclear Information System (INIS)
Becker, U.; Gruen, N.; Scheid, W.; Soff, G.
1986-01-01
Inner shell excitation and ionization processes in relativistic collisions of very heavy ions are treated by a non-perturbative method for the first time. The time-dependent Dirac equation is solved by a finite difference method for the scattering of U92+ on U91+ at E_lab = 1 GeV/amu and zero impact parameter. The K-shell ionization probabilities are compared with those resulting from first-order perturbation theory. (orig.)
Directory of Open Access Journals (Sweden)
I-Chung Liu
2012-01-01
Full Text Available We have analyzed the effects of variable heat flux and internal heat generation on the flow and heat transfer in a thin film on a horizontal sheet in the presence of thermal radiation. Similarity transformations are used to transform the governing equations to a set of coupled nonlinear ordinary differential equations. The obtained differential equations are solved approximately by the homotopy perturbation method (HPM. The effects of various parameters governing the flow and heat transfer in this study are discussed and presented graphically. Comparison of numerical results is made with the earlier published results under limiting cases.
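The homotopy perturbation method can be illustrated on a far simpler problem than the film equations above: for y' + y = 0, y(0) = 1, the homotopy y' + p*y = 0 with y = sum_n p^n y_n gives y_0 = 1 and y_n' = -y_{n-1}, y_n(0) = 0, so each order is obtained by a plain integration, and setting p = 1 recovers exp(-t). A numerical sketch, not the paper's equations:

```python
import math

# HPM toy example: build the series for y' + y = 0, y(0) = 1 order by order.
T, N = 2.0, 2000
h = T / N
t = [i * h for i in range(N + 1)]

def integrate(f):                 # cumulative trapezoidal rule, zero at t = 0
    out = [0.0]
    for i in range(N):
        out.append(out[-1] + 0.5 * h * (f[i] + f[i + 1]))
    return out

y_n = [1.0] * (N + 1)             # zeroth-order term y_0 = 1
y = list(y_n)
for _ in range(12):               # accumulate orders y_1 .. y_12
    y_n = [-v for v in integrate(y_n)]   # y_n' = -y_{n-1}, y_n(0) = 0
    y = [a + b for a, b in zip(y, y_n)]

err = max(abs(a - math.exp(-tt)) for a, tt in zip(y, t))
print(err)   # small: the truncated HPM series matches exp(-t) on [0, 2]
```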
Geometric Hamiltonian structures and perturbation theory
International Nuclear Information System (INIS)
Omohundro, S.
1984-08-01
We have been engaged in a program of investigating the Hamiltonian structure of the various perturbation theories used in practice. We describe the geometry of a Hamiltonian structure for non-singular perturbation theory applied to Hamiltonian systems on symplectic manifolds and the connection with singular perturbation techniques based on the method of averaging
Guo, Yang; Becker, Ute; Neese, Frank
2018-03-01
Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximation to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) the better parallelization opportunities offered by CIM; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and, hence, allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases, where the largest subsystem calculation is too large for the canonical CCSD(T) method.
Directory of Open Access Journals (Sweden)
Hubert S. Gabryś
2018-03-01
Full Text Available Purpose: The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. Material and methods: A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0–6 months (early), 6–15 months (late), 15–24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. Results: NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior–posterior (AUC = 0.72) and the right–left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right–left (AUCs > 0.78), and the anterior–posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose–volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods.
Studying the perturbative Reggeon
International Nuclear Information System (INIS)
Griffiths, S.; Ross, D.A.
2000-01-01
We consider the flavour non-singlet Reggeon within the context of perturbative QCD. This consists of ladders built out of "reggeized" quarks. We propose a method for the numerical solution of the integro-differential equation for the amplitude describing the exchange of such a Reggeon. The solution is known to have a sharp rise at low values of Bjorken-x when applied to non-singlet quantities in deep-inelastic scattering. We show that when the running of the coupling is taken into account this sharp rise is further enhanced, although the Q² dependence is suppressed by the introduction of the running coupling. We also investigate the effects of simulating non-perturbative physics by introducing a constituent mass for the soft quarks and an effective mass for the soft gluons exchanged in the t-channel. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Kharbukh, Dzh U; Davton, Dzh Kh; Devis, Dzh K
1981-01-01
The experience of using probability methods under different geological conditions across the US territory is generalized. The efficiency of systems analysis, simulation modeling of the prospecting-exploration process and of the conditions governing the occurrence of fields, machine processing of data for plotting different types of structural maps, and probability forecasting of the presence of fields is shown. Special attention is given to nonstructural traps. A brief glossary of the terms used in the mathematical and computational apparatus of petroleum geology is presented.
The theory of singular perturbations
De Jager, E M
1996-01-01
The subject of this textbook is the mathematical theory of singular perturbations, which despite its respectable history is still in a state of vigorous development. Singular perturbations of cumulative and of boundary layer type are presented. Attention has been given to composite expansions of solutions of initial and boundary value problems for ordinary and partial differential equations, linear as well as quasilinear; also turning points are discussed. The main emphasis lies on several methods of approximation for solutions of singularly perturbed differential equations and on the mathemat
Probability of satellite collision
Mccarter, J. W.
1972-01-01
A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.
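The kind of estimate involved can be sketched with the simplest kinetic-theory model, P = 1 − exp(−n·σ·v_rel·T), treating the background satellite population as a uniform gas. This is a simplification for illustration only; the paper's parametric technique and its input values are not reproduced here.

```python
import math

def collision_probability(density_km3, cross_section_km2, v_rel_km_s, mission_s):
    """Kinetic-theory estimate: P = 1 - exp(-n * sigma * v_rel * T)."""
    rate = density_km3 * cross_section_km2 * v_rel_km_s  # expected collisions per second
    return 1.0 - math.exp(-rate * mission_s)

# Hypothetical numbers for illustration only (not from the study):
# 1e-8 objects/km^3, 1e-4 km^2 cross section, 10 km/s closing speed, 10 years.
p = collision_probability(density_km3=1e-8, cross_section_km2=1e-4,
                          v_rel_km_s=10.0, mission_s=10 * 365.25 * 86400)
# roughly 0.3% over the ten-year mission with these assumed inputs
```

For small expected collision counts, P is close to the linear estimate n·σ·v·T, which is why miss distance (which sets σ) and mission duration enter the parametric study almost multiplicatively.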
Gabryś, Hubert S; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark
2018-01-01
The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0-6 months (early), 6-15 months (late), 15-24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior-posterior (AUC = 0.72) and the right-left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right-left (AUCs > 0.78), and the anterior-posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose-volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods. We demonstrated that incorporation of organ- and dose-shape descriptors is beneficial for xerostomia prediction in highly conformal radiotherapy treatments. Due to strong reliance on patient-specific, dose-independent factors, our results underscore the need for development of personalized data-driven risk profiles for NTCP models of xerostomia. The facilitated
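The univariate AUCs reported above are equivalent to the Mann-Whitney statistic, and for a single-feature logistic model they can be computed directly from the raw feature values, since the fitted logistic is a monotone transform and leaves the ranking unchanged. A minimal sketch with hypothetical data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability P(score_pos > score_neg),
    counting ties as 1/2. O(n*m), fine for small cohorts."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical dose-gradient feature values for patients with/without xerostomia:
auc([2.0, 3.0, 3.0], [1.0, 3.0])   # -> 0.666...
```

An AUC of 0.5 means the feature is uninformative, which is the sense in which the parotid-mean-dose models "failed" at AUCs below 0.60.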
Directory of Open Access Journals (Sweden)
Chi-Chang Wang
2013-09-01
Full Text Available This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed nonlinear boundary value problems. First, the monotonicity of the nonlinear differential equation is enforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem with inequality constraints; finally, based on the residual correction concept, the complex constrained-solution problem is transformed into a simpler problem of equation iteration. As verified by the four examples given in this paper, the proposed method can quickly obtain upper and lower solutions of problems of this kind, and easily identify the error range between mean approximate solutions and exact solutions.
International Nuclear Information System (INIS)
Belendez, A.; Belendez, T.; Neipp, C.; Hernandez, A.; Alvarez, M.L.
2009-01-01
The homotopy perturbation method is used to solve the nonlinear differential equation that governs the nonlinear oscillations of a system typified as a mass attached to a stretched elastic wire. The restoring force for this oscillator has an irrational term with a parameter λ that characterizes the system (0 ≤ λ ≤ 1). For λ = 1 and small values of x, the restoring force does not have a dominant term proportional to x. We find this perturbation method works very well for the whole range of parameters involved, and excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. Only one iteration leads to high accuracy of the solutions and the maximal relative error for the approximate frequency is less than 2.2% for small and large values of oscillation amplitude. This error corresponds to λ = 1, while for λ < 1 the relative error is much lower. For example, its value is as low as 0.062% for λ = 0.5.
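As a cross-check of such approximate frequencies, the exact period of the oscillator can be computed numerically. The sketch below assumes the standard dimensionless form of the stretched-wire equation, ẍ + x − λx/√(1+x²) = 0 (an assumption consistent with, but not quoted from, the abstract), and integrates a quarter period with RK4; for λ = 0 it must recover the simple-harmonic period 2π.

```python
import math

def accel(x, lam):
    # assumed restoring force of the stretched-elastic-wire oscillator
    return -x + lam * x / math.sqrt(1.0 + x * x)

def period(lam, amplitude, dt=1e-4):
    """Exact period via RK4: integrate from x=A, v=0 until x crosses zero.
    The potential is symmetric, so T = 4 * (quarter-period)."""
    x, v, t = amplitude, 0.0, 0.0
    while True:
        k1x, k1v = v, accel(x, lam)
        k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x, lam)
        k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x, lam)
        k4x, k4v = v + dt * k3v, accel(x + dt * k3x, lam)
        xn = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        vn = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        if xn <= 0.0:
            # linear interpolation of the zero crossing
            return 4.0 * (t - dt + dt * x / (x - xn))
        x, v = xn, vn
```

Comparing an approximate HPM frequency 2π/T against such a numerical period is exactly the kind of check behind the quoted 2.2% maximal relative error.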
Energy Technology Data Exchange (ETDEWEB)
Samuels, Stuart E.; Eisbruch, Avraham; Vineberg, Karen; Lee, Jae; Lee, Choonik; Matuszak, Martha M.; Ten Haken, Randall K.; Brock, Kristy K., E-mail: kbrock@med.umich.edu
2016-11-01
Purpose: Strategies to reduce the toxicities of head and neck radiation (ie, dysphagia [difficulty swallowing] and xerostomia [dry mouth]) are currently underway. However, the predicted benefit of dose and planning target volume (PTV) reduction strategies is unknown. The purpose of the present study was to compare the normal tissue complication probabilities (NTCP) for swallowing and salivary structures in standard plans (70 Gy [P70]), dose-reduced plans (60 Gy [P60]), and plans eliminating the PTV margin. Methods and Materials: A total of 38 oropharyngeal cancer (OPC) plans were analyzed. Standard organ-sparing volumetric modulated arc therapy plans (P70) were created and then modified by eliminating the PTVs and treating the clinical tumor volumes (CTVs) only (C70) or maintaining the PTV but reducing the dose to 60 Gy (P60). NTCP dose models for the pharyngeal constrictors, glottis/supraglottic larynx, parotid glands (PGs), and submandibular glands (SMGs) were analyzed. The minimal clinically important benefit was defined as a mean change in NTCP of >5%. The P70 NTCP thresholds and overlap percentages of the organs at risk with the PTVs (56-59 Gy, vPTV56) were evaluated to identify the predictors for NTCP improvement. Results: With the P60 plans, only the ipsilateral PG (iPG) benefited (23.9% vs 16.2%; P<.01). With the C70 plans, only the iPG (23.9% vs 17.5%; P<.01) and contralateral SMG (cSMG) (NTCP 32.1% vs 22.9%; P<.01) benefited. An iPG NTCP threshold of 20% and 30% predicted NTCP benefits for the P60 and C70 plans, respectively (P<.001). A cSMG NTCP threshold of 30% predicted for an NTCP benefit with the C70 plans (P<.001). Furthermore, for the iPG, a vPTV56 >13% predicted benefit with P60 (P<.001) and C70 (P=.002). For the cSMG, a vPTV56 >22% predicted benefit with C70 (P<.01). Conclusions: PTV elimination and dose reduction lowered the NTCP of the iPG, and PTV elimination lowered the NTCP of the cSMG. NTCP thresholds and the
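NTCP dose models of the kind analyzed above are commonly of the Lyman-Kutcher-Burman (LKB) form: the dose-volume histogram is collapsed to a generalized equivalent uniform dose (EUD) and mapped through a probit curve. A minimal sketch follows; the parameters TD50, m, and n below are placeholders, not the values fitted in the study.

```python
import math

def eud(dose_bins, volume_fracs, n):
    """Generalized EUD from a differential DVH; n is the volume-effect
    parameter (n = 1 reduces to the mean dose)."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(dose_bins, volume_fracs)) ** (1.0 / a)

def lkb_ntcp(eud_gy, td50, m):
    """Lyman-Kutcher-Burman NTCP: standard-normal CDF of
    t = (EUD - TD50) / (m * TD50)."""
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative parotid-like parameters (placeholders only):
ntcp = lkb_ntcp(eud([20.0, 40.0], [0.5, 0.5], n=1.0), td50=39.9, m=0.40)
```

By construction, NTCP is 50% when the EUD equals TD50, and m controls the steepness of the dose response; a >5% NTCP change, as in the study's benefit criterion, is then a direct function of the EUD shift.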
Varroa destructor is a mite parasite of European honey bees, Apis mellifera, that weakens the population, can lead to the death of an entire honey bee colony, and is believed to be the parasite with the most economic impact on beekeeping. The purpose of this study was to estimate the probability of ...
2014-01-01
Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to ann...
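An annual exceedance probability (AEP) of p is equivalent to an annual recurrence interval of 1/p years, and it translates into multi-year risk under the usual assumption of independent years:

```python
def prob_at_least_one_exceedance(aep, years):
    """P(at least one exceedance in `years` years) = 1 - (1 - p)^years,
    assuming independent years."""
    return 1.0 - (1.0 - aep) ** years

# The 1-percent-AEP ("100-year") flood over a 30-year horizon:
p = prob_at_least_one_exceedance(0.01, 30)   # about 0.26
```

This is why a "100-year" discharge is far from a once-per-lifetime event: over a 30-year mortgage the exceedance chance is roughly one in four.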
Toepoel, V.; Emerson, Hannah
2017-01-01
Weighting techniques in web surveys based on non-probability sampling schemes are devised to correct biases due to self-selection, undercoverage, and nonresponse. In an interactive panel, 38 survey experts addressed weighting techniques and auxiliary variables in web surveys. Most of them corrected all biases
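A common building block of such weighting corrections is post-stratification: each respondent is weighted by the ratio of the population share to the sample share of their stratum on an auxiliary variable. A minimal sketch with hypothetical strata:

```python
def poststratification_weights(sample_strata, population_shares):
    """Weight each respondent by population_share / sample_share of their
    stratum -- one standard correction for self-selection in web panels."""
    n = len(sample_strata)
    counts = {}
    for s in sample_strata:
        counts[s] = counts.get(s, 0) + 1
    return [population_shares[s] / (counts[s] / n) for s in sample_strata]

# Young respondents are over-represented (3 of 4) relative to a 50/50 population:
w = poststratification_weights(["young", "young", "young", "old"],
                               {"young": 0.5, "old": 0.5})
# young respondents are down-weighted, the old respondent is up-weighted
```

The weights average to 1 over the sample, so weighted totals remain on the original scale; the correction only works to the extent that the auxiliary variable actually drives the self-selection bias.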
Energy Technology Data Exchange (ETDEWEB)
Borges, Antonio Andrade
1998-07-01
A new method for the calculation of sensitivity coefficients is developed. The new method combines two methodologies used for calculating these coefficients: the differential method and the generalized perturbation theory method. The method uses as its integral parameter the average flux in an arbitrary region of the system. Thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, φ, with respect to σ are calculated using the perturbation method, and the functional derivatives of this generic integral parameter with respect to σ and φ are calculated using the differential method. (author)
International Nuclear Information System (INIS)
Junqueira, Astrogildo de Carvalho
1999-01-01
The electric field gradient (efg) at the Nb site in the intermetallic compounds Nb3M (M = Al, Si, Ge, Sn) and at the T site in the intermetallic compounds T3Al (T = Ti, Zr, Hf, V, Nb, Ta) was measured by the Perturbed Angular Correlation (PAC) method using the well-known gamma-gamma cascade of 133-482 keV in 181Ta from the β− decay of 181Hf. The compounds were prepared by arc melting the constituent elements under argon atmosphere along with radioactive 181Hf substituting approximately 0.1 atomic percent of the Nb and T elements. The PAC measurements were carried out at 295 K for all compounds and the efg was obtained for each alloy. The results for the efg in the T3Al compounds showed a strong correlation with the number of conduction electrons, while for the Nb3M compounds the efg behavior is influenced mainly by the p electrons of the M elements. The so-called universal correlation between the electronic and lattice contributions to the efg in metals was not verified in this work for all studied compounds. Measurements of the quadrupole frequency in the range of 100 to 1210 K for the Nb3Al compound showed a linear behaviour with temperature. Superconducting properties of these alloys may be related to this observed behaviour. The efg results are compared to those reported for other binary alloys and discussed with the help of ab-initio methods. (author)
Base case and perturbation scenarios
Energy Technology Data Exchange (ETDEWEB)
Edmunds, T
1998-10-01
This report describes fourteen energy factors that could affect electricity markets in the future (demand, prices, source mix, etc.). These fourteen factors are believed to have the most influence on the State's energy environment. A base case, or most probable, characterization is given for each of these fourteen factors over a twenty-year time horizon. The base case characterization is derived from quantitative and qualitative information provided by State of California government agencies, where possible. Federal government databases are used where needed to supplement the California data. It is envisioned that an initial selection of issue areas will be based upon an evaluation of them under base case conditions. For most of the fourteen factors, the report identifies possible perturbations from base case values or assumptions that may be used to construct additional scenarios. Only those perturbations that are plausible and would have a significant effect on energy markets are included in the table. The fourteen factors and potential perturbations of the factors are listed in Table 1.1. These perturbations can be combined to generate internally consistent combinations of perturbations relative to the base case. For example, a low natural gas price perturbation should be combined with a high natural gas demand perturbation. The factor perturbations are based upon alternative quantitative forecasts provided by other institutions (the Department of Energy - Energy Information Administration in some cases), changes in assumptions that drive the quantitative forecasts, or changes in assumptions about the structure of the California energy markets. The perturbations are intended to be used for a qualitative reexamination of issue areas after an initial evaluation under the base case. The perturbation information would be used as a "tiebreaker" to make decisions regarding those issue areas that were marginally accepted or rejected under the base case.
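Building internally consistent scenario combinations can be sketched as a filter over the Cartesian product of factor perturbations. The factors and the consistency rule below are hypothetical illustrations modeled on the report's example (a low gas price pairs with high gas demand), not the report's actual factor set:

```python
from itertools import product

# Hypothetical subset of the fourteen factors, each with base/low/high states.
factors = {
    "gas_price": ["base", "low", "high"],
    "gas_demand": ["base", "low", "high"],
}

def consistent(scenario):
    """Example rule: low price should not pair with low demand,
    nor high price with high demand (illustrative, not from the report)."""
    p, d = scenario["gas_price"], scenario["gas_demand"]
    if p == "low" and d == "low":
        return False
    if p == "high" and d == "high":
        return False
    return True

scenarios = [dict(zip(factors, combo)) for combo in product(*factors.values())]
consistent_scenarios = [s for s in scenarios if consistent(s)]
```

With real factors the rule set grows, but the structure stays the same: enumerate, then prune combinations that violate known market couplings.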
Hong, Youngjoon; Nicholls, David P.
2017-09-01
The capability to rapidly and robustly simulate the scattering of linear waves by periodic, multiply layered media in two and three dimensions is crucial in many engineering applications. In this regard, we present a High-Order Perturbation of Surfaces method for linear wave scattering in a multiply layered periodic medium to find an accurate numerical solution of the governing Helmholtz equations. For this we truncate the bi-infinite computational domain to a finite one with artificial boundaries, above and below the structure, and enforce transparent boundary conditions there via Dirichlet-Neumann Operators. This is followed by a Transformed Field Expansion resulting in a Fourier collocation, Legendre-Galerkin, Taylor series method for solving the problem in a transformed set of coordinates. Assorted numerical simulations display the spectral convergence of the proposed algorithm.
Salgado, Iván; Mera-Hernández, Manuel; Chairez, Isaac
2017-11-01
This study addresses the problem of designing an output-based controller to stabilize multi-input multi-output (MIMO) systems in the presence of parametric disturbances as well as uncertainties in the state model and output noise measurements. The controller design includes a linear state transformation which separates uncertainties matched to the control input and the unmatched ones. A differential neural network (DNN) observer produces a nonlinear approximation of the matched perturbation and the unknown states simultaneously in the transformed coordinates. This study proposes the use of the Attractive Ellipsoid Method (AEM) to optimize the gains of the controller and the gain observer in the DNN structure. As a consequence, the obtained control input minimizes the convergence zone for the estimation error. Moreover, the control design uses the estimated disturbance provided by the DNN to obtain a better performance in the stabilization task in comparison with a quasi-minimal output feedback controller based on a Luenberger observer and a sliding mode controller. Numerical results pointed out the advantages obtained by the nonlinear control based on the DNN observer. The first example deals with the stabilization of an academic linear MIMO perturbed system and the second example stabilizes the trajectories of a DC-motor into a predefined operation point. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Perturbation theory from stochastic quantization
International Nuclear Information System (INIS)
Hueffel, H.
1984-01-01
By using a diagrammatical method it is shown that in scalar theories the stochastic quantization method of Parisi and Wu gives the usual perturbation series in Feynman diagrams. It is further explained how to apply the diagrammatical method to gauge theories, discussing the origin of ghost effects. (Author)
Perturbative quantum chromodynamics
International Nuclear Information System (INIS)
Radyushkin, A.V.
1987-01-01
The latest achievements in perturbative quantum chromodynamics (QCD) relating to the progress in factorization of small and large distances are presented. The following topics are addressed: development of the theory of Sudakov effects on the basis of the mean contour formalism; development of the nonlocal condensate formalism; calculation of hadron wave functions and hadron distribution functions using the QCD sum-rule method; development of the theory of Regge behaviour in QCD and the behaviour of structure functions at small x; and the study of polarization effects in hadron processes with high momentum transfer.
International Nuclear Information System (INIS)
Ixaru, G.L.
1978-03-01
The method developed in the previous paper (preprint, C.I.Ph. (Bucharest), MC-2-78, 1978) is here investigated from a computational point of view. Special emphasis is paid to the two basic descriptors of efficiency: the volume of memory required and the computational effort (timing). Next, two experimental cases are reported. They (i) confirm the theoretical estimates for the rate of convergence of each version of the present method and (ii) show that the present method is substantially faster than the others. Specifically, it is found that for typical physical problems it is faster by a factor of ten up to twenty than the methods commonly used, viz. Numerov and de Vogelaere. The data reported also allow an indirect comparison with the method of Gordon. It is shown that, while this exhibits the same rate as our basic, lowest order version, the computational effort for the latter is, in the case of systems with nine equations, only half that for the method of Gordon. At the end of the paper some types of physical problems are suggested which should benefit most from being solved numerically with the present method. (author)
International Nuclear Information System (INIS)
Fehlau, P.E.
1993-01-01
The author compared a recursive digital filter, proposed as a detection method for French special nuclear material monitors, with the author's own detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Nine test subjects each repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities for detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal.
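A sequential probability-ratio test of the kind compared here is typically a Wald SPRT on per-interval Poisson counts: the log-likelihood ratio between "background plus source" and "background only" is accumulated until it crosses one of two thresholds. A minimal sketch; the rates, error probabilities, and thresholds below are illustrative, not the monitor's actual settings:

```python
import math

def sprt_poisson(counts, bkg_rate, src_rate, alpha=0.01, beta=0.1):
    """Wald SPRT on per-interval Poisson counts.
    H0: mean = bkg_rate (background only); H1: mean = src_rate (source present).
    alpha = false-alarm probability, beta = missed-detection probability."""
    upper = math.log((1.0 - beta) / alpha)   # crossing -> accept H1 ("alarm")
    lower = math.log(beta / (1.0 - alpha))   # crossing -> accept H0 ("clear")
    llr = 0.0
    step = math.log(src_rate / bkg_rate)
    for k in counts:
        # Poisson log-likelihood ratio for one observed count k
        llr += k * step - (src_rate - bkg_rate)
        if llr >= upper:
            return "alarm"
        if llr <= lower:
            return "clear"
    return "continue"
```

Unlike the recursive filter's exponential memory, the SPRT terminates and resets once a decision is reached, which limits how long past interference can influence later decisions.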
Generalized perturbation theory in DRAGON: application to CANDU cell calculations
International Nuclear Information System (INIS)
Courau, T.; Marleau, G.
2001-01-01
Generalized perturbation theory (GPT) in neutron transport is a means to evaluate eigenvalue and reaction rate variations due to small changes in the reactor properties (macroscopic cross sections). These variations can be decomposed into two terms: a direct term corresponding to the changes in the cross sections themselves and an indirect term that takes into account the perturbations in the neutron flux. As we will show, taking into account the indirect term using a GPT method is generally straightforward, since this term is the scalar product of the unperturbed generalized adjoint with the product of the variation of the transport operator and the unperturbed flux. In the case where the collision probability (CP) method is used to solve the transport equation, evaluating the perturbed transport operator involves calculating the variations in the CP matrix for each change in the reactor properties. Because most of the computational effort is dedicated to the CP matrix calculation, the gains expected from the GPT method would therefore be nullified. Here we will present a technique to approximate the variations in the CP matrices, thereby replacing the variations in the transport operator with source term variations. We will show that this approximation yields errors fully compatible with the standard generalized perturbation theory errors. Results for 2D CANDU cell calculations will be presented. (author)
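The adjoint-weighted machinery behind such methods rests on the standard first-order formula δλ ≈ ⟨ψ†, δA ψ⟩ / ⟨ψ†, ψ⟩, with ψ† the adjoint (left) eigenvector. A matrix toy example, not the CP-transport implementation, shows the mechanics; for a symmetric operator the adjoint eigenvector coincides with the forward one:

```python
import math

# Unperturbed symmetric operator with eigenvalues 1 and 3.
A = [[2.0, 1.0], [1.0, 2.0]]
x = [1.0, 1.0]                      # eigenvector for lambda = 3 (unnormalized)
eps = 1e-3
dA = [[eps, 0.0], [0.0, 0.0]]       # small "cross-section" perturbation

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

# First-order (adjoint-weighted) eigenvalue change: <x, dA x> / <x, x>.
d_lambda = dot(x, matvec(dA, x)) / dot(x, x)        # = eps / 2

# Exact largest eigenvalue of A + dA via the 2x2 closed form, for comparison.
a, b, c, d = 2.0 + eps, 1.0, 1.0, 2.0
exact = ((a + d) + math.sqrt((a - d) ** 2 + 4.0 * b * c)) / 2.0
```

The first-order estimate 3 + δλ agrees with the exact value to O(ε²), which is exactly the regime where the source-term approximation of the CP-matrix variations is expected to be adequate.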
Abou-zeid, Mohamed Y.; Mohamed, Mona A. A.
2017-09-01
This article is an analytic discussion of the motion of a power-law nanofluid with heat transfer under the effects of viscous dissipation, radiation, and internal heat generation. The governing equations are discussed under the assumptions of long wavelength and low Reynolds number. The solutions for the temperature and nanoparticle profiles are obtained by using the homotopy perturbation method. Results for the behaviours of the axial velocity, temperature, and nanoparticle concentration, as well as the skin friction coefficient, reduced Nusselt number, and Sherwood number, with other physical parameters are obtained graphically and analytically. It is found that as the power-law exponent increases, both the axial velocity and temperature increase, whereas the nanoparticle concentration decreases. These results may have applicable importance in research discussions of nanofluid flow in channels with small diameters under the effect of different temperature distributions.
Application of linear and higher perturbation theory in reactor physics
International Nuclear Information System (INIS)
Woerner, D.
1978-01-01
For small perturbations in the material composition of a reactor, the first approximation of perturbation theory gives an eigenvalue perturbation proportional to the perturbation of the system. This holds only as long as the neutron flux is not influenced by the perturbation. The two-dimensional code LINESTO, developed for such problems in this work on the basis of diffusion theory, determines the relative change of the multiplication constant. For perturbations that vary the neutron flux in energy and space, the eigenvalue perturbation is also influenced by this changed neutron flux. In such cases linear perturbation theory yields larger errors. Starting from the methods of the calculus of variations, a perturbation method of calculation is additionally developed in this work that permits a quick and simple assessment of the influence of the flux perturbation on the eigenvalue perturbation. While the perturbation source is evaluated in the isotropic approximation of diffusion theory, the associated inhomogeneous equation may be used to determine the flux perturbation by means of diffusion or transport theory. Possibilities of application and limitations of this method are studied in further systematic investigations of local perturbations. It is shown that with the integrated code system developed in this work a number of local perturbations can be checked with little computing time. With it, flux perturbations in first approximation and perturbations of the multiplication constant in second approximation can be evaluated. (orig./RW) [de]
Pedesseau, Laurent; Jouanna, Paul
2004-12-01
The SASP (semianalytical stochastic perturbations) method is an original mixed macro-nano-approach dedicated to the mass equilibrium of multispecies phases, periphases, and interphases. This general method, applied here to the reflexive relation Ck⇔μk between the concentrations Ck and the chemical potentials μk of k species within a fluid in equilibrium, leads to the distribution of the particles at the atomic scale. The macroaspects of the method, based on analytical Taylor's developments of chemical potentials, are intimately mixed with the nanoaspects of molecular mechanics computations on stochastically perturbed states. This numerical approach, directly linked to definitions, is universal by comparison with current approaches, DLVO Derjaguin-Landau-Verwey-Overbeek, grand canonical Monte Carlo, etc., without any restriction on the number of species, concentrations, or boundary conditions. The determination of the relation Ck⇔μk implies in fact two problems: a direct problem Ck⇒μk and an inverse problem μk⇒Ck. Validation of the method is demonstrated in case studies A and B which treat, respectively, a direct problem and an inverse problem within a free saturated gypsum solution. The flexibility of the method is illustrated in case study C dealing with an inverse problem within a solution interphase, confined between two (120) gypsum faces, remaining in connection with a reference solution. This last inverse problem leads to the mass equilibrium of ions and water molecules within a 3 Å thick gypsum interface. The major unexpected observation is the repulsion of SO42- ions towards the reference solution and the attraction of Ca2+ ions from the reference solution, the concentration being 50 times higher within the interphase as compared to the free solution. The SASP method is today the unique approach able to tackle the simulation of the number and distribution of ions plus water molecules in such extreme confined conditions. This result is of prime
Principles of chiral perturbation theory
International Nuclear Information System (INIS)
Leutwyler, H.
1995-01-01
An elementary discussion of the main concepts used in chiral perturbation theory is given in textbooks and a more detailed picture of the applications may be obtained from the reviews. Concerning the foundations of the method, the literature is comparatively scarce. So, I will concentrate on the basic concepts and explain why the method works. (author)
Wickman, J.; Diehl, S.; Blasius, B.; Klausmeier, C.; Ryabov, A.; Brännström, Å.
2017-01-01
Spatial structure can decisively influence the way evolutionary processes unfold. Several methods have thus far been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabi...
Energy Technology Data Exchange (ETDEWEB)
Feygelman, Vladimir, E-mail: vladimir.feygelman@moffitt.org; Tonner, Brian; Hunt, Dylan; Zhang, Geoffrey; Moros, Eduardo [Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida 33612 (United States); Stambaugh, Cassandra [Department of Physics, University of South Florida, Tampa, Florida 33612 (United States); Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States)
2015-11-15
Purpose: Previous studies show that dose to a moving target can be estimated using 4D measurement-guided dose reconstruction based on a process called virtual motion simulation, or VMS. A potential extension of VMS is to estimate dose during dynamic multileaf collimator (MLC)-tracking treatments. The authors introduce a modified VMS method and quantify its performance as proof-of-concept for tracking applications. Methods: Direct measurements with a moving biplanar diode array were used to verify accuracy of the VMS dose estimates. A tracking environment for variably sized circular MLC apertures was simulated by sending preprogrammed control points to the MLC while simultaneously moving the accelerator treatment table. Sensitivity of the method to simulated tracking latency (0–700 ms) was also studied. Potential applicability of VMS to fast changing beam apertures was evaluated by modeling, based on the demonstrated dependence of the cumulative dose on the temporal dose gradient. Results: When physical and virtual latencies were matched, the agreement rates (2% global/2 mm gamma) between the VMS and the biplanar dosimeter were above 96%. When compared to their own reference dose (0 induced latency), the agreement rates for VMS and biplanar array track closely up to 200 ms of induced latency with 10% low-dose cutoff threshold and 300 ms with 50% cutoff. Time-resolved measurements suggest that even in the modulated beams, the error in the cumulative dose introduced by the 200 ms VMS time resolution is not likely to exceed 0.5%. Conclusions: Based on current results and prior benchmarks of VMS accuracy, the authors postulate that this approach should be applicable to any MLC-tracking treatments where leaf speeds do not exceed those of the current Varian accelerators.
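The 2% global / 2 mm gamma agreement rates quoted above can be illustrated with a minimal 1D gamma-index computation. This is a simplified exhaustive-search sketch on hypothetical dose profiles, not the dosimeter's or the authors' implementation:

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """1D global gamma index: for each reference point, minimize over the
    evaluated distribution the combined dose-difference / distance-to-
    agreement metric. dd is fractional of the global max (2%), dta in mm."""
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(
            math.sqrt(((ed - rd) / (dd * d_max)) ** 2 + ((ep - rp) / dta) ** 2)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the quoted 'agreement rate')."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas)
```

A point passes (gamma ≤ 1) if some nearby evaluated point agrees within the combined dose/distance tolerance, which is why agreement rates above 96% indicate close correspondence between reconstructed and measured dose.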
International Nuclear Information System (INIS)
Feygelman, Vladimir; Tonner, Brian; Hunt, Dylan; Zhang, Geoffrey; Moros, Eduardo; Stambaugh, Cassandra; Nelms, Benjamin E.
2015-01-01
Purpose: Previous studies show that dose to a moving target can be estimated using 4D measurement-guided dose reconstruction based on a process called virtual motion simulation, or VMS. A potential extension of VMS is to estimate dose during dynamic multileaf collimator (MLC)-tracking treatments. The authors introduce a modified VMS method and quantify its performance as proof-of-concept for tracking applications. Methods: Direct measurements with a moving biplanar diode array were used to verify accuracy of the VMS dose estimates. A tracking environment for variably sized circular MLC apertures was simulated by sending preprogrammed control points to the MLC while simultaneously moving the accelerator treatment table. Sensitivity of the method to simulated tracking latency (0–700 ms) was also studied. Potential applicability of VMS to fast changing beam apertures was evaluated by modeling, based on the demonstrated dependence of the cumulative dose on the temporal dose gradient. Results: When physical and virtual latencies were matched, the agreement rates (2% global/2 mm gamma) between the VMS and the biplanar dosimeter were above 96%. When compared to their own reference dose (0 induced latency), the agreement rates for VMS and biplanar array track closely up to 200 ms of induced latency with 10% low-dose cutoff threshold and 300 ms with 50% cutoff. Time-resolved measurements suggest that even in the modulated beams, the error in the cumulative dose introduced by the 200 ms VMS time resolution is not likely to exceed 0.5%. Conclusions: Based on current results and prior benchmarks of VMS accuracy, the authors postulate that this approach should be applicable to any MLC-tracking treatments where leaf speeds do not exceed those of the current Varian accelerators
International Nuclear Information System (INIS)
Doriath, J.Y.
1983-05-01
The need for increasingly accurate nuclear reactor performance data has led to increasingly sophisticated methods for solving the Boltzmann transport equation. This work has revealed the need for analyzing the functional signatures of the neutron flux using pattern recognition techniques to relate the local and overall phases of reactor calculations according to the desired parameters. This approach makes it possible to develop procedures based on reference calculations and designed to evaluate the disturbances due to changes in physical media and to media interface modifications [fr
Eisenbeis, J.; Roy, C.; Bland, E. C.; Occhipinti, G.
2017-12-01
Most recent methods in ionospheric tomography are based on the inversion of the total electron content measured by ground-based GPS receivers. As a consequence of the high frequency of the GPS signal and the absence of horizontal raypaths, the electron density structure is mainly reconstructed in the F2 region (300 km), where the ionosphere reaches its maximum of ionization, and is not sensitive to the lower ionospheric structure. We propose here a new tomographic method for the lower ionosphere (Roy et al., 2014), based on the full inversion of over-the-horizon (OTH) radar data and applicable to SuperDARN data. The major advantage of our methodology is that it takes into account, numerically and jointly, the effect that electron density perturbations induce not only on the speed of electromagnetic waves but also on the raypath geometry. This last point is extremely critical for OTH/SuperDARN data inversions, as the emitted signal propagates through the ionosphere between a fixed starting point (the radar) and an unknown end point on the Earth's surface where the signal is backscattered. We detail our ionospheric tomography method with the aid of benchmark tests in order to highlight the sensitivity of the radar to the explored observational parameters: frequencies, elevations, azimuths. Having shown the necessity of taking both effects into account simultaneously, we apply our method to real backscattered data from SuperDARN and OTH radar. The preliminary solution obtained with the Hokkaido East SuperDARN with only two frequencies (10 MHz and 11 MHz), shown here, is stable and encourages us to explore a more complete dataset, which we will present at AGU 2017. This is, to our knowledge, the first time that an ionospheric tomography has been estimated from SuperDARN backscattered data. Reference: Roy, C., G. Occhipinti, L. Boschi, J.-P. Moliné, and M. Wieczorek (2014), Effect of ray and speed perturbations on ionospheric tomography by over-the-horizon radar: A
Transition probabilities for atoms
International Nuclear Information System (INIS)
Kim, Y.K.
1980-01-01
The current status of advanced theoretical methods for transition probabilities of atoms and ions is discussed. An experiment on the f values of the resonance transitions of the Kr and Xe isoelectronic sequences is suggested as a test of the theoretical methods
International Nuclear Information System (INIS)
Fabris, J.D.
1977-01-01
The electric quadrupolar interaction in some hafnium complexes, measured at the metal nucleus, is studied using the technique of γ-γ perturbed angular correlation. The quadrupolar interaction frequencies are compared among several hafnium α-hydroxycarboxylates, namely the glycolate, lactate, mandelate and benzylate; the influence of temperature on the quadrupolar coupling in hafnium tetramandelate is studied; finally, the effects associated with the capture of thermal neutrons by hafnium tetramandelate are examined locally at the nuclear level. The first group of results shows significant differences within the series of complexes derived from glycolic acid. On the other hand, substituting the protons in the hafnium tetramandelate structure with alkaline cations makes it possible to verify a correlation between the variations in the quadrupolar coupling and the electronegativities of the substituent elements. Measurements at high temperatures show that this complex is thermally stable at 100 and 150 °C. Two distinct sites for the probe nucleus appear after heating the sample at 100 °C for a prolonged time. This fact is attributed to a probable interconversion among the postulated structural isomers for the octacoordinated compounds. Finally, angular correlation measurements on the irradiated complex show that there is an effective destruction of the target molecule by neutron capture [pt
Directory of Open Access Journals (Sweden)
Li Wang
2017-02-01
The ability to obtain appropriate parameters for an advanced pressurized water reactor (PWR) unit model is of great significance for power system analysis. The PWR primary loop exhibits nonlinear relationships, long transition times, and intercoupled parameters that are difficult to obtain from practical tests, all of which complicate modeling and parameter identification. In this paper, a model and a parameter identification method for the PWR primary loop system were investigated. A parameter identification process was proposed, using a particle swarm optimization algorithm based on random perturbation (RP-PSO). The identification process included model variable initialization based on the differential equations of each sub-module and a program setting method, parameter obtainment through sub-module identification in the Matlab/Simulink software (MathWorks Inc., Natick, MA, USA), as well as adaptation analysis for an integrated model. Extensive parameter identification work was carried out, the results of which verified the effectiveness of the method. It was found that changes in some parameters, like the fuel temperature and coolant temperature feedback coefficients, changed the model gain, of which the trajectory sensitivities were not zero; obtaining appropriate values for them therefore had significant effects on the simulation results. The trajectory sensitivities of some parameters in the core neutron dynamic module were interrelated, making those parameters difficult to identify. Model parameter sensitivity varied with the model input conditions, reflecting how difficult the parameters are to identify under various inputs.
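The RP-PSO scheme described in the abstract can be sketched in a few dozen lines. This is an illustrative implementation under stated assumptions: the toy exponential model, the bounds, the stagnation rule, and the swarm constants are placeholders, not the paper's Matlab/Simulink code.

```python
# Hedged sketch: particle swarm optimization with a random-perturbation (RP)
# step applied when the global best stagnates. The fitted model here is a toy
# exponential response, NOT the PWR primary-loop model from the paper.
import math
import random

def model(a, b, t):
    # Hypothetical sub-module response used only for illustration.
    return a * math.exp(-b * t)

def sse(params, data):
    a, b = params
    return sum((model(a, b, t) - y) ** 2 for t, y in data)

def rp_pso(data, bounds, n_particles=30, iters=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, d: min(max(v, bounds[d][0]), bounds[d][1])
    pos = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [sse(p, data) for p in pos]
    gi = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    stall = 0
    for _ in range(iters):
        improved = False
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = clip(pos[i][d] + vel[i][d], d)
            f = sse(pos[i], data)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f, gi = pos[i][:], f, i
                    improved = True
        # Random perturbation: when the global best has not improved for a
        # while, kick every particle except the current best to escape
        # premature convergence (the "RP" in RP-PSO).
        stall = 0 if improved else stall + 1
        if stall >= 10:
            for i in range(n_particles):
                if i != gi:
                    for d in range(dim):
                        span = bounds[d][1] - bounds[d][0]
                        pos[i][d] = clip(pos[i][d] + rng.gauss(0.0, 0.1 * span), d)
            stall = 0
    return gbest, gbest_f
```

For noiseless data generated from the toy model, the swarm typically recovers the parameters closely within a couple of hundred iterations.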
Shorack, Galen R
2017-01-01
This 2nd edition textbook offers a rigorous introduction to measure-theoretic probability with particular attention to topics of interest to mathematical statisticians: a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to postgraduate statistics students, it is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method, either prior to or as an alternative to a characteristic function presentation. Considerable emphasis is also placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes censored-data martingales. The text includes measure theoretic...
Wickman, Jonas; Diehl, Sebastian; Blasius, Bernd; Klausmeier, Christopher A; Ryabov, Alexey B; Brännström, Åke
2017-04-01
Spatial structure can decisively influence the way evolutionary processes unfold. To date, several methods have been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabilizing/disruptive selection that apply both in continuous space and to metacommunities with symmetrical dispersal between patches. For directional selection on a quantitative trait, this yields a way to integrate local directional selection across space and determine whether the trait value will increase or decrease. The robustness of this prediction is validated against quantitative genetics. For stabilizing/disruptive selection, we show that spatial heterogeneity always contributes to disruptive selection and hence always promotes evolutionary branching. The expression for directional selection is numerically very efficient and hence lends itself to simulation studies of evolutionary community assembly. We illustrate the application and utility of the expressions for this purpose with two examples of the evolution of resource utilization. Finally, we outline the domain of applicability of reaction-diffusion equations as a modeling framework and discuss their limitations.
International Nuclear Information System (INIS)
Shibui, M.
1989-01-01
A new method for fatigue-life assessment of a component containing defects is presented, in which a probabilistic approach is incorporated into the CEGB two-criteria method. The present method treats the aspect ratio of the initial defect, the proportional coefficient of the fatigue crack growth law, and the threshold stress intensity range as random variables. Examples are given to illustrate application of the method to the reliability analysis of conduit for an internally cooled cabled superconductor (ICCS) subjected to cyclic quench pressure. The possible failure modes and the mechanical properties contributing to the fatigue life of the thin conduit are discussed using analytical and experimental results. 9 refs., 9 figs
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the fully Bayesian inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
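The first of the three steps, a stochastic global search for the posterior maximum over the nonlinear parameters, can be illustrated with a minimal annealer. This is a generic sketch, not the ASA algorithm itself: the exponential cooling schedule, the proposal width, and the one-dimensional toy log-posterior (peaked at a 45-degree dip) are assumptions made purely for illustration.

```python
# Hedged sketch of a simulated-annealing search for the MAP point of a
# log-posterior over one nonlinear parameter (e.g. fault dip). The real
# method uses Adaptive Simulated Annealing over many parameters, with the
# linear slip part solved by least squares inside the objective.
import math
import random

def anneal_map(logpost, lo, hi, iters=20000, t0=2.0, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = logpost(x)
    best, fbest = x, fx
    for k in range(iters):
        t = t0 * math.exp(-5.0 * k / iters)          # exponential cooling
        step = 0.1 * (hi - lo) * (t / t0) + 1e-3     # shrink proposals as t drops
        y = min(max(x + rng.gauss(0.0, step), lo), hi)
        fy = logpost(y)
        # Metropolis rule: always accept uphill moves, sometimes downhill.
        if fy > fx or rng.random() < math.exp((fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx > fbest:
                best, fbest = x, fx
    return best, fbest
```

With a toy quadratic log-posterior peaked at 45 degrees, the annealer settles near the maximum; in the full method this search runs jointly over all fault geometry parameters.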
Painter, Colin C.; Heimann, David C.; Lanning-Rush, Jennifer L.
2017-08-14
A study was done by the U.S. Geological Survey in cooperation with the Kansas Department of Transportation and the Federal Emergency Management Agency to develop regression models to estimate peak streamflows of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent at ungaged locations in Kansas. Peak streamflow frequency statistics from selected streamgages were related to contributing drainage area and average precipitation using generalized least-squares regression analysis. The peak streamflow statistics were derived from 151 streamgages with at least 25 years of streamflow data through 2015. The developed equations can be used to predict peak streamflow magnitude and frequency within two hydrologic regions that were defined based on the effects of irrigation. The equations developed in this report are applicable to streams in Kansas that are not substantially affected by regulation, surface-water diversions, or urbanization. The equations are intended for use for streams with contributing drainage areas ranging from 0.17 to 14,901 square miles in the nonirrigation effects region and 1.02 to 3,555 square miles in the irrigation-affected region, corresponding to the range of drainage areas of the streamgages used in the development of the regional equations.
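Regional regression equations of this kind are typically log-linear (power-law) in the basin characteristics. The sketch below shows the general functional form only; the constant and exponents are hypothetical placeholders, since the report's fitted coefficients for the eight exceedance probabilities and two regions are not reproduced here.

```python
# Illustrative power-law form of a regional peak-flow regression:
# Qp = const * A^b * P^c, with A the contributing drainage area (mi^2)
# and P the average precipitation (in). All three coefficients below are
# hypothetical, for illustration only.
def peak_flow_cfs(drainage_area_sqmi, mean_precip_in,
                  const=10.0, exp_area=0.6, exp_precip=1.2):
    return const * drainage_area_sqmi ** exp_area * mean_precip_in ** exp_precip
```

In practice a separate set of coefficients would be fitted by generalized least-squares regression for each annual exceedance probability and each hydrologic region.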
COVAL, Compound Probability Distribution for Function of Probability Distribution
International Nuclear Information System (INIS)
Astolfi, M.; Elbaz, J.
1979-01-01
1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
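The task COVAL solves, propagating input distributions through a function, can be sketched by Monte Carlo sampling (an approximation of the same problem; COVAL itself performs a numerical transformation of the distributions). The load/resistance example mirrors the stated application to a structure under random loads; the distribution choices are assumptions.

```python
# Hedged sketch: empirical distribution of a function of random variables,
# here the safety margin M = resistance - load of a structure under random
# loads. The normal distributions are illustrative assumptions.
import random
import statistics

def distribution_of_function(f, samplers, n=200_000, seed=42):
    """Sample f(X1, ..., Xk) given one sampler per input variable."""
    rng = random.Random(seed)
    return [f(*(s(rng) for s in samplers)) for _ in range(n)]

load = lambda rng: rng.gauss(50.0, 5.0)         # applied load
resistance = lambda rng: rng.gauss(70.0, 10.0)  # structural resistance
margin = distribution_of_function(lambda r, l: r - l, [resistance, load])
p_fail = sum(m < 0 for m in margin) / len(margin)
```

For two independent normals the margin is itself normal, with mean 20 and standard deviation sqrt(125) ≈ 11.18, so the failure probability is about 3.7%; the sample estimate reproduces this.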
DEFF Research Database (Denmark)
Dimitrov, Nikolay Krasimirov
2016-01-01
We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...
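The Gumbel-tail extrapolation underlying the approach above can be sketched with a block-maxima fit. The method-of-moments estimator and the synthetic data below are illustrative assumptions; the study itself compares peaks-over-threshold, block maxima and ACER extraction combined with several tail-fitting variants.

```python
# Hedged sketch: fit a Gumbel distribution to block maxima by the method of
# moments and extrapolate to a rare exceedance level, under the abstract's
# assumption that the response tail is asymptotically Gumbel distributed.
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(maxima):
    """Method-of-moments Gumbel fit: returns (location mu, scale beta)."""
    s = statistics.stdev(maxima)
    beta = s * math.sqrt(6.0) / math.pi
    mu = statistics.mean(maxima) - EULER_GAMMA * beta
    return mu, beta

def gumbel_quantile(mu, beta, p_exceed):
    """Load exceeded with probability p_exceed by a single block maximum."""
    return mu - beta * math.log(-math.log(1.0 - p_exceed))
```

Fitting synthetic Gumbel-distributed block maxima recovers the location and scale, after which `gumbel_quantile` gives the extrapolated characteristic load for a chosen exceedance probability.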
Wang, Dingbao
2018-01-01
Following the Budyko framework, soil wetting ratio (the ratio between soil wetting and precipitation) as a function of soil storage index (the ratio between soil wetting capacity and precipitation) is derived from the SCS-CN method and the VIC type of model. For the SCS-CN method, soil wetting ratio approaches one when soil storage index approaches infinity, due to the limitation of the SCS-CN method in which the initial soil moisture condition is not explicitly represented. However, for the ...
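The SCS-CN relation behind this result can be written out directly. The curve-number runoff equation and the customary initial abstraction ratio Ia = 0.2 S are standard; treating soil wetting as W = P − Q follows the abstract's framing, so the wetting ratio W/P can be examined as a function of the storage index S/P.

```python
# SCS-CN surface runoff: Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0,
# with initial abstraction Ia = 0.2 S by convention. Soil wetting is
# W = P - Q, and the wetting ratio W/P rises toward one as the storage
# index S/P grows, as stated in the abstract.
def scs_runoff(p, s, ia_ratio=0.2):
    ia = ia_ratio * s
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p - ia + s)

def wetting_ratio(storage_index, p=1.0, ia_ratio=0.2):
    s = storage_index * p
    return (p - scs_runoff(p, s, ia_ratio)) / p
```

Evaluating `wetting_ratio` over a range of storage indices reproduces the limiting behavior described above: zero wetting with no storage, and a ratio approaching one as the storage index grows without bound.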
Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
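The integration described above can be sketched numerically: evaluate the bivariate Gaussian defined by the location error ellipse over a disk of radius r around the point of interest. The midpoint-grid quadrature below is an illustrative choice, not the operational implementation.

```python
# Hedged sketch: P(stroke within distance r of point (px, py)) when the
# stroke location is bivariate normal with mean (mx, my), standard
# deviations (sx, sy) and correlation rho. Plain midpoint-grid quadrature
# over the disk; the point of interest need not lie inside the ellipse.
import math

def prob_within_radius(mx, my, sx, sy, rho, px, py, r, n=500):
    det = 1.0 - rho * rho
    norm = 1.0 / (2.0 * math.pi * sx * sy * math.sqrt(det))
    h = 2.0 * r / n
    total = 0.0
    for i in range(n):
        x = px - r + (i + 0.5) * h
        for j in range(n):
            y = py - r + (j + 0.5) * h
            if (x - px) ** 2 + (y - py) ** 2 > r * r:
                continue  # outside the disk of interest
            u = (x - mx) / sx
            v = (y - my) / sy
            total += norm * math.exp(-(u * u - 2 * rho * u * v + v * v) / (2 * det))
    return total * h * h
```

For a circular standard Gaussian centered on the point of interest the answer has the closed form 1 − exp(−r²/2), which the grid reproduces to about three decimal places.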
Vich, M.; Romero, R.; Richard, E.; Arbogast, P.; Maynard, K.
2010-09-01
Heavy precipitation events occur regularly in the western Mediterranean region. These events often have a high impact on society due to economic and personal losses. Improved mesoscale numerical forecasts of these events can be used to prevent or minimize their impact on society. In previous studies, two ensemble prediction systems (EPSs) based on perturbing the model initial and boundary conditions were developed and tested for a collection of high-impact MEDEX cyclonic episodes. These EPSs perturb the initial and boundary potential vorticity (PV) field through a PV inversion algorithm. This technique ensures modifications of all the meteorological fields without compromising the mass-wind balance. One EPS introduces the perturbations along the zones of the three-dimensional PV structure presenting the locally most intense values and gradients of the field (a semi-objective choice, PV-gradient), while the other perturbs the PV field over the sensitivity zones calculated with the MM5 adjoint model (an objective method, PV-adjoint). The PV perturbations are set from a PV error climatology (PVEC) that characterizes typical PV errors in the ECMWF forecasts, both in intensity and displacement. The intensity and displacement perturbation of the PV field is chosen randomly, while its location is given by the perturbation zones defined by each ensemble generation method. Encouraged by the good results obtained by these two EPSs that perturb the PV field, a new approach based on a manual perturbation of the PV field has been tested and compared with the previous results. This technique uses satellite water vapor (WV) observations to guide the correction of initial PV structures. The correction of the PV field intends to improve the match between the PV distribution and the WV image, taking advantage of the relation between dark and bright features of WV images and PV anomalies, under some assumptions. Afterwards, the PV inversion algorithm is applied to run
Perturbation theory in large order
International Nuclear Information System (INIS)
Bender, C.M.
1978-01-01
For many quantum mechanical models, the behavior of perturbation theory in large order is strikingly simple. For example, in the quantum anharmonic oscillator, defined by $-y'' + (x^2/4 + \epsilon x^4/4 - E)y = 0$, $y(\pm\infty) = 0$, the perturbation coefficients $A_n$ in the expansion for the ground-state energy, $E \approx \sum_{n=0}^{\infty} A_n \epsilon^n$, simplify dramatically as $n \to \infty$: $A_n \approx (6/\pi^3)^{1/2}(-3)^n \Gamma(n + 1/2)$. Methods of applied mathematics are used to investigate the nature of perturbation theory in quantum mechanics and show that its large-order behavior is determined by the semiclassical content of the theory. In quantum field theory the perturbation coefficients are computed by summing Feynman graphs. A statistical procedure in a simple $\lambda\phi^4$ model for summing the set of all graphs as the number of vertices $\to \infty$ is presented. Finally, the connection between the large-order behavior of perturbation theory in quantum electrodynamics and the value of $\alpha$, the charge on the electron, is discussed. 7 figures
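The quoted asymptotic growth can be checked numerically. The snippet below simply evaluates the stated formula (not the exact Rayleigh-Schrödinger coefficients) to exhibit the factorial divergence: successive coefficients grow in magnitude by a factor 3(n + 1/2), so the series has zero radius of convergence.

```python
# Evaluate the large-order asymptotic A_n ~ (6/pi^3)^(1/2) (-3)^n Gamma(n+1/2)
# for the anharmonic-oscillator ground-state energy and inspect the growth
# of successive terms.
import math

def a_asymptotic(n):
    return math.sqrt(6.0 / math.pi ** 3) * (-3.0) ** n * math.gamma(n + 0.5)

# |A_{n+1} / A_n| = 3 * (n + 1/2): the term ratio grows linearly in n,
# i.e. the perturbation series diverges factorially.
ratios = [abs(a_asymptotic(n + 1) / a_asymptotic(n)) for n in range(1, 15)]
```

The first ratio is 3 × (1 + 1/2) = 4.5, and the ratios increase without bound, which is the hallmark of the divergent large-order behavior discussed in the abstract.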
Perturbative coherence in field theory
International Nuclear Information System (INIS)
Aldrovandi, R.; Kraenkel, R.A.
1987-01-01
A general condition for coherent quantization by perturbative methods is given, since the basic field equations of a field theory are not always derivable from a Lagrangian. It is shown that non-Lagrangian models may have well defined vertices, provided they satisfy what the authors call the 'coherence condition', which is less stringent than the condition for the existence of a Lagrangian. Lagrangian theories are perturbatively coherent, in the sense that they have well defined vertices, and satisfy this condition automatically. (G.D.F.) [pt
International Nuclear Information System (INIS)
Heuser, F.W.
1980-01-01
On the basis of the deterministic safety concept that has been developed in nuclear engineering, approaches for a probabilistic interpretation of existing safety requirements and for further risk assessment are described. The procedures of technical reliability analysis and its application in nuclear engineering are discussed. Using the example of a reliability analysis for a reactor protection system, the author discusses to what extent methods of reliability analysis can be used to interpret deterministically derived safety requirements. The author then gives a survey of the current value and application of probabilistic reliability assessments in non-nuclear technology. The last part of this report deals with methods of risk analysis and their use for safety assessment in nuclear engineering. On the basis of WASH-1400, the most important phases and tasks of research work in risk assessment are explained, showing the basic criteria and the methods to be applied in risk analysis. (orig./HSCH) [de
Superfield perturbation theory and renormalization
International Nuclear Information System (INIS)
Delbourgo, R.
1975-01-01
The perturbation theory graphs and divergences in supersymmetric Lagrangian models are studied using superfield techniques. In super-$\phi^3$ theory very little effort is needed to arrive at the single infinite (wave function) renormalization counterterm, while in $\phi^4$ theory the method indicates the counter-Lagrangians needed at the one-loop level and possibly beyond
Perturbations of the Friedmann universe
International Nuclear Information System (INIS)
Novello, M.; Salim, J.M.; Heintzmann, H.
1982-01-01
Correcting and extending previous work by Hawking (1966) and Olson (1976), the complete set of perturbation equations of a Friedmann universe in the quasi-Maxwellian form is derived and analyzed. The formalism is then applied to scalar, vector and tensor perturbations of a phenomenological fluid, modelled so as to comprise shear and heat flux. Depending on the equation of state of the background, it is found that there exist unstable (growing) modes of purely rotational character. It is further found that (to linear order at least) any vortex perturbation is equivalent to a certain heat flux vector. The equations for gravitational waves are derived by a method completely analogous to that used for the propagation, in a curved space-time, of electromagnetic waves in a plasma endowed with definite constitutive relations. (Author) [pt
Renewal theory for perturbed random walks and similar processes
Iksanov, Alexander
2016-01-01
This book offers a detailed review of perturbed random walks, perpetuities, and random processes with immigration. Being of major importance in modern probability theory, both theoretical and applied, these objects have been used to model various phenomena in the natural sciences as well as in insurance and finance. The book also presents the many significant results and efficient techniques and methods that have been worked out in the last decade. The first chapter is devoted to perturbed random walks and discusses their asymptotic behavior and various functionals pertaining to them, including supremum and first-passage time. The second chapter examines perpetuities, presenting results on continuity of their distributions and the existence of moments, as well as weak convergence of divergent perpetuities. Focusing on random processes with immigration, the third chapter investigates the existence of moments, describes long-time behavior and discusses limit theorems, both with and without scaling. Chapters fou...
Non-perturbative approach for laser radiation interactions with solids
International Nuclear Information System (INIS)
Jalbert, G.
1985-01-01
Multiphoton transitions in direct-gap crystals are studied using non-perturbative approaches. Two methods currently used for atoms and molecules are revised, generalized and applied to solids. In the first, we construct an S-matrix which incorporates the electromagnetic field to all orders in an approximate way, leading to analytical solutions for the multiphoton transition rates. In the second, the transition probability is calculated within the Bloch-Floquet formalism applied to the specific case of solids. This formalism is interpreted as a classical approximation to the quantum treatment of the field. In the weak-field limit, we compare our results with the usual perturbation calculations. We also incorporate, in the first approach, the inhomogeneity and multimode effects of a real laser. (author) [pt
James, Andrew J A; Konik, Robert M; Lecheminant, Philippe; Robinson, Neil J; Tsvelik, Alexei M
2018-02-26
We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb-Liniger model, 1 + 1D quantum chromodynamics, as well as Landau-Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.
James, Andrew J. A.; Konik, Robert M.; Lecheminant, Philippe; Robinson, Neil J.; Tsvelik, Alexei M.
2018-04-01
We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb–Liniger model, 1 + 1D quantum chromodynamics, as well as Landau–Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.
Kato expansion in quantum canonical perturbation theory
Energy Technology Data Exchange (ETDEWEB)
Nikolaev, Andrey, E-mail: Andrey.Nikolaev@rdtex.ru [Institute of Computing for Physics and Technology, Protvino, Moscow Region, Russia and RDTeX LTD, Moscow (Russian Federation)
2016-06-15
This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson’s ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.
Kato expansion in quantum canonical perturbation theory
International Nuclear Information System (INIS)
Nikolaev, Andrey
2016-01-01
This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson’s ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.
On adiabatic perturbations in the ekpyrotic scenario
International Nuclear Information System (INIS)
Linde, A.; Mukhanov, V.; Vikman, A.
2010-01-01
In a recent paper, Khoury and Steinhardt proposed a way to generate adiabatic cosmological perturbations with a nearly flat spectrum in a contracting Universe. To produce these perturbations they used a regime in which the equation of state changed exponentially rapidly during a short time interval. Leaving aside the singularity problem and the difficult question of whether these perturbations can be transmitted from the contracting Universe to the expanding phase, we show that the methods used by Khoury and Steinhardt are inapplicable to the description of the cosmological evolution and of the process of generation of perturbations in this scenario.
Yepez-Martinez, Tochtli; Civitarese, Osvaldo; Hess, Peter O.
2018-02-01
Starting from an algebraic model based on the QCD Hamiltonian, previously applied to the study of meson states, we have developed an extension of it in order to explore the structure of baryon states. In developing our approach we have adapted concepts taken from group theory and non-perturbative many-body methods to describe states built from effective quark and antiquark degrees of freedom. As the Hamiltonian we have used the QCD Hamiltonian written in the Coulomb gauge, expressed in terms of effective quark-antiquark, di-quark and di-antiquark excitations. To gain insight into the relevant interactions of quarks in hadronic states, the Hamiltonian was approximately diagonalized by mapping quark-antiquark pairs and di-quarks (di-antiquarks) onto phonon states. In dealing with the structure of the vacuum of the theory, color-scalar and color-vector states are introduced to account for ground-state correlations. While the use of a purely color-scalar ground state is an obvious choice, so that colorless hadrons contain at least three quarks, the presence of coupled color-vector pairs in the ground state allows for colorless excitations resulting from the action of color objects upon it.
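The phonon mapping invoked here can be illustrated schematically (the toy model and its parameters below are our own, not the paper's Hamiltonian): a single quadratic boson mode H = ε·b†b + (g/2)(b†b† + bb) is diagonalized by a Bogoliubov transformation, and the resulting phonon frequency ω = √(ε² − g²) also emerges as the positive eigenvalue of the associated RPA-like matrix.

```python
import numpy as np

def phonon_frequency(eps, g):
    """Bogoliubov frequency of the schematic mode H = eps*b'b + (g/2)(b'b' + bb)."""
    return np.sqrt(eps ** 2 - g ** 2)

def rpa_frequency(eps, g):
    """Same frequency as the positive eigenvalue of the RPA-like matrix
    [[eps, g], [-g, -eps]] coupling the pair-creation and -annihilation channels."""
    m = np.array([[eps, g], [-g, -eps]])
    return max(np.linalg.eigvals(m).real)

eps, g = 1.0, 0.4      # stable regime requires eps > g
print(phonon_frequency(eps, g), rpa_frequency(eps, g))
```

When g approaches ε the frequency goes soft, the schematic analogue of the ground-state instability that motivates including correlated pairs in the vacuum.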
Arslanturk, Cihat
2011-02-01
Although tapered fins transfer heat at a higher rate per unit volume, they are not found in every practical application because of the difficulty of manufacturing and fabrication. There is therefore scope to modify the geometry of a constant-thickness fin in view of the lower difficulty of manufacturing and fabrication as well as the improvement of the heat transfer rate per unit volume of fin material. For better utilization of fin material, a modified geometry, a fin with a step change in thickness (SF), has been proposed in the literature. In the present paper, the homotopy perturbation method is used to evaluate the temperature distribution within straight radiating fins with a step change in thickness and variable thermal conductivity. The temperature profile has an abrupt change in the temperature gradient where the step change in thickness occurs, and the thermal conductivity parameter describing the variation of thermal conductivity plays an important role in the temperature profile and the heat transfer rate. The optimum geometry which maximizes the heat transfer rate for a given fin volume has been found. The derived condition of optimality gives an open choice to the designer.
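As a numerical baseline for the fin problem (a constant-thickness, constant-conductivity radiating fin rather than the paper's step fin, and solved by shooting rather than by the homotopy perturbation method), the dimensionless equation θ'' = N_r·θ⁴ with an insulated tip, θ'(0) = 0, and a prescribed base temperature, θ(1) = 1, can be integrated as follows; the base gradient θ'(1) is proportional to the heat transfer rate.

```python
import numpy as np

def shoot(theta_tip, nr, steps=2000):
    """Integrate theta'' = nr*theta^4 from the insulated tip (x=0) to the base
    (x=1) with theta(0)=theta_tip, theta'(0)=0, using classical RK4.
    Returns [theta(1), theta'(1)]."""
    h = 1.0 / steps
    y = np.array([theta_tip, 0.0])
    f = lambda y: np.array([y[1], nr * y[0] ** 4])
    for _ in range(steps):
        k1 = f(y); k2 = f(y + h / 2 * k1)
        k3 = f(y + h / 2 * k2); k4 = f(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def tip_temperature(nr, tol=1e-10):
    """Bisect on the unknown tip temperature so that theta(1) = 1 at the base."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if shoot(mid, nr)[0] > 1.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

nr = 1.0                              # radiation-conduction parameter (assumed value)
t0 = tip_temperature(nr)
base_gradient = shoot(t0, nr)[1]      # proportional to the fin heat transfer rate
print(t0, base_gradient)
```

The same shooting scheme extends to the step-change geometry by matching θ and the flux at the step, which is where the abrupt change in temperature gradient described above appears.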
Guo, Yang
2018-01-04
In this communication, an improved perturbative triples correction (T) algorithm for domain-based local pair natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) amounts to a factor of only about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that, compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing length is also successfully addressed by DLPNO-CCSD(T).
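The natural-orbital truncation underlying PNO-based methods can be sketched in isolation (the density matrix below is a random stand-in for an approximate pair density, and the threshold is an assumed illustrative value, not a production default): diagonalize the density in the virtual space and keep only natural orbitals whose occupation exceeds a cutoff.

```python
import numpy as np

rng = np.random.default_rng(1)
nvirt = 50
# Stand-in for an approximate pair density in the virtual space; in a real
# local-correlation calculation this would come from first-order pair
# amplitudes. Here: a random PSD matrix with a decaying spectrum.
a = rng.normal(size=(nvirt, nvirt)) * np.exp(-0.2 * np.arange(nvirt))
d = a @ a.T
d /= np.trace(d)                        # normalize occupations to sum to 1

occ, vecs = np.linalg.eigh(d)           # natural-orbital occupations / vectors
occ, vecs = occ[::-1], vecs[:, ::-1]    # sort by descending occupation

t_cut = 1e-4                            # occupation threshold (assumed value)
keep = occ > t_cut
pnos = vecs[:, keep]                    # truncated natural-orbital basis
print(pnos.shape[1], "of", nvirt, "virtuals kept;",
      "discarded occupation =", occ[~keep].sum())
```

The discarded occupation bounds the density lost to the truncation, which is why tightening the threshold systematically improves the correlation energy at the price of a larger virtual space per pair.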
Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank
2018-01-01
In this communication, an improved perturbative triples correction (T) algorithm for domain-based local pair natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) amounts to a factor of only about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that, compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing length is also successfully addressed by DLPNO-CCSD(T).