International Nuclear Information System (INIS)
Ginsburg, C.A.
1980-01-01
In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
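The ordinary (unmodulated) rational resummation that this letter extends can be sketched in a few lines: given Taylor coefficients, the [m/n] Pade approximant is obtained by solving a small linear system for the denominator. This is a minimal sketch of the standard construction only, not the modulated variant of the letter; exact rational arithmetic is used so small examples come out clean.

```python
from fractions import Fraction

def pade(c, m, n):
    """[m/n] Pade approximant p/q of the series sum c[k] x^k.

    Solves q * series = p (mod x^(m+n+1)) with q[0] = 1; `c` must supply
    at least m + n + 1 coefficients.
    """
    c = [Fraction(x) for x in c]
    # Unknowns q[1..n]; the coefficient of x^(m+k) in q*series vanishes:
    #   c[m+k] + sum_j q[j] * c[m+k-j] = 0,  k = 1..n
    A = [[c[m + k - j] if m + k - j >= 0 else Fraction(0)
          for j in range(1, n + 1)] for k in range(1, n + 1)]
    b = [-c[m + k] for k in range(1, n + 1)]
    q = [Fraction(1)] + solve(A, b)
    p = [sum(q[j] * c[i - j] for j in range(0, min(i, n) + 1))
         for i in range(m + 1)]
    return p, q

def solve(A, b):
    """Gauss-Jordan elimination with exact rational arithmetic."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * bb for a, bb in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Example: the [2/2] approximant of exp(x) from its Taylor coefficients,
# which is (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).
p, q = pade([Fraction(1), Fraction(1), Fraction(1, 2),
             Fraction(1, 6), Fraction(1, 24)], 2, 2)
```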
Diagonal Pade approximations for initial value problems
International Nuclear Information System (INIS)
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
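The diagonal Pade idea can be shown in miniature: the [1/1] approximant of the propagator exp(hλ) gives the familiar Crank-Nicolson update. The scalar sketch below is only an illustration of that underlying approximation; it does not reproduce the explicit polynomial factoring that the paper exploits.

```python
import math

def pade11_step(lam, h, y):
    """One step of the diagonal [1/1] Pade approximant of exp(h*lam)
    for dy/dt = lam*y, i.e. the scalar Crank-Nicolson rule."""
    return (1 + h * lam / 2) / (1 - h * lam / 2) * y

# Integrate dy/dt = -y from y(0) = 1 to t = 1 in 100 steps.
y, h = 1.0, 0.01
for _ in range(100):
    y = pade11_step(-1.0, h, y)
```

The second-order accuracy of the diagonal approximant shows up directly: with h = 0.01 the result agrees with exp(-1) to roughly 1e-5.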
Pade approximant calculations for neutron escape probability
International Nuclear Information System (INIS)
El Wakil, S.A.; Saad, E.A.; Hendi, A.A.
1984-07-01
The neutron escape probability from a non-multiplying slab containing an internal source is defined in terms of a functional relation for the scattering function of the diffuse reflection problem. The Pade approximant technique is used to obtain numerical results, which are compared with exact results. (author)
Pade approximants and efficient analytic continuation of a power series
International Nuclear Information System (INIS)
Suetin, S P
2002-01-01
This survey reflects the current state of the theory of Pade approximants, that is, best rational approximations of power series. The main focus is on the so-called inverse problems of this theory, in which one must make deductions about the analytic continuation of a given power series on the basis of the known asymptotic behaviour of the poles of some sequence of Pade approximants of this series. Row and diagonal sequences are studied from this point of view. Gonchar's and Rakhmanov's fundamental results of inverse nature are presented along with results of the author.
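A toy instance of the inverse problem the survey describes, reading off a singularity from the poles of a row sequence, fits in a few lines. The example function 1/(2 − x) and the [n/1] sequence are my own illustration, not taken from the survey.

```python
def pade_n1_pole(c, n):
    """Pole of the [n/1] Pade approximant of the series sum c[k] x^k.

    The denominator is 1 + q1*x with q1 = -c[n+1]/c[n], so the single
    pole sits at x = c[n]/c[n+1]; for a function with one simple pole
    this ratio converges to that pole as n grows.
    """
    return c[n] / c[n + 1]

# f(x) = 1/(2 - x) has Taylor coefficients 2**-(k+1) and a simple pole
# at x = 2; the row-sequence pole estimate recovers it exactly here.
coeffs = [2.0 ** -(k + 1) for k in range(6)]
pole = pade_n1_pole(coeffs, 3)
```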
Pade approximants in field theory: pion and kaon systems
International Nuclear Information System (INIS)
Zinn-Justin, J.
1969-01-01
We construct the Pade approximants of the S-matrix, starting from the perturbation series, in the case of two-body pion and kaon systems. We have three parameters. The seven lowest-lying two-body resonances (ρ, K*(890), φ, K*(1420), f₀, f′, A₂) are obtained within a few per cent of their actual masses. The Regge trajectories are rising; the intercepts of the ρ and f₀ agree well with the experimental values. In the appendices we give some properties and applications of the Pade approximants. (author)
Solving microwave heating model using Hermite-Pade approximation technique
International Nuclear Information System (INIS)
Makinde, O.D.
2005-11-01
We employ the Hermite-Pade approximation method to explicitly construct the approximate solution of steady-state reaction-diffusion equations with a source term that arises in modeling microwave heating in an infinite slab with isothermal walls. In particular, we consider the case where the source term decreases spatially and increases with temperature. The important properties of the temperature fields, including bifurcations and thermal criticality, are discussed. (author)
Pade approximants and the calculation of spectral functions of solids
International Nuclear Information System (INIS)
Grinstein, F.F.
1981-06-01
The computational approach of Chisholm, Genz and Pusterla for evaluating Feynman matrix elements in the physical region is proposed for the calculation of spectral functions of solids. The method is based on the moment expansion of the functions, with a convenient choice of reference point, and its resummation with Pade approximants. The technique is tested in the calculation of the electron density of states for a one-dimensional system. In this case, the convergence of the method may be formally proved, while a numerical study shows its practical significance. (author)
Pade approximants, NN scattering, and hard core repulsions
International Nuclear Information System (INIS)
Hartt, K.
1980-01-01
Pade approximants to the scattering function F = k·cot(δ₀) are studied in terms of the variable x = k², using four examples of potential models which possess features of the np ¹S₀ state. Strategies are thereby developed for analytically continuing F when only approximate partial knowledge of F is available. Results are characterized by high accuracy of interpolation. It is suggested that a physically realistic inverse scattering problem begins with such an analytically continued F. When it exists, the solution of this problem in terms of the Marchenko equation is a local potential of the Bargmann type. Some strategies for carrying out this program lead to a stably defined potential, while others do not. With hard core repulsions present, low-order Pade approximants accurately describe F for E_c.m. ≤ 300 MeV. However, since the condition δ(∞) − δ(0) = 0 is not satisfied in any of our examples containing hard core repulsions, the Marchenko method does not have a solution for them. A possible physical consequence of this result is discussed. Another inverse scattering method is proposed for application to hard core problems
Pade approximants for the ground-state energy of closed-shell quantum dots
International Nuclear Information System (INIS)
Gonzalez, A.; Partoens, B.; Peeters, F.M.
1997-08-01
Analytic approximations to the ground-state energy of closed-shell quantum dots (number of electrons from 2 to 210) are presented in the form of two-point Pade approximants. These Pade approximants are constructed from the small- and large-density limits of the energy. We estimate that the maximum error, reached at intermediate densities, is less than 3%. Within the present approximation the ground state is found to be unpolarized. (author). 21 refs, 3 figs, 2 tabs
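The two-point construction can be sketched for the simplest [1/1]-type ansatz: a rational function matched to a small-argument slope and a large-argument limit. The crossover function and limits below are a hypothetical illustration, not the density expansions of the dot energy used in the paper.

```python
def two_point_pade(a1, L):
    """Two-point [1/1]-type ansatz R(x) = p1*x / (1 + q1*x), matched to
    R(x) ~ a1*x as x -> 0 and R(x) -> L as x -> infinity.
    This mirrors the paper's construction in spirit only."""
    p1 = a1          # slope at the origin
    q1 = a1 / L      # enforces the x -> infinity limit p1/q1 = L
    return lambda x: p1 * x / (1 + q1 * x)

# Example: f(x) = x/(1 + x) has a1 = 1 and L = 1; the ansatz is exact here.
R = two_point_pade(1.0, 1.0)
```

For less trivial functions the two-point approximant interpolates between the two regimes, with the largest error at intermediate arguments, just as the abstract reports for intermediate densities.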
Energy Technology Data Exchange (ETDEWEB)
Zinn-Justin, J. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]
1969-07-01
We construct the Pade approximants of the S-matrix, starting from the perturbation series, in the case of two-body pion and kaon systems. We have three parameters. The seven lowest-lying two-body resonances (ρ, K*(890), φ, K*(1420), f₀, f′, A₂) are obtained within a few per cent of their actual masses. The Regge trajectories are rising; the intercepts of the ρ and f₀ agree well with the experimental values. In the appendices we give some properties and applications of the Pade approximants. (author)
Energy Technology Data Exchange (ETDEWEB)
Franceschini, V.; Grecchi, V.; Silverstone, H.J.
1985-09-01
The resonance energies for the hydrogen atom in an electric field, both the real and imaginary parts, have been calculated together from the real Rayleigh-Schroedinger perturbation series by Borel summation. Pade approximants were used to evaluate the Borel transform. The numerical results compare well with values obtained by the complex-coordinate variational method and by sequential use of Pade approximants.
Energy Technology Data Exchange (ETDEWEB)
Gonchar, Andrei A; Rakhmanov, Evguenii A; Suetin, Sergey P
2011-12-31
Pade-Chebyshev approximants are considered for multivalued analytic functions that are real-valued on the unit interval [-1,1]. The focus is mainly on non-linear Pade-Chebyshev approximants. For such rational approximations an analogue is found of Stahl's theorem on convergence in capacity of the Pade approximants in the maximal domain of holomorphy of the given function. The rate of convergence is characterized in terms of the stationary compact set for the mixed equilibrium problem of Green-logarithmic potentials. Bibliography: 79 titles.
International Nuclear Information System (INIS)
Aboanber, A E; Nahla, A A
2002-01-01
A method based on the Pade approximations is applied to the solution of the point kinetics equations with a time varying reactivity. The technique consists of treating explicitly the roots of the inhour formula. A significant improvement has been observed by treating explicitly the most dominant roots of the inhour equation, which usually would make the Pade approximation inaccurate. Also the analytical inversion method which permits a fast inversion of polynomials of the point kinetics matrix is applied to the Pade approximations. Results are presented for several cases of Pade approximations using various options of the method with different types of reactivity. The formalism is applicable equally well to non-linear problems, where the reactivity depends on the neutron density through temperature feedback. It was evident that the presented method is particularly good for cases in which the reactivity can be represented by a series of steps and performed quite well for more general cases
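A miniature version of this idea, a rational approximation of the point-kinetics propagator with the matrix inverse written out analytically, can be sketched for one delayed-neutron group and constant reactivity. The scheme below is the plain [1/1] Pade step (I − hA/2)⁻¹(I + hA/2) with a hand-coded 2×2 inverse; it does not implement the paper's explicit treatment of the inhour roots, and the kinetics parameters are illustrative values.

```python
def pade11_kinetics_step(rho, beta, Lam, lam, n, c, h):
    """Advance (n, c) one step h via the [1/1] Pade approximation of
    exp(h*A) for one delayed-neutron group:
        dn/dt = ((rho - beta)/Lam) n + lam c
        dc/dt = (beta/Lam) n - lam c
    The 2x2 inverse of (I - h*A/2) is written out analytically."""
    a11, a12 = (rho - beta) / Lam, lam
    a21, a22 = beta / Lam, -lam
    m11, m12 = 1 - h * a11 / 2, -h * a12 / 2     # M = I - h*A/2
    m21, m22 = -h * a21 / 2, 1 - h * a22 / 2
    det = m11 * m22 - m12 * m21
    rhs_n = (1 + h * a11 / 2) * n + (h * a12 / 2) * c   # (I + h*A/2) y
    rhs_c = (h * a21 / 2) * n + (1 + h * a22 / 2) * c
    return ((m22 * rhs_n - m12 * rhs_c) / det,
            (-m21 * rhs_n + m11 * rhs_c) / det)

# At rho = 0 the vector with c = beta*n/(lam*Lam) is an equilibrium of the
# kinetics equations, and the rational step reproduces it exactly.
beta, Lam, lam = 0.0065, 1e-4, 0.08   # illustrative parameters
n0 = 1.0
c0 = beta * n0 / (lam * Lam)
n1, c1 = pade11_kinetics_step(0.0, beta, Lam, lam, n0, c0, 0.1)
```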
International Nuclear Information System (INIS)
Garibotti, C.R.; Grinstein, F.F.
1978-01-01
Previous theorems on the convergence of the [n, n+m] Punctual Pade Approximants to the scattering amplitude are extended. The new proofs include the cases of non-forward and backward scattering corresponding to potentials having 1/r and 1/r² long-range behaviours, for which the partial-wave expansions are divergent and oscillatory, respectively. In this way, the ability of the approximation scheme as a summation method is established for all of the long-range potentials of interest in potential scattering
Nuclear data processing, analysis, transformation and storage with Pade-approximants
International Nuclear Information System (INIS)
Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.
1992-01-01
A method is described to generate rational approximants of high order with applications to neutron data handling. The problems considered are: the approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing one to compute the error at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that experimental errors are independent and normally distributed; a method of simultaneous generation of a few rational approximants with an identical set of poles; functionals other than the least-squares method; and two-dimensional approximation. (orig.)
International Nuclear Information System (INIS)
Dehghan, Mehdi; Shakourifar, Mohammad; Hamidi, Asgar
2009-01-01
The purpose of this study is to implement the Adomian-Pade (modified Adomian-Pade) technique, which combines the Adomian decomposition method (modified Adomian decomposition method) with Pade approximation, for solving linear and nonlinear systems of Volterra functional equations. The results obtained using the Adomian-Pade (modified Adomian-Pade) technique are compared to those obtained using the Adomian decomposition method (modified Adomian decomposition method) alone. The numerical results demonstrate that the ADM-Pade (MADM-Pade) technique gives approximate solutions with a faster convergence rate and higher accuracy than the standard ADM (MADM).
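The acceleration that the Pade step provides can be seen on a small demo problem of my own choosing (not one of the Volterra systems solved in the paper): for y′ = y², y(0) = 1, the Adomian terms are yₖ = tᵏ, so the partial sums are the Taylor series of the exact solution 1/(1 − t), and a [1/1] Pade of the truncated sum recovers that solution exactly.

```python
def adomian_terms(n_terms):
    """Adomian decomposition for the demo problem y' = y**2, y(0) = 1.
    Each y_k is a coefficient list in t; for f(y) = y**2 the Adomian
    polynomials are A_k = sum_i y_i * y_{k-i}, and y_{k+1} is the
    integral of A_k from 0 to t."""
    ys = [[1.0]]                       # y_0 = 1
    for k in range(n_terms - 1):
        A = [0.0] * (k + 1)
        for i in range(k + 1):
            for a, ca in enumerate(ys[i]):
                for b, cb in enumerate(ys[k - i]):
                    A[a + b] += ca * cb
        ys.append([0.0] + [c / (j + 1) for j, c in enumerate(A)])
    return ys

# Partial sum of three terms: 1 + t + t**2 (Taylor series of 1/(1-t)).
c = [0.0, 0.0, 0.0]
for yk in adomian_terms(3):
    for j, v in enumerate(yk):
        c[j] += v

# [1/1] Pade of the partial sum; here it reproduces 1/(1-t) exactly.
q1 = -c[2] / c[1]
p0, p1 = c[0], c[1] + q1 * c[0]
approx = lambda t: (p0 + p1 * t) / (1 + q1 * t)
```

At t = 0.5 the three-term partial sum gives 1.75 against the exact value 2, while the Pade-accelerated sum is exact, which is the kind of gain the abstract reports.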
PaDe - The particle detection program
Ott, T.; Drolshagen, E.; Koschny, D.; Poppe, B.
2016-01-01
This paper introduces the Particle Detection program PaDe. Its aim is to analyze dust particles in the coma of the Jupiter-family comet 67P/Churyumov-Gerasimenko which were recorded by the two OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras onboard the ESA spacecraft Rosetta, see e.g. Keller et al. (2007). In addition to working with the Rosetta data, the code was modified to work with images from meteors. It was tested with data recorded by the ICCs (Intensified CCD Cameras) of the CILBO-System (Canary Island Long-Baseline Observatory) on the Canary Islands; compare Koschny et al. (2013). This paper presents a new method for the position determination of the observed meteors. The PaDe program was written in Python 3.4. Its original intent is to find the trails of dust particles in space from the OSIRIS images. For that it determines the positions where a trail starts and ends. These are found by fitting the so-called error function (Andrews, 1998) to the two edges of the intensity profile; the positions where the intensities fall to half maximum are taken as the beginning and end of the particle trail. In the case of meteors, this method can be applied to find the leading edge of the meteor. The proposed method has the potential to increase the accuracy of the position determination of meteors dramatically. Unlike the standard method of finding the photometric center, our method is not influenced by trails or wakes behind the meteor. This paper presents first results of this ongoing work.
The PADE dosimetry system at the Brokdorf nuclear power station
International Nuclear Information System (INIS)
Poetter, Karl-Friedrich; Eckelmann, Joerg; Kuegow, Mario; Spahn, Werner; Franz, Manfred
2002-01-01
The PADE program system is used in nuclear power plants for personnel and workplace dosimetry and for managing access to the controlled area. On-line interfaces with existing dose determination systems allow collection, surveillance and evaluation functions to be achieved for person-related and workplace-related dose data. This is managed by means of open, non-proprietary communication of PADE with the computer systems coupled via interfaces. In systems communication, PADE keeps interventions into external systems to a minimum, thus ensuring flexible adaptation to existing systems. As a client-server solution, PADE has been developed on the basis of an ORACLE-8 database; the version presented here runs on a Windows NT server. The system described has been used at the Brokdorf Nuclear Power Station since early 2000 and has so far reliably managed more than one million individual access movements of more than 6,000 persons. It is currently being integrated into a comprehensive plant operations management system. Among other things, PADE offers considerable development potential for a tentatively planned future standardization of parts of the dosimetry systems in German nuclear power plants and for the joint management of in-plant and official dose data. (orig.)
Nonstandard Analysis and Constructivism!
Sanders, Sam
2017-01-01
Almost two decades ago, Wattenberg published a paper with the title 'Nonstandard Analysis and Constructivism?' in which he speculates on a possible connection between Nonstandard Analysis and constructive mathematics. We study Wattenberg's work in light of recent research on the aforementioned connection. On one hand, with only slight modification, some of Wattenberg's theorems in Nonstandard Analysis are seen to yield effective and constructive theorems (not involving Nonstandard Analysis). ...
Cosmological Models and Gamma-Ray Bursts Calibrated by Using Pade Method
Liu, Jing; Wei, Hao
2014-01-01
Gamma-ray bursts (GRBs) are among the most powerful sources in the universe. In recent years, GRBs have been proposed as a complementary probe to type Ia supernovae (SNIa). However, as is well known, there is a circularity problem in the use of GRBs to study cosmology. In this work, based on the Pade approximant, we propose a new cosmology-independent method to calibrate GRBs. We consider a sample consisting of 138 long Swift GRBs and obtain 79 calibrated long GRBs at high-redshift $z>1...
Davis, Martin
2005-01-01
Geared toward upper-level undergraduates and graduate students, this text explores the applications of nonstandard analysis without assuming any knowledge of mathematical logic. It develops the key techniques of nonstandard analysis at the outset from a single, powerful construction; then, beginning with a nonstandard construction of the real number system, it leads students through a nonstandard treatment of the basic topics of elementary real analysis, topological spaces, and Hilbert space.Important topics include nonstandard treatments of equicontinuity, nonmeasurable sets, and the existenc
Le Chevalier, Francois; Staraj, Robert
2013-01-01
This book aims at describing the wide variety of new technologies and concepts of non-standard antenna systems - reconfigurable, integrated, terahertz, deformable, ultra-wideband, using metamaterials, or MEMS, etc, and how they open the way to a wide range of applications, from personal security and communications to multifunction radars and towed sonars, or satellite navigation systems, with space-time diversity on transmit and receive. A reference book for designers in this lively scientific community linking antenna experts and signal processing engineers.
Nonstandard Methods in Lie Theory
Goldbring, Isaac Martin
2009-01-01
In this thesis, we apply model theory to Lie theory and geometric group theory. These applications of model theory come via nonstandard analysis. In Lie theory, we use nonstandard methods to prove two results. First, we give a positive solution to the local form of Hilbert's Fifth Problem, which asks whether every locally Euclidean local…
DEFF Research Database (Denmark)
Tamke, Martin
Non-Standard elements in architecture bear the promise of a better, more specific performance (Oosterhuis 2003). A new understanding of design evolves, which is focusing on open-ended approaches, able to negotiate between shifting requirements and to integrate knowledge on process and material... Using parametric design tools and computer-controlled production facilities, Copenhagen's Centre for IT and Architecture undertook practice-based research into performance-based non-standard element design and mass-customization techniques. In close cooperation with wood construction software..., but the integration of traditional wood craft techniques. The extensive use of self-adjusting, load-bearing wood-wood joints contributed to ease in production and assembly of a performance-based architecture.
Assessment of non-standard HIV antiretroviral therapy regimens at ...
African Journals Online (AJOL)
2016-03-06
Aim: Lighthouse Trust in Lilongwe, Malawi serves approximately 25,000 patients with HIV antiretroviral therapy (ART) regimens standardized according to national treatment guidelines. However, as a referral centre for complex cases, Lighthouse Trust occasionally treats patients with non-standard ART.
Maternal Nonstandard Work Schedules and Child Cognitive Outcomes
Han, Wen-Jui
2005-01-01
This paper examined associations between mothers' work schedules and children's cognitive outcomes in the first 3 years of life for approximately 900 children from the National Institute of Child Health and Human Development Study of Early Child Care. Both the timing and duration of maternal nonstandard work schedules were examined. Although…
Nonstandard Approach to Possibility Measures
Czech Academy of Sciences Publication Activity Database
Kramosil, Ivan
1996-01-01
Roč. 4, č. 3 (1996), s. 275-301 ISSN 0218-4885 R&D Projects: GA AV ČR IAA1030504; GA ČR GA201/93/0781 Keywords : possibility measure * boolean-valued measure * nonstandard model * Boolean model * probability measure
Completeness of Hoare Logic over Nonstandard Models
Xu, Zhaowei; Sui, Yuefei; Zhang, Wenhui
2017-01-01
The nonstandard approach to program semantics has successfully resolved the completeness problem of Floyd-Hoare logic. The known versions of nonstandard semantics, the Hungary semantics and axiomatic semantics, are so general that they lack either mathematical elegance or practical usefulness. The aim of this paper is to exhibit a nonstandard semantics that is not only mathematically elegant but also practically useful. A basic property of computable functions in the standard model $N...
Nonstandard Employment in the Nonmetropolitan United States
McLaughlin, Diane K.; Coleman-Jensen, Alisha J.
2008-01-01
We examine the prevalence of nonstandard employment in the nonmetropolitan United States using the Current Population Survey Supplement on Contingent Work (1999 and 2001). We find that nonstandard work is more prevalent in nonmetropolitan than in central city or suburban areas. Logistic regression models controlling for sociodemographic and work…
Methods of solving nonstandard problems
Grigorieva, Ellina
2015-01-01
This book, written by an accomplished female mathematician, is the second to explore nonstandard mathematical problems – those that are not directly solved by standard mathematical methods but instead rely on insight and the synthesis of a variety of mathematical ideas. It promotes mental activity as well as greater mathematical skills, and is an ideal resource for successful preparation for the mathematics Olympiad. Numerous strategies and techniques are presented that can be used to solve intriguing and challenging problems of the type often found in competitions. The author uses a friendly, non-intimidating approach to emphasize connections between different fields of mathematics and often proposes several different ways to attack the same problem. Topics covered include functions and their properties, polynomials, trigonometric and transcendental equations and inequalities, optimization, differential equations, nonlinear systems, and word problems. Over 360 problems are included with hints, ...
A New Iteration Multivariate Pade Approximation Technique for ...
African Journals Online (AJOL)
In this paper, the Laplace transform, the new iteration method and the multivariate Pade approximation technique are employed to solve nonlinear fractional partial differential equations whose fractional derivatives are described in the sense of Caputo. The Laplace transform is used to "fully" determine the initial iteration ...
Nonstandard analysis for the working mathematician
Wolff, Manfred
2015-01-01
Starting with a simple formulation accessible to all mathematicians, this second edition is designed to provide a thorough introduction to nonstandard analysis. Nonstandard analysis is now a well-developed, powerful instrument for solving open problems in almost all disciplines of mathematics; it is often used as a ‘secret weapon’ by those who know the technique. This book illuminates the subject with some of the most striking applications in analysis, topology, functional analysis, probability and stochastic analysis, as well as applications in economics and combinatorial number theory. The first chapter is designed to facilitate the beginner in learning this technique by starting with calculus and basic real analysis. The second chapter provides the reader with the most important tools of nonstandard analysis: the transfer principle, Keisler’s internal definition principle, the spill-over principle, and saturation. The remaining chapters of the book study different fields for applications; each begins...
Path space measures for Dirac and Schroedinger equations: Nonstandard analytical approach
International Nuclear Information System (INIS)
Nakamura, T.
1997-01-01
A nonstandard path space *-measure is constructed to justify the path integral formula for the Dirac equation in two-dimensional space-time. A standard measure as well as a standard path integral is obtained from it. We also show that, even for the Schroedinger equation, for which there is no standard measure appropriate for a path integral, there exists a nonstandard measure to define a *-path integral whose standard part agrees with the ordinary path integral as defined by a limit from the time-slice approximant. copyright 1997 American Institute of Physics
The Kernel of a Nonstandard Game.
1979-07-01
where c(•) denotes the Q-convex closure. Then for some a ∈ E, a ∈ A₁. Proof: Machover and Hirschfeld, Lectures in Nonstandard Analysis ... 1966. [5] M. Machover and J. Hirschfeld, Lecture Notes on Non-standard Analysis, Springer Verlag, 1969. [6] M. Maschler, B. Peleg, and L. S. Shapley
NONSTANDARD PROBLEMS IN STUDYING THE PROPERTIES FUNCTIONS.
Directory of Open Access Journals (Sweden)
V. I. Kuzmich
2010-06-01
In this paper we consider two non-standard problems that may be offered to students for independent solution in the study of the fundamental properties of functions in a course of mathematical analysis. These problems call for creativity and contribute to a better understanding of concepts such as monotonicity and continuity of a function.
Place branding and nonstandard regionalization in Europe
Boisen, Martin
2015-01-01
Place branding might, could, and maybe even should play a central role in urban and regional governance. The vantage point of this chapter is that every place is a brand and that the processes of nonstandard regionalization that can be witnessed all over Europe create new places and, thus, new place brands.
A functional interpretation for nonstandard arithmetic
van den Berg, B.; Briseid, E.; Safarik, P.
2012-01-01
We introduce constructive and classical systems for nonstandard arithmetic and show how variants of the functional interpretations due to Gödel and Shoenfield can be used to rewrite proofs performed in these systems into standard ones. These functional interpretations show in particular that our
Nonstandard Employment Relations and Implications for Decent ...
African Journals Online (AJOL)
Conceptualizing nonstandard work within the context of casual, contract and outsourced work, the paper contends that this form of employment relations has been exacerbated by the growing incidence of youth unemployment in Nigeria. Using neoliberalism as a theoretical framework, the paper further contended that most ...
Axion cold dark matter in nonstandard cosmologies
International Nuclear Information System (INIS)
Visinelli, Luca; Gondolo, Paolo
2010-01-01
We study the parameter space of cold dark matter axions in two cosmological scenarios with nonstandard thermal histories before big bang nucleosynthesis: the low-temperature reheating (LTR) cosmology and the kination cosmology. If the Peccei-Quinn symmetry breaks during inflation, we find more allowed parameter space in the LTR cosmology than in the standard cosmology and less in the kination cosmology. On the contrary, if the Peccei-Quinn symmetry breaks after inflation, the Peccei-Quinn scale is orders of magnitude higher than standard in the LTR cosmology and lower in the kination cosmology. We show that the axion velocity dispersion may be used to distinguish some of these nonstandard cosmologies. Thus, axion cold dark matter may be a good probe of the history of the Universe before big bang nucleosynthesis.
Maternal Nonstandard Work Schedules and Breastfeeding Behaviors.
Zilanawala, Afshin
2017-06-01
Objectives: Although maternal employment rates have increased in the last decade in the UK, there is very little research investigating the linkages between maternal nonstandard work schedules (i.e., work schedules outside of the Monday through Friday, 9-5 schedule) and breastfeeding initiation and duration, especially given the wide literature citing the health advantages of breastfeeding for mothers and children. Methods: This paper uses a population-based, UK cohort study, the Millennium Cohort Study (n = 17,397), to investigate the association between types of maternal nonstandard work (evening, night, away from home overnight, and weekends) and breastfeeding behaviors. Results: In unadjusted models, exposure to evening shifts was associated with greater odds of breastfeeding initiation (OR 1.71, CI 1.50-1.94) and greater odds of short (OR 1.55, CI 1.32-1.81), intermediate (OR 2.01, CI 1.64-2.47), prolonged partial duration (OR 2.20, CI 1.78-2.72), and prolonged exclusive duration (OR 1.53, CI 1.29-1.82), compared with mothers who were unemployed and those who worked other types of nonstandard shifts. The socioeconomic advantage of mothers working evening schedules largely explained the higher odds of breastfeeding initiation and duration. Conclusions: Socioeconomic characteristics largely explain the breastfeeding behaviors of mothers working evening shifts. Policy interventions to increase breastfeeding initiation and duration should consider the timing of maternal work schedules.
Relic abundance of WIMPs in non-standard cosmological scenarios
Energy Technology Data Exchange (ETDEWEB)
Yimingniyazi, W.
2007-08-06
In this thesis we study the relic density n_χ of non-relativistic long-lived or stable particles χ in various non-standard cosmological scenarios. First, we discuss the relic density in the non-standard cosmological scenario in which the temperature is too low for the particles χ to achieve full chemical equilibrium. We also investigate the case where χ particles are non-thermally produced from the decay of heavier particles in addition to the usual thermal production. In the low-temperature scenario, we calculate the relic abundance starting from arbitrary initial temperatures T₀ of the radiation-dominated epoch and derive approximate solutions for the temperature dependence of the relic density which accurately reproduce numerical results when full thermal equilibrium is not achieved. If full equilibrium is reached, our ansatz no longer reproduces the correct temperature dependence of the χ number density. However, we can contrive a semi-analytic formula which gives the correct final relic density, to an accuracy of about 3% or better, for all cross sections and initial temperatures. We also derive a lower bound on the initial temperature T₀, assuming that the relic particle accounts for the dark matter energy density in the universe. The observed cold dark matter abundance constrains the initial temperature to T₀ ≥ m_χ/23, where m_χ is the mass of χ. Second, we discuss the χ density in the scenario where the Hubble parameter is modified. Even in this case, an approximate formula similar to the standard one is found to be capable of predicting the final relic abundance correctly. Choosing the χ annihilation cross section such that the observed cold dark matter abundance is reproduced in standard cosmology, we constrain possible modifications of the expansion rate at T ∼ m_χ/20, well before Big Bang nucleosynthesis. (orig.)
Directory of Open Access Journals (Sweden)
E. Momoniat
2014-01-01
Full Text Available Two nonstandard finite difference schemes are derived to solve the regularized long wave equation. The criteria for choosing the “best” nonstandard approximation to the nonlinear term in the regularized long wave equation come from considering the modified equation. The two “best” nonstandard numerical schemes are shown to preserve conserved quantities when compared to an implicit scheme in which the nonlinear term is approximated in the usual way. Simulations of the single solitary wave solution show significantly better results, measured in the L2 and L∞ norms, than those obtained using a Petrov-Galerkin finite element method and a splitted quadratic B-spline collocation method. The growth in the error when simulating the single solitary wave solution using the two “best” nonstandard numerical schemes is shown to be linear, implying that the nonstandard finite difference schemes are conservative. The formation of an undular bore for both steep and shallow initial profiles is captured without the formation of numerical instabilities.
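The idea of a nonstandard approximation to a nonlinear term can be seen in miniature on the logistic equation u′ = u(1 − u), used here as an illustrative stand-in (it is not the RLW scheme of this paper): replacing the step h by a Mickens denominator function φ(h) = e^h − 1 and discretizing u² nonlocally as u_n·u_{n+1} yields a scheme that happens to be exact for this equation at any step size:

```python
import math

def nsfd_logistic(u0, t_end, h):
    """Nonstandard finite difference scheme for u' = u(1 - u).

    Mickens-style choices: the denominator function phi(h) = e^h - 1
    replaces h, and the nonlinear term u^2 is approximated nonlocally
    as u_n * u_{n+1}.  For this particular equation the resulting
    scheme reproduces the exact solution at the grid points.
    """
    phi = math.exp(h) - 1.0
    u, n = u0, round(t_end / h)
    for _ in range(n):
        # (u_{n+1} - u_n)/phi = u_n - u_n*u_{n+1}  =>  solve for u_{n+1}
        u = u * (1.0 + phi) / (1.0 + phi * u)
    return u

def logistic_exact(u0, t):
    """Closed-form solution of the logistic equation, for comparison."""
    return u0 * math.exp(t) / (1.0 + u0 * (math.exp(t) - 1.0))
```

Even with a coarse step such as h = 0.5 the scheme agrees with the closed-form solution to rounding error, which is the sense in which a well-chosen nonstandard discretization preserves the qualitative (and here quantitative) behavior of the continuous model.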
Stability and non-standard finite difference method of the generalized Chua's circuit
Radwan, Ahmed G.
2011-08-01
In this paper, we develop a framework to obtain approximate numerical solutions of the fractional-order Chua's circuit with a memristor using a non-standard finite difference method. Chaotic response is obtained with fractional-order elements as well as integer-order elements. Stability analysis and the condition of oscillation for the integer-order system are discussed. In addition, the stability analyses for different fractional-order cases are investigated, showing a great sensitivity to small order changes as indicated by the poles' locations inside the physical s-plane. The Grünwald–Letnikov method is used to approximate the fractional derivatives. Numerical results are presented graphically and reveal that the non-standard finite difference scheme is an effective and convenient method for solving fractional-order chaotic systems and for validating their stability. © 2011 Elsevier Ltd. All rights reserved.
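The Grünwald–Letnikov approximation mentioned above replaces the order-α derivative by a weighted sum over the solution's history, D^α u(t_n) ≈ h^{−α} Σ_{k=0}^{n} w_k u(t_n − kh), with weights generated by the recurrence w_0 = 1, w_k = w_{k−1}(1 − (α+1)/k). A generic sketch of that building block (not the paper's memristor implementation) is:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    computed by the standard recurrence w_k = w_{k-1}*(1 - (alpha+1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f, alpha, t, h):
    """Approximate the order-alpha fractional derivative of f at t
    (lower terminal 0) on a uniform grid of spacing h."""
    n = round(t / h)
    w = gl_weights(alpha, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h**alpha
```

For α = 1 the weights collapse to [1, −1, 0, 0, …] and the sum reduces to the ordinary backward difference; for fractional α every past value contributes, which is what makes fractional-order simulations memory-intensive.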
Non-standard and improperly posed problems
Straughan, Brian; Ames, William F
1997-01-01
Written by two international experts in the field, this book is the first unified survey of the advances made in the last 15 years on key non-standard and improperly posed problems for partial differential equations. This reference for mathematicians, scientists, and engineers provides an overview of the methodology typically used to study improperly posed problems. It focuses on structural stability--the continuous dependence of solutions on the initial conditions and the modeling equations--and on problems for which data are only prescribed on part of the boundary. The book addresses continuou
Inverse Variational Problem for Nonstandard Lagrangians
Saha, A.; Talukdar, B.
2014-06-01
In the mathematical physics literature the nonstandard Lagrangians (NSLs) were introduced in an ad hoc fashion rather than being derived from the solution of the inverse problem of variational calculus. We begin with the first integral of the equation of motion and solve the associated inverse problem to obtain some of the existing results for NSLs. In addition, we provide a number of alternative Lagrangian representations. The case studies envisaged by us include (i) the usual modified Emden-type equation, (ii) the Emden-type equation with a dissipative term quadratic in velocity, (iii) the Lotka-Volterra model and (iv) a number of generic equations for dissipative-like dynamical systems. Our method works for nonstandard Lagrangians corresponding to the usual action integral of mechanical systems, but requires modification for those associated with modified actions like S = ∫_a^b e^{L(x, ẋ, t)} dt and S = ∫_a^b L^{1-γ}(x, ẋ, t) dt, because in the latter cases one cannot construct expressions for the Jacobi integrals.
Job and life satisfaction of nonstandard workers in South Korea.
Lee, Bokim
2013-08-01
Since the South Korean financial crisis of the late 1990s, the number of nonstandard workers in South Korea has increased rapidly. With such a drastic change, it has been difficult to establish national welfare systems (e.g., accident insurance or support for families with dependent children) for nonstandard workers and identify critical aspects of their health. To evaluate job and life satisfaction among nonstandard workers, this study used a representative sample of South Koreans. Using data from the 2008 Korean Labor and Income Panel Study, the sample size totaled 4,340 observations, of which 1,344 (31.0%) involved nonstandard workers. Significant differences in job and life satisfaction between nonstandard workers and standard workers were found. The results also indicate discrimination in the welfare and fringe benefit systems in South Korea. Occupational health nurses must address the physical and psychological health issues, personal problems, and everyday life concerns of nonstandard workers. Given that the employment status of nonstandard workers in companies is generally unstable, it is difficult for these workers to report poor working conditions to employers or other authorities. Accordingly, occupational health nurses should advocate for nonstandard workers by notifying employers of the many problems they face. Copyright 2013, SLACK Incorporated.
Digital economy and non-standard work
Directory of Open Access Journals (Sweden)
Patrizia Tullini
2016-12-01
Full Text Available Public and scientific debate on the digital economy is now widespread in many European countries. Labour law scholars have also started to pay more attention to the new economic models and to the impact of digital technologies on productive processes. Economics and the labour sciences should now move from descriptive analysis to deeper theoretical elaboration. The directions of the theoretical analysis are essentially two. The first deals with the rapid diffusion of non-standard forms of work on the web, especially on digital platforms; this trend undermines the traditional foundation of subordination and affects the dynamics of the global labour market. The second deals with the increasing use of artificial intelligence in the industrial environment, which presents new legal and social issues concerning both the replacement of standard work with robotics and the complementarity between human work and the work of «non-human agents».
Critical region of a type II superconducting film near Hsub(c2): rational approximants
International Nuclear Information System (INIS)
Ruggeri, G.J.
1979-01-01
The high-temperature perturbative expansions for the thermal quantities of a type II superconducting film are extrapolated to the critical region near Hsub(c2) by means of new rational approximants of the Pade type. The new approximants are forced to reproduce the leading correction to the flux lattice contribution on the low-temperature side of the transition. Compared to those previously considered in the literature: (i) the mutual consistency of the approximants is improved; and (ii) they are nearer to the exact solution of the zero-dimensional Landau-Ginsburg model. (author)
Polynomial approximation of functions in Sobolev spaces
International Nuclear Information System (INIS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional-order Sobolev spaces is treated as well as the usual integer-order spaces and several nonstandard Sobolev-like spaces
Neutrino Oscillations and Non-standard Interactions
Directory of Open Access Journals (Sweden)
Yasaman Farzan
2018-02-01
Full Text Available Current neutrino experiments are measuring the neutrino mixing parameters with an unprecedented accuracy. The upcoming generation of neutrino experiments will be sensitive to subdominant neutrino oscillation effects that can in principle give information on the yet-unknown neutrino parameters: the Dirac CP-violating phase in the PMNS mixing matrix, the neutrino mass ordering and the octant of θ23. Determining the exact values of neutrino mass and mixing parameters is crucial to test various neutrino models and flavor symmetries that are designed to predict these neutrino parameters. In the first part of this review, we summarize the current status of the neutrino oscillation parameter determination. We consider the most recent data from all solar neutrino experiments and the atmospheric neutrino data from Super-Kamiokande, IceCube, and ANTARES. We also implement the data from the reactor neutrino experiments KamLAND, Daya Bay, RENO, and Double Chooz as well as the long baseline neutrino data from MINOS, T2K, and NOνA. If in addition to the standard interactions, neutrinos have subdominant yet-unknown Non-Standard Interactions (NSI) with matter fields, extracting the values of these parameters will suffer from new degeneracies and ambiguities. We review such effects and formulate the conditions on the NSI parameters under which the precision measurement of neutrino oscillation parameters can be distorted. Like standard weak interactions, the non-standard interactions can be categorized into two groups: Charged Current (CC) NSI and Neutral Current (NC) NSI. Our focus will be mainly on neutral current NSI, because it is possible to build a class of models that give rise to sizeable NC NSI with discernible effects on neutrino oscillation. These models are based on a new U(1) gauge symmetry with a gauge boson of mass ≲ 10 MeV. The UV complete model should of course be electroweak invariant, which in general implies that along with neutrinos, charged
On a saddlepoint approximation to the Markov binomial distribution
DEFF Research Database (Denmark)
Jensen, Jens Ledet
A nonstandard saddlepoint approximation to the distribution of a sum of Markov dependent trials is introduced. The relative error of the approximation is studied, not only for the number of summands tending to infinity, but also for the parameter approaching the boundary of its definition range. ...
Filling in the gaps with non-standard body fluids.
Lo, Sheng-Ying; Saifee, Nabiha H; Mason, Brook O; Greene, Dina N
2016-08-01
Body fluid specimens other than serum, plasma or urine are generally not validated by manufacturers, but analysis of these non-standard fluids can be important for clinical diagnosis and management. Laboratories, therefore, rely on the published literature to better understand the validation and implementation of such tests. This study utilized a data-driven approach to determine the clinical reportable range for 11 analytes, evaluated a total bilirubin assay, and assessed interferences from hemolysis, icterus, and lipemia in non-standard fluids. Historical measurements in non-standard body fluids run on a Beckman Coulter DxC800 were used to optimize population-specific clinical reportable ranges for albumin, amylase, creatinine, glucose, lactate dehydrogenase, lipase, total bilirubin, total cholesterol, total protein, triglyceride and urea nitrogen run on the Beckman Coulter AU680. For these 11 analytes, interference studies were performed by spiking hemolysate, bilirubin, or Intralipid® into abnormal serous fluids. Precision, accuracy, linearity, and stability of total bilirubin in non-standard fluids were evaluated on the Beckman Coulter AU680 analyzer. The historical non-standard fluid results indicated that in order to report a numeric result, 4 assays required no dilution, 5 assays required onboard dilutions and 2 assays required both onboard and manual dilutions. The AU680 total bilirubin assay is suitable for clinical testing of non-standard fluids. Interference studies revealed that of the 11 total AU680 analyte measurements on non-standard fluids, lipemia affected 1, icterus affected 3, and hemolysis affected 5. Chemistry analytes measured on the AU680 demonstrate acceptable analytical performance for non-standard fluids. Common endogenous interferences from lipemia, icterus, and hemolysis (LIH) are observed, and flagging rules based on LIH indices were developed to help improve the clinical interpretation of results.
Integral approximants for functions of higher monodromic dimension
Energy Technology Data Exchange (ETDEWEB)
Baker, G.A. Jr.
1987-01-01
In addition to the description of multiform, locally analytic functions as covering a many-sheeted version of the complex plane, Riemann also introduced the notion of considering them as describing a space whose "monodromic" dimension is the number of linearly independent coverings by the monogenic analytic function at each point of the complex plane. I suggest that this latter concept is natural for integral approximants (a sub-class of Hermite-Pade approximants) and discuss results for both "horizontal" and "diagonal" sequences of approximants. Some theorems are now available in both cases and make clear that the natural domain of convergence of the horizontal sequences is a disk centered on the origin, while that of the diagonal sequences is a suitably cut complex plane together with its identically cut pendant Riemann sheets. Some numerical examples have also been computed.
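For context, the ordinary Padé construction that integral and Hermite-Pade approximants generalize can be computed directly from Taylor coefficients: the denominator of the [L/M] approximant P/Q is fixed by requiring the series of f·Q − P to vanish through order L+M. A small pure-Python sketch (Gaussian elimination is adequate at these sizes):

```python
def pade(c, L, M):
    """Compute [L/M] Pade approximant coefficients (a, b) from Taylor
    coefficients c[0..L+M] of f, normalizing b[0] = 1.  The denominator
    solves sum_j b[j]*c[k-j] = 0 for k = L+1..L+M."""
    n = M
    # Linear system A * [b1..bM] = rhs, with c_k = 0 for k < 0
    A = [[(c[L + 1 + i - j] if 0 <= L + 1 + i - j < len(c) else 0.0)
          for j in range(1, M + 1)] for i in range(n)]
    rhs = [-c[L + 1 + i] for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= f * A[col][cc]
            rhs[r] -= f * rhs[col]
    b_tail = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = rhs[r] - sum(A[r][cc] * b_tail[cc] for cc in range(r + 1, n))
        b_tail[r] = s / A[r][r]
    b = [1.0] + b_tail
    # Numerator follows from matching the low-order terms
    a = [sum(b[j] * c[k - j] for j in range(min(k, M) + 1)) for k in range(L + 1)]
    return a, b
```

For exp(x) with L = M = 2 this reproduces the classical (1 + x/2 + x²/12)/(1 − x/2 + x²/12), which already matches e at x = 1 to about four parts in a thousand.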
Metric and topology on a non-standard real line and non-standard space-time
International Nuclear Information System (INIS)
Tahir Shah, K.
1981-04-01
We study metric and topological properties of the extended real line R* and compare it with the non-standard model of the real line *R. We show that some properties, like the triangle inequality, cannot be carried over to R* from R. This confirms F. Wattenberg's result for measure theory on the Dedekind completion of *R. Based on these results, we propose a non-standard model of space-time. This space-time is without undefined objects like singularities. (author)
Homotopy Analysis and Padé Methods for Solving Two Nonlinear Equations
Directory of Open Access Journals (Sweden)
A. Golbabai
2011-09-01
Full Text Available In this paper, we give analytic approximate solutions to nonlinear PDEs using the Homotopy Analysis Method (HAM) and the Homotopy Padé Method (HPadéM). The HAM contains the auxiliary parameter h, which provides a simple way to adjust and control the convergence region of the solution series. It is illustrated that the HPadéM accelerates the convergence of the related series. The results reveal that these methods are remarkably effective.
Nonstandard work arrangements and worker health and safety.
Howard, John
2017-01-01
Arrangements between those who perform work and those who provide jobs come in many different forms. Standard work arrangements now exist alongside several nonstandard arrangements: agency work, contract work, and gig work. While standard work arrangements are still the most prevalent types, the rise of nonstandard work arrangements, especially temporary agency, contract, and "gig" arrangements, and the potential effects of these new arrangements on worker health and safety have captured the attention of government, business, labor, and academia. This article describes the major work arrangements in use today, profiles the nonstandard workforce, discusses several legal questions about how established principles of labor and employment law apply to nonstandard work arrangements, summarizes findings published in the past 20 years about the health and safety risks for workers in nonstandard work arrangements, and outlines current research efforts in the area of healthy work design and worker well-being. Am. J. Ind. Med. 60:1-10, 2017. © 2016 Wiley Periodicals, Inc.
Non-standard interactions using the OPERA experiment
International Nuclear Information System (INIS)
Blennow, Mattias; Meloni, Davide; Ohlsson, Tommy; Westerberg, Mattias; Terranova, Francesco
2008-01-01
We investigate the implications of non-standard interactions on neutrino oscillations in the OPERA experiment. In particular, we study the non-standard interaction parameter ε_μτ. We show that the OPERA experiment has a unique opportunity to reduce the allowed region for this parameter compared with other experiments such as the MINOS experiment, mostly due to the higher neutrino energies in the CNGS beam compared to the NuMI beam. We find that OPERA is mainly sensitive to a combination of standard and non-standard parameters and that a resulting anti-resonance effect could suppress the expected number of events. Furthermore, we show that running OPERA for five years each with neutrinos and anti-neutrinos would help in resolving the degeneracy between the standard parameters and ε_μτ. This scenario is significantly better than a simple doubling of the statistics by running with neutrinos for ten years. (orig.)
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Directory of Open Access Journals (Sweden)
Oluwaseun Egbelowo
2017-05-01
Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
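As a concrete illustration of why NSFD schemes suit linear PK dynamics (an illustrative model, not the paper's exact system), consider the one-compartment I.V. infusion model C′ = rate − k·C. Replacing the step h by the denominator function φ(h) = (1 − e^{−kh})/k makes the explicit update exact at the grid points for any step size:

```python
import math

def nsfd_infusion(c0, k, rate, h, steps):
    """NSFD scheme for the one-compartment infusion model C' = rate - k*C.

    With phi(h) = (1 - exp(-k*h))/k in place of h, the explicit update
        (C_{n+1} - C_n)/phi = rate - k*C_n
    reproduces the exact solution at the grid points for any h > 0.
    """
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    for _ in range(steps):
        c = c * (1.0 - k * phi) + phi * rate   # note: 1 - k*phi = exp(-k*h)
    return c
```

The iteration converges to the steady-state concentration rate/k regardless of h, which is the dynamic consistency property the abstract refers to: the discrete model inherits the fixed point and monotone approach of the continuous one.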
Cardiovascular health status between standard and nonstandard workers in Korea.
Directory of Open Access Journals (Sweden)
Jong Ju Seon
Full Text Available The effect of employment insecurity on employee health is an important public health issue due to the recent effects of neoliberalism and the global financial crisis (2007-2008) on labor markets. This study aims to evaluate the differences in cardiovascular health status and the use of preventive screening services between standard and nonstandard workers. Waged employees (N = 5,338) between the ages of 20 and 64 were grouped into standard (full-time, permanent) and nonstandard (part-time, temporary, or daily) employees. Data from the Fourth Korea National Health and Nutrition Examination Survey, 2007-2009, a nationwide representative survey, were examined, including cardiovascular health risk behaviors (tobacco, alcohol, physical inactivity), measured morbidities (blood pressure, blood glucose level, lipid profiles, body mass index), and the use of screening services for hypertension and diabetes mellitus. Female nonstandard employees tended to have higher blood pressure than did female standard employees (adjusted odds ratio, aOR 1.42; 95% confidence interval, CI 1.02 to 1.98). However, nonstandard employees (both men and women) were less likely to use preventive screening services for hypertension (aOR 0.72, 95% CI 0.54 to 0.94 in men; aOR 0.56, 95% CI 0.43 to 0.73 in women) and diabetes (aOR 0.58, 95% CI 0.43 to 0.79 in men; aOR 0.55, 95% CI 0.43 to 0.71 in women). Nonstandard work is associated with the underuse of screening services and poorer cardiovascular health in a specific population. Policies to reduce employment insecurity and encourage nonstandard employees to receive health screening services should be prioritized.
International Nuclear Information System (INIS)
Palma, Daniel A.; Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C.
2008-01-01
The activation technique allows much more precise measurements of neutron intensity, relative or absolute. The technique requires knowledge of the Doppler broadening function ψ(x,ξ) to determine the resonance self-shielding factors G_epi(τ,ξ) in the epithermal range. Two new analytical approximations for the Doppler broadening function ψ(x,ξ) are proposed. The proposed approximations are compared with other methods found in the literature for the calculation of ψ(x,ξ), namely the 4-pole Padé method and the Frobenius method, when applied to the calculation of G_epi(τ,ξ). The results obtained provide satisfactory accuracy. (authors)
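For reference, in the convention where ψ reduces to the natural line shape 1/(1+x²) as ξ → ∞, the Doppler broadening function is the Gaussian-Lorentzian convolution ψ(x,ξ) = (ξ/2√π) ∫ exp(−ξ²(x−y)²/4)/(1+y²) dy. Any analytical approximation (such as the ones discussed here) can be checked against direct quadrature; the sketch below is such a reference evaluator, not the authors' proposed approximations:

```python
import math

def psi(x, xi, n=4000):
    """Doppler broadening function psi(x, xi) by trapezoidal quadrature.

    The Gaussian kernel exp(-xi^2 (x-y)^2 / 4) is negligible beyond
    |x - y| > 10/xi (exponent < -25), so the integral is truncated there.
    """
    half = 10.0 / xi
    h = 2.0 * half / n
    ys = [x - half + i * h for i in range(n + 1)]
    vals = [math.exp(-0.25 * xi * xi * (x - y) ** 2) / (1.0 + y * y) for y in ys]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return xi / (2.0 * math.sqrt(math.pi)) * integral
```

As a sanity check, ψ is even in x, and for large ξ (narrow Gaussian) it approaches the Lorentzian 1/(1+x²).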
Nonstandard Work Schedules and Partnership Quality : Quantitative and Qualitative Findings
Mills, Melinda; Täht, K
This article questions existing findings and provides new evidence about the consequences of nonstandard work schedules on partnership quality. Using quantitative couple data from The Netherlands Kinship Panel Study (NKPS) (N = 3,016) and semistructured qualitative interviews (N = 34), we found
Combining semantics with non-standard interpreter hierarchies
DEFF Research Database (Denmark)
Abramov, Sergei M.; Glück, Robert
2000-01-01
This paper reports on results concerning the combination of non-standard semantics via interpreters. We define what a semantics combination means and identify under which conditions a combination can be realized by computer programs (robustness, safely combinable). We develop the underlying mathe...
Nonstandard bosons in deep inelastic e±-p scattering
International Nuclear Information System (INIS)
Christova, E.C.
1990-07-01
Different polarization asymmetries in deep inelastic scattering of charged leptons on protons as possible model independent tests for nonstandard weak neutral spin-0 and spin-2 bosons are discussed. They are based only on the general vector, scalar and tensor structure of the lepton-hadron interactions. Expressions that can be used for numerical analysis are obtained. (author). 10 refs
Nonstandard interpretation of quantum electrodynamics and renormalization theory
International Nuclear Information System (INIS)
Dinariev, O.Yu.; Mosolov, A.B.
1986-01-01
Operations with infinite renormalization constants are shown to become physically sensible, if one consitderes electrodynamics not over the field of real number, but over its non-standard expansion. A classic scheme of the Bogolyubov-Parasyuk renormalization theory in application to spinor electrodynamics is briefly described
Non-standard testing of mechanical characteristics of historic mortars
Czech Academy of Sciences Publication Activity Database
Drdácký, Miloš
2011-01-01
Roč. 5, 4-5 (2011), s. 383-394 ISSN 1558-3058 R&D Projects: GA ČR(CZ) GA103/09/2067 Institutional research plan: CEZ:AV0Z20710524 Keywords : non-standard test specimen * historic mortar * compressive strength Subject RIV: AL - Art, Architecture, Cultural Heritage Impact factor: 0.235, year: 2011
Non-Standard Workers: The South African Context, International ...
African Journals Online (AJOL)
Non-Standard Workers: The South African Context, International Law and Regulation by The European Union. ... Most of these workers are unskilled or work in sectors with limited trade union organisation and limited coverage by collective bargaining, leaving them vulnerable to exploitation. They should, in theory, have the ...
Fragment of Nonstandard Analysis with a Finitary Consistency Proof
Czech Academy of Sciences Publication Activity Database
Rössler, M.; Jeřábek, Emil
2007-01-01
Roč. 13, č. 1 (2007), s. 54-70 ISSN 1079-8986 Institutional research plan: CEZ:AV0Z10190503 Keywords : nonstandard analysis * finitism * proof theory Subject RIV: BA - General Mathematics Impact factor: 0.921, year: 2007
Non-Standard Neutrino Interactions : Obviating Oscillation Experiments
Choudhury, Debajyoti; Ghosh, Kirtiman; Niyogi, Saurabh
2018-01-01
Searching for non-standard neutrino interactions, as a means of discovering physics beyond the Standard Model, has been one of the key goals of dedicated neutrino experiments, current and future. We demonstrate here that much of the parameter space accessible to such experiments is already ruled out by the Run II data of the Large Hadron Collider experiment.
Non-standard employment relations and wages among school-leavers in the Netherlands
de Vries, M.R.; Wolbers, M.H.J.
2005-01-01
Non-standard (alternatively, flexible) employment has become common in the Netherlands, and is viewed as an important weapon for combating youth unemployment. However, if such jobs are 'bad', non-standard employment becomes a matter of concern. In addition, non-standard employment may hit the least
Standard and nonstandard Turing patterns and waves in the CIMA reaction
Rudovics, B.; Dulos, E.; de Kepper, P.
1996-01-01
We describe experimental evidence of stable triangular and hexagon-band mixed mode nonstandard patterns, in a three-dimensional chemical reaction-diffusion system with steep gradients of chemical constraints. These gradients confine the structures in a more or less thick stratum of the system. At onset, patterns develop in monolayers which approximate two-dimensional systems; but beyond onset, three-dimensional aspects have to be considered. We show that the nonstandard pattern symmetries result from the coupling of standard hexagonal and striped pattern modes which develop at adjacent positions, due to the differences in parameter values along the direction of the gradients. We evidence a Turing-Hopf codimension-2 point and show that some mixed mode chaotic dynamics, reminiscent of spatio-temporal intermittency combining the Turing and the Hopf modes, are also a consequence of the three-dimensional aspect of the structure. The relations between these observations and the theoretical studies performed in genuine two-dimensional systems are still open to discussion.
Indian Academy of Sciences (India)
IAS Admin
V S Borkar is the Institute Chair Professor of Electrical Engineering at IIT Bombay. His research interests are stochastic optimization: theory, algorithms and applications. 1 'Markov Chain Monte Carlo' is another one (see [1]), not to mention schemes that combine both. Stochastic approximation is one of the unsung.
Nonstandard jump functions for radially symmetric shock waves
International Nuclear Information System (INIS)
Baty, Roy S.; Tucker, Don H.; Stanescu, Dan
2008-01-01
Nonstandard analysis is applied to derive generalized jump functions for radially symmetric, one-dimensional, magnetogasdynamic shock waves. It is assumed that the shock wave jumps occur on infinitesimal intervals and the jump functions for the physical parameters occur smoothly across these intervals. Locally integrable predistributions of the Heaviside function are used to model the flow variables across a shock wave. The equations of motion expressed in nonconservative form are then applied to derive unambiguous relationships between the jump functions for the physical parameters for two families of self-similar flows. It is shown that the microstructures for these families of radially symmetric, magnetogasdynamic shock waves coincide in a nonstandard sense for a specified density jump function.
Workshop on CP Studies and Non-Standard Higgs Physics
Accomando, E.; Akhmetzyanova, E.; Albert, J.; Alves, A.; Amapane, N.; Aoki, M.; Azuelos, G.; Baffioni, S.; Ballestrero, A.; Barger, V.; Bartl, A.; Bechtle, P.; Blanger, G.; Belhouari, A.; Bellan, R.; Belyaev, A.; Benes, Petr; Benslama, K.; Bernreuther, W.; Besanon, M.; Bevilacqua, G.; Beyer, M.; Bluj, M.; Bolognesi, S.; Boonekamp, M.; Borzumati, Francesca; Boudjema, F.; Brandenburg, A.; Brauner, Tomas; Buszello, C.P.; Butterworth, J.M.; Carena, Marcela; Cavalli, D.; Cerminara, G.; Choi, S.Y.; Clerbaux, B.; Collard, C.; Conley, John A.; Deandrea, A.; De Curtis, S.; Dermisek, R.; De Roeck, A.; Dewhirst, G.; Diaz, M.A.; Diaz-Cruz, J.L.; Dietrich, D.D.; Dolgopolov, M.; Dominici, D.; Dubinin, M.; Eboli, O.; Ellis, John R.; Evans, N.; Fano, L.; Ferland, J.; Ferrag, S.; Fitzgerald, S.P.; Fraas, H.; Franke, F.; Gennai, S.; Ginzburg, I.F.; Godbole, R.M.; Gregoire, T.; Grenier, Gerald Jean; Grojean, C.; Gudnason, S.B.; Gunion, J.F.; Haber, H.E.; Hahn, T.; Han, T.; Hankele, V.; Hays, Christopher Paul; Heinemeyer, S.; Hesselbach, S.; Hewett, J.L.; Hidaka, K.; Hirsch, M.; Hollik, W.; Hooper, D.; Hosek, J.; Hubisz, J.; Hugonie, C.; Kalinowski, J.; Kanemura, S.; Kashkan, V.; Kernreiter, T.; Khater, W.; Khoze, V.A.; Kilian, W.; King, S.F.; Kittel, O.; Klamke, G.; Kneur, J.L.; Kouvaris, C.; Kraml, S.; Krawczyk, M.; Krstonoic, P.; Kyriakis, A.; Langacker, P.; Le, M.P.; Lee, H.-S.; Lee, J.S.; Lemaire, M.C.; Liao, Y.; Lillie, B.; Litvine, Vladimir A.; Logan, H.E.; McElrath, Bob; Mahmoud, T.; Maina, E.; Mariotti, C.; Marquard, P.; Martin, A.D.; Mazumdar, K.; Miller, D.J.; Min, P.; Monig, Klaus; Moortgat-Pick, G.; Moretti, S.; Muhlleitner, M.M.; Munir, S.; Nevzorov, R.; Newman, H.; Niezurawski, P.; Nikitenko, A.; Noriega-Papaqui, R.; Okada, Y.; Osland, P.; Pilaftsis, A.; Porod, W.; Przysiezniak, H.; Pukhov, A.; Rainwater, D.; Raspereza, A.; Reuter, J.; Riemann, S.; Rindani, S.; Rizzo, T.G.; Ros, E.; Rosado, A.; Rousseau, D.; Roy, D.P.; Ryskin, M.G.; Rzehak, H.; Sannino, F.; Schmidt, E.; 
Schröder, H.; Schumacher, M.; Semenov, A.; Senaha, E.; Shaughnessy, G.; Singh, R.K.; Terning, J.; Vacavant, L.; Velasco, M.; Villanova del Moral, Albert; von der Pahlen, F.; Weiglein, G.; Williams, J.; Williams, K.E.; Zarnecki, A.F.; Zeppenfeld, D.; Zerwas, D.; Zerwas, P.M.; Zerwekh, A.R.; Ziethe, J.; 2nd Workshop on CP Studies and Non-standard Higgs Physics; 3rd Workshop on CP Studies and Non-standard Higgs Physics; 4th Workshop on CP Studies and Non-standard Higgs Physics; CPNSH; Workshop on CP Studies and Non-standard Higgs Physics; CP Studies and Non-Standard Higgs Physics
2006-01-01
There are many possibilities for new physics beyond the Standard Model that feature non-standard Higgs sectors. These may introduce new sources of CP violation, and there may be mixing between multiple Higgs bosons or other new scalar bosons. Alternatively, the Higgs may be a composite state, or there may even be no Higgs at all. These non-standard Higgs scenarios have important implications for collider physics as well as for cosmology, and understanding their phenomenology is essential for a full comprehension of electroweak symmetry breaking. This report discusses the most relevant theories which go beyond the Standard Model and its minimal, CP-conserving supersymmetric extension: two-Higgs-doublet models and minimal supersymmetric models with CP violation, supersymmetric models with an extra singlet, models with extra gauge groups or Higgs triplets, Little Higgs models, models in extra dimensions, and models with technicolour or other new strong dynamics. For each of these scenarios, this report presents ...
Non-standard work schedules, gender, and parental stress
Czech Academy of Sciences Publication Activity Database
Lozano, M.; Hamplová, Dana; Le Bourdais, C.
2016-01-01
Roč. 34, č. 9 (2016), s. 259-284 ISSN 1435-9871 R&D Projects: GA ČR(CZ) GA14-15008S Institutional support: RVO:68378025 Keywords : stress * employment * non-standard work hours Subject RIV: AO - Sociology, Demography Impact factor: 1.320, year: 2016 http://www.demographic-research.org/volumes/vol34/9/default.htm
Engineering posttranslational proofreading to discriminate nonstandard amino acids
Kunjapur, Aditya M.; Stork, Devon A.; Kuru, Erkin; Vargas-Rodriguez, Oscar; Landon, Matthieu; Söll, Dieter; Church, George M.
2018-01-01
Incorporation of nonstandard amino acids (nsAAs) leads to chemical diversification of proteins, which is an important tool for the investigation and engineering of biological processes. However, the aminoacyl-tRNA synthetases crucial for this process are polyspecific in regard to nsAAs and standard amino acids. Here, we develop a quality control system called “posttranslational proofreading” to more accurately and rapidly evaluate nsAA incorporation. We achieve this proofreading by hijacking ...
Quantum Fields, Dark Matter and Non-Standard Wigner Classes
Gillard, A. B.; Martin, B. M. S.
2010-12-01
The Elko field of Ahluwalia and Grumiller is a quantum field for massive spin-1/2 particles. It has been suggested as a candidate for dark matter. We discuss our attempts to interpret the Elko field as a quantum field in the sense of Weinberg. Our work suggests that one should investigate quantum fields based on representations of the full Poincaré group which belong to one of the non-standard Wigner classes.
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
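The central trick alluded to here, using a classifier's output s(x) to approximate the likelihood ratio as s/(1-s), can be sketched on a toy problem. This is an illustrative sketch only: the Gaussian "signal" and "background" densities and the hand-rolled logistic-regression fit are assumptions, not the machinery used in LHC analyses.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "signal" and "background" samples, stand-ins for simulator output.
sig = rng.normal(+1.0, 1.0, 20000)
bkg = rng.normal(-1.0, 1.0, 20000)
x = np.concatenate([sig, bkg])
y = np.concatenate([np.ones_like(sig), np.zeros_like(bkg)])

# Fit a logistic-regression classifier s(x) = sigmoid(w*x + b) by plain
# gradient descent on the cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(1000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((s - y) * x)
    b -= 0.5 * np.mean(s - y)

def likelihood_ratio(x0):
    """Approximate p_sig(x0)/p_bkg(x0) via s/(1-s), s = P(signal | x0)."""
    s = 1.0 / (1.0 + np.exp(-(w * x0 + b)))
    return s / (1.0 - s)

# For these Gaussians the exact ratio is exp(2*x0); the classifier-based
# estimate should at least reproduce the direction of the effect.
print(likelihood_ratio(1.0), likelihood_ratio(-1.0))
```

The same construction carries over to classifiers parameterized by masses, couplings, or nuisance parameters, which is the extension the talk describes.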
Structure of the optimized effective Kohn-Sham exchange potential and its gradient approximations
International Nuclear Information System (INIS)
Gritsenko, O.; Van Leeuwen, R.; Baerends, E.J.
1996-01-01
An analysis of the structure of the optimized effective Kohn-Sham exchange potential v_x and its gradient approximations is presented. The potential is decomposed into the Slater potential v_s and the response of v_s to density variations, v_resp. The latter exhibits peaks that reflect the atomic shell structure. Kohn-Sham exchange potentials derived from current gradient approaches for the exchange energy are shown to be quite reasonable for the Slater potential, but they fail to approximate the response part, which leads to poor overall potentials. Improved potentials are constructed by a direct fit of v_x with a gradient-dependent Padé approximant form. The potentials obtained possess proper asymptotic and scaling properties and reproduce the shell structure of the exact v_x. 44 refs., 7 figs., 4 tabs
Neutrino propagation in binary neutron star mergers in presence of nonstandard interactions
Chatelain, Amélie; Volpe, Maria Cristina
2018-01-01
We explore the impact of nonstandard interactions on neutrino propagation in accretion disks around binary neutron star merger remnants. We show flavor evolution can be significantly modified even for values of the nonstandard couplings well below current bounds. We demonstrate the occurrence of inner resonances as synchronized MSW phenomena and show that intricate conversion patterns might appear depending on the nonstandard interaction parameters. We discuss the possible implications for nucleosynthesis.
S and T Parameters from a Light Nonstandard Higgs versus Near Conformal Dynamics
DEFF Research Database (Denmark)
Foadi, Roshan; Sannino, Francesco
2013-01-01
We determine the contribution to the $S$ and $T$ parameters coming from extensions of the standard model featuring a light nonstandard-like Higgs particle. We neatly separate, using the Landau gauge, the contribution from the purely nonstandard Higgs sector from the one due to the interplay...... of this sector with the standard model. If the nonstandard Higgs sector derives from a new type of near conformal dynamics, the formalism allows one to precisely link the intrinsic underlying contribution with the experimentally relevant parameters....
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
Energy Technology Data Exchange (ETDEWEB)
D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite ...
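The load/flexibility iteration this abstract describes can be sketched as a fixed-point loop. The two inner functions below are toy stand-ins for the linear system analysis and the nonlinear component FEA, not ASME Code calculations; names and constants are invented for illustration.

```python
import math

def converge_flexibility(system_loads, flexibility_from_fea, k0,
                         tol=1e-8, max_iter=200):
    """Iterate until the flexibility factor used by the linear system
    analysis is consistent with the one the component FEA produces
    under the resulting loads."""
    k = k0
    for _ in range(max_iter):
        load = system_loads(k)              # linear system analysis
        k_new = flexibility_from_fea(load)  # nonlinear component analysis
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    raise RuntimeError("flexibility iteration did not converge")

# Toy stand-ins: load falls as the component gets more flexible, and the
# fitted flexibility factor grows mildly with applied load.
k = converge_flexibility(lambda kk: 100.0 / (1.0 + kk),
                         lambda load: 1.0 + 0.01 * load,
                         k0=1.0)
print(round(k, 4))  # converges to sqrt(2) for these stand-ins
```

For these stand-ins the fixed point solves k = 1 + 1/(1 + k), i.e. k = sqrt(2); in practice each evaluation of the inner functions would be a full analysis run.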
Doubling The Past Hypothesis: Observations On Two Nonstandard Third Conditionals
Directory of Open Access Journals (Sweden)
Cehan Nadina
2014-03-01
The paper briefly looks at two nonstandard conditional constructions, if [Su] had have [pp] and if [Su] would have [pp], which present anomalous components. Various works mentioning them have been analysed, leading to the conclusion that the forms have not been treated seriously or exhaustively. Following a small study which tries to establish their spread in the language, the paper concludes that some questions remain unanswered, such as whether the constructions can be characterised according to their geographical spread, their exact vernacular status, and to what extent they may coexist alongside the standard form in a person’s idiolect.
Capture reactions on C-14 in nonstandard big bang nucleosynthesis
Wiescher, Michael; Gorres, Joachim; Thielemann, Friedrich-Karl
1990-01-01
Nonstandard big bang nucleosynthesis leads to the production of C-14. The further reaction path depends on the depletion of C-14 by either photon, alpha, or neutron capture reactions. The nucleus C-14 is of particular importance in these scenarios because it forms a bottleneck for the production of heavier nuclei with A > 14. The reaction rates of all three capture reactions at big bang conditions are discussed, and it is shown that the resulting reaction path, leading to the production of heavier elements, is dominated by the (p, gamma) and (n, gamma) rates, contrary to earlier suggestions.
CP Studies and Non-Standard Higgs Physics
DEFF Research Database (Denmark)
Kraml, S.; Accomando, E.; Akeroyd, A.G.
2006-01-01
There are many possibilities for new physics beyond the Standard Model that feature non-standard Higgs sectors. These may introduce new sources of CP violation, and there may be mixing between multiple Higgs bosons or other new scalar bosons. Alternatively, the Higgs may be a composite state...... which go beyond the Standard Model and its minimal, CP-conserving supersymmetric extension: two-Higgs-doublet models and minimal supersymmetric models with CP violation, supersymmetric models with an extra singlet, models with extra gauge groups or Higgs triplets, Little Higgs models, models in extra...
Nonstandard electroconvection and flexoelectricity in nematic liquid crystals.
Krekhov, Alexei; Pesch, Werner; Eber, Nándor; Tóth-Katona, Tibor; Buka, Agnes
2008-02-01
For many years it has been commonly accepted that electroconvection (EC) as a primary instability in nematic liquid crystals for the "classical" planar geometry requires a positive anisotropy of the electric conductivity, σ_a, and a slightly negative dielectric anisotropy, ε_a. This firm belief was supported by many experimental and theoretical studies. Recent experiments, which have surprisingly revealed EC patterns at negative conduction anisotropy as well, have motivated the theoretical studies in this paper. It will be demonstrated that extending the common hydrodynamic description of nematics by the usually neglected flexoelectric effect allows for a simple explanation of EC in the "nonstandard" case σ_a < 0.
Constraints on the Nonstandard Interaction in Propagation from Atmospheric Neutrinos
Directory of Open Access Journals (Sweden)
Shinya Fukasawa
2015-01-01
The sensitivity of the atmospheric neutrino experiments to the nonstandard flavor-dependent interaction in neutrino propagation is studied under the assumption that the only nonvanishing components of the nonstandard matter effect are the electron and tau neutrino components ε_ee, ε_eτ, and ε_ττ, and that the tau-tau component satisfies the constraint ε_ττ = |ε_eτ|²/(1 + ε_ee), which is suggested by the high-energy behavior of the atmospheric neutrino data. It is shown that the Super-Kamiokande (SK) data for 4438 days constrain |tan β| ≡ |ε_eτ/(1 + ε_ee)| ≲ 0.8 at 2.5σ (98.8% CL), whereas the future Hyper-Kamiokande experiment, for the same period of time as SK, will constrain |tan β| ≲ 0.3 at 2.5σ CL from the energy rate analysis, and the energy spectrum analysis will give even tighter bounds on ε_ee and |ε_eτ|.
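The constraint quoted in this abstract is simple arithmetic on the NSI parameters; a direct encoding follows. The numerical values in the example are arbitrary illustrations, not fit results from the paper.

```python
def eps_tautau(eps_ee, eps_etau):
    """tau-tau NSI component implied by the high-energy constraint
    eps_tautau = |eps_etau|**2 / (1 + eps_ee)."""
    return abs(eps_etau) ** 2 / (1.0 + eps_ee)

def tan_beta(eps_ee, eps_etau):
    """|tan(beta)| = |eps_etau / (1 + eps_ee)|, the combination bounded by
    the SK analysis (~0.8) and the projected HK analysis (~0.3)."""
    return abs(eps_etau / (1.0 + eps_ee))

# Illustrative point: eps_ee = 0.0, |eps_etau| = 0.5
print(eps_tautau(0.0, 0.5), tan_beta(0.0, 0.5))  # 0.25 0.5
```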
Curtailing the dark side in non-standard neutrino interactions
Energy Technology Data Exchange (ETDEWEB)
Coloma, Pilar [Theoretical Physics Department, Fermi National Accelerator Laboratory,P.O. Box 500, Batavia, IL 60510 (United States); Denton, Peter B. [Theoretical Physics Department, Fermi National Accelerator Laboratory,P.O. Box 500, Batavia, IL 60510 (United States); Niels Bohr International Academy, University of Copenhagen, The Niels Bohr Institute,Blegdamsvej 17, DK-2100, Copenhagen (Denmark); Gonzalez-Garcia, M.C. [Departament de Fisíca Quàntica i Astrofísica and Institut de Ciencies del Cosmos,Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Institució Catalana de Recerca i Estudis Avançats (ICREA),Pg. Lluis Companys 23, 08010 Barcelona (Spain); C.N. Yang Institute for Theoretical Physics, Stony Brook University,Stony Brook, NY 11794-3840 (United States); Maltoni, Michele [Instituto de Física Teórica UAM/CSIC, Universidad Autónoma de Madrid,Calle de Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Schwetz, Thomas [Institut für Kernphysik, Karlsruher Institut für Technologie (KIT), D-76021 Karlsruhe (Germany)
2017-04-20
In the presence of non-standard neutrino interactions the neutrino flavor evolution equation is affected by a degeneracy which leads to the so-called LMA-Dark solution. It requires a solar mixing angle in the second octant and implies an ambiguity in the neutrino mass ordering. Non-oscillation experiments are required to break this degeneracy. We perform a combined analysis of data from oscillation experiments with the neutrino scattering experiments CHARM and NuTeV. We find that the degeneracy can be lifted if the non-standard neutrino interactions take place with down quarks, but it remains for up quarks. However, CHARM and NuTeV constraints apply only if the new interactions take place through mediators not much lighter than the electroweak scale. For light mediators we consider the possibility to resolve the degeneracy by using data from future coherent neutrino-nucleus scattering experiments. We find that, for an experiment using a stopped-pion neutrino source, the LMA-Dark degeneracy will either be resolved, or the presence of new interactions in the neutrino sector will be established with high significance.
Non-standard Employment in the Nordics – towards precarious work?
DEFF Research Database (Denmark)
Rasmussen, Stine; Nätti, Jouko; Larsen, Trine Pernille
2018-01-01
and whether fixed-term contracts, temporary agency work, marginal part-time work and solo self-employment have precarious elements (income or job insecurity). We conclude that non-standard employment has remained rather stable in all four countries over time. However, although non-standard employment seems...
Winkler, Megan R; Mason, Susan; Laska, Melissa N; Christoph, Mary J; Neumark-Sztainer, Dianne
2018-04-01
The last century has seen dramatic shifts in population work circumstances, leading to an increasing normalization of non-standard work schedules (NSWSs), defined as non-daytime, irregular hours. An ever-growing body of evidence links NSWSs to a host of non-communicable chronic conditions; yet, these associations primarily concentrate on the physiologic mechanisms created by circadian disruption and insufficient sleep. While important, not all NSWSs create such chronobiologic disruption, and other aspects of working time and synchronization could be important to the relationships between work schedules and chronic disease. Leveraging survey data from Project EAT, a population-based study with health-related behavioral and psychological data from U.S. adults aged 25-36 years, this study explored the risks for a broad range of less healthful behavioral and well-being outcomes among NSWS workers compared to standard schedule workers (n = 1402). Variations across different NSWSs (evening, night/rotating, and irregular schedules) were also explored. Results indicated that, relative to standard schedule workers, workers with NSWSs are at increased risk for non-optimal sleep, substance use, greater recreational screen time, worse dietary practices, obesity, and depression. There was minimal evidence to support differences in relative risks across workers with different types of NSWSs. The findings provide insight into the potential links between NSWSs and chronic disease and indicate the relevance that social disruption and daily health practices may have in the production of health and well-being outcomes among working populations.
Integral operators in non-standard function spaces
Kokilashvili, Vakhtang; Rafeiro, Humberto; Samko, Stefan
2016-01-01
This book, the result of the authors’ long and fruitful collaboration, focuses on integral operators in new, non-standard function spaces and presents a systematic study of the boundedness and compactness properties of basic, harmonic analysis integral operators in the following function spaces, among others: variable exponent Lebesgue and amalgam spaces, variable Hölder spaces, variable exponent Campanato, Morrey and Herz spaces, Iwaniec-Sbordone (grand Lebesgue) spaces, grand variable exponent Lebesgue spaces unifying the two spaces mentioned above, grand Morrey spaces, generalized grand Morrey spaces, and weighted analogues of some of them. The results obtained are widely applied to non-linear PDEs, singular integrals and PDO theory. One of the book’s most distinctive features is that the majority of the statements proved here are in the form of criteria. The book is intended for a broad audience, ranging from researchers in the area to experts in applied mathematics and prospective students.
Colombeau's generalized functions and non-standard analysis
International Nuclear Information System (INIS)
Todorov, T.D.
1987-10-01
Using some methods of the Non-Standard Analysis we modify one of Colombeau's classes of generalized functions. As a result we define a class ε̂ of the so-called meta-functions which possesses all good properties of Colombeau's generalized functions, i.e. (i) ε̂ is an associative and commutative algebra over the system of the so-called complex meta-numbers Ĉ; (ii) every meta-function has partial derivatives of any order (which are meta-functions again); (iii) every meta-function is integrable on any compact set of R^n and the integral is a number from Ĉ; (iv) ε̂ contains all tempered distributions S', i.e. S' is contained in ε̂ isomorphically with respect to all linear operations (including the differentiation). Thus, within the class ε̂ the problem of multiplication of the tempered distributions is satisfactorily solved (every two distributions in S' have a well-defined product in ε̂). The crucial point is that Ĉ is a field, in contrast to the system of Colombeau's generalized numbers C̄, which is a ring only (C̄ is the counterpart of Ĉ in Colombeau's theory). In this way we simplify and slightly improve the properties of the integral and the notion of "values of the meta-functions", as well as the properties of the whole class ε̂ itself, compared with the original Colombeau theory. And, what is maybe more important, we clarify the connection between the Non-Standard Analysis and Colombeau's theory of new generalized functions, in the framework of which the problem of multiplication of distributions was recently solved. (author). 14 refs
Predicting the need for nonstandard tracheostomy tubes in critically ill patients.
Pandian, Vinciya; Hutchinson, Christoph T; Schiavi, Adam J; Feller-Kopman, David J; Haut, Elliott R; Parsons, Nicole A; Lin, Jessica S; Gorbatkin, Chad; Angamuthu, Priya G; Miller, Christina R; Mirski, Marek A; Bhatti, Nasir I; Yarmus, Lonny B
2017-02-01
Few guidelines exist regarding the selection of a particular type or size of tracheostomy tube. Although nonstandard tubes can be placed over the percutaneous kit dilator, clinicians often place standard tracheostomy tubes and change to nonstandard tubes only after problems arise. This practice risks early tracheostomy tube change, possible bleeding, or loss of the airway. We sought to identify predictors of nonstandard tracheostomy tubes. In this matched case-control study at an urban, academic, tertiary care medical center, we reviewed 1220 records of patients who received a tracheostomy. Seventy-seven patients received nonstandard tracheostomy tubes (cases), and 154 received standard tracheostomy tubes (controls). Sex, endotracheal tube size, severity of illness, and computed tomography scan measurement of the distance from the trachea to the skin at the level of the superior aspect of the anterior clavicle were significant predictors of nonstandard tracheostomy tubes. Specifically, trachea-to-skin distance >4.4 cm and endotracheal tube sizes ≥8.0 were associated with nonstandard tracheostomy. The findings suggest that clinicians should consider using nonstandard tracheostomy tubes as the first choice if the patient is male with an endotracheal tube size ≥8.0 and has a trachea-to-skin distance >4.4 cm on the computed tomography scan. Copyright © 2016 Elsevier Inc. All rights reserved.
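The predictors this study reports amount to a simple screening rule; encoded as a sketch below. The thresholds are the ones quoted in the abstract, the function name is invented, and this is an illustration of the published association, not clinical guidance.

```python
def consider_nonstandard_tube(is_male, ett_size, trachea_to_skin_cm):
    """Flag patients for whom the study suggests considering a nonstandard
    tracheostomy tube first: male, endotracheal tube size >= 8.0, and
    trachea-to-skin distance > 4.4 cm on the CT scan."""
    return bool(is_male and ett_size >= 8.0 and trachea_to_skin_cm > 4.4)

print(consider_nonstandard_tube(True, 8.0, 4.5))   # True
print(consider_nonstandard_tube(True, 7.5, 4.5))   # False
```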
Directory of Open Access Journals (Sweden)
H. Kheiri
2011-03-01
In this paper, analytic solutions of the modified Burgers-Korteweg-de Vries equation (mBKdVE) and the Newell-Whitehead equation are obtained by the homotopy analysis method (HAM) and the homotopy Padé method (HPadéM). The approximation obtained by using HAM contains an auxiliary parameter which provides a way to control and adjust the convergence region and rate of the solution series. The approximate solution by the [m,m] HPadéM is often independent of the auxiliary parameter ℏ, and this technique accelerates the convergence of the related series.
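A diagonal [m/m] Padé approximant of the kind HPadéM builds can be computed from Taylor coefficients with SciPy. This is a generic illustration on the exponential series, assuming SciPy is available; it is not the authors' HAM implementation.

```python
from scipy.interpolate import pade

# Taylor coefficients of exp(x): 1, 1, 1/2!, 1/3!, 1/4!
an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]
p, q = pade(an, 2)  # [2/2] diagonal Pade approximant: p(x)/q(x)

# Evaluate at x = 1; for exp(x) the [2/2] approximant equals 19/7,
# already close to e ~ 2.7182818.
approx = p(1.0) / q(1.0)
print(approx)
```

The approximant typically converges faster than the truncated Taylor series itself, which is the acceleration the abstract refers to.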
Dosimetric equivalence of nonstandard HDR brachytherapy catheter patterns
Energy Technology Data Exchange (ETDEWEB)
Cunha, J. A. M.; Hsu, I-C.; Pouliot, J. [University of California, San Francisco, California 94115 (United States)
2009-01-15
Purpose: To determine whether alternative high dose rate prostate brachytherapy catheter patterns can result in similar or improved dose distributions while providing better access and reducing trauma. Materials and Methods: Standard prostate cancer high dose rate brachytherapy uses a regular grid of parallel needle positions to guide the catheter insertion. This geometry does not easily allow the physician to avoid piercing the critical structures near the penile bulb, nor does it provide position flexibility in the case of pubic arch interference. This study used CT datasets with 3 mm slice spacing from ten previously treated patients and digitized new catheters following three hypothetical catheter patterns: conical, bi-conical, and fireworks. The conical patterns were used to accommodate a robotic delivery using a single entry point. The bi-conical and fireworks patterns were specifically designed to avoid the critical structures near the penile bulb. For each catheter distribution, a plan was optimized with the inverse planning algorithm, IPSA, and compared with the plan used for treatment. Regardless of catheter geometry, a plan must fulfill the RTOG-0321 dose criteria for target dose coverage (prostate V100 > 90%) and organ-at-risk dose sparing (bladder V75 < 1 cc, rectum V75 < 1 cc, urethra V125 << 1 cc). Results: The three nonstandard catheter patterns used 16 nonparallel, straight divergent catheters, with entry points in the perineum. Thirty plans from ten patients with prostate sizes ranging from 26 to 89 cc were optimized. All nonstandard patterns fulfilled the RTOG criteria when the clinical plan did. In some cases, the dose distribution was improved by better sparing the organs-at-risk. Conclusion: Alternative catheter patterns can provide the physician with additional ways to treat patients previously considered unsuited for brachytherapy treatment (pubic arch interference) and facilitate robotic guidance of
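The RTOG-0321 acceptance test quoted in this abstract is easy to encode as a sketch. Dose-volume values would come from the treatment planning system; the function name and the strict-inequality reading of the urethra criterion are assumptions.

```python
def meets_rtog_0321(v100_prostate_pct, v75_bladder_cc,
                    v75_rectum_cc, v125_urethra_cc):
    """Check the dose criteria quoted in the abstract: prostate V100 > 90%,
    and organ-at-risk sparing V75(bladder) < 1 cc, V75(rectum) < 1 cc,
    V125(urethra) < 1 cc."""
    return (v100_prostate_pct > 90.0
            and v75_bladder_cc < 1.0
            and v75_rectum_cc < 1.0
            and v125_urethra_cc < 1.0)

print(meets_rtog_0321(93.0, 0.4, 0.6, 0.1))  # True
print(meets_rtog_0321(88.0, 0.4, 0.6, 0.1))  # False: coverage too low
```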
Al-Dwairi, Ziad; Shaweesh, Ashraf; Kamkarfar, Sohrab; Kamkarfar, Shahrzad; Borzabadi-Farahani, Ali; Lynch, Edward
2014-01-01
The purpose of this study was to examine the relationship between skin color (shade) and tooth shade under standard and nonstandard illumination sources. Four hundred Jordanian participants (200 males, 200 females, 20 to 50 years of age) were studied. Skin colors were assessed and categorized using the L'Oreal and Revlon foundation shade guides (light, medium, dark). The Vita Pan Classical Shade Guide (VPCSG; Vident) and digital Vita EasyShade Intraoral Dental Spectrophotometer (VESIDS; Vident) were used to select shades in the middle thirds of maxillary central incisors; tooth shades were classified into four categories (highest, high, medium, low). Significant gender differences were observed for skin colors (P = .000) and tooth shade guide systems (P = .001 and .050 for VPCSG and VESIDS, respectively). The observed agreement was 100% and 93% for skin and tooth shade guides, respectively. The corresponding kappa statistic values were 1.00 and 0.79, respectively (substantial agreement, P < .001). The observed agreement between skin color and tooth shades (VPCSG and VESIDS) was approximately 50%. The digital tooth shade guide system can be a satisfactory substitute for classical tooth shade guides and clinical shade matching. There was only moderate agreement between skin color and tooth shade.
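The agreement statistics this study reports (observed agreement and Cohen's kappa) can be reproduced from any square rating table; a minimal implementation follows. The 2x2 example table is invented for illustration and is not the study's data.

```python
import numpy as np

def cohen_kappa(table):
    """Cohen's kappa from a square agreement table
    (rows: categories from method 1, columns: categories from method 2)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_observed = np.trace(t) / n                          # raw agreement
    p_expected = float((t.sum(axis=0) * t.sum(axis=1)).sum()) / n ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Invented 2x2 table: 80% observed agreement on balanced categories.
print(round(cohen_kappa([[4, 1], [1, 4]]), 10))  # 0.6
```

Values near 0.79, as reported for the tooth shade guides, fall in the conventional "substantial agreement" band.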
Quantitative portable gamma-spectroscopy sample analysis for non-standard sample geometries
International Nuclear Information System (INIS)
Ebara, S.B.
1998-01-01
Utilizing a portable spectroscopy system, a quantitative method for analysis of samples containing a mixture of fission and activation products in nonstandard geometries was developed. This method was not developed to replace other methods such as Monte Carlo or Discrete Ordinates but rather to offer an alternative rapid solution. The method can be used with various sample and shielding configurations where analysis on a laboratory-based gamma-spectroscopy system is impractical. The portable gamma-spectroscopy method involves calibration of the detector and modeling of the sample and shielding to identify and quantify the radionuclides present in the sample. The method utilizes the intrinsic efficiency of the detector and the unattenuated gamma fluence rate at the detector surface per unit activity from the sample to calculate the nuclide activity and Minimum Detectable Activity (MDA). For a complex geometry, a computer code written for shielding applications (MICROSHIELD) is utilized to determine the unattenuated gamma fluence rate per unit activity at the detector surface. Lastly, the method is only applicable to nuclides which emit gamma-rays and cannot be used for pure beta or alpha emitters. In addition, if sample self absorption and shielding is significant, the attenuation will result in high MDAs for nuclides which solely emit low energy gamma-rays. The following presents the analysis technique and presents verification results using actual experimental data, rather than comparisons to other approximations such as Monte Carlo techniques, to demonstrate the accuracy of the method given a known geometry and source term. (author)
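One plausible reading of the calculation described, activity from net count rate, detector intrinsic efficiency, and the MICROSHIELD-computed gamma rate per unit activity, can be sketched as below. The function names and units are assumptions, and the Currie expression for MDA is a common-practice choice; the paper does not spell out its exact formulas.

```python
import math

def activity_bq(net_count_rate_cps, intrinsic_efficiency, gamma_rate_per_bq):
    """Activity estimate: net counts/s divided by (intrinsic efficiency x
    unattenuated gamma rate incident on the detector per Bq of source).
    The per-Bq gamma rate is what a shielding code such as MICROSHIELD
    would provide for the modeled geometry."""
    return net_count_rate_cps / (intrinsic_efficiency * gamma_rate_per_bq)

def mda_bq(background_counts, live_time_s, intrinsic_efficiency,
           gamma_rate_per_bq):
    """Currie-style minimum detectable activity (95% confidence level)."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit, counts
    return ld / (live_time_s * intrinsic_efficiency * gamma_rate_per_bq)

print(activity_bq(10.0, 0.5, 2.0))  # 10.0 Bq for these illustrative inputs
```

As the abstract notes, heavy self-absorption or shielding inflates the denominator's attenuation and hence the reported MDA for low-energy emitters.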
X-ray computed tomography reconstruction on non-standard trajectories for robotized inspection
International Nuclear Information System (INIS)
Banjak, Hussein
2016-01-01
The number of industrial applications of computed tomography (CT) is large and rapidly increasing, with typical areas of use in the aerospace, automotive and transport industry. To support this growth of CT in the industrial field, the identified requirements concern firstly software development to improve the reconstruction algorithms and secondly the automation of the inspection process. Indeed, the use of robots gives more flexibility in the acquisition trajectory and allows the control of large and complex objects, which cannot be inspected using classical CT systems. In this context of new CT trend, a robotic platform has been installed at CEA LIST to better understand and solve specific challenges linked to the robotization of the CT process. The considered system integrates two robots that move the X-ray generator and detector. This thesis contributes to this new development. In particular, the objective is to develop and implement analytical and iterative reconstruction algorithms adapted to such robotized trajectories. The main focus of this thesis is on helical-like scanning trajectories. We consider two main problems that could occur during the acquisition process: truncated and limited-angle data. We present in this work experimental results for reconstruction on such non-standard trajectories. CIVA software is used to simulate these complex inspections and our developed algorithms are integrated as reconstruction tools. This thesis contains three parts. In the first part, we introduce the basic principles of CT and we present an overview of existing analytical and iterative algorithms for non-standard trajectories. In the second part, we modify the approximate helical FDK algorithm to deal with transversely truncated data and we propose a modified FDK algorithm adapted to reverse helical trajectory with the scan range less than 360 degrees. For iterative reconstruction, we propose two algebraic methods named SART-FISTA-TV and DART
Nonstandard Work Schedules, Family Dynamics, and Mother-Child Interactions During Early Childhood.
Prickett, Kate C
2018-03-01
The rising number of parents who work nonstandard schedules has led to a growing body of research concerned with what this trend means for children. The negative outcomes for children of parents who work nonstandard schedules are thought to arise from the disruptions these schedules place on family life, and thus, the types of parenting that support their children's development, particularly when children are young. Using a nationally representative sample of two-parent families (Early Childhood Longitudinal Study-Birth cohort, n = 3,650), this study examined whether mothers' and their partners' nonstandard work schedules were associated with mothers' parenting when children were 2 and 4 years old. Structural equation models revealed that mothers' and their partners' nonstandard work schedules were associated with mothers' lower scores on measures of positive and involved parenting. These associations were mediated by fathers' lower levels of participation in cognitively supportive parenting and greater imbalance in cognitively supportive tasks conducted by mothers versus fathers.
Kobayashi, Tsunehiro
1996-01-01
Quantum macroscopic motions are investigated in the scheme consisting of N-number of harmonic oscillators in terms of ultra-power representations of nonstandard analysis. Decoherence is derived from the large internal degrees of freedom of macroscopic matters.
Search for flavor-changing nonstandard neutrino interactions using ν_e appearance in MINOS
Adamson, P.; Mualem, L.; Newman, H. B.; Orchanian, M.; Patterson, R. B.
2017-01-01
We report new constraints on flavor-changing nonstandard neutrino interactions from the MINOS long-baseline experiment using ν_e and ν̄_e appearance candidate events from predominantly ν_μ and ν̄_μ beams. We used a statistical selection algorithm to separate ν_e candidates from background events, enabling an analysis of the combined MINOS neutrino and antineutrino data. We observe no deviations from standard neutrino mixing, and thus place constraints on the nonstandard interaction matter effec...
Non-standard constraints within In-Core Fuel Management
International Nuclear Information System (INIS)
Maldonado, G.I.; Torres, C.; Marrote, G.N.; Ruiz U, V.
2004-01-01
Recent advancements in the area of nuclear fuel management optimization have been considerable and widespread. Therefore, it is not surprising that the design of today's nuclear fuel reloads can be a highly automated process that is often accompanied by sophisticated optimization software and graphical user interfaces to assist core designers. Most typically, among other objectives, optimization software seeks to maximize the energy efficiency of a fuel cycle while satisfying a variety of safety, operational, and regulatory constraints. Concurrently, the general industry trend continues to be one of pursuing higher generating capacity (i.e., power up-rates) alongside cycle length extensions. As these increasingly invaluable software tools and ambitious performance goals are pursued in unison, more aggressive core designs ultimately emerge that effectively minimize the margins to limits and, in some cases, may turn out less forgiving or accommodating to changes in underlying key assumptions. The purpose of this article is to highlight a few 'non-standard', though common, constraints that can affect a BWR core design but which are often difficult, if not impossible, to implement in an automated setting. In a way, this article indirectly emphasizes the unique and irreplaceable role of the experienced designer in light of 'real life' situations. (Author)
Non-standard employment relationship and the gender dimension
Directory of Open Access Journals (Sweden)
Mihaela-Emilia Marica
2015-12-01
Alongside the economic, political and social influences on the standard individual employment contract, which have led to a more flexible regulatory framework for labor relations, an important factor in the evolution of atypical employment contracts has been the number of women entering the labor market in recent decades. Because part-time work is the most strongly feminized form of non-standard employment, this article examines the most important issues concerning the part-time employment relationship and the gender factor: the impact of this form of employment on women's social position, and the level of protection provided by labor law and social protection rules in the states that have agreed to support and legitimize this form of employment. The article also discusses the most important circumstances determining women's choice of part-time work, the rationales behind the strong influence of gender on part-time hiring, and the negative consequences of the feminization of this atypical form of work.
Increase of soybean nutritional quality with nonstandard foliar fertilizers
Directory of Open Access Journals (Sweden)
Vesna DRAGIČEVIĆ
2016-06-01
Deficiencies of mineral elements in human nutrition can be overcome by crop fortification, and one of the most widely used fortification measures is foliar fertilization. The aim of this study was to determine the content and availability of the mineral nutrients Mg, Fe and Zn, together with phytate, as an anti-nutritive factor, and β-carotene, as a promoter of mineral nutrient availability, in the grain of two soybean cultivars (Nena and Laura) treated with different non-standard foliar fertilizers (mainly based on plant extracts). In general, a negative correlation between Fe and phytate indicated that factors which decrease phytate and increase β-carotene could be primarily responsible for Fe utilization by humans and animals. Zlatno inje (based on manure) had the highest impact on increasing grain yield and decreasing the ratios between phytate and mineral elements in Nena grain, while for Laura it was generally Zircon (based on an extract of Echinacea purpurea L.), which also increased the availability of mineral elements.
Control system architecture: The standard and non-standard models
International Nuclear Information System (INIS)
Thuot, M.E.; Dalesio, L.R.
1993-01-01
Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS), B. Kuiper asserted that the system architecture issue was resolved and presented a ''standard model''. The ''standard model'' consists of a local area network (Ethernet or FDDI) providing communication between front-end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions, including reflected-memory and hierarchical architectures driven by requirements for widely dispersed, large-channel-count or tightly coupled systems. This paper describes the performance characteristics and features of the ''standard model'' to determine if the requirements of ''non-standard'' architectures can be met. Several possible extensions to the ''standard model'' are suggested, including software as well as hardware architectural features.
Can nonstandard interactions jeopardize the hierarchy sensitivity of DUNE?
Deepthi, K. N.; Goswami, Srubabati; Nath, Newton
2017-10-01
We study the effect of nonstandard interactions (NSIs) on the propagation of neutrinos through the Earth's matter and how it affects the hierarchy sensitivity of the DUNE experiment. We emphasize the special case when the diagonal NSI parameter ɛ_ee = -1, nullifying the standard matter effect. We show that if, in addition, CP violation is maximal, then this gives rise to an exact intrinsic hierarchy degeneracy in the appearance channel, irrespective of the baseline and energy. Introduction of the off-diagonal NSI parameter ɛ_eτ shifts the position of this degeneracy to a different ɛ_ee. Moreover, the unknown magnitudes and phases of the off-diagonal NSI parameters can give rise to additional degeneracies. Overall, given the current model-independent limits on NSI parameters, the hierarchy sensitivity of DUNE can get seriously impacted. However, a more precise knowledge of the NSI parameters, especially ɛ_ee, can give rise to an improved sensitivity. Alternatively, if an NSI exists in nature and DUNE still shows hierarchy sensitivity, certain ranges of the NSI parameters can be excluded. Additionally, we briefly discuss the implications of ɛ_ee = -1 (in the Earth) on the Mikheyev-Smirnov-Wolfenstein effect in the Sun.
Energy Technology Data Exchange (ETDEWEB)
Palma, Daniel A. [CEFET QUIMICA de Nilopolis/RJ, Rio de Janeiro (Brazil); Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C. [COPPE/UFRJ - Programa de Engenharia Nuclear, Rio de Janeiro (Brazil)
2008-07-01
The activation technique allows much more precise measurements of neutron intensity, relative or absolute. The technique requires knowledge of the Doppler broadening function ψ(x,ξ) to determine the resonance self-shielding factors in the epithermal range, G_epi(τ,ξ). Two new analytical approximations for the Doppler broadening function ψ(x,ξ) are proposed. The proposed approximations are compared with other methods found in the literature for the calculation of the ψ(x,ξ) function, namely the 4-pole Pade method and the Frobenius method, when applied to the calculation of G_epi(τ,ξ). The results obtained provided satisfactory accuracy. (authors)
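The Doppler broadening function named in this abstract has a standard integral definition; a minimal numerical sketch is given below. The quadrature grid, truncation limits, and the substitution used are illustrative choices, not the authors' analytical approximations.

```python
import math

def psi(x, xi, half_width=6.0, n=2400):
    """Doppler broadening function psi(x, xi), evaluated by trapezoidal
    quadrature after the substitution t = xi*(y - x)/2, which turns the
    defining integral into
        psi(x, xi) = (1/sqrt(pi)) * Int exp(-t^2) / (1 + (x + 2*t/xi)^2) dt.
    """
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n + 1):
        t = -half_width + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        total += w * math.exp(-t * t) / (1.0 + (x + 2.0 * t / xi) ** 2)
    return total * h / math.sqrt(math.pi)
```

For ξ → ∞ the Gaussian kernel acts as a delta function and ψ(x,ξ) → 1/(1+x²), the unbroadened line shape, which gives a quick sanity check on any approximation of ψ.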
Directory of Open Access Journals (Sweden)
J. Spałek
2010-01-01
We use the concept of a generalized (almost localized) Fermi liquid composed of nonstandard quasiparticles with spin-dependent effective masses and an effective field induced by electron correlations. This Fermi liquid is obtained within the so-called statistically-consistent Gutzwiller approximation (SGA) proposed recently [cf. J. Jędrak et al., arXiv:1008.0021] and describes electronic states of the correlated quantum liquid. Particular emphasis is put on real-space pairing driven by the electronic correlations, the Fulde-Ferrell state of the heavy-fermion liquid, and the d-wave superconducting state of high-temperature cuprate superconductors in the overdoped limit. The appropriate phase diagrams are discussed, showing in particular the limits of stability of the Bardeen-Cooper-Schrieffer (BCS) type of state.
Traveltime approximations for transversely isotropic media with an inhomogeneous background
Alkhalifah, Tariq
2011-05-01
A transversely isotropic (TI) model with a tilted symmetry axis is regarded as one of the most effective approximations to the Earth subsurface, especially for imaging purposes. However, we commonly utilize this model by setting the axis of symmetry normal to the reflector. This assumption may be accurate in many places, but deviations from it will cause errors in the wavefield description. Using perturbation theory and Taylor's series, I expand the solutions of the eikonal equation for 2D TI media with respect to the independent parameter θ, the angle the tilt of the axis of symmetry makes with the vertical, in a generally inhomogeneous TI background with a vertical axis of symmetry. I do an additional expansion in terms of the independent (anellipticity) parameter in a generally inhomogeneous elliptically anisotropic background medium. These new TI traveltime solutions are given by expansions in the anellipticity parameter and θ, with coefficients extracted from solving linear first-order partial differential equations. Pade approximations are used to enhance the accuracy of the representation by predicting the behavior of the higher-order terms of the expansion. A simplification of the expansion for homogeneous media provides nonhyperbolic moveout descriptions of the traveltime for TI models that are more accurate than other recently derived approximations. In addition, for 3D media, I develop traveltime approximations using Taylor's-series-type expansions in the azimuth of the axis of symmetry. The coefficients of all these expansions can also provide us with the medium sensitivity gradients (Jacobian) for nonlinear tomographic-based inversion for the tilt of the symmetry axis. © 2011 Society of Exploration Geophysicists.
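Padé approximations of the kind used above rebuild a rational function from the first few Taylor coefficients, typically extending the useful range of a truncated series. A generic [2/2] construction is sketched below; it is independent of the traveltime application, and the exp(x) series in the usage note is purely a test case.

```python
def pade_2_2(c):
    """Build the [2/2] Pade approximant p(x)/q(x), with q(0) = 1, from the
    first five Taylor coefficients c = [c0, c1, c2, c3, c4].
    The denominator coefficients (b1, b2) solve the 2x2 linear system
        c3 + c2*b1 + c1*b2 = 0
        c4 + c3*b1 + c2*b2 = 0
    (the conditions that the x^3 and x^4 terms vanish in q*f - p).
    """
    c0, c1, c2, c3, c4 = c
    det = c2 * c2 - c1 * c3
    b1 = (c1 * c4 - c2 * c3) / det
    b2 = (c3 * c3 - c2 * c4) / det
    # numerator coefficients follow from matching x^0..x^2
    a0 = c0
    a1 = c1 + c0 * b1
    a2 = c2 + c1 * b1 + c0 * b2
    def approx(x):
        return (a0 + a1 * x + a2 * x * x) / (1.0 + b1 * x + b2 * x * x)
    return approx
```

Fed the series of exp(x), i.e. c = [1, 1, 1/2, 1/6, 1/24], the resulting [2/2] approximant at x = 1 is 19/7 ≈ 2.7143, noticeably closer to e ≈ 2.7183 than the truncated fourth-order Taylor sum 2.7083 built from the same coefficients.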
Czech Academy of Sciences Publication Activity Database
Drdácký, Miloš; Slížková, Zuzana
2008-01-01
Roč. 17, č. 1 (2008), s. 20-29 ISSN 1407-7353 R&D Projects: GA ČR(CZ) GA103/06/1609 Institutional research plan: CEZ:AV0Z20710524 Keywords : small-sample non-standard testing * lime * historic mortar Subject RIV: AL - Art, Architecture, Cultural Heritage
International Nuclear Information System (INIS)
Cvetic, G.; Koegerler, R.
1998-01-01
Applicability of the previously introduced method of modified diagonal Baker-Gammel approximants is extended to truncated perturbation series (TPS) of any order in gauge theories. The approximants reproduce the TPS when expanded in power series of the gauge coupling parameter to the order of that TPS. The approximants have the favorable property of being exactly invariant under the change of the renormalization scale, and that property is arrived at by a generalization of the method of the diagonal Pade approximants. The renormalization scheme dependence is subsequently eliminated by a variant of the method of the principle of minimal sensitivity (PMS). This is done by choosing the values of the renormalization-scheme-dependent coefficients (β₂, β₃, …), which appear in the beta function of the gauge coupling parameter, in such a way that the diagonal Baker-Gammel approximants have zero values of partial derivatives with respect to these coefficients. The resulting approximants are then independent of the renormalization scale and of the renormalization scheme. (orig.)
Sensitivities to charged-current nonstandard neutrino interactions at DUNE
Bakhti, Pouya; Khan, Amir N.; Wang, W.
2017-12-01
We investigate the effects of charged-current (CC) nonstandard neutrino interactions (NSIs) at the source and at the detector in the simulated data for the planned Deep Underground Neutrino Experiment (DUNE). We neglect the neutral-current NSIs at the propagation because several solutions have already been proposed for resolving the degeneracies posed by neutral-current NSIs but no solutions exist for the degeneracies due to the CC NSIs. We study the effects of CC NSIs on the simultaneous measurements of θ_23 and δ_CP in DUNE. The analysis reveals that 3σ C.L. measurement of the correct octant of θ_23 in the standard mixing scenario is spoiled if the CC NSIs are taken into account. Likewise, the CC NSIs can deteriorate the uncertainty of the δ_CP measurement by a factor of two relative to that in the standard oscillation scenario. We also show that the source and the detector CC NSIs can induce a significant amount of fake CP-violation and the CP-conserving case can be excluded by more than 80% C.L. in the presence of fake CP-violation. We further find DUNE's potential for constraining the relevant CC NSI parameters from the single parameter fits for both neutrino and antineutrino appearance and disappearance channels at both the near and far detectors. The results show that there could be improvements in the current bounds by at least one order of magnitude at DUNE's near and far detectors, except for a few parameters which remain weaker at the far detector.
Non-standard monetary policy of the ECB: Macroeconomic effects and exit strategy
Directory of Open Access Journals (Sweden)
Momirović Dragan
2014-01-01
In the initial phases of the global economic and financial crisis, the ECB reacted by lowering interest rates to a historic minimum. After the collapse of Lehman Brothers, the strengthening of financial tensions in the EU and, later, the sovereign debt crisis in the euro zone, the ECB was forced to resort to non-standard monetary policy measures. The ECB's non-standard monetary policy changed the structure and size of its balance sheet and, through the actions and measures undertaken in the still-ongoing crisis, strives to steer the anticipated long-term interest rate. This paper reviews the non-standard policy measures the ECB has applied, the manner in which monetary policy spills over onto the banking and real sectors, and the effects of the balance-sheet expansion on certain macroeconomic variables. A VAR model shows that the ECB's balance-sheet expansion has a positive impact on two macroeconomic variables, output and prices, and on economic growth. The long-term effects of implementing non-standard monetary policy remain uncertain and carry the risk of reviving undesired financial shocks. The way the ECB can avoid the uncertainty of these policies over the long term is to begin gradually 'narrowing down the non-standard policy', i.e. to create the prerequisites for an exit strategy.
Barnett, B S; Chaweza, T; Tweya, H; Ngambi, W; Phiri, S; Hosseinipour, M C
2016-03-01
Lighthouse Trust in Lilongwe, Malawi serves approximately 25,000 patients with HIV antiretroviral therapy (ART) regimens standardized according to national treatment guidelines. However, as a referral centre for complex cases, Lighthouse Trust occasionally treats patients with non-standard ART regimens (NS-ART) that deviate from the treatment guidelines. We evaluated factors contributing to the use of NS-ART and whether patients could transition to standard regimens. This was a cross-sectional study of all adult patients at Lighthouse Trust being treated with NS-ART as of February 2012. Patients were identified using the electronic data system. Medical charts were reviewed and descriptive statistics were obtained. One hundred and six patients were initially identified as being treated with NS-ART, and 92 adult patients were confirmed to be on NS-ART after review. Mean patient age was 42.4 ± 10.3 years, and 52 (57%) were female. Mean duration of treatment with the NS-ART in use at the time of data collection was 2.1 ± 1.5 years. Eight patients (9%) were on modified first-line NS-ART and 84 (91%) were on modified second-line NS-ART, with 90 patients (98%) having multiple factors contributing to NS-ART use. Severe toxicity from one medication contributed in 28 cases (30%) and toxicity from multiple medications contributed in 46 cases (50%), while 22 patients (24%) were transitioned to NS-ART following a stockout of their original medication. Following clinical review, 84 patients (91%) were transitioned to standard regimens, and eight (9%) were maintained on NS-ART because of incompatibility of their clinical features with the latest national guidelines. The primary factors contributing to NS-ART use were medication toxicities and medication stockouts. Most patients were transitioned to standard regimens, although the need for NS-ART remains.
Evaluation of Suitability of Non-Standardized Test Block for Ultrasonic Testing
International Nuclear Information System (INIS)
Kwon, Ho Young; Lim, Jong Ho; Kang, Sei Sun
2000-01-01
A Standard Test Block (STB) for UT (Ultrasonic Testing) is a block whose material, shape and quality are approved by an authoritative body. STBs are used for characteristic tests, sensitivity calibration and control of the time-base range of UT inspection devices. The material, size and chemical composition of an STB should be strictly controlled to meet the related standards, such as ASTM and JIS, because they affect the sensitivity, resolution and reproducibility of UT. Unapproved test blocks are sometimes used because qualified STBs are very expensive. The purpose of this study is therefore to survey the characteristics, quality and usability of non-standardized test blocks. The non-standardized test blocks examined did not meet the standard requirements in size, chemical composition, or ultrasonic characteristics. Therefore, if non-standardized test blocks are used without being tested, errors are likely in detecting the location and measuring the size of defects.
The Effect of Tutoring With Nonstandard Equations for Students With Mathematics Difficulty.
Powell, Sarah R; Driver, Melissa K; Julian, Tyler E
2015-01-01
Students often misinterpret the equal sign (=) as operational instead of relational. Research indicates misinterpretation of the equal sign occurs because students receive relatively little exposure to equations that promote relational understanding of the equal sign. No study, however, has examined effects of nonstandard equations on the equation solving and equal-sign understanding of students with mathematics difficulty (MD). In the present study, second-grade students with MD (n = 51) were randomly assigned to standard equations tutoring, combined tutoring (standard and nonstandard equations), and no-tutoring control. Combined tutoring students demonstrated greater gains on equation-solving assessments and equal-sign tasks compared to the other two conditions. Standard tutoring students demonstrated improved skill on equation solving over control students, but combined tutoring students' performance gains were significantly larger. Results indicate that exposure to and practice with nonstandard equations positively influence student understanding of the equal sign. © Hammill Institute on Disabilities 2013.
Qualitative stability of nonstandard 2-stage explicit Runge-Kutta methods of order two
Khalsaraei, M. M.; Khodadosti, F.
2016-02-01
When one solves differential equations modeling physical phenomena, it is of great importance to take physical constraints into account. More precisely, numerical schemes have to be designed such that discrete solutions satisfy the same constraints as exact solutions. Nonstandard finite difference (NSFD) schemes can improve the accuracy and reduce the computational costs of traditional finite difference schemes. In addition, NSFD schemes produce numerical solutions which also exhibit essential properties of the exact solution. In this paper, a class of nonstandard 2-stage Runge-Kutta methods of order two (which we call nonstandard RK2) is considered. The preservation of some qualitative properties by this class of methods is discussed. In order to illustrate our results, we provide some numerical examples.
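The core NSFD idea can be seen on the scalar decay equation u' = -λu: replacing the standard step size h in the update with a "nonstandard" denominator function keeps the discrete solution positive and decaying for any step size, unlike explicit Euler. The sketch below illustrates that general idea, not the paper's specific 2-stage RK2 class.

```python
import math

def euler_step(u, lam, h):
    # standard explicit Euler for u' = -lam*u:
    # loses positivity (and then blows up in magnitude) once h > 1/lam
    return u * (1.0 - lam * h)

def nsfd_step(u, lam, h):
    # nonstandard scheme: the denominator function
    # phi(h) = (1 - exp(-lam*h)) / lam replaces h, making the
    # discrete update exact for this equation at any step size
    phi = (1.0 - math.exp(-lam * h)) / lam
    return u * (1.0 - lam * phi)
```

With λ = 1 and a large step h = 3, the Euler iterates alternate in sign with growing magnitude, while the NSFD iterates remain positive and monotonically decreasing, mirroring the exact solution e^(-λt).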
Shirts, R. B.; Reinhardt, W. P.
1982-01-01
Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.
Diophantine approximation and badly approximable sets
DEFF Research Database (Denmark)
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
Let (X,d) be a metric space and (Omega, d) a compact subspace of X which supports a non-atomic finite measure m. We consider `natural' classes of badly approximable subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension.
Big bang nucleosynthesis with a varying fine structure constant and nonstandard expansion rate
International Nuclear Information System (INIS)
Ichikawa, Kazuhide; Kawasaki, Masahiro
2004-01-01
We calculate the primordial abundances of light elements produced during big bang nucleosynthesis when the fine structure constant and/or the cosmic expansion rate take nonstandard values. We compare them with the recent values of observed D, ⁴He, and ⁷Li abundances, which show a slight inconsistency among themselves in the standard big bang nucleosynthesis scenario. This inconsistency is not resolved by considering either a varying fine structure constant or a nonstandard expansion rate separately, but solutions are found when both are present simultaneously.
Alfaro, Cristina; Bartolomé, Lilia
2017-01-01
Mexicanos/Chicanos in the United States have historically suffered derision and mistreatment by the mainstream culture because of their use of nonstandard Spanish and English, as well as codeswitching (alternating between two or more languages or language varieties). In the field of education, codeswitching and the use of nonstandard English and…
Transition Systems and Non-Standard Employment in Early Career: Comparing Japan and Switzerland
Imdorf, Christian; Helbling, Laura Alexandra; Inui, Akio
2017-01-01
Even though Japan and Switzerland are characterised by comparatively low youth unemployment rates, non-standard forms of employment are on the rise, posing a risk to the stable integration of young labour market entrants. Drawing on the French approach of societal analysis, this paper investigates how country-specific school-to-work transition…
Analysis of a non-standard mixed finite element method with applications to superconvergence
Brandts, J.H.
2009-01-01
We show that a non-standard mixed finite element method proposed by Barrios and Gatica in 2007, is a higher order perturbation of the least-squares mixed finite element method. Therefore, it is also superconvergent whenever the least-squares mixed finite element method is superconvergent.
Non-standard perturbative methods for the effective potential in λφ⁴ QFT
International Nuclear Information System (INIS)
Okopinska, A.
1986-07-01
The effective potential in scalar QFT is calculated with non-standard perturbative methods and compared with the conventional loop expansion. In spacetime dimensions 0 and 1 the results are compared with the ''exact'' effective potential obtained numerically. In 4 dimensions we show that λφ⁴ theory is non-interacting. (author)
Search for non-standard and rare decays of the Higgs boson with the ATLAS detector
Leney, Katharine; The ATLAS collaboration
2017-01-01
Some theories predict Lepton Flavour Violating decays of the Higgs boson, while others predict enhanced decay rates into new light pseudoscalar bosons "a" or invisible particles. Enhanced rates in rare decay modes such as Phi-photon are also considered. In this presentation the latest ATLAS results on searches for such non-standard and rare decays will be discussed.
Ultra-cold WIMPs relics of non-standard pre-BBN cosmologies
Gelmini, Graciela B
2008-01-01
We point out that in scenarios in which the Universe evolves in a non-standard manner during and after the kinetic decoupling of weakly interacting massive particles (WIMPs), these relics can be much colder than in standard cosmological scenarios (i.e. can be ultra-cold), possibly leading to the formation of smaller first objects in hierarchical structure formation scenarios.
Search for non-standard and rare decays of the Higgs boson with the ATLAS detector
Mazini, Rachid; The ATLAS collaboration
2016-01-01
Some theories predict Lepton Flavour Violating decays of the Higgs boson, while others predict enhanced decay rates into new light pseudoscalar bosons "a" or invisible particles. Enhanced rates in rare decay modes such as Phi-photon are also considered. In this presentation the latest ATLAS results on searches for such non-standard and rare decays will be discussed.
Attitudes of Japanese Learners and Teachers of English towards Non-Standard English in Coursebooks
Takahashi, Reiko
2017-01-01
Over the decades, efforts have been made to incorporate diverse perspectives on World Englishes into English Language Teaching (ELT) practice and teaching materials. To date, the majority of ELT learners and teachers have not yet been exposed to materials which use and explore non-standard forms of English. This paper examines the attitudes of…
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
Topological approximations of multisets
Directory of Open Access Journals (Sweden)
El-Sayed A. Abo-Tabl
2013-07-01
Rough set theory is a powerful mathematical tool for dealing with inexact, uncertain or vague information. The core concepts of rough set theory are information systems and the approximation operators of approximation spaces. In this paper, we define and investigate three types of lower and upper multiset approximations of any multiset. These types are based on the multiset base of a multiset topology induced by a multiset relation. Moreover, the relationships between generalized rough msets and mset topologies are given. In addition, an example is given to illustrate the relationships between different types of generalized definitions of rough multiset approximations.
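For the classical (non-multiset) case that these operators generalize, the lower approximation of a target set X is the union of equivalence classes contained in X, and the upper approximation is the union of classes that meet X. The sketch below implements that classical definition, not the paper's multiset extension.

```python
def rough_approximations(classes, target):
    """Lower/upper rough-set approximations of `target` with respect to a
    partition of the universe into equivalence `classes`."""
    target = set(target)
    lower, upper = set(), set()
    for block in classes:
        block = set(block)
        if block <= target:   # class lies entirely inside the target set
            lower |= block
        if block & target:    # class meets the target set
            upper |= block
    return lower, upper
```

For example, with universe {1, ..., 6} partitioned into pairs {1,2}, {3,4}, {5,6}, the target X = {1, 2, 3} has lower approximation {1, 2} and upper approximation {1, 2, 3, 4}; the difference between the two is the boundary region where membership is undecidable at this granularity.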
Expectation Consistent Approximate Inference
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability...
Solving non-standard packing problems by global optimization and heuristics
Fasano, Giorgio
2014-01-01
This book results from a long-term research effort aimed at tackling complex non-standard packing issues which arise in space engineering. The main research objective is to optimize cargo loading and arrangement, in compliance with a set of stringent rules. Complicated geometrical aspects are also taken into account, in addition to balancing conditions based on attitude control specifications. Chapter 1 introduces the class of non-standard packing problems studied. Chapter 2 gives a detailed explanation of a general model for the orthogonal packing of tetris-like items in a convex domain. A number of additional conditions are looked at in depth, including the prefixed orientation of subsets of items, the presence of unusable holes, separation planes and structural elements, relative distance bounds as well as static and dynamic balancing requirements. The relative feasibility sub-problem which is a special case that does not have an optimization criterion is discussed in Chapter 3. This setting can be exploit...
Impact of Nonstandard Interactions on Sterile-Neutrino Searches at IceCube.
Liao, Jiajun; Marfatia, Danny
2016-08-12
We analyze the energy and zenith angle distributions of the latest two-year IceCube data set of upward-going atmospheric neutrinos to constrain sterile neutrinos at the eV scale in the 3+1 scenario. We find that the parameters favored by a combination of LSND and MiniBooNE data are excluded at more than the 99% C.L. We explore the impact of nonstandard matter interactions on this exclusion and find that the exclusion holds for nonstandard interactions (NSIs) that are within the stringent model-dependent bounds set by collider and neutrino scattering experiments. However, for large NSI parameters subject only to model-independent bounds from neutrino oscillation experiments, the LSND and MiniBooNE data are consistent with IceCube.
Non-standard neutrino interactions in the mu–tau sector
Directory of Open Access Journals (Sweden)
Irina Mocioiu
2015-04-01
We discuss neutrino mass hierarchy implications arising from the effects of non-standard neutrino interactions on muon rates in high statistics atmospheric neutrino oscillation experiments like IceCube DeepCore. We concentrate on the mu–tau sector, which is presently the least constrained. It is shown that the magnitude of the effects depends strongly on the sign of the ϵμτ parameter describing this non-standard interaction. A simple analytic model is used to understand the parameter space where differences between the two signs are maximized. We discuss how this effect is partially degenerate with changing the neutrino mass hierarchy, as well as how this degeneracy could be lifted.
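As a baseline for analyses like the one above, the standard two-flavor vacuum oscillation probability is sketched below in the usual practical units; this is the textbook formula without any NSI term (a parameter such as ϵμτ would enter through a modified matter Hamiltonian, which is beyond this sketch).

```python
import math

def p_mu_to_tau(theta, dm2_ev2, L_km, E_gev):
    """Two-flavor vacuum oscillation probability
        P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
    with dm2 in eV^2, L in km, and E in GeV; the factor 1.27 absorbs
    the unit conversions from the natural-units phase dm2 * L / (4E).
    """
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2
```

At maximal mixing (θ = π/4) the probability reaches 1 whenever the phase equals an odd multiple of π/2, and it vanishes identically for θ = 0, which makes the two limits convenient sanity checks.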
Evolution PDEs with nonstandard growth conditions existence, uniqueness, localization, blow-up
Antontsev, Stanislav
2015-01-01
This monograph offers the reader a treatment of the theory of evolution PDEs with nonstandard growth conditions. This class includes parabolic and hyperbolic equations with variable or anisotropic nonlinear structure. We develop methods for the study of such equations and present a detailed account of recent results. An overview of other approaches to the study of PDEs of this kind is provided. The presentation is focused on the issues of existence and uniqueness of solutions in appropriate function spaces, and on the study of the specific qualitative properties of solutions, such as localization in space and time, extinction in a finite time and blow-up, or nonexistence of global in time solutions. Special attention is paid to the study of the properties intrinsic to solutions of equations with nonstandard growth.
Plotting positions via maximum-likelihood for a non-standard situation
Directory of Open Access Journals (Sweden)
D. A. Jones
1997-01-01
A new approach is developed for the specification of the plotting positions used in the frequency analysis of extreme flows, rainfalls or similar data. The approach is based on the concept of maximum likelihood estimation and it is applied here to provide plotting positions for a range of problems which concern non-standard versions of annual-maximum data. This range covers the inclusion of incomplete years of data and also the treatment of cases involving regional maxima, where the number of sites considered varies from year to year. These problems, together with a not-to-be-recommended approach to using historical information, can be treated as special cases of a non-standard situation in which observations arise from different statistical distributions which vary in a simple, known way.
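For readers unfamiliar with plotting positions, the conventional formulae that this maximum-likelihood approach generalizes can be sketched as follows. The Weibull and Gringorten formulae shown are textbook choices, not the paper's ML positions, and the sample size is illustrative:

```python
# Conventional plotting positions for an ordered annual-maximum sample
# x_(1) <= ... <= x_(n): rank i is plotted at non-exceedance probability p_i.
# These closed forms apply to the standard (complete, single-site) situation;
# the paper's ML positions replace them for incomplete years / regional maxima.
def weibull_pp(i, n):
    """Weibull (unbiased-probability) plotting position p_i = i/(n+1)."""
    return i / (n + 1)

def gringorten_pp(i, n):
    """Gringorten plotting position, tuned for the Gumbel distribution."""
    return (i - 0.44) / (n + 0.12)

n = 30  # illustrative record length
pw = [weibull_pp(i, n) for i in range(1, n + 1)]
assert all(0.0 < p < 1.0 for p in pw)
assert abs(sum(pw) - n / 2) < 1e-9  # Weibull positions are symmetric about 1/2
```

Both formulae are special cases of p_i = (i - a)/(n + 1 - 2a); the ML construction drops this fixed-form assumption entirely.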
Parents' nonstandard work schedules and child well-being: a critical review of the literature.
Li, Jianghong; Johnson, Sarah E; Han, Wen-Jui; Andrews, Sonia; Kendall, Garth; Strazdins, Lyndall; Dockery, Alfred
2014-02-01
This paper provides a comprehensive review of empirical evidence linking parental nonstandard work schedules to four main child developmental outcomes: internalizing and externalizing problems, cognitive development, and body mass index. We evaluated the studies based on theory and methodological rigor (longitudinal data, representative samples, consideration of selection and information bias, confounders, moderators, and mediators). Of 23 studies published between 1980 and 2012 that met the selection criteria, 21 reported significant associations between nonstandard work schedules and an adverse child developmental outcome. The associations were partially mediated through parental depressive symptoms, low quality parenting, reduced parent-child interaction and closeness, and a less supportive home environment. These associations were more pronounced in disadvantaged families and when parents worked such schedules full time. We discuss the nuances, strengths, and limitations of the existing studies, and propose recommendations for future research.
Behavioral Public Economics: Welfare and Policy Analysis with Non-Standard Decision-Makers
B. Douglas Bernheim; Antonio Rangel
2005-01-01
This paper has two goals. First, we discuss several emerging approaches to applied welfare analysis under non-standard (“behavioral”) assumptions concerning consumer choice. This provides a foundation for Behavioral Public Economics. Second, we illustrate applications of these approaches by surveying behavioral studies of policy problems involving saving, addiction, and public goods. We argue that the literature on behavioral public economics, though in its infancy, has already fundamentally ...
Mothers’ non-standard working schedules and family time : enhancing regularity and togetherness
Murtorinne-Lahtinen, Minna; Moilanen, Sanna; Tammelin, Mia; Rönkä, Anna; Laakso, Marja-Leena
2016-01-01
Purpose – The purpose of this paper is to investigate Finnish working mothers' experiences of the effects of non-standard working schedules (NSWS) on family time in two family forms, coupled and lone-parent families. Furthermore, the aim is to find out what meanings mothers with NSWS attached to family time, paying particular attention to the circumstances in which mothers experienced NSWS positively. Design/methodology/approach – Thematic analysis of 20 semi-structured interviews was...
Odom, Erika C.; Vernon-Feagans, Lynne; Crouter, Ann C.
2013-01-01
In this study, observed maternal positive engagement and perception of work-family spillover were examined as mediators of the association between maternal nonstandard work schedules and children’s expressive language outcomes in 231 African American families living in rural households. Mothers reported their work schedules when their child was 24 months of age and children’s expressive language development was assessed during a picture book task at 24 months and with a standardized assessmen...
A vanishing diffusion limit in a nonstandard system of phase field equations
Czech Academy of Sciences Publication Activity Database
Colli, P.; Gilardi, G.; Krejčí, Pavel; Sprekels, J.
2014-01-01
Roč. 3, č. 2 (2014), s. 257-275 ISSN 2163-2480 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords : nonstandard phase field system * nonlinear partial differential equations * asymptotic limit Subject RIV: BA - General Mathematics Impact factor: 0.373, year: 2014 http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=9918
Non-standard base pairing and stacked structures in methyl xanthine clusters
Czech Academy of Sciences Publication Activity Database
Callahan, M. P.; Gengeliczki, Z.; Svadlenak, N.; Valdes, Haydee; Hobza, Pavel; de Vries, M. S.
2008-01-01
Roč. 10, č. 19 (2008), s. 2819-2826 ISSN 1463-9076 R&D Projects: GA MŠk LC512 Grant - others:NSF(US) CHE-0615401 Institutional research plan: CEZ:AV0Z40550506 Keywords : non-standard base pairing * stacked structures * in methyl xanthine Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 4.064, year: 2008
The Method of Eichhorn with Non-Standard Projections for a Single Plate
Cardona, O.; Corona-Galindo, M.
1990-11-01
ABSTRACT. We develop the expressions for Eichhorn's method in astrometry for non-standard projections. The method is used to obtain the spherical coordinates of stars on astronomical plates when all the variables have errors. Key words: ASTROMETRY
Chin Kim On; Teo Kein Yau; Rayner Alfred; Jason Teo; Patricia Anthony; Wang Cheng
2016-01-01
In this paper, we describe a research project that autonomously localizes and recognizes non-standardized Malaysian car plates using the conventional Backpropagation algorithm (BPP) in combination with an Ensemble Neural Network (ENN). We compared the results with those obtained using a simple Feed-Forward Neural Network (FFNN). This research aims to solve four main issues: (1) localization of car plates that have the same colour as the vehicle, (2) detection and recognition of car pla...
Korpusik, Adam
2017-02-01
We present a nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. The main advantage of this approach is that it preserves the essential qualitative features of the original continuous model (non-negativity and boundedness of the solution, equilibria and their stability conditions), while being easy to implement. All of the qualitative features are preserved independently of the chosen step-size. Numerical simulations of our approach and comparison with other conventional simulation methods are presented.
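The flavor of such a scheme can be sketched on a scalar test problem. The logistic equation below stands in for the paper's immune-response model (which is not reproduced here), and the denominator function is one common Mickens-type choice:

```python
# Sketch of a Mickens-type nonstandard finite difference (NSFD) scheme for the
# logistic equation u' = u(1 - u). The denominator function phi(h) = e^h - 1
# and the nonlocal discretization u_k * u_{k+1} of the quadratic term are
# illustrative choices, not taken from the paper.
import math

def nsfd_logistic(u0, h, steps):
    """Update rule (u_{k+1} - u_k)/phi(h) = u_k - u_k * u_{k+1},
    solved in closed form for u_{k+1}."""
    phi = math.exp(h) - 1.0
    u = u0
    traj = [u]
    for _ in range(steps):
        u = u * (1.0 + phi) / (1.0 + phi * u)  # explicit solve for u_{k+1}
        traj.append(u)
    return traj

traj = nsfd_logistic(0.1, h=5.0, steps=50)  # deliberately huge step size
# Positivity and boundedness hold for any h > 0, as the abstract emphasizes.
assert all(0.0 < v <= 1.0 + 1e-12 for v in traj)
```

With a conventional explicit Euler step of the same size the iterates would overshoot and oscillate; here they remain in (0, 1] and converge to the equilibrium u = 1 regardless of the step size.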
Church, George M.; Mandell, Daniel J.; Lajoie, Marc J.
2017-12-05
Recombinant cells and recombinant organisms persistently expressing nonstandard amino acids (NSAAs) are provided. Methods of making recombinant cells and recombinant organisms dependent on persistently expressing NSAAs for survival are also provided. These methods may be used to make safe recombinant cells and recombinant organisms and/or to provide a selective pressure to maintain one or more reassigned codon functions in recombinant cells and recombinant organisms.
Approximations of Fuzzy Systems
Directory of Open Access Journals (Sweden)
Vinai K. Singh
2013-03-01
A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as establishing the existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem using exponential membership functions.
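A minimal sketch of the Wang-style fuzzy system the abstract describes, with Gaussian membership functions, product inference and centroid defuzzification; the target function, rule count and rule width are illustrative assumptions, not taken from the paper:

```python
# Wang-style fuzzy system: with one rule per center x_i carrying output value
# y_i, product inference + centroid defuzzification collapse to a normalized
# weighted average of the rule outputs (a normalized-RBF form).
import math

def fuzzy_system(x, centers, values, sigma):
    """f(x) = sum_i values[i]*mu_i(x) / sum_i mu_i(x), mu_i Gaussian."""
    weights = [math.exp(-((x - c) / sigma) ** 2) for c in centers]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Illustrative target: approximate sin(x) on [0, pi] with 25 rules.
n = 25
centers = [math.pi * i / (n - 1) for i in range(n)]
values = [math.sin(c) for c in centers]
sigma = math.pi / (n - 1)  # width equal to the rule spacing

err = max(abs(fuzzy_system(k * math.pi / 200, centers, values, sigma)
              - math.sin(k * math.pi / 200)) for k in range(201))
assert err < 0.05  # uniform error; the boundary bias of the kernel dominates
```

Shrinking sigma with the rule spacing drives the uniform error to zero, which is the constructive content of the universal-approximation result.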
Evelina Leivada; Elena Papadopoulou; Natalia Pavlou
2017-01-01
Findings from the field of experimental linguistics have shown that a native speaker may judge a variant that is part of her grammar as unacceptable, but still use it productively in spontaneous speech. The process of eliciting acceptability judgments from speakers of non-standard languages is sometimes clouded by factors akin to prescriptive notions of grammatical correctness. It has been argued that standardization enhances the ability to make clear-cut judgments, while non-standardization ...
Work accident victims: a comparison between non-standard and standard workers in Belgium.
Alali, Hanan; Abdel Wahab, Magd; Van Hecke, Tanja; Braeckman, Lutgart
2016-04-01
The fast growth of non-standard employment in developed countries highlights the importance of studying the influence of contract type on workers' safety and health. The main purpose of our study is to investigate whether non-standard workers sustain more injuries than standard workers. Additionally, other risk factors for occupational accidents are investigated. Data from the Belgian surveys on work ability in 2009 and 2011 are used. During their annual occupational health examination, workers were asked to fill in a self-administered questionnaire. In total, 1886 complete responses were collected and analyzed using logistic regression. Temporary workers did not have higher injury rates than permanent workers [OR 0.5, 95% confidence interval 0.2-1.2]. Low-educated, less-experienced workers and those exposed to dangerous conditions are more frequent victims of occupational accidents. The present data do not support the hypothesis that non-standard workers have more injuries than standard workers. Our results on occupational accidents were derived from a non-representative sample of the Belgian workforce and cannot be generalized, owing to the heterogeneity in job organization and labor regulations between countries. Further research is needed to extend our findings and to seek other factors that may be associated with work accidents.
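The kind of quantity reported above (OR 0.5, 95% CI 0.2-1.2) can be sketched from a 2x2 exposure-by-injury table; the counts below are invented for illustration and are not the study's data:

```python
# Odds ratio and 95% Wald confidence interval from a 2x2 table; this is the
# single-covariate analogue of the logistic-regression output in the abstract.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = injured/uninjured among temporary workers;
    c, d = injured/uninjured among permanent workers."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo, hi = or_ * math.exp(-z * se), or_ * math.exp(z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(8, 192, 60, 740)  # hypothetical counts
assert lo < or_ < hi
```

An interval that straddles 1 (as here and in the study) means the data are compatible with no difference in injury risk between the two contract types.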
Nonstandard Analysis and Shock Wave Jump Conditions in a One-Dimensional Compressible Gas
Energy Technology Data Exchange (ETDEWEB)
Roy S. Baty, F. Farassat, John A. Hargreaves
2007-05-25
Nonstandard analysis is a relatively new area of mathematics in which infinitesimal numbers can be defined and manipulated rigorously like real numbers. This report presents a fairly comprehensive tutorial on nonstandard analysis for physicists and engineers with many examples applicable to generalized functions. To demonstrate the power of the subject, the problem of shock wave jump conditions is studied for a one-dimensional compressible gas. It is assumed that the shock thickness occurs on an infinitesimal interval and the jump functions in the thermodynamic and fluid dynamic parameters occur smoothly across this interval. To use conservation laws, smooth pre-distributions of the Dirac delta measure are applied whose supports are contained within the shock thickness. Furthermore, smooth pre-distributions of the Heaviside function are applied which vary from zero to one across the shock wave. It is shown that if the equations of motion are expressed in nonconservative form then the relationships between the jump functions for the flow parameters may be found unambiguously. The analysis yields the classical Rankine-Hugoniot jump conditions for an inviscid shock wave. Moreover, non-monotonic entropy jump conditions are obtained for both inviscid and viscous flows. The report shows that products of generalized functions may be defined consistently using nonstandard analysis; however, physically meaningful products of generalized functions must be determined from the physics of the problem and not the mathematical form of the governing equations.
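For reference, the classical Rankine-Hugoniot conditions that the nonstandard analysis recovers take the familiar textbook form (shock frame, ideal gas with ratio of specific heats γ; not transcribed from the report):

```latex
% Jump conditions across a 1-D inviscid shock; [q] = q_2 - q_1 denotes the
% jump in q, with rho the density, u the velocity and p the pressure.
\begin{aligned}
[\rho u] &= 0, \\
[\rho u^{2} + p] &= 0, \\
\left[\, u\left(\tfrac{1}{2}\rho u^{2} + \tfrac{\gamma p}{\gamma-1}\right) \right] &= 0 .
\end{aligned}
```

These express conservation of mass, momentum and energy flux across the infinitesimal shock interval described in the abstract.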
Antineutrino Oscillations and a Search for Non-standard Interactions with the MINOS
Energy Technology Data Exchange (ETDEWEB)
Isvan, Zeynep [Univ. of Pittsburgh, PA (United States)
2012-01-01
MINOS searches for neutrino oscillations using the disappearance of muon neutrinos from the NuMI beam at Fermilab between two detectors. The Near Detector, located near the source, measures the beam composition before flavor change occurs. The energy spectrum is measured again at the Far Detector after neutrinos travel a distance. The mixing angle and mass splitting between the second and third mass states are extracted from the energy dependent difference between the spectra at the two detectors. NuMI is able to produce an antineutrino-enhanced beam as well as a neutrino-enhanced beam. Collecting data in antineutrino-mode allows the direct measurement of antineutrino oscillation parameters. From the analysis of the antineutrino mode data we measure $|\Delta\bar{m}^{2}_{\text{atm}}| = 2.62^{+0.31}_{-0.28}\times10^{-3}\text{eV}^{2}$ and $\sin^{2}(2\bar{\theta}_{23}) = 0.95^{+0.10}_{-0.11}$, which is the most precise measurement of antineutrino oscillation parameters to date. A difference between neutrino and antineutrino oscillation parameters may indicate new physics involving interactions that are not part of the Standard Model, called non-standard interactions, that alter the apparent disappearance probability. Collecting data in neutrino and antineutrino mode independently allows a direct search for non-standard interactions. In this dissertation non-standard interactions are constrained by a combined analysis of neutrino and antineutrino datasets and no evidence of such interactions is found.
Spradley, M. Katherine; Jantz, Richard L
2016-07-01
Standard cranial measurements are commonly used for ancestry estimation; however, 3D digitizers have made cranial landmark data collection and geometric morphometric (GM) analyses more popular within forensic anthropology. Yet there has been little focus on which data type works best. The goal of the present research is to test the discrimination ability of standard and nonstandard craniometric measurements and data derived from GM analysis. A total of 31 cranial landmarks were used to generate 465 interlandmark distances, including a subset of 20 commonly used measurements, and to generate principal component scores from procrustes coordinates. All were subjected to discriminant function analysis to ascertain which type of data performed best for ancestry estimation of American Black and White and Hispanic males and females. The nonstandard interlandmark distances generated the highest classification rates for females (90.5%) and males (88.2%). Using nonstandard interlandmark distances over more commonly used measurements leads to better ancestry estimates for our current population structure. © 2016 American Academy of Forensic Sciences.
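The discriminant-function step can be sketched in its simplest one-dimensional form; all numbers below are invented for illustration and are not the study's craniometric data:

```python
# Toy two-group discriminant classification on a single interlandmark
# distance. With equal priors and (assumed) equal variances, Fisher's linear
# discriminant in one dimension reduces to a midpoint cut between group means;
# the full analysis in the abstract does this in 465 dimensions.
def fit_lda_1d(x0, x1):
    m0, m1 = sum(x0) / len(x0), sum(x1) / len(x1)
    cut = (m0 + m1) / 2.0
    sign = 1 if m1 > m0 else -1
    return lambda x: int(sign * (x - cut) > 0)  # 0 -> group 0, 1 -> group 1

group_a = [98.0, 101.5, 99.2, 100.8]    # hypothetical distances (mm)
group_b = [106.3, 108.1, 105.9, 107.4]  # hypothetical distances (mm)
classify = fit_lda_1d(group_a, group_b)
assert [classify(v) for v in group_a] == [0, 0, 0, 0]
assert [classify(v) for v in group_b] == [1, 1, 1, 1]
```

The reported classification rates come from applying this idea jointly to many interlandmark distances, where correlated measurements are handled through the pooled covariance matrix.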
Qian, Youhua; Chen, Shengmin
2010-10-01
In this paper, the homotopy analysis method (HAM) is presented to establish accurate approximate analytical solutions for multi-degree-of-freedom (MDOF) nonlinear coupled oscillators. The periodic solutions for the three-degree-of-freedom (3DOF) coupled van der Pol-Duffing oscillators are applied to illustrate the validity and great potential of this method. For given physical parameters of nonlinear systems and with different initial conditions, the frequency ω and displacements x1(t), x2(t) and x3(t) can be explicitly obtained. In addition, comparisons are conducted between the results obtained by the HAM and the numerical integration (i.e. Runge-Kutta) method. It is shown that the analytical solutions of the HAM are in excellent agreement with the numerical integration solutions, even if time t progresses to a certain large domain in the time history responses. Finally, the homotopy Pade technique is used to accelerate the convergence of the solutions.
Indian Academy of Sciences (India)
Loose-cluster approximation. Continuous curve: our theory; dashed curve: our simulation. The loose-cluster approximation not only captures the anomalous qualitative features but is also, quantitatively, quite accurate.
Bosma, Wieb
1990-01-01
The distribution is determined of some sequences that measure how well a number is approximated by its mediants (or intermediate continued fraction convergents). The connection with a theorem of Fatou, as well as a new proof of this, is given.
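The objects in this abstract can be made concrete with a short sketch; the example number (√2) and the approximation-quality check are standard textbook facts, not the paper's distributional results:

```python
# Convergents p_k/q_k of a continued fraction [a0; a1, a2, ...] via the
# standard recurrence p_k = a_k p_{k-1} + p_{k-2} (same for q). The mediants
# studied in the paper are the intermediate fractions
# (p_{k-1} + i*p_k)/(q_{k-1} + i*q_k) lying between consecutive convergents.
from fractions import Fraction
import math

def convergents(coeffs):
    p_prev, q_prev, p, q = 1, 0, coeffs[0], 1
    cs = [Fraction(p, q)]
    for a in coeffs[1:]:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        cs.append(Fraction(p, q))
    return cs

x = math.sqrt(2)
cs = convergents([1] + [2] * 10)   # sqrt(2) = [1; 2, 2, 2, ...]
# Every convergent p/q satisfies |x - p/q| < 1/q^2; mediants approximate less
# tightly, which is what the distributions determined in the paper quantify.
assert all(abs(x - c) < 1 / c.denominator**2 for c in cs)
```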
International Nuclear Information System (INIS)
Knobloch, A.F.
1980-01-01
A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.)
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research, including theoretical developments and new computational algorithms.
Approximation Behooves Calibration
DEFF Research Database (Denmark)
da Silva Ribeiro, André Manuel; Poulsen, Rolf
2013-01-01
Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.
For a new look at 'lexical errors': evidence from semantic approximations with verbs in aphasia.
Duvignau, Karine; Tran, Thi Mai; Manchon, Mélanie
2013-08-01
The ability to understand the similarity between two phenomena is fundamental for humans. Designated by the term analogy in psychology, this ability plays a role in the categorization of phenomena in the world and in the organisation of the linguistic system. The use of analogy in language often results in non-standard utterances, particularly in speakers with aphasia. These non-standard utterances are almost always studied in a nominal context and considered as errors. We propose a study of the verbal lexicon and present findings that measure, through an action-video naming task, the extent of verb-based non-standard utterances made by 17 speakers with aphasia ("la dame déshabille l'orange"/the lady undresses the orange, "elle casse la tomate"/she breaks the tomato). The first results we have obtained allow us to consider this type of utterance from a new perspective: we propose to eliminate the label of "error", suggesting that such utterances may be viewed as semantic approximations based upon a relationship of inter-domain synonymy, ingrained at the heart of the lexical system.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
Improved Approximation Algorithm for
Byrka, Jaroslaw; Li, S.; Rybicki, Bartosz
2014-01-01
We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that for any constant k, in polynomial time, delivers solutions of
Prestack wavefield approximations
Alkhalifah, Tariq
2013-09-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
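The generic Padé idea used here (numerator one order higher than the denominator) can be illustrated on a textbook case; the DSR-specific expansions themselves are not reproduced. The [2/1] Padé approximant of exp(x) is (1 + 2x/3 + x²/6)/(1 - x/3), which matches the Taylor series of exp through order 3:

```python
# Compare the [2/1] Pade approximant of exp(x) against the degree-3 Taylor
# polynomial: both match exp to third order, but the rational form is
# systematically closer away from the expansion point.
import math

def pade21_exp(x):
    return (1 + 2*x/3 + x*x/6) / (1 - x/3)

def taylor3_exp(x):
    return 1 + x + x*x/2 + x**3/6

x = 0.5
pade_err = abs(pade21_exp(x) - math.exp(x))
taylor_err = abs(taylor3_exp(x) - math.exp(x))
assert pade_err < taylor_err  # the rational form wins at the same order
```

The same mechanism, a denominator that mimics the function's analytic structure, is what normalizes the horizontal-wave singularity in the DSR setting.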
Generalized Approximate Message Passing
DEFF Research Database (Denmark)
Oxvig, Christian Schou; Arildsen, Thomas; Larsen, Torben
2017-01-01
This tech report details a collection of results related to the Generalised Approximate Message Passing (GAMP) algorithm. It is a summary of the results that the authors have found critical in understanding the GAMP algorithm. In particular, emphasis is on the details that are crucial in implemen...
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
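The Robbins-Monro process the paper analyzes can be sketched in a few lines; the target function, noise level and gain constant below are illustrative choices:

```python
# Robbins-Monro stochastic approximation: x_{n+1} = x_n - a_n * Y_n, where Y_n
# is a noisy observation of g(x_n) and the gains a_n = c/n satisfy the classic
# conditions sum a_n = infinity, sum a_n^2 < infinity.
import random

random.seed(0)  # fixed seed so the run is reproducible

def robbins_monro(noisy_g, x0, steps, c=1.0):
    x = x0
    for n in range(1, steps + 1):
        x -= (c / n) * noisy_g(x)
    return x

# Approximate the root x* = 2 of g(x) = x - 2 from noisy evaluations.
root = robbins_monro(lambda x: (x - 2.0) + random.gauss(0.0, 0.1),
                     x0=0.0, steps=20000)
assert abs(root - 2.0) < 0.05
```

The convergence conditions on the gains a_n are exactly the iteration-coefficient conditions whose necessity and sufficiency the paper establishes.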
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg
2005-01-01
The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling for word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results, are provided. An exponential family approximation of the DCM...
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing, double square-root (DSR), and common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
Topology, calculus and approximation
Komornik, Vilmos
2017-01-01
Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...
Fragments of approximate counting
Czech Academy of Sciences Publication Activity Database
Buss, S.R.; Kolodziejczyk, L. A.; Thapen, Neil
2014-01-01
Roč. 79, č. 2 (2014), s. 496-525 ISSN 0022-4812 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : approximate counting * bounded arithmetic * ordering principle Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=9287274&fileId=S0022481213000376
Optimization and approximation
Pedregal, Pablo
2017-01-01
This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Topics in Metric Approximation
Leeb, William Edward
This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
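The "snowflake" construction highlighted above can be checked directly: raising a metric to a power 0 < α < 1 yields another metric, because t → t^α is subadditive. The point set below is random illustrative data:

```python
# Numerically verify the triangle inequality for a snowflaked Euclidean
# metric d(p, q)^alpha with alpha = 1/2 on a small random planar point set.
import itertools
import random

random.seed(1)
points = [(random.random(), random.random()) for _ in range(8)]

def d(p, q, alpha=1.0):
    euclid = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return euclid ** alpha  # alpha < 1 gives the snowflake metric

for a, b, c in itertools.permutations(points, 3):
    assert d(a, c, 0.5) <= d(a, b, 0.5) + d(b, c, 0.5) + 1e-12
```

Snowflaking shrinks large distances relative to small ones, which is one intuition for why snowflake metrics embed into tree metrics with distortion independent of the number of points, as the thesis shows.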
Determinants of teenage smoking, with special reference to non-standard family background.
Isohanni, M; Moilanen, I; Rantakallio, P
1991-04-01
The prevalence of teenage smoking in a cohort of 12,058 subjects born in northern Finland in 1966 is discussed in terms of its social and family determinants, especially in "non-standard" families (with one or more of the parents absent for at least part of the child's upbringing). The prevalence of experimental or daily smoking was 67.4%, the rate being 65.5% in the standard, two-parent families and 75.5% in the non-standard families, the difference being statistically significant (p less than 0.001). The corresponding prevalence of daily smoking was 6.4%, but the rate was 5.1% in standard families and 12.1% in non-standard families (p less than 0.001). An elevated risk of smoking existed among adolescents who had experienced death of their father or divorce of their parents and among girls who had experienced death of their mother. Maternal smoking during pregnancy and maternal age under 20 years at the time of delivery increased the risk, while being the first-born child reduced it. Among family factors existing in 1980, paternal smoking increased the risk for both sexes, while more than three siblings, mother's unemployment or gainful employment (i.e. not a housewife) were associated with smoking by the boys as was urban living, and for the girls migration by the family to a town. The results suggest that juvenile smoking may be a kind of indicator of possible problems experienced by the parents and/or the adolescents themselves with respect to parenthood and family development.
Directory of Open Access Journals (Sweden)
ES Fourie
2008-12-01
The current labour market has many forms of employment relations that differ from full-time employment. "Atypical," "non-standard," or even "marginal" are terms used to describe these new workers and include, amongst others, part-time work, contract work, self-employment, temporary, fixed-term, seasonal, casual and piece-rate work, employees supplied by employment agencies, home workers and those employed in the informal economy. These workers are often paid for results rather than time. Their vulnerability is linked in many instances to the absence of an employment relationship or the existence of a flimsy one. Most of these workers are unskilled or work in sectors with limited trade union organisation and limited coverage by collective bargaining, leaving them vulnerable to exploitation. They should, in theory, have the protection of current South African labour legislation, but in practice the unusual circumstances of their employment render the enforcement of their rights problematic. The majority of non-standard workers in South Africa are those previously disadvantaged by the apartheid regime, comprising women and unskilled black workers. The exclusion of these workers from labour legislation can be seen as discrimination, which is prohibited by almost all labour legislation in South Africa. This contribution illustrates how the concept of indirect discrimination can be an important tool used to provide labour protection to these workers. The purpose of this article is to explore the scope of the extension of labour rights to non-standard workers in the context of South African labour laws and the international framework.
Numerical simulation of non-standard tensile tests of thin metal foils
Bolzon, Gabriella; Shahmardani, Mahdieh
2018-01-01
The evolution of the fracture processes occurring in thin metal foils can be evidenced by tensile tests performed on samples of non-standard dimensions. The load versus displacement record of these experiments does not directly yield the local stress-strain relationship and the fracture characteristics of the investigated material. In fact, the overall response of thin foils is sensitive to local imperfections and to size and geometric effects. Simulation models of the performed tests can support the interpretation of the experimental results, provided that the most significant physical phenomena are captured. The present contribution focuses on the role of modelling details on the numerical output that can be obtained in this context.
CP-violation and non-standard interactions at the MOMENT
Energy Technology Data Exchange (ETDEWEB)
Bakhti, Pouya; Farzan, Yasaman [Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)
2016-07-21
To measure the last unknown 3ν oscillation parameter (δ), several long baseline neutrino experiments have been designed or proposed. Recently it has been shown that turning on neutral current Non-Standard Interactions (NSI) of neutrinos with matter can induce degeneracies that may even hinder the proposed state-of-the-art DUNE long baseline experiment from measuring the value of δ. We study how the result of the proposed MOMENT experiment with a baseline of 150 km and 200 MeV
Ethical and legal issues related to the donation and use of nonstandard organs for transplants.
Cronin, Antonia J
2013-12-01
Transplantation of nonstandard or expanded criteria donor organs creates several potential ethical and legal problems in terms of consent and liability, and new challenges for research and service development; it highlights the need for a system of organ donation that responds to an evolving ethical landscape and incorporates scientific innovation to meet the needs of recipients, but which also safeguards the interests and autonomy of the donor. In this article, the use of deceased donor organs for transplants that fail to meet standard donor criteria and the legitimacy of interventions and research aimed at optimizing their successful donation are discussed. Copyright © 2013. Published by Elsevier Inc.
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
International Nuclear Information System (INIS)
El Sawi, M.
1983-07-01
A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to lead in the order of iteration, and thus may accelerate the convergence of the solution. The method is also extended to the solution of inhomogeneous equations. (author)
Cyclic approximation to stasis
Directory of Open Access Journals (Sweden)
Stewart D. Johnson
2009-06-01
Full Text Available Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.
Approximate Euclidean Ramsey theorems
Directory of Open Access Journals (Sweden)
Adrian Dumitrescu
2011-04-01
Full Text Available According to a classical result of Szemerédi, every dense subset of {1,2,…,N} contains an arbitrarily long arithmetic progression, if N is large enough. Its analogue in higher dimensions, due to Fürstenberg and Katznelson, says that every dense subset of {1,2,…,N}^d contains an arbitrarily large grid, if N is large enough. Here we generalize these results for separated point sets on the line and, respectively, in Euclidean space: (i) every dense separated set of points in some interval [0,L] on the line contains an arbitrarily long approximate arithmetic progression, if L is large enough; (ii) every dense separated set of points in the d-dimensional cube [0,L]^d in R^d contains an arbitrarily large approximate grid, if L is large enough. A further generalization for any finite pattern in R^d is also established. The separation condition is shown to be necessary for such results to hold. In the end we show that every sufficiently large point set in R^d contains an arbitrarily large subset of almost collinear points. No separation condition is needed in this case.
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
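The likelihood-free idea described in this abstract can be sketched with the simplest ABC variant, a rejection sampler: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands close to the observed one. The toy model and all names below are illustrative, not taken from the text:

```python
import random

def abc_rejection(observed_stat, prior_sampler, simulate, eps, n_draws):
    """Minimal ABC rejection sampler: keep parameter draws whose simulated
    summary statistic falls within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()               # draw a parameter from the prior
        stat = simulate(theta)                # forward-simulate, summarize
        if abs(stat - observed_stat) <= eps:  # distance-based acceptance
            accepted.append(theta)
    return accepted

random.seed(1)
# Toy problem: infer the mean of a Gaussian (known sd = 1) from its sample mean.
prior = lambda: random.uniform(-5.0, 5.0)
sim = lambda th: sum(random.gauss(th, 1.0) for _ in range(50)) / 50
posterior = abc_rejection(1.3, prior, sim, eps=0.2, n_draws=2000)
```

The accepted draws approximate the posterior; shrinking `eps` improves the approximation at the cost of a lower acceptance rate, which is exactly the accuracy/cost trade-off the abstract says must be assessed carefully.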
International Nuclear Information System (INIS)
Gago, A.M.; Guzzo, M.M.; Peres, O.L.G.; Holanda, P.C. de; Nunokawa, H.; Pleitez, V.; Zukanovich Funchal, R.
2002-01-01
What can we learn from solar neutrino observations? Is there any solution to the solar neutrino anomaly which is favored by the present experimental panorama? After SNO results, is it possible to affirm that neutrinos have mass? In order to answer such questions we analyze the current available data from the solar neutrino experiments, including the recent SNO result, in view of many acceptable solutions to the solar neutrino problem based on different conversion mechanisms, for the first time using the same statistical procedure. This allows us to make a direct comparison of the goodness of fit among different solutions, from which we can assess the current status of each proposed dynamical mechanism. These solutions are based on different assumptions: (a) neutrino mass and mixing, (b) a nonvanishing neutrino magnetic moment, (c) the existence of nonstandard flavor-changing and nonuniversal neutrino interactions, and (d) a tiny violation of the equivalence principle. We investigate the quality of the fit provided by each one of these solutions not only to the total rate measured by all the solar neutrino experiments but also to the recoil electron energy spectrum measured at different zenith angles by the Super-Kamiokande Collaboration. We conclude that several nonstandard neutrino flavor conversion mechanisms provide a very good fit to the experimental data which is comparable with (or even slightly better than) the most famous solution to the solar neutrino anomaly based on the neutrino oscillation induced by mass
Unemployment, Nonstandard Employment, and Fertility: Insights From Japan's "Lost 20 Years".
Raymo, James M; Shibata, Akihisa
2017-12-01
In this study, we examine relationships of unemployment and nonstandard employment with fertility. We focus on Japan, a country characterized by a prolonged economic downturn, significant increases in both unemployment and nonstandard employment, a strong link between marriage and childbearing, and pronounced gender differences in economic roles and opportunities. Analyses of retrospective employment, marriage, and fertility data for the period 1990-2006 indicate that changing employment circumstances for men are associated with lower levels of marriage, while changes in women's employment are associated with higher levels of marital fertility. The latter association outweighs the former, and results of counterfactual standardization analyses indicate that Japan's total fertility rate would have been 10 % to 20 % lower than the observed rate after 1995 if aggregate- and individual-level employment conditions had remained unchanged from the 1980s. We discuss the implications of these results in light of ongoing policy efforts to promote family formation and research on temporal and regional variation in men's and women's roles within the family.
Non-standard interactions with high-energy atmospheric neutrinos at IceCube
Energy Technology Data Exchange (ETDEWEB)
Salvado, Jordi; Mena, Olga; Palomares-Ruiz, Sergio; Rius, Nuria [Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València,Apartado de Correos 22085, E-46071 Valencia (Spain)
2017-01-31
Non-standard interactions in the propagation of neutrinos in matter can lead to significant deviations from expectations within the standard neutrino oscillation framework, and atmospheric neutrino detectors have been considered to set constraints. However, most previous works have focused on relatively low-energy atmospheric neutrino data. Here, we consider the one-year high-energy through-going muon data in IceCube, which has already been used to search for light sterile neutrinos, to constrain new interactions in the μτ-sector. In our analysis we include several systematic uncertainties on both the atmospheric neutrino flux and the detector properties, which are accounted for via nuisance parameters. After considering different primary cosmic-ray spectra and hadronic interaction models, we improve over previous analyses by using the latest data and by showing that systematics currently have little effect on the bound on the off-diagonal ε_μτ, with the 90% credible interval given by −6.0×10^−3 < ε_μτ < 5.4×10^−3, comparable to previous results. In addition, we also estimate the expected sensitivity after 10 years of collected data in IceCube and study the precision at which non-standard parameters could be determined for the case of ε_μτ near its current bound.
Vacuum oscillation solution to the solar neutrino problem in standard and nonstandard pictures
International Nuclear Information System (INIS)
Berezhiani, Z.G.; Rossi, A.
1995-01-01
The neutrino long wavelength (just-so) oscillation is reexamined as a solution to the solar neutrino problem. We consider the just-so scenario in various cases: in the framework of the solar models with a relaxed prediction of the boron neutrino flux, as well as in the presence of nonstandard weak-range interactions between neutrinos and matter constituents. We show that the fit of the experimental data in the just-so scenario is not very good for any reasonable value of the 8B neutrino flux, but it substantially improves if the nonstandard τ-neutrino-electron interaction is included. These new interactions could also remove the conflict of the just-so picture with the shape of the SN 1987A neutrino spectrum. Special attention is devoted to the potential of the future real-time solar neutrino detectors such as Super-Kamiokande, SNO, and BOREXINO, which could provide model-independent tests of the just-so scenario. In particular, these imply a specific deformation of the original solar neutrino energy spectra and a time variation of the intermediate-energy monochromatic neutrino (7Be and pep) signals.
Low, Jonathan; Hogan, S John
2008-10-01
In planar nematic electrohydrodynamic convection (EHC), a microscopic liquid crystal cell is driven by a homogeneous ac electric field which, if strong enough, causes the fluid to destabilize into a regular pattern-forming state. We consider asymmetric electric fields E(t) = E(t+T) ≠ -E(t+T/2), which leads to the possibility of three different types of instabilities at onset: conductive, dielectric, and subharmonic. The first two are already well known, as they are easily produced when the system is driven by symmetric electric fields; the third can only occur when the electric field symmetry is broken. We present theoretical results on EHC using linear stability analysis and Floquet theory. We consider rigid and free boundary conditions, extending the model to two Fourier modes in the vertical plane, including flexoelectricity, and using standard (nematic electric conductivity anisotropy sigma_a > 0 and dielectric anisotropy epsilon_a < 0) and nonstandard (sigma_a < 0) material parameters. We make full use of a three-dimensional linear model where two mutually perpendicular planar wave numbers q and p can be varied. Our results show that there is a qualitative difference between the boundary conditions used, which is also dependent on how many vertical Fourier modes were used in the model. We have obtained threshold values favoring oblique rolls in the subharmonic and dielectric regimes in parameter space. For the nonstandard EHC parameter values, both the conduction and subharmonic regimes disappear and only the dielectric threshold exists.
UO2 fuel pellets fabrication via Spark Plasma Sintering using non-standard molybdenum die
Papynov, E. K.; Shichalin, O. O.; Mironenko, A. Yu; Tananaev, I. G.; Avramenko, V. A.; Sergienko, V. I.
2018-02-01
The article investigates spark plasma sintering (SPS) of commercial uranium dioxide (UO2) powder of ceramic origin into highly dense fuel pellets using a non-standard die instead of the usual graphite die. An alternative, previously unreported method is suggested for fabricating UO2 fuel pellets by SPS that excludes the typical problems related to undesirable carbon diffusion. The influence of SPS parameters on the chemical composition and quality of UO2 pellets has been studied. The main advantages and drawbacks of SPS consolidation of UO2 in a non-standard molybdenum die are also identified. The method is very promising due to the high quality of the final product (density 97.5-98.4% of theoretical, absence of carbon traces, mean grain size below 3 μm) and mild sintering conditions (temperature 1100 °C, pressure 141.5 MPa, sintering time 25 min). The results are of interest for the development and possible application of SPS in large-scale production of nuclear ceramic fuel.
The quasilocalized charge approximation
International Nuclear Information System (INIS)
Kalman, G J; Golden, K I; Donko, Z; Hartmann, P
2005-01-01
The quasilocalized charge approximation (QLCA) has been used for some time as a formalism for the calculation of the dielectric response and for determining the collective mode dispersion in strongly coupled Coulomb and Yukawa liquids. The approach is based on a microscopic model in which the charges are quasilocalized on a short-time scale in local potential fluctuations. We review the conceptual basis and theoretical structure of the QLC approach together with recent results from molecular dynamics simulations that corroborate and quantify the theoretical concepts. We also summarize the major applications of the QLCA to various physical systems, combined with the corresponding results of the molecular dynamics simulations, and point out the general agreement and instances of disagreement between the two.
International Nuclear Information System (INIS)
Ise, Takeharu
1976-12-01
Review studies have been made on algorithms of numerical analysis and on benchmark tests of point kinetics and quasistatic approximate kinetics computer codes, in order to perform benchmark tests of space-dependent neutron kinetics codes efficiently. Point kinetics methods have been improved since they can be directly applied to the factorization procedures. Methods based on Pade rational functions give numerically stable solutions, and matrix-splitting methods are of interest because they are applicable to direct integration methods. An improved quasistatic (IQ) approximation is the best and most practical method; it is numerically shown that the IQ method has high stability and precision, with a computation time about one tenth that of the direct method. The IQ method is applicable to thermal as well as fast reactors, and is especially suited to fast reactors, for which many time steps are necessary. Two-dimensional diffusion kinetics codes are the most practicable, though three-dimensional diffusion kinetics codes and two-dimensional transport kinetics codes also exist. On developing a space-dependent kinetics code, in any case, it is desirable to improve the method so as to obtain a high computing speed for solving the static diffusion and transport equations. (auth.)
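The numerical stability of Pade rational functions mentioned in this abstract can be illustrated on a toy initial value problem. Below is a minimal sketch (not code from the reviewed kinetics programs) of the (1,1) diagonal Pade approximant of the evolution operator, exp(hA) ≈ (I − hA/2)⁻¹(I + hA/2), applied to a harmonic oscillator written as a first-order system; all names are illustrative:

```python
import numpy as np

def pade11_step(A, y, h):
    """One step of the (1,1) diagonal Pade (Crank-Nicolson) approximation to
    the evolution operator exp(h*A): solve (I - hA/2) y_new = (I + hA/2) y."""
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * h * A, (I + 0.5 * h * A) @ y)

# Toy initial value problem: the harmonic oscillator y'' = -y written as a
# first-order system, integrated to t = 10 in 1000 steps.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = np.array([1.0, 0.0])
h = 0.01
for _ in range(1000):
    y = pade11_step(A, y, h)
# exact solution at t = 10 is (cos 10, -sin 10)
```

The (1,1) approximant is A-stable, which is one reason Pade-based schemes yield the numerically stable kinetics solutions referred to above.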
Approximate quantum Markov chains
Sutter, David
2018-01-01
This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing approach, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approach. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms retain limitations for horizontally traveling waves; however, stability constraints can readily be enforced to avoid such singularities. In fact, an expansion over reflection angle can also avoid these singularities by requiring the source and receiver velocities to differ, since expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
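As a point of reference for the double-square-root construction mentioned above, the prestack traveltime in a homogeneous medium of velocity $v$, for a source at $x_s$, a receiver at $x_r$, and an image point at lateral position $x$ and depth $z$, is the sum of two one-way square-root terms (a standard textbook form, not the paper's eikonal expansion):

```latex
t(x_s, x_r) = \frac{1}{v}\sqrt{z^{2} + (x - x_s)^{2}} \;+\; \frac{1}{v}\sqrt{z^{2} + (x - x_r)^{2}}
```

The singularity for horizontally traveling waves discussed in the abstract corresponds to the regime of vanishing depth and large offset, where derivatives of these square-root terms blow up.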
YILMAZ, DENİZ
2015-01-01
Assuming neutrinos are Majorana particles, neutrino oscillation is examined in the case of spin flavor precession (SFP) and nonstandard neutrino interactions (NSI). The combined effect of SFP and NSI is found to be non-negligible for neutrino oscillations.
Relationship between non-standard work arrangements and work-related accident absence in Belgium.
Alali, Hanan; Braeckman, Lutgart; Van Hecke, Tanja; De Clercq, Bart; Janssens, Heidi; Wahab, Magd Abdel
2017-03-28
The main objective of this study is to examine the relationship between indicators of non-standard work arrangements, including precarious contract, long working hours, multiple jobs, and shift work, and work-related accident absence, using a representative Belgian sample and considering several socio-demographic and work characteristics. This study was based on the data of the fifth European Working Conditions Survey (EWCS). For the analysis, the sample was restricted to 3343 respondents from Belgium who were all employed workers. The associations between non-standard work arrangements and work-related accident absence were studied with multivariate logistic regression modeling techniques while adjusting for several confounders. During the last 12 months, about 11.7% of workers were absent from work because of a work-related accident. A multivariate regression model showed an increased injury risk for those performing shift work (OR 1.546, 95% CI 1.074-2.224). The relationship between contract type and occupational injuries was not significant (OR 1.163, 95% CI 0.739-1.831). Furthermore, no statistically significant differences were observed for those performing long working hours (OR 1.217, 95% CI 0.638-2.321) and those performing multiple jobs (OR 1.361, 95% CI 0.827-2.240) in relation to work-related accident absence. Those who rated their health as bad, low-educated workers, workers from the construction sector, and those exposed to biomechanical (BM) hazards were more frequent victims of work-related accident absence. No significant gender difference was observed. The indicators of non-standard work arrangements under this study, except shift work, were not significantly associated with work-related accident absence. To reduce the burden of occupational injuries, not only risk reduction strategies and interventions are needed but also policy efforts are to be undertaken to limit shift work. In general, preventive measures and more training on the job are needed to
Non-Unitarity, sterile neutrinos, and Non-Standard neutrino Interactions
Blennow, Mattias; Fernandez-Martinez, Enrique; Hernandez-Garcia, Josu; Lopez-Pavon, Jacobo
2017-04-27
The simplest Standard Model extension to explain neutrino masses involves the addition of right-handed neutrinos. At some level, this extension will impact neutrino oscillation searches. In this work we explore the differences and similarities between the case in which these neutrinos are kinematically accessible (sterile neutrinos) or not (mixing matrix non-unitarity). We clarify apparent inconsistencies in the present literature when using different parametrizations to describe these effects and recast both limits in the popular neutrino non-standard interaction (NSI) formalism. We find that, in the limit in which sterile oscillations are averaged out at the near detector, their effects at the far detector coincide with non-unitarity at leading order, even in presence of a matter potential. We also summarize the present bounds existing in both limits and compare them with the expected sensitivities of near-future facilities taking the DUNE proposal as a benchmark. We conclude that non-unitarity effects ...
Canonical integration and analysis of periodic maps using non-standard analysis and Lie methods
Energy Technology Data Exchange (ETDEWEB)
Forest, E.; Berz, M.
1988-06-01
We describe a method and a way of thinking which is ideally suited for the study of systems represented by canonical integrators. Starting with the continuous description provided by the Hamiltonians, we replace it by a succession of preferably canonical maps. The power series representation of these maps can be extracted with a computer implementation of the tools of Non-Standard Analysis and analyzed by the same tools. For a nearly integrable system, we can define a Floquet ring in a way consistent with our needs. Using the finite time maps, the Floquet ring is defined only at the locations s_i where one perturbs or observes the phase space. At most the total number of locations is equal to the total number of steps of our integrator. We can also produce pseudo-Hamiltonians which describe the motion induced by these maps. 15 refs., 1 fig.
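The idea of replacing a continuous Hamiltonian flow by a succession of canonical maps can be illustrated with the simplest possible example: a first-order symplectic (canonical) map for the pendulum Hamiltonian H = p²/2 − cos q. This is a generic sketch of the concept, not the integrator of the paper:

```python
import math

def symplectic_euler_map(q, p, h):
    """One finite-time canonical map approximating the pendulum flow
    H = p^2/2 - cos(q), via the first-order symplectic Euler scheme."""
    p = p - h * math.sin(q)   # kick: momentum update from the old position
    q = q + h * p             # drift: position update from the new momentum
    return q, p

# Iterating the map: the energy error stays bounded rather than drifting
# secularly, the hallmark of composing canonical (symplectic) maps.
q, p = 1.0, 0.0
H0 = 0.5 * p * p - math.cos(q)
for _ in range(10000):
    q, p = symplectic_euler_map(q, p, 0.05)
H = 0.5 * p * p - math.cos(q)
```

A non-canonical update (e.g. explicit Euler on both variables) would instead show steady energy growth over the same number of iterations.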
Directory of Open Access Journals (Sweden)
Yu-Feng Li
2014-11-01
Full Text Available We discuss reactor antineutrino oscillations with non-standard interactions (NSIs) at the neutrino production and detection processes. The neutrino oscillation probability is calculated with a parametrization of the NSI parameters obtained by splitting them into the averages and differences of the production and detection processes, respectively. The average parts induce constant shifts of the neutrino mixing angles from their true values, and the difference parts can generate energy- and baseline-dependent corrections to the initial mass-squared differences. We stress that only the shifts of the mass-squared differences are measurable in reactor antineutrino experiments. Taking the Jiangmen Underground Neutrino Observatory (JUNO) as an example, we analyze how NSIs influence the standard neutrino measurements and to what extent we can constrain the NSI parameters.
Study of nonstandard charged-current interactions at the MOMENT experiment
Tang, Jian; Zhang, Yibing
2018-02-01
The MuOn-decay MEdium baseline NeuTrino beam experiment (MOMENT) is a next-generation accelerator neutrino experiment which can be used to probe new physics beyond the Standard Model. We simulate neutrino oscillations in the presence of charged-current nonstandard neutrino interactions (CC-NSIs) at MOMENT. These NSIs could alter neutrino production and detection processes and interfere with neutrino oscillation channels. We give a perturbative discussion of oscillation channels at near and far detectors separately, and analyze parameter correlations under the impact of CC-NSIs. Taking δ_CP and θ_23 as an example, we find that CC-NSIs can induce a bias in precision measurements of the standard oscillation parameters. In addition, a combination of near and far detectors using Gd-doped water Cherenkov technology at MOMENT is able to provide good constraints on CC-NSIs affecting the neutrino production and detection processes.
Non-standard interactions and neutrinos from dark matter annihilation in the Sun
Demidov, S. V.
2018-02-01
We perform an analysis of the influence of non-standard neutrino interactions (NSI) on the neutrino signal from dark matter annihilations in the Sun. Taking experimentally allowed benchmark values for the matter NSI parameters, we show that the evolution of such neutrinos with energies at the GeV scale can be considerably modified. We simulate the propagation of neutrinos from the Sun to the Earth for realistic dark matter annihilation channels and find that the matter NSI can result in a correction of at most 30% to the signal rate of muon track events at neutrino telescopes. Present experimental bounds on dark matter from these searches thus remain robust in the presence of NSI within a considerable part of their allowed parameter space. At the same time, the electron neutrino flux from dark matter annihilation in the Sun can be changed by a factor of a few.
Kim, Ja Young; Lee, Joohee; Muntaner, Carles; Kim, Seung-Sup
2016-10-01
This study sought to examine whether nonstandard employment is associated with presenteeism as well as absenteeism among full-time employees in South Korea. We analyzed a cross-sectional survey of 26,611 full-time employees from the third wave of the Korean Working Conditions Survey in 2011. Experience of absenteeism and presenteeism during the past 12 months was assessed through self-reports. Employment condition was classified into six categories based on two contract types (parent firm and subcontract) and three contract durations [permanent (≥1 year, no fixed term), long term (≥1 year, fixed term), and short term (<1 year, fixed term)]. The associations of employment condition with absenteeism and presenteeism were examined after adjusting for covariates. Compared to parent firm-permanent employment, which has often been regarded as standard employment, absenteeism was not associated or was negatively associated with all nonstandard employment conditions except parent firm-long term employment (OR 1.88; 95 % CI 1.57, 2.26). However, presenteeism was positively associated with parent firm-long term (OR 1.64; 95 % CI 1.42, 1.91), subcontract-long term (OR 1.61; 95 % CI 1.12, 2.32), and subcontract-short term (OR 1.26; 95 % CI 1.02, 1.56) employment. Our results indicate that most nonstandard employment may increase the risk of presenteeism, but not absenteeism. These results suggest that previous findings about the protective effects of nonstandard employment on absenteeism may be explained by nonstandard workers being forced to work when sick.
International Conference Approximation Theory XV
Schumaker, Larry
2017-01-01
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-07
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for approximation in the hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
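For matrices (order-two tensors), the rank-structured approximation referred to above reduces to the truncated SVD, which by the Eckart-Young theorem is the best rank-r approximation in the Frobenius and spectral norms. A minimal sketch of that base case (illustrative only, not the hierarchical-tensor algorithms of the talk):

```python
import numpy as np

def low_rank(A, r):
    """Best rank-r approximation of A (Eckart-Young) via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep the r leading singular triplets; the discarded tail bounds the error.
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# A matrix sampled from a smooth bivariate function is numerically low-rank.
x = np.linspace(0.0, 1.0, 100)
A = np.exp(-np.subtract.outer(x, x) ** 2)   # samples of f(x, y) = exp(-(x-y)^2)
A5 = low_rank(A, 5)
err = np.linalg.norm(A - A5) / np.linalg.norm(A)
```

Hierarchical tensor formats apply this compression recursively across groups of variables, which is what keeps high-dimensional approximation tractable.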
Comparative Study of Approximate Multipliers
Masadeh, Mahmoud; Hasan, Osman; Tahar, Sofiene
2018-01-01
Approximate multipliers are widely being advocated for energy-efficient computing in applications that exhibit an inherent tolerance to inaccuracy. However, the inclusion of accuracy as a key design parameter, besides performance, area and power, makes the identification of the most suitable approximate multiplier quite challenging. In this paper, we identify three major decision making factors for the selection of an approximate multiplier circuit: (1) the type of approximate full adder...
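The accuracy/cost trade-off behind approximate multipliers can be illustrated with a simple operand-truncation scheme (a generic design, not necessarily among those surveyed in the paper; `approx_multiply` and its parameters are illustrative):

```python
def approx_multiply(a, b, k):
    """Truncation-based approximate unsigned multiplier: drop the k least
    significant bits of each operand, multiply exactly, then shift back.
    This shrinks the partial-product array at the cost of a bounded,
    always-non-negative underestimation error."""
    return ((a >> k) * (b >> k)) << (2 * k)

# Error for one operand pair: the relative error stays small as long as the
# truncated bits are insignificant relative to the operand magnitudes.
exact = 200 * 300
approx = approx_multiply(200, 300, 4)
rel_err = (exact - approx) / exact
```

In hardware terms, each truncated bit removes a row/column of partial products, cutting area and power, which is exactly the accuracy-versus-cost axis the abstract identifies as a selection criterion.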
Forms of Approximate Radiation Transport
Brunner, G
2002-01-01
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base
$\beta$-asymmetry measurements in nuclear $\beta$-decay as a probe for non-standard model physics
Roccia, S
2002-01-01
We propose to perform a series of measurements of the $\beta$-asymmetry parameter in the decay of selected nuclei, in order to investigate the presence of possible time reversal invariant tensor contributions to the weak interaction. The measurements have the potential to improve by a factor of about four on the present limits for such non-standard model contributions in nuclear $\beta$-decay.
CSIR Research Space (South Africa)
Leach, AR
2002-12-01
…those due to stress and stress damage and possibly increased rock deformation, geological conditions, mining considerations, and human factors. This permits a clearer identification of specific hazards and highlights their consequences and severity. Tables: 6.1 Classification of stress intensity; 6.2 Definitions of standard stopes; 6.3 Definitions of non-standard conditions categorised by stress, geological, mining practice and human factors; 6.4 Key roles to aid in mine planning and operations.
Quantitative portable gamma spectroscopy sample analysis for non-standard sample geometries
International Nuclear Information System (INIS)
Enghauser, M.W.; Ebara, S.B.
1997-01-01
Utilizing a portable spectroscopy system, a quantitative method for analysis of samples containing a mixture of fission and activation products in nonstandard geometries was developed. The method can be used with various sample and shielding configurations where analysis on a laboratory-based gamma spectroscopy system is impractical. The portable gamma spectroscopy method involves calibration of the detector and modeling of the sample and shielding to identify and quantify the radionuclides present in the sample. The method utilizes the intrinsic efficiency of the detector and the unattenuated gamma fluence rate at the detector surface per unit activity from the sample to calculate the nuclide activity and Minimum Detectable Activity (MDA). For a complex geometry, a computer code written for shielding applications (MICROSHIELD) is utilized to determine the unattenuated gamma fluence rate per unit activity at the detector surface. Lastly, the method is only applicable to nuclides which emit gamma rays and cannot be used for pure beta emitters. In addition, if sample self-absorption and shielding are significant, the attenuation will result in high MDAs for nuclides which solely emit low-energy gamma rays. The following presents the analysis technique and verification results demonstrating the accuracy of the method.
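For a bare point source, the activity reconstruction described above reduces to a one-line formula. The sketch below uses entirely hypothetical numbers (detector area, efficiency, gamma yield, geometry) and is not the paper's MICROSHIELD-based procedure, which handles complex geometries:

```python
import math

def activity_bq(net_count_rate, intrinsic_eff, det_area_cm2, gamma_yield,
                distance_cm, mu_cm=0.0, shield_cm=0.0):
    """Point-source sketch: the gamma fluence rate at the detector per unit
    activity is gamma_yield * exp(-mu * t) / (4 * pi * d^2); dividing the net
    peak count rate by that, the detector area, and the intrinsic efficiency
    gives the activity in Bq. (MICROSHIELD supplies the fluence term when the
    geometry is complex.)"""
    fluence_per_bq = (gamma_yield * math.exp(-mu_cm * shield_cm)
                      / (4.0 * math.pi * distance_cm ** 2))
    return net_count_rate / (intrinsic_eff * det_area_cm2 * fluence_per_bq)

# Hypothetical unshielded measurement: 5 net counts/s in the photopeak,
# 20% intrinsic efficiency, 10 cm^2 detector face, gamma yield 0.85, 30 cm away.
A = activity_bq(5.0, 0.2, 10.0, 0.85, 30.0)
```

Note how shielding attenuation (`mu_cm`, `shield_cm`) raises the inferred activity for the same count rate, which is also why MDAs grow for low-energy gamma emitters behind absorbers.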
Research of opportunistic behavior in the market of non-standard products
Directory of Open Access Journals (Sweden)
Natalya Sergeyevna Grigoryeva
2015-03-01
Objective: to study the characteristic features of opportunistic behavior in the markets of complex products not subject to standardization, by the example of rendering services during construction and assembling operations. Methods: analysis, synthesis, collection and description of empirical data. Results: a significant amount of theoretical material has been analyzed, budgets of construction and assembling operations have been analyzed, and normative documents have been studied. A poll of quantity surveyors, supervisors of construction and assembling operations, and consumers of services and works has been carried out. Differences were revealed in the kinds of opportunistic behavior in the non-standard products markets compared to the standard products markets; the results of those surveys were presented in earlier works by the author. Scientific novelty: it has been shown that, in the case of individual elaboration of each project, due to the uniqueness of the result of economic activity as a single complex, opportunistic behavior has such prerequisites as high costs for measurement and control of the quantity and quality of the rendered services and used materials, and a lack of professional competencies of the consumers in the sphere of activity of the producer. Practical value: the key provisions and conclusions of the article can be used in scientific and educational activity when reviewing the issues of manifestations of opportunism between economic subjects.
Directory of Open Access Journals (Sweden)
D. K. Papoulias
2015-01-01
In this work, we explore ν-nucleus processes from a nuclear theory point of view and obtain results with a high confidence level based on accurate nuclear structure cross-section calculations. Besides cross sections, the present study includes simulated signals expected to be recorded by nuclear detectors and differential event rates, as well as the total number of events predicted to be measured. Our original cross-section calculations are focused on measurable rates for the standard model process, but we also perform calculations for various channels of the nonstandard neutrino-nucleus reactions and obtain promising results within the current upper limits of the corresponding exotic parameters. We concentrate on the possibility of detecting (i) supernova neutrinos by using massive detectors like those of the GERDA and SuperCDMS dark matter experiments and (ii) laboratory neutrinos produced near the spallation neutron source facilities (at Oak Ridge National Lab) by the COHERENT experiment. Our nuclear calculations take advantage of the relevant experimental sensitivity and employ the severe bounds extracted for the exotic parameters entering the Lagrangians of various particle physics models, specifically those resulting from the charged lepton flavour violating μ⁻ → e⁻ experiments (Mu2e and COMET).
Nonstandard neutrino self-interactions in a supernova and fast flavor conversions
Dighe, Amol; Sen, Manibrata
2018-02-01
We study the effects of nonstandard self-interactions (NSSI) of neutrinos streaming out of a core-collapse supernova. We show that with NSSI, the standard linear stability analysis gives rise to linearly as well as exponentially growing solutions. For a two-box spectrum, we demonstrate analytically that flavor-preserving NSSI lead to a suppression of bipolar collective oscillations. In the intersecting four-beam model, we show that flavor-violating NSSI can lead to fast oscillations even when the angle between the neutrino and antineutrino beams is obtuse, which is forbidden in the standard model. This leads to the new possibility of fast oscillations in a two-beam system with opposing neutrino-antineutrino fluxes, even in the absence of any spatial inhomogeneities. Finally, we solve the full nonlinear equations of motion in the four-beam model numerically, and explore the interplay of fast and slow flavor conversions in the long-time behavior, in the presence of NSSI.
Directory of Open Access Journals (Sweden)
Araceli Henares-Molina
Grade II gliomas are slowly growing primary brain tumors that affect mostly young patients. Cytotoxic therapies (radiotherapy and/or chemotherapy) are used initially only for patients having a bad prognosis. These therapies are planned following the "maximum dose in minimum time" principle, i.e., the same schedule used for high-grade brain tumors, in spite of their very different behavior. These tumors transform after a variable time into high-grade gliomas, which significantly decreases the patient's life expectancy. In this paper we study mathematical models describing the growth of grade II gliomas in response to radiotherapy. We find that protracted metronomic fractionations, i.e., therapeutic schedules enlarging the time interval between low-dose radiotherapy fractions, may lead to better tumor control without an increase in toxicity. Other non-standard fractionations such as protracted or hypoprotracted schemes may also be beneficial. The potential survival improvement depends on the tumor's proliferation rate and can be of the order of years. A conservative metronomic scheme, while still being a suboptimal treatment, delays the time to malignant progression by at least one year when compared to the standard scheme.
Design and use of nonstandard tensile specimens for irradiated materials testing
International Nuclear Information System (INIS)
Panayotou, N.F.
1984-10-01
Miniature, nonstandard, tensile-type specimens have been developed for use in radiation effects experiments at high energy neutron sources where the useful radiation volume is as small as a few cubic centimeters. The end result of our development is a sheet-type specimen, 12.7 mm long with a 5.1 mm long, 1.0 mm wide gage section, which is typically fabricated from 0.25 mm thick sheet stock by a punching technique. Despite this miniature geometry, it has been determined that the data obtained using these miniature specimens are in good agreement with data obtained using much larger specimens. This finding indicates that miniature tensile specimen data may be used for engineering design purposes. Furthermore, it is clear that miniature tensile specimen technology is applicable to fields other than the study of radiation effects. This paper describes the miniature specimen technology which was developed and compares the data obtained from these miniature specimens to data obtained from much larger specimens. 9 figures
A Framework for Simulation of Aircraft Flyover Noise Through a Non-Standard Atmosphere
Arntzen, Michael; Rizzi, Stephen A.; Visser, Hendrikus G.; Simons, Dick G.
2012-01-01
This paper describes a new framework for the simulation of aircraft flyover noise through a non-standard atmosphere. Central to the framework is a ray-tracing algorithm which defines multiple curved propagation paths, if the atmosphere allows, between the moving source and listener. Because each path has a different emission angle, synthesis of the sound at the source must be performed independently for each path. The time delay, spreading loss and absorption (ground and atmosphere) are integrated along each path, and applied to each synthesized aircraft noise source to simulate a flyover. A final step assigns each resulting signal to its corresponding receiver angle for the simulation of a flyover in a virtual reality environment. Spectrograms of the results from a straight path and a curved path modeling assumption are shown. When the aircraft is at close range, the straight path results are valid. Differences appear especially when the source is relatively far away at shallow elevation angles. These differences, however, are not significant in common sound metrics. While the framework used in this work performs off-line processing, it is conducive to real-time implementation.
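For the straight-path case, the per-path quantities named above are elementary. The sketch below assumes a constant absorption coefficient and a homogeneous atmosphere (the framework instead integrates these terms along curved rays through a non-standard atmosphere):

```python
import math

def straight_path_propagation(distance_m, alpha_db_per_m=0.005, c=340.0):
    """Straight-ray sketch of the per-path terms the framework integrates:
    travel-time delay, spherical-spreading loss (re 1 m), and atmospheric
    absorption with an assumed constant coefficient alpha."""
    delay_s = distance_m / c                      # time delay source -> listener
    spreading_db = 20.0 * math.log10(distance_m)  # 1/r amplitude spreading law
    absorption_db = alpha_db_per_m * distance_m   # accumulated air absorption
    return delay_s, spreading_db + absorption_db

delay, loss_db = straight_path_propagation(1000.0)
```

In the ray-traced case, each of these terms becomes a line integral along the curved path, and each path's result is applied to an independently synthesized copy of the source signal.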
A Search for Non-Standard Model $W$ Helicity in Top Quark Decays
Energy Technology Data Exchange (ETDEWEB)
Kilminster, Benjamin John [Univ. of Rochester, NY (United States)
2004-01-01
The structure of the tbW vertex is probed by measuring the polarization of the W in t → W + b → l + ν + b. The invariant mass of the lepton and b quark measures the W decay angle, which in turn allows a comparison with the polarizations expected from different possible models for the spin properties of the tbW interaction. We measure the fraction by rate of W bosons produced with a V + A coupling in lieu of the Standard Model V - A to be f_{V + A} = [special characters omitted] (stat) ± 0.21 (sys). We assign a limit of f_{V + A} < 0.80 at 95% Confidence Level (CL). By combining this result with a complementary observable in the same data, we assign a limit of f_{V + A} < 0.61 at 95% CL. We find no evidence for a non-Standard Model tbW vertex.
Constraints on Non-Standard Gravitomagnetism by the Anomalous Perihelion Precession of the Planets
Directory of Open Access Journals (Sweden)
Luis Acedo
2014-09-01
In 2008, a team of astronomers reported an anomalous retrograde precession of the perihelion of Saturn amounting to \(\Delta \dot{\omega}_{\mathrm{SATURN}} = -0.006(2)\) arcsec per century (arcsec cy\(^{-1}\)). This unexplained precession was obtained after taking into account all classical and relativistic effects in the context of the highly refined EPM2008 ephemerides. More recent analyses have not confirmed this effect, but they have found similar discrepancies in other planets. Our objective in this paper is to discuss a non-standard model involving transversal gravitomagnetism generated by the Sun as a possible source of these potential anomalies, to be confirmed by further data analyses. In order to compute the Lense–Thirring perturbations induced by the suggested interaction, we should consider the orientation of the Sun's rotational axis in Carrington elements and the inclination of the planetary orbits with respect to the ecliptic plane. We find that an extra component of the gravitomagnetic field not predicted by General Relativity could explain the reported anomalies without conflicting with the Gravity Probe B experiment and the orbits of the geodynamics satellites.
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven...
Anytime classification by ontology approximation
Schlobach, S.; Blaauw, E.; El Kebir, M.; Ten Teije, A.; Van Harmelen, F.; Bortoli, S.; Hobbelman, M.C.; Millian, K.; Ren, Y.; Stam, S.; Thomassen, P.; Van Het Schip, R.; Van Willigem, W.
2007-01-01
Reasoning with large or complex ontologies is one of the bottle-necks of the Semantic Web. In this paper we present an anytime algorithm for classification based on approximate subsumption. We give the formal definitions for approximate subsumption, and show its monotonicity and soundness; we show
Some results in Diophantine approximation
DEFF Research Database (Denmark)
Pedersen, Steffen Højris
This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq, and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build; among others, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered in the formal Laurent series over F3. The first paper is on intrinsic Diophantine approximation in the Cantor set in the formal Laurent series over F3. The summary contains a short motivation, the results of the paper, and sketches of the proofs, mainly focusing on the ideas involved. The details of the proofs...
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
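The voting scheme claimed above can be sketched with a bitwise majority function. The "approximate circuits" below are hypothetical stand-ins that each err on a different input, so the voted output always matches the reference circuit, exactly as the claim requires:

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority vote of three circuit outputs: each output bit is
    the value held by at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)

# Hypothetical approximate circuits, each deviating from the reference on a
# different input value (so no two are wrong at once).
def reference(x):  return x + 1
def approx1(x):    return x + 1 if x != 3 else 0   # wrong only at x == 3
def approx2(x):    return x + 1 if x != 5 else 0   # wrong only at x == 5
def approx3(x):    return x + 1                    # happens to be exact

ok = all(majority(approx1(x), approx2(x), approx3(x)) == reference(x)
         for x in range(8))
```

The design choice mirrors triple modular redundancy, except that the replicas are deliberately inexact (and so potentially cheaper), while the voter restores the reference behavior for every input.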
Nonstandard Approach to Gravity for the Dark Sector of the Universe
Directory of Open Access Journals (Sweden)
Wojtek J. Zakrzewski
2013-02-01
We summarize the present state of research on the darkon fluid as a model for the dark sector of the Universe. Nonrelativistic massless particles are introduced as a realization of the Galilei group in an enlarged phase space. The additional degrees of freedom allow for a nonstandard, minimal coupling to gravity respecting Einstein's equivalence principle. Extended to a self-gravitating fluid, the Poisson equation for the gravitational potential contains a dynamically generated effective gravitational mass density of either sign. The equations of motion (EOMs) contain no free parameters and are invariant w.r.t. Milne gauge transformations. Fixing the gauge eliminates the unphysical degrees of freedom. The resulting Lagrangian possesses no free-particle limit. The particles it describes, darkons, exist only as fluid particles of a self-gravitating fluid. This darkon fluid realizes the zero-mass Galilean algebra extended by dilations with dynamical exponent z = 5/3. We reduce the EOMs to Friedmann-like equations and derive conserved quantities and a unique Hamiltonian dynamics by implementing dilation symmetry. By the Casimir of the Poisson-bracket (PB) algebra we foliate the phase space and construct a Lagrangian in reduced phase space. We solve the Friedmann-like equations with the transition redshift and the value of the Casimir as integration constants. We obtain a deceleration phase for the early Universe and an acceleration phase for the late Universe, in agreement with observations. Steady-state equations in the spherically symmetric case may model a galactic halo. Numerical solutions of a nonlinear differential equation for the gravitational potential lead to predictions for the dark matter (DM) part of the rotation curves (RCs) of galaxies in qualitative agreement with observational data. We also present a generally covariant generalization of the model.
Study of nonstandard auto-antibodies as prognostic markers in auto immune hepatitis in children
Directory of Open Access Journals (Sweden)
Mahmoud Nermine H
2009-07-01
Abstract. Background: Antibodies to chromatin and soluble liver antigen have been associated with a severe form of autoimmune hepatitis and/or poor treatment response, and may provide guidance in defining subsets of patients with different disease behaviors. The major clinical limitation of these antibodies is their low individual occurrence in patients with autoimmune hepatitis. Aim: To estimate the value of detection of these non-standard antibodies in autoimmune hepatitis as prognostic markers. Methods: Both antibodies were tested by enzyme immunoassay in 20 patients with autoimmune hepatitis. Results: Antibodies to soluble liver antigen were not detected in any of our patients. On the other hand, anti-chromatin antibodies were present in 50% (10/20). Antibodies to chromatin occurred more commonly in females than males (8/14 versus 2/6). Of the 14 patients who relapsed, 8 (57%) had anti-chromatin antibodies, while they were present in only 2 out of 6 (33.3%) non-relapsers. Anti-chromatin antibodies were found more often in patients with antinuclear (3/4) and anti-smooth-muscle antibodies (9/13) than in those with liver-kidney-microsomal antibodies (1/4) and those seronegative (1/4); i.e., they were positive in patients with type I (8/12, 66.6%) more than in those with type II (1/4, 25%) and those seronegative (1/4, 25%). Antibodies to chromatin are associated with high levels of γ-globulin, yet with no statistically significant difference between seropositive and seronegative counterparts (p = 0.65). Conclusion: Antibodies to chromatin may be superior to those to soluble liver antigen in predicting relapse and may be useful as a prognostic marker. Further studies with a larger number of patients and combined testing of more than one antibody will improve the performance parameters of these antibodies and define optimal testing conditions before they can be incorporated into management algorithms that project prognosis.
Approximate Implicitization Using Linear Algebra
Directory of Open Access Journals (Sweden)
Oliver J. D. Barrowclough
2012-01-01
We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
Rollout sampling approximate policy iteration
Dimitrakakis, C.; Lagoudakis, M.G.
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a
Shearlets and Optimally Sparse Approximations
DEFF Research Database (Denmark)
Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q
2012-01-01
of such functions. Recently, cartoon-like images were introduced in 2D and 3D as a suitable model class, and approximation properties were measured by considering the decay rate of the $L^2$ error of the best $N$-term approximation. Shearlet systems are to date the only representation systems which provide optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfies this optimality benchmark. This chapter shall serve as an introduction to and a survey of sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames, as well as a reference for the state of the art of this research field.
Mathematical algorithms for approximate reasoning
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. We focus on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
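As one concrete instance of the reviewed techniques, MYCIN-style certainty factors combine two positive pieces of evidence for the same hypothesis as cf1 + cf2(1 - cf1). This illustrates one of the listed schemes only (and only the both-positive case), not the proposed next-generation environment:

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two positive certainty factors for the same hypothesis,
    MYCIN-style: the second piece of evidence closes a fraction cf2 of the
    remaining uncertainty (1 - cf1). Commutative and bounded by 1."""
    return cf1 + cf2 * (1.0 - cf1)

# Two independent supporting pieces of evidence with CFs 0.6 and 0.5.
cf = combine_cf(0.6, 0.5)   # 0.6 + 0.5 * 0.4 = 0.8
```

Note the order-independence: combining 0.5 then 0.6 yields the same 0.8, one of the properties that makes the rule attractive despite its ad hoc probabilistic status, which is exactly the concern the paper raises.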
Approximate number sense theory or approximate theory of magnitude?
Content, Alain; Velde, Michael Vande; Adriano, Andrea
2017-01-01
Leibovich et al. argue that the evidence in favor of a perceptual mechanism devoted to the extraction of numerosity from visual collections is unsatisfactory and propose to replace it with an unspecific mechanism capturing approximate magnitudes from continuous dimensions. We argue that their representation of the evidence is incomplete and that their theoretical proposal is too vague to be useful.
Directory of Open Access Journals (Sweden)
Evelina Leivada
2017-07-01
Findings from the field of experimental linguistics have shown that a native speaker may judge a variant that is part of her grammar as unacceptable, but still use it productively in spontaneous speech. The process of eliciting acceptability judgments from speakers of non-standard languages is sometimes clouded by factors akin to prescriptive notions of grammatical correctness. It has been argued that standardization enhances the ability to make clear-cut judgments, while non-standardization may result in grammatical hybridity, often manifested in the form of functionally equivalent variants in the repertoire of a single speaker. Recognizing the importance of working with corpora of spontaneous speech, this work investigates patterns of variation in the spontaneous production of five neurotypical, adult speakers of a non-standard variety in terms of three variants, each targeting one level of linguistic analysis: syntax, morphology, and phonology. The results reveal the existence of functionally equivalent variants across speakers and levels of analysis. We first discuss these findings in relation to the notions of competing, mixed, and fused grammars, and then we flesh out the implications that different values of the same variant carry for parametric approaches to Universal Grammar. We observe that intraspeaker realizations of different values of the same variant within the same syntactic environment are incompatible with the 'triggering-a-single-value' approach of parametric models, but we argue that they are compatible with the concept of Universal Grammar itself. Since the analysis of these variants is ultimately a way of investigating the status of Universal Grammar primitives, we conclude that claims about the alleged unfalsifiability of (the contents of) Universal Grammar are unfounded.
Directory of Open Access Journals (Sweden)
M. Bishehniasar
2017-01-01
The demand of many scientific areas for the usage of fractional partial differential equations (FPDEs) to explain their real-world systems has been broadly identified. The solutions may portray dynamical behaviors of various particles, such as chemicals and cells. Approximate solutions to these equations are sought in order to overcome the mathematical complexity of modeling the relevant phenomena in nature. This research proposes a promising approximate-analytical scheme that is an accurate technique for solving a variety of noninteger partial differential equations (PDEs). The proposed strategy is based on approximating the derivative of fractional order and reducing the problem to a corresponding partial differential equation (PDE). Afterwards, the approximating PDE is solved by using a separation-of-variables technique. The method can be simply applied to nonhomogeneous problems and reduces the computational cost, while achieving an approximate-analytical solution that is in excellent agreement with the exact solution of the original problem. In addition, to demonstrate the efficiency of the method, it is compared with two finite difference methods, including a nonstandard finite difference (NSFD) method and a standard finite difference (SFD) technique, which are popular in the literature for solving engineering problems.
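The NSFD idea used for comparison can be illustrated on the simplest test equation u' = -λu: replacing the step size h in the denominator by the nonstandard function φ(h) = (1 - e^(-λh))/λ makes the scheme exact for this equation, while standard forward Euler is not. This is a textbook illustration of the NSFD principle, not the fractional-PDE scheme of the paper:

```python
import math

# Nonstandard vs standard finite differences for u' = -lam * u, u(0) = 1.
lam, h, steps = 2.0, 0.2, 25

def step_sfd(u):
    """Standard forward Euler: u_{n+1} = u_n - lam * h * u_n."""
    return u * (1.0 - lam * h)

def step_nsfd(u):
    """NSFD: same update with h replaced by phi(h) = (1 - exp(-lam*h)) / lam,
    so u_{n+1} = exp(-lam * h) * u_n, exact for this linear equation."""
    phi = (1.0 - math.exp(-lam * h)) / lam
    return u * (1.0 - lam * phi)

u_sfd = u_nsfd = 1.0
for _ in range(steps):
    u_sfd, u_nsfd = step_sfd(u_sfd), step_nsfd(u_nsfd)

exact = math.exp(-lam * h * steps)
err_sfd, err_nsfd = abs(u_sfd - exact), abs(u_nsfd - exact)
```

The NSFD error stays at round-off while the SFD error does not, which is the qualitative behavior the paper's comparison targets (for its fractional problems, of course, with more elaborate denominator functions).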
Search for Non-Standard, Rare or Invisible Decays of the Higgs Boson with the ATLAS Detector
Thompson, Paul; The ATLAS collaboration
2017-01-01
The search for non-standard, invisible or rare decays of the Higgs boson is an important area of investigation at the LHC because of the sensitivity to effects from physics beyond the Standard Model. In this article three recent measurements from the ATLAS Collaboration are presented: the search for Higgs boson decays to $Z\gamma$, the search for an invisibly decaying Higgs boson or dark matter candidate produced in association with a $Z$ boson, and the search for Higgs boson decays to $\phi\gamma$ and $\rho\gamma$.
Search for non-standard model signatures in the WZ/ZZ final state at CDF run II
Energy Technology Data Exchange (ETDEWEB)
Norman, Matthew [Univ. of California, San Diego, CA (United States)
2009-01-01
This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb^{-1} of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production, based on limiting the production cross-section at high $\hat{s}$. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences they have on the methods of our analysis.
Approximate Matching of Hierarchical Data
DEFF Research Database (Denmark)
Augsten, Nikolaus
The goal of this thesis is to design, develop, and evaluate new methods for the approximate matching of hierarchical data represented as labeled trees. In approximate matching scenarios two items should be matched if they are similar. Computing the similarity between labeled trees is hard, as in addition to the data values also the structure must be considered. A well-known measure for comparing trees is the tree edit distance. It is computationally expensive and leads to a prohibitively high run time. Our solution for the approximate matching of hierarchical data are pq-grams...... We formally prove that the pq-gram index can be incrementally updated based on the log of edit operations without reconstructing intermediate tree versions. The incremental update is independent of the data size and scales to a large number of changes in the data. We introduce windowed pq......
Approximations to camera sensor noise
Jin, Xiaodan; Hirakawa, Keigo
2013-02-01
Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
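The two competing models this abstract evaluates can be compared on synthetic data: a Poisson model and an SD-AWGN model matched to the same mean and variance agree in their first two moments but differ in higher ones. The signal level and sample size below are arbitrary choices of ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 50.0        # assumed mean photo-electron count per pixel
n = 200_000

# Poisson shot noise: variance equals the mean by construction.
poisson = rng.poisson(signal, size=n).astype(float)

# SD-AWGN: additive Gaussian noise whose variance tracks the signal level.
sd_awgn = signal + rng.normal(0.0, np.sqrt(signal), size=n)

# Both models match the first two moments of the signal...
print(poisson.mean(), poisson.var(), sd_awgn.mean(), sd_awgn.var())

# ...but they differ in higher moments: Poisson is right-skewed
# (skewness ~ 1/sqrt(mean)), while the Gaussian model is symmetric.
def skew(x):
    return float(np.mean(((x - x.mean()) / x.std()) ** 3))

print(skew(poisson), skew(sd_awgn))
```

The skewness gap shrinks as the signal level grows, which is one reason the two models are hard to tell apart except at low photon counts.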
Face Recognition using Approximate Arithmetic
DEFF Research Database (Denmark)
Marso, Karol
Face recognition is an image processing technique which aims to identify human faces and has found use in various different fields, for example in security. Throughout the years this field has evolved, and there are many approaches and many different algorithms which aim to make face recognition as effective...... as possible. The use of different approaches such as neural networks and machine learning can lead to fast and efficient solutions; however, these solutions are expensive in terms of hardware resources and power consumption. A possible solution to this problem is the use of approximate arithmetic. In many image...... processing applications the results do not need to be completely precise, and the use of approximate arithmetic can lead to reductions in delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.
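The Eigenfaces pipeline named above reduces to PCA projection plus nearest-neighbour matching. A minimal exact-arithmetic sketch follows, using random arrays as stand-ins for real face images and omitting the approximate-arithmetic hardware aspect that is the paper's actual subject; all names and sizes are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy database: 8 flattened 16x16 "images" (random stand-ins for faces).
faces = rng.normal(size=(8, 256))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces are the principal components of the centered training set.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:4]                  # keep the 4 leading components

train_w = centered @ eigenfaces.T    # training faces in eigenface space

def recognize(img):
    """Index of the training face closest to img in eigenface space."""
    w = (img - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(train_w - w, axis=1)))

# A slightly perturbed copy of face 3 is still matched to face 3.
probe = faces[3] + 0.1 * rng.normal(size=256)
print(recognize(probe))
```

The arithmetic-heavy steps (the matrix products and distance computations) are exactly where approximate adders and multipliers would be substituted in the paper's setting.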
Diophantine approximation and Dirichlet series
Queffélec, Hervé
2013-01-01
This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...
Approximate reasoning in physical systems
International Nuclear Information System (INIS)
Mutihac, R.
1991-01-01
The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
Approximations to the Newton potential
International Nuclear Information System (INIS)
Warburton, A.E.A.; Hatfield, R.W.
1977-01-01
Explicit expressions are obtained for Newton's (Newton, R.G., J. Math. Phys., 3:75-82 (1962)) solution to the inverse scattering problem in the approximations where up to two phase shifts are treated exactly and the rest to first order. (author)
Approximation properties of haplotype tagging
Directory of Open Access Journals (Sweden)
Dreiseitl Stephan
2006-01-01
Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊆ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in the computational effort expended can only be expected if the computational effort is distributed and done in parallel.
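The 1 + ln((n² − n)/2) bound above is the classical greedy set-cover guarantee, with haplotype pairs as the elements to cover and SNPs as the covering sets. A minimal greedy sketch in that spirit (not necessarily the authors' exact algorithm; the toy haplotypes are invented):

```python
from itertools import combinations

def greedy_tag_snps(haplotypes):
    """Greedy set-cover sketch: pick SNPs until every pair of haplotypes
    is distinguished by at least one chosen SNP."""
    n, m = len(haplotypes), len(haplotypes[0])
    uncovered = set(combinations(range(n), 2))   # pairs not yet distinguished
    chosen = []
    while uncovered:
        # SNP separating the largest number of still-identical pairs
        best = max(range(m),
                   key=lambda j: sum(haplotypes[a][j] != haplotypes[b][j]
                                     for a, b in uncovered))
        newly = {(a, b) for a, b in uncovered
                 if haplotypes[a][best] != haplotypes[b][best]}
        if not newly:
            raise ValueError("two haplotypes are identical; tagging impossible")
        chosen.append(best)
        uncovered -= newly
    return chosen

haps = ["0011", "0101", "1001", "1110"]
tags = greedy_tag_snps(haps)
print(tags)  # a small set of SNP indices that uniquely identifies each haplotype
```

Restricting each haplotype to the chosen positions yields a distinct pattern per haplotype, which is exactly the tagging property the abstract describes.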
Approximate Reanalysis in Topology Optimization
DEFF Research Database (Denmark)
Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole
2009-01-01
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...
Ultrafast Approximation for Phylogenetic Bootstrap
Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and
APPROXIMATE MODELS FOR FLOOD ROUTING
African Journals Online (AJOL)
kinematic model and a nonlinear convection-diffusion model are extracted from a normalized form of the St. Venant equations, and applied to ... normal flow condition is moderate. Keywords: approximate models, nonlinear kinematic ... The concern here is with the movement of an abnormal amount of water along a river or ...
On badly approximable complex numbers
DEFF Research Database (Denmark)
Esdahl-Schou, Rune; Kristensen, S.
We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...
Rational approximation of vertical segments
Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte
2007-08-01
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
All-Norm Approximation Algorithms
Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik
2002-01-01
A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ_p norms. We address this problem by introducing the concept of an All-norm ρ-approximation
Approximate Reasoning with Fuzzy Booleans
van den Broek, P.M.; Noppen, J.A.R.
This paper introduces, in analogy to the concept of fuzzy numbers, the concept of fuzzy booleans, and examines approximate reasoning with the compositional rule of inference using fuzzy booleans. It is shown that each set of fuzzy rules is equivalent to a set of fuzzy rules with singleton crisp
Hydrogen: Beyond the Classic Approximation
International Nuclear Information System (INIS)
Scivetti, Ivan
2003-01-01
The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.
Good points for diophantine approximation
Indian Academy of Sciences (India)
Given a sequence (x_n)_{n=1}^∞ of real numbers in the interval [0, 1) and a sequence (δ_n)_{n=1}^∞ of positive numbers tending to zero, we consider the size of the set of numbers in [0, 1] which can be 'well approximated' by terms of the first sequence, namely, those y ∈ [0, 1] for which the inequality |y − x_n| < δ_n holds for infinitely many positive integers n ...
Dimensionality Reduction with Adaptive Approximation
Kokiopoulou, Effrosyni; Frossard, Pascal
2007-01-01
In this paper, we propose the use of (adaptive) nonlinear approximation for dimensionality reduction. In particular, we propose a dimensionality reduction method for learning a parts based representation of signals using redundant dictionaries. A redundant dictionary is an overcomplete set of basis vectors that spans the signal space. The signals are jointly represented in a common subspace extracted from the redundant dictionary, using greedy pursuit algorithms for simultaneous sparse approx...
Ultrafast approximation for phylogenetic bootstrap.
Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt
2013-05-01
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML) based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speedup of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.
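The two ingredients of UFBoot — bootstrap resampling and a convergence-based stopping rule — can be sketched generically on a simple statistic. The rule below (stop once the support estimate changes by less than a tolerance between blocks) is far cruder than UFBoot's convergence check, and all names, data and thresholds are our own.

```python
import random

def bootstrap_support(data, statistic, hypothesis,
                      block=100, tol=0.005, max_rep=5000, seed=7):
    """Nonparametric bootstrap support for `hypothesis(statistic(sample))`,
    resampling in blocks and stopping when the estimate stabilizes."""
    rng = random.Random(seed)
    hits = reps = 0
    prev = None
    while reps < max_rep:
        for _ in range(block):
            sample = [rng.choice(data) for _ in data]
            hits += hypothesis(statistic(sample))
        reps += block
        support = hits / reps
        if prev is not None and abs(support - prev) < tol:
            break  # estimate has stabilized between consecutive blocks
        prev = support
    return support, reps

data = [2.8, 3.1, 3.4, 2.9, 3.6, 3.2, 3.0, 3.3]
mean = lambda xs: sum(xs) / len(xs)
support, reps = bootstrap_support(data, mean, lambda m: m > 3.0)
print(support, reps)
```

In the phylogenetic setting the resampled objects are alignment sites and the "hypothesis" is the presence of a clade in the inferred tree, but the control flow is the same.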
Initially Approximated Quasi Equilibrium Manifold
International Nuclear Information System (INIS)
Shahzad, M.; Arif, H.; Gulistan, M.; Sajid, M.
2015-01-01
Most commonly, kinetics model reduction techniques are based on exploiting time-scale separation into fast and slow reaction processes. The system is then approximated by a lower-dimensional model of the slow dynamics, eliminating the fast modes. The main idea behind the construction of the lower-dimensional manifold is to find its initial approximation using the Quasi Equilibrium Manifold (QEM). Here, we provide an efficient numerical method which allows us to calculate low-dimensional manifolds of chemical reaction systems. This computational technique is not restricted to our specific problem; it can also be applied to other reacting flows or dynamic systems, provided that a large number of extra (decaying) components can be eliminated from the system. Through this computational approach, we approximate the low-dimensional manifold for a mechanism of six chemical species to simplify complex chemical kinetics. A reduced descriptive form of the slow invariant manifold is obtained for the dissipative system. The method is applicable in higher dimensions and is applied to the oxidation of CO on Pt. (author)
Approximate Inference for Wireless Communications
DEFF Research Database (Denmark)
Hansen, Morten
to the optimal one, which usually requires an unacceptably high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum......-phase and all-pass filters. This enables us to view Sphere Detection (SD) as an adaptive variant of minimum-phase prefiltered reduced-state sequence estimation. Thus, a novel way of computing the minimum-phase filter and its associated all-pass filter using the numerically stable QL-factorization is suggested...
Generalized Gradient Approximation Made Simple
International Nuclear Information System (INIS)
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-01-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. copyright 1996 The American Physical Society
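For orientation, the simplest closed form in this (PBE) construction is the exchange enhancement factor F_x(s) = 1 + κ − κ/(1 + μs²/κ), which interpolates between the LSD limit at s = 0 and the bound 1 + κ at large reduced gradients. The constants below are quoted from memory of the published paper and should be checked against it.

```python
KAPPA = 0.804      # enforces the local bound F_x <= 1 + kappa
MU = 0.21951       # set by the linear response of the uniform electron gas

def pbe_fx(s):
    """PBE exchange enhancement factor over local exchange,
    s being the dimensionless reduced density gradient."""
    return 1.0 + KAPPA - KAPPA / (1.0 + MU * s * s / KAPPA)

print(pbe_fx(0.0))     # the uniform-gas (LSD) limit, F_x = 1
print(pbe_fx(100.0))   # approaches 1 + kappa for large gradients
```

The small-s expansion F_x ≈ 1 + μs² is what ties the form to the gradient expansion, while the saturation at 1 + κ keeps the functional within the stated bound.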
Canards and curvature: nonsmooth approximation by pinching
International Nuclear Information System (INIS)
Desroches, M; Jeffrey, M R
2011-01-01
In multiple time-scale (singularly perturbed) dynamical systems, canards are counterintuitive solutions that evolve along both attracting and repelling invariant manifolds. In two dimensions, canards result in periodic oscillations whose amplitude and period grow in a highly nonlinear way: they are slowly varying with respect to a control parameter, except for an exponentially small range of values where they grow extremely rapidly. This sudden growth, called a canard explosion, has been encountered in many applications ranging from chemistry to neuronal dynamics, aerospace engineering and ecology. Canards were initially studied using nonstandard analysis, and later the same results were proved by standard techniques such as matched asymptotics, invariant manifold theory and parameter blow-up. More recently, canard-like behaviour has been linked to surfaces of discontinuity in piecewise-smooth dynamical systems. This paper provides a new perspective on the canard phenomenon by showing that the nonstandard analysis of canard explosions can be recast into the framework of piecewise-smooth dynamical systems. An exponential coordinate scaling is applied to a singularly perturbed system of ordinary differential equations. The scaling acts as a lens that resolves dynamics across all time-scales. The changes of local curvature that are responsible for canard explosions are then analysed. Regions where different time-scales dominate are separated by hypersurfaces, and these are pinched together to obtain a piecewise-smooth system, in which curvature changes manifest as discontinuity-induced bifurcations. The method is used to classify canards in arbitrary dimensions, and to derive the parameter values over which canards form either small cycles (canards without head) or large cycles (canards with head)
Wavelet Approximation in Data Assimilation
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
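The compression step described above — project onto a wavelet basis and keep only the largest coefficients — can be sketched with a hand-rolled orthonormal Haar transform. The toy signal below is our own two-bump field, not the covariance data of the paper.

```python
import numpy as np

def haar_fwd(x):
    """Orthonormal 1-D Haar transform; len(x) must be a power of two.
    Output layout: [coarsest average, coarsest detail, ..., finest details]."""
    x = np.asarray(x, dtype=float).copy()
    details, n = [], len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)
        details.append(d)
        x[:n // 2] = a
        n //= 2
    return np.concatenate([x[:1]] + details[::-1])

def haar_inv(c):
    """Invert haar_fwd by undoing one level of averaging at a time."""
    c = np.asarray(c, dtype=float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2.0)
        c[1:2 * n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return c

t = np.linspace(0.0, 1.0, 256)
field = np.exp(-((t - 0.3) / 0.05) ** 2) + 0.5 * np.exp(-((t - 0.7) / 0.1) ** 2)

c = haar_fwd(field)
k = int(0.05 * len(c))                       # keep the largest 5% of coefficients
c_trunc = np.where(np.abs(c) >= np.sort(np.abs(c))[-k], c, 0.0)
recon = haar_inv(c_trunc)
retained = float(np.sum(recon ** 2) / np.sum(field ** 2))
print(retained)   # most of the signal energy survives the truncation
```

Because the basis is orthonormal, discarding small coefficients removes exactly their share of the energy, which is why localized features survive aggressive truncation.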
Plasma Physics Approximations in Ares
International Nuclear Information System (INIS)
Managan, R. A.
2015-01-01
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential μ or ζ = ln(1 + e^(μ/θ)), and the temperature θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(−μ/θ))F_{1/2}(μ/θ), F'_{1/2}/F_{1/2}, F^c_α, and F^c_β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
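The quantities being fit can be checked against brute-force quadrature. The snippet below evaluates the degeneracy variable ζ and the Fermi-Dirac integral F_{1/2} numerically, and verifies both limiting regimes; this is generic Fermi-Dirac numerics of ours, not the Lee & More rational fits themselves.

```python
import numpy as np

def zeta(mu_over_theta):
    """Degeneracy variable zeta = ln(1 + exp(mu/theta)) used as the fit variable."""
    return float(np.log1p(np.exp(mu_over_theta)))

def fd_half(eta, xmax=80.0, n=400_000):
    """Fermi-Dirac integral F_{1/2}(eta) = int_0^inf sqrt(x)/(1+exp(x-eta)) dx
    by brute-force trapezoidal quadrature (slow but transparent)."""
    x = np.linspace(0.0, xmax, n)
    y = np.sqrt(x) / (1.0 + np.exp(x - eta))
    dx = x[1] - x[0]
    return float((y[0] / 2 + y[1:-1].sum() + y[-1] / 2) * dx)

# Non-degenerate limit (eta << 0): F_{1/2}(eta) ~ Gamma(3/2) * exp(eta)
print(fd_half(-5.0), np.sqrt(np.pi) / 2 * np.exp(-5.0))

# Highly degenerate limit (eta >> 0): F_{1/2}(eta) ~ (2/3) * eta**1.5
print(fd_half(40.0), 2.0 / 3.0 * 40.0 ** 1.5)
```

Quadrature like this is far too slow for an inline hydro-code call, which is precisely why the rational fits in ζ described above are used instead.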
El-Nabulsi, Rami Ahmad
2018-03-01
Recently, the notion of non-standard Lagrangians has been discussed widely in the literature in an attempt to explore the inverse variational problem of nonlinear differential equations. Different forms of non-standard Lagrangians have been introduced in the literature and have revealed nice mathematical and physical properties. One interesting form related to the inverse variational problem is the logarithmic Lagrangian, which has a number of motivating features related to the Liénard-type and Emden nonlinear differential equations. Such types of Lagrangians lead to nonlinear dynamics based on non-standard Hamiltonians. In this communication, we show that some new dynamical properties are obtained in stellar dynamics if standard Lagrangians are replaced by logarithmic Lagrangians and their corresponding non-standard Hamiltonians. One interesting consequence concerns the emergence of an extra pressure term, which is related to the gravitational field, suggesting that gravitation may act as a pressure in a strong gravitational field. The case of the stellar halo of the Milky Way is considered.
Approximation by double Walsh polynomials
Directory of Open Access Journals (Sweden)
Ferenc Móricz
1992-01-01
Full Text Available We study the rate of approximation by rectangular partial sums, Cesàro means, and de la Vallée Poussin means of double Walsh-Fourier series of a function in a homogeneous Banach space X. In particular, X may be L^p(I²), where 1 ≤ p < ∞ and I² = [0, 1) × [0, 1), or C_W(I²), the latter being the collection of uniformly W-continuous functions on I². We extend the results by Watari, Fine, Yano, Jastrebova, Bljumin, Esfahanizadeh and Siddiqi from univariate to multivariate cases. As by-products, we deduce sufficient conditions for convergence in L^p(I²)-norm and uniform convergence on I² as well as characterizations of Lipschitz classes of functions. At the end, we raise three problems.
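As a small numerical companion: sampled Walsh functions form the rows of a Sylvester-ordered Hadamard matrix, and expansions of a function in this orthogonal system converge as more coefficients are kept. This discrete one-dimensional sketch is ours and is not the partial-sum operators analysed in the paper.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: rows are Walsh functions sampled at n = 2^k dyadic points."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 64
H = hadamard(n)
t = (np.arange(n) + 0.5) / n
f = t ** 2                        # function to approximate on [0, 1)

coef = H @ f / n                  # discrete Walsh-Fourier coefficients
order = np.argsort(-np.abs(coef))

errs = []
for m in (4, 16, 64):
    kept = np.zeros(n)
    kept[order[:m]] = coef[order[:m]]          # m-term approximation
    errs.append(float(np.sqrt(np.mean((H @ kept - f) ** 2))))
print(errs)                       # RMS error is non-increasing as terms are added
```

Orthogonality (H Hᵀ = nI) is what makes the error exactly the energy of the dropped coefficients, mirroring the role of Parseval-type identities in the continuous theory.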
Approximating the minimum cycle mean
Directory of Open Access Journals (Sweden)
Krishnendu Chatterjee
2013-07-01
Full Text Available We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First we show that the algorithmic question is reducible in O(n²) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem, and the running time of our algorithm is Õ(n^ω log³(nW/ε) / ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
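For contrast with the min-plus-product reduction above, the classical exact algorithm for minimum cycle mean is Karp's O(nm) dynamic program over minimum-weight k-edge walks; a compact sketch:

```python
def min_cycle_mean(n, edges):
    """Karp's algorithm for the minimum mean-weight cycle.
    edges: list of (u, v, w) directed weighted edges on vertices 0..n-1.
    Returns the minimum cycle mean (inf if the graph is acyclic)."""
    INF = float("inf")
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v,
    # started from a virtual super-source (d[0][v] = 0 for all v).
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            # Karp's formula: mu* = min_v max_k (d_n(v) - d_k(v)) / (n - k)
            worst = max((d[n][v] - d[k][v]) / (n - k)
                        for k in range(n) if d[k][v] < INF)
            best = min(best, worst)
    return best

# Cycle 0->1->2->0 has mean (2 + 3 + 1)/3 = 2;
# cycle 2->3->2 has mean (4 + (-1))/2 = 1.5, the minimum.
edges = [(0, 1, 2.0), (1, 2, 3.0), (2, 0, 1.0), (2, 3, 4.0), (3, 2, -1.0)]
print(min_cycle_mean(4, edges))  # 1.5
```

Karp's method handles negative weights as well; the paper's (1 + ε)-approximation trades that exactness for a faster matrix-multiplication-based running time on nonnegative weights.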
Khandaker, Mayeen Uddin; Ali, Samer K. I.; Kassim, Hasan Abu; Yusof, Norhasliza
2017-11-01
Production cross-sections of Cobalt-55 (T_1/2 = 17.53 h, mean E_β+ = 570 keV, total I_β+ = 76%), a non-standard positron emitter, have been evaluated in the energy range from 40 MeV down to the threshold energy of the 56Fe(p,2n)55Co nuclear reaction, owing to its significance as a potential PET imaging agent in medical applications. Experimental cross-sections of the 55Co radionuclide that lie within the scope of this work were collected from the EXFOR database and renormalized using the latest agreed values of decay data and monitor cross-sections. The Simultaneous Evaluation on KALMAN (SOK) code, combined with a least-squares method, was applied to the corrected cross-sections to obtain evaluated cross-sections together with covariance information. Knowledge of the underlying uncertainties in evaluated nuclear data, i.e., covariances, is useful for improving the accuracy of nuclear data.
Barducci, D.; Fabbrichesi, M.; Tonero, A.
2017-10-01
We identify the differential cross sections for $t\bar{t}$ production and the total cross section for Higgs production through gluon fusion as the processes in which the two effective operators describing the leading nonstandard interactions of the top quark with the gluon can be disentangled and studied in an independent fashion. Current data on Higgs production and the $d\sigma/dp_T^t$ differential cross section provide limits comparable to, but not more stringent than, those from the total $t\bar{t}$ cross section measurements at the LHC and Tevatron, where however the two operators enter on the same footing and can only be constrained together. We conclude by stating the (modest) reduction in the uncertainties necessary to provide more stringent limits by means of the Higgs production and $t\bar{t}$ differential cross section observables at the LHC with the future luminosity of 300 and 3000 fb$^{-1}$.
Nonlinear approximation with dictionaries I. Direct estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2004-01-01
We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...
International Nuclear Information System (INIS)
Pavon, Ester Carrasco; Sanchez-Doblado, Francisco; Leal, Antonio; Capote, Roberto; Lagares, Juan Ignacio; Perucha, Maria; Arrans, Rafael
2003-01-01
Total skin electron therapy (TSET) is a complex technique which requires non-standard measurements and dosimetric procedures. This paper investigates an essential first step towards TSET Monte Carlo (MC) verification. The non-standard 6 MeV 40 x 40 cm² electron beam at a source-to-surface distance (SSD) of 100 cm, as well as its horizontal projection behind a polymethylmethacrylate (PMMA) screen to SSD = 380 cm, were evaluated. The EGS4 OMEGA-BEAM code package running on a home-made Linux cluster of 47 PCs was used for the MC simulations. Percentage depth-dose curves and profiles were calculated and measured experimentally for the 40 x 40 cm² field at both SSD = 100 cm and the patient surface SSD = 380 cm. The output factor (OF) between the reference 40 x 40 cm² open field and its horizontal projection as TSET beam at SSD = 380 cm was also measured for comparison with MC results. The accuracy of the simulated beam was validated by the good agreement, to within 2%, between the measured relative dose distributions, including the beam characteristic parameters (R_50, R_80, R_100, R_p, E_0), and the MC calculated results. The energy spectrum, fluence and angular distribution at different stages of the beam (at SSD = 100 cm, at SSD = 364.2 cm, behind the PMMA beam spoiler screen and at the treatment surface SSD = 380 cm) were derived from MC simulations. Results showed a final decrease in mean energy of almost 56% from the exit window to the treatment surface. A broader angular distribution (the FWHM of the angular distribution increased from 13° at SSD = 100 cm to more than 30° at the treatment surface) was fully attributable to the PMMA beam spoiler screen. OF calculations and measurements agreed to less than 1%. The effects of changing the electron energy cut-off from 0.7 MeV to 0.521 MeV, and of air density fluctuations in the bunker, which could affect the MC results, were shown to have a negligible impact on the beam fluence distributions. Results proved the applicability of using MC
Hneda, M. L.; da Cunha, J. B. M.; Gusmão, M. A.; Neto, S. R. Oliveira; Rodríguez-Carvajal, J.; Isnard, O.
2017-01-01
This paper presents the physical properties of a nonstandard orthorhombic form of MnV2O6, including a comparison with the isostructural orthorhombic niobate MnNb2O6 and with the usual MnV2O6 monoclinic polymorph. Orthorhombic (Pbcn) MnV2O6 is obtained under extreme conditions of high pressure (6.7 GPa) and high temperature (800 °C). A negative Curie-Weiss temperature θ_CW is observed, implying dominant antiferromagnetic interactions at high temperatures, in contrast to the positive θ_CW of the monoclinic form. Specific-heat measurements are reported down to 1.8 K for all three compounds, and corroborate the magnetic-transition temperatures obtained from susceptibility data. Orthorhombic MnV2O6 presents a transition to an ordered antiferromagnetic state at T_N = 4.7 K. Its magnetic structure, determined by neutron diffraction, is unique among the columbite compounds, being characterized by a commensurate propagation vector k = (0, 0, 1/2). It presents antiferromagnetic chains running along the c axis, but with a different spin pattern in comparison to the chains observed in MnNb2O6. By a comparative discussion of our observations in these three compounds, we are able to highlight the interplay between competing interactions and dimensionality that yields their magnetic properties.
A model for large non-standard interactions of neutrinos leading to the LMA-Dark solution
Directory of Open Access Journals (Sweden)
Yasaman Farzan
2015-09-01
It is well known that, in addition to the standard LMA solution to the solar anomaly, there is another solution called LMA-Dark which requires Non-Standard Interactions (NSI) with effective couplings as large as the Fermi coupling. Although this solution satisfies all the bounds from various neutrino oscillation observations, and even provides a better fit to the low-energy solar neutrino spectrum, it is not as popular as the LMA solution, mainly because no model compatible with the existing bounds has so far been constructed to give rise to this solution. We introduce a model that provides a foundation for such large NSI, with the strength and flavor structure required for the LMA-Dark solution. This model is based on a new U(1)′ gauge interaction with a gauge boson of mass ∼10 MeV, under which quarks as well as the second and third generations of leptons are charged. We show that observable effects can appear in the spectrum of supernova and high-energy cosmic neutrinos. Our model predicts a new contribution to the muon magnetic dipole moment and new rare meson decay modes.
Directory of Open Access Journals (Sweden)
John Alexander Taborda
2014-04-01
In this paper, we propose a novel strategy for the synthesis and classification of nonsmooth limit cycles and their bifurcations (named Non-Standard Bifurcations or Discontinuity-Induced Bifurcations, DIBs) in n-dimensional piecewise-smooth (PWS) dynamical systems, particularly continuous PWS and discontinuous (Filippov-type) PWS systems. The proposed qualitative approach explicitly includes two main aspects: multiple discontinuity boundaries (DBs) in the phase space and multiple intersections between DBs (corner manifolds, CMs). Previous classifications of DIBs of limit cycles have been restricted to generic cases with a single DB or a single CM. We use the definition of piecewise topological equivalence in order to synthesize all possibilities of nonsmooth limit cycles. Families, groups and subgroups of cycles are defined depending on the smoothness zones and discontinuity boundaries involved. The synthesized cycles are used to define bifurcation patterns when the system is perturbed with parametric changes. Four families of DIBs of limit cycles are defined depending on the properties of the cycles involved. Well-known and novel bifurcations can be classified using this approach.
Lee, Ju Jong; Moon, Hyun Jey; Lee, Kyung-Jae; Kim, Joo Ja
2014-01-01
This study assessed fatigue and its association with emotional labor and non-standard working hours among hotel workers. A structured self-administered questionnaire was distributed to 1,320 employees of five hotels located in Seoul. The questionnaire survey included questions concerning the participants' sociodemographics, health-related behaviors, job-related factors, emotional labor, and fatigue. Fatigue was assessed using the Multidimensional Fatigue Scale (MFS). Multiple logistic regression modeling was used to determine the associations between fatigue and emotional labor. Among male workers, there was a significant association between fatigue and both emotional disharmony (OR=5.52, 95% CI=2.35-12.97) and emotional effort (OR=3.48, 95% CI=1.54-7.86). These same associations were seen among the female workers (emotional disharmony: OR=6.91, 95% CI=2.93-16.33; emotional effort: OR=2.28, 95% CI=1.00-5.16). These results indicate that fatigue is associated with emotional labor and, especially, emotional disharmony among hotel workers. Therefore, emotional disharmony management would prove helpful for the prevention of fatigue.
A non-standard optimal control problem arising in an economics application
Directory of Open Access Journals (Sweden)
Alan Zinober
2013-04-01
A recent optimal control problem in the area of economics has mathematical properties that do not fall into the standard optimal control problem formulation. In our problem the state value at the final time, y(T) = z, is free and unknown, and additionally the Lagrangian integrand in the functional is a piecewise constant function of the unknown value y(T). This is not a standard optimal control problem and cannot be solved using Pontryagin's Minimum Principle with the standard boundary conditions at the final time. In the standard problem a free final state y(T) yields the necessary boundary condition p(T) = 0, where p(t) is the costate. Because the integrand is a function of y(T), the new necessary condition is that y(T) should equal a certain integral that is a continuous function of y(T). We introduce a continuous approximation of the piecewise constant integrand function using a hyperbolic tangent approach, and solve an example using a C++ shooting algorithm with Newton iteration for the Two-Point Boundary Value Problem (TPBVP). The minimising free value y(T) is calculated in an outer-loop iteration using the Golden Section or Brent algorithm. Comparative nonlinear programming (NP) discrete-time results are also presented.
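The hyperbolic tangent smoothing of a piecewise constant integrand can be sketched in a few lines. The step location and levels below are hypothetical stand-ins, since the abstract does not give the paper's concrete function; only the tanh construction itself is the point:

```python
import math

def step(y, jump_at=1.0, low=0.0, high=2.0):
    """Hypothetical piecewise-constant integrand as a function of y(T)."""
    return low if y < jump_at else high

def smooth_step(y, jump_at=1.0, low=0.0, high=2.0, k=50.0):
    """Continuous tanh approximation; k controls the sharpness of the jump."""
    return low + (high - low) * 0.5 * (1.0 + math.tanh(k * (y - jump_at)))

# Away from the jump the two functions agree to high accuracy,
# but smooth_step is differentiable everywhere (needed for Newton iteration).
assert abs(step(0.5) - smooth_step(0.5)) < 1e-6
assert abs(step(1.5) - smooth_step(1.5)) < 1e-6
```

Larger k sharpens the transition at the cost of stiffer derivatives in the shooting iteration.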
Approximation properties of fine hyperbolic graphs
Indian Academy of Sciences (India)
A definition of an approximation property, called the metric invariant translation approximation property for a countable discrete metric space, is proposed. Keywords: uniform Roe algebras; fine hyperbolic graph; metric invariant translation approximation property.
Approximate Uniqueness Estimates for Singular Correlation Matrices.
Finkbeiner, C. T.; Tucker, L. R.
1982-01-01
The residual variance is often used as an approximation to the uniqueness in factor analysis. An upper bound approximation to the residual variance is presented for the case when the correlation matrix is singular. (Author/JKS)
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
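The well-known direction of this reduction can be illustrated concretely. As a minimal sketch (our toy instance, not from the paper), the Chebyshev problem min_x max_i |a_i·x − b_i| becomes the LP "minimize t subject to −t ≤ a_i·x − b_i ≤ t"; for the simplest case of fitting a constant, the minimax optimum is the midrange of the data, which we can check against the LP constraints:

```python
def chebyshev_constant_fit(b):
    """Best constant c minimizing max_i |b_i - c|: the midrange, with optimal
    value (max - min)/2.  This is the Chebyshev problem with all a_i = 1."""
    return (min(b) + max(b)) / 2.0, (max(b) - min(b)) / 2.0

def as_linear_program(b):
    """Reduce the same problem to the LP  min t  s.t.  c - t <= b_i  and
    -c - t <= -b_i  for all i, in the variables (c, t).  Returns the
    constraint rows (coefficients of c and t) and right-hand sides."""
    rows, rhs = [], []
    for bi in b:
        rows.append((1.0, -1.0)); rhs.append(bi)    #  c - b_i <= t
        rows.append((-1.0, -1.0)); rhs.append(-bi)  #  b_i - c <= t
    return rows, rhs

b = [0.0, 1.0, 4.0]
c, t = chebyshev_constant_fit(b)   # c = 2.0, optimal deviation t = 2.0
rows, rhs = as_linear_program(b)
# The minimax solution is feasible for the LP (and attains its optimum).
assert all(r[0] * c + r[1] * t <= rb + 1e-12 for r, rb in zip(rows, rhs))
```

Any LP solver applied to the constructed rows recovers the same (c, t).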
HE11 radiation patterns and gaussian approximations
International Nuclear Information System (INIS)
Rebuffi, L.; Crenn, J.P.
1986-12-01
The possibility of approximating the HE11 radiation pattern with a Gaussian distribution is presented. A numerical comparison between HE11 far-field theoretical patterns and the Abrams and Crenn approximations permits an evaluation of the validity of these two approximations. A new numerically optimized HE11 Gaussian approximation for the far field, extended to a great part of the near field, has been found. In particular, the value given for the beam radius at the waist has been demonstrated to give the best HE11 Gaussian approximation in the far field. The Crenn approximation is found to be very close to this optimal approximation, while the Abrams approximation is shown to be less precise. Universal curves for the intensity, amplitude and power distribution are given for the radiated HE11 mode. These results are of interest for laser waveguide applications and for plasma ECRH transmission systems
Analytical approximations of Chandrasekhar's H-Function
International Nuclear Information System (INIS)
Simovic, R.; Vukanic, J.
1995-01-01
Analytical approximations of Chandrasekhar's H-function are derived in this paper by using ordinary and modified DPN methods. The accuracy of the approximations is discussed and the energy dependent albedo problem is treated. (author)
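As numerical background to the H-function (this is not the DPN construction the paper derives), Chandrasekhar's exact identity 1/H(μ) = (1 − ω₀)^{1/2} + (ω₀/2)∫₀¹ μ′H(μ′)/(μ + μ′) dμ′ for isotropic scattering can be iterated to a fixed point. A hedged sketch with a simple midpoint rule and relaxation:

```python
import math

def h_function(omega0, n=100, iters=150):
    """Fixed-point iteration for Chandrasekhar's H-function (isotropic
    scattering, single-scattering albedo omega0 < 1), using the identity
        1/H(mu) = sqrt(1 - omega0)
                  + (omega0/2) * int_0^1 mu' H(mu') / (mu + mu') dmu'.
    Midpoint rule on n nodes; 0.5 relaxation damps the oscillation of the
    iterates.  Returns the quadrature nodes and H values."""
    mus = [(i + 0.5) / n for i in range(n)]
    h = [1.0] * n
    w = 1.0 / n
    c = math.sqrt(1.0 - omega0)
    for _ in range(iters):
        new = [1.0 / (c + 0.5 * omega0 * sum(w * mp * hp / (mu + mp)
                                             for mp, hp in zip(mus, h)))
               for mu in mus]
        h = [0.5 * a + 0.5 * b for a, b in zip(h, new)]
    return mus, h

mus, h = h_function(0.9)
# H(mu) rises monotonically from H(0) = 1; for omega0 = 0.9, H(1) is near 2.
```

For ω₀ = 0 the integral term vanishes and H ≡ 1, a quick sanity check on the scheme.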
Axiomatic Characterizations of IVF Rough Approximation Operators
Directory of Open Access Journals (Sweden)
Guangji Yu
2014-01-01
This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that different IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are then characterized by axioms.
Truth Approximation, Social Epistemology, and Opinion Dynamics
Douven, Igor; Kelp, Christoph
This paper highlights some connections between work on truth approximation and work in social epistemology, in particular work on peer disagreement. In some of the literature on truth approximation, questions have been addressed concerning the efficiency of research strategies for approximating the
Operator approximant problems arising from quantum theory
Maher, Philip J
2017-01-01
This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.
Approximate Nearest Neighbor Queries among Parallel Segments
DEFF Research Database (Denmark)
Emiris, Ioannis Z.; Malamatos, Theocharis; Tsigaridas, Elias
2010-01-01
We develop a data structure for answering efficiently approximate nearest neighbor queries over a set of parallel segments in three dimensions. We connect this problem to approximate nearest neighbor searching under weight constraints and approximate nearest neighbor searching on historical data...
Nonlinear approximation with dictionaries I. Direct estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2004-01-01
We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove that assuming a certain structure of the dictionary is sufficient and (almost) necessary to obtain stronger results. We give examples of classical dictionaries in L^p spaces and modulation spaces where our results recover some known Jackson type estimates, and discuss some new estimates they provide.
Bounded-Degree Approximations of Stochastic Networks
Energy Technology Data Exchange (ETDEWEB)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Mapping moveout approximations in TI media
Stovas, Alexey
2013-11-21
Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing an effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous to their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from mapping the generalized approximation. The new moveout approximations allow, as the examples demonstrate, an accurate description of moveout in the TTI case even with vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because their parameters depend explicitly on the corresponding effective parameters in a layered VTI medium.
The use of a non-standard high calcium fly ash in concrete and its response to accelerated curing
Directory of Open Access Journals (Sweden)
Atis, C. D.
2002-09-01
An experimental study was carried out to investigate the use of a non-standard high-calcium fly ash in concrete. The response of the same fly ash to accelerated curing was also explored. With three different cementitious material contents, a total of 48 concretes were produced. The water/cement ratios varied from 0.40 to 0.87. Compressive strengths of moist-cured cube specimens, cast from concrete mixtures made with 0%, 15%, 30% and 45% replacement of normal Portland cement with fly ash, were measured at 28 days and 3 months. Accelerated compressive strengths were also measured using the warm-water method and the boiling-water method in accordance with the relevant ASTM and Turkish standards.
Although the fly ash used was non-standard, the laboratory test results showed that it could be utilized in concrete production at replacement levels between 15% and 30% by weight, because the fly ash concrete developed compressive strength comparable to or higher than that of the corresponding normal Portland cement concrete. The laboratory test results also indicated that accelerated curing could be used to predict the compressive strength of fly ash concrete with an 85% correlation coefficient. The amount of fly ash was found to be immaterial for the strength prediction. The relation between the warm-water method and the boiling-water method was linear, with a 93% correlation coefficient.
An experimental study was carried out to investigate the use in concrete of a fly ash with a high lime content that does not meet the specifications laid down in the standard. The behaviour of the ash under accelerated curing was also studied. A total of 48 concretes were produced with three different proportions of cementitious material. The water/cement (w/c) ratios used ranged between 0.40 and 0.87. At 28 days and 3 months of curing, the compressive strengths of cubic concrete specimens were determined
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
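For orientation, the plain i.i.d. ABC rejection sampler (the baseline such MLMC methods aim to beat, sketched here with toy choices of prior, summary statistic, and tolerance that are ours, not the paper's) looks like:

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance, eps, n=8000):
    """Minimal ABC rejection sampler: draw theta from the prior, simulate
    data, and keep theta when the simulated summary lies within eps of the
    observed one.  (The plain baseline, not the paper's MLMC variant.)"""
    accepted = []
    for _ in range(n):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(0)
# Toy problem: infer the mean of a unit-variance Gaussian from a sample mean.
m = 40
observed = sum(random.gauss(1.5, 1.0) for _ in range(m)) / m
post = abc_rejection(
    observed,
    prior_sample=lambda: random.uniform(-5.0, 5.0),
    simulate=lambda th: sum(random.gauss(th, 1.0) for _ in range(m)) / m,
    distance=lambda a, b: abs(a - b),
    eps=0.1,
)
post_mean = statistics.mean(post)  # ABC posterior mean, centred near `observed`
```

Shrinking eps sharpens the ABC approximation but collapses the acceptance rate, which is exactly the cost trade-off MLMC addresses.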
Approximate unitary equivalence of normaloid type operators
Zhu, Sen
2015-01-01
In this paper, we explore approximate unitary equivalence of normaloid operators and classify several normaloid type operators including transaloid operators, polynomial-normaloid operators and von Neumann operators up to approximate unitary equivalence. As an application, we explore approximation of transaloid operators with closed numerical ranges. Among other things, it is proved that those transaloid operators with closed numerical ranges are norm dense in the class of transaloid operators.
Uniform analytic approximation of Wigner rotation matrices
Hoffmann, Scott E.
2018-02-01
We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
A Note on Generalized Approximation Property
Directory of Open Access Journals (Sweden)
Antara Bhar
2013-01-01
We introduce a notion of generalized approximation property (referred to as --AP) possessed by a Banach space, corresponding to an arbitrary Banach sequence space and a convex subset of the class of bounded linear operators. This property includes the approximation property studied by Grothendieck, the -approximation property considered by Sinha and Karn and Delgado et al., and also the approximation property studied by Lissitsin et al. We characterize a Banach space having --AP with the help of -compact operators, -nuclear operators, and quasi--nuclear operators. A particular case has also been characterized.
Local density approximations for relativistic exchange energies
International Nuclear Information System (INIS)
MacDonald, A.H.
1986-01-01
The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
Non-Linear Approximation of Bayesian Update
Litvinenko, Alexander
2016-06-23
We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the famous Kalman update formula is a particular case of this update.
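The closing claim, that the Kalman update is a special case of the Bayesian update, is easy to verify in the scalar Gaussian case (a textbook identity, shown here independently of the paper's polynomial chaos setting):

```python
def kalman_update(m_prior, v_prior, y, v_noise):
    """Scalar Kalman update: combine a Gaussian prior N(m_prior, v_prior)
    with an observation y ~ N(theta, v_noise)."""
    k = v_prior / (v_prior + v_noise)          # Kalman gain
    return m_prior + k * (y - m_prior), (1.0 - k) * v_prior

def gaussian_bayes(m_prior, v_prior, y, v_noise):
    """Exact conjugate Bayesian update for the same model (precision form)."""
    prec = 1.0 / v_prior + 1.0 / v_noise
    return (m_prior / v_prior + y / v_noise) / prec, 1.0 / prec

# The two coincide: the Kalman formula is the Gaussian case of Bayes' rule.
mk, vk = kalman_update(0.0, 4.0, 2.0, 1.0)
mb, vb = gaussian_bayes(0.0, 4.0, 2.0, 1.0)
assert abs(mk - mb) < 1e-12 and abs(vk - vb) < 1e-12
```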
Simultaneous approximation in scales of Banach spaces
International Nuclear Information System (INIS)
Bramble, J.H.; Scott, R.
1978-01-01
The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods
Approximation properties of fine hyperbolic graphs
Indian Academy of Sciences (India)
2016-08-26
In this paper, we propose a definition of an approximation property which is called the metric invariant translation approximation property for a countable discrete metric space.
Nonlinear approximation with general wave packets
DEFF Research Database (Denmark)
Borup, Lasse; Nielsen, Morten
2005-01-01
We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...
Quirks of Stirling's Approximation
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
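The point about naive application can be made numerically: the truncated form ln n! ≈ n ln n − n carries an O(ln n) absolute error, which the (1/2) ln(2πn) term of the full Stirling formula repairs (standard results; the comparison against `math.lgamma` is ours):

```python
import math

def ln_factorial(n):
    return math.lgamma(n + 1)          # exact ln n!

def stirling_naive(n):
    return n * math.log(n) - n         # the form often quoted in derivations

def stirling_full(n):
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 60
exact = ln_factorial(n)
# Naive error ~ 0.5 ln(2*pi*n) (about 3 here); full error ~ 1/(12 n).
assert abs(stirling_full(n) - exact) < abs(stirling_naive(n) - exact)
assert abs(stirling_full(n) - exact) < 0.01
```

For entropy *differences* the naive form often suffices, since the dropped term varies slowly, which is exactly why its misuse elsewhere goes unnoticed.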
On approximating multi-criteria TSP
Manthey, Bodo; Albers, S.; Marion, J.-Y.
2009-01-01
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP), whose performances are independent of the number $k$ of criteria and come close to the approximation ratios obtained for TSP with a single objective function. We present randomized
On approximating multi-criteria TSP
Manthey, Bodo
We present approximation algorithms for almost all variants of the multicriteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multicriteria maximum traveling salesman problems (Max-TSP). For multicriteria Max-STSP where the edge weights have to be
Boundary Value Problems and Approximate Solutions ...
African Journals Online (AJOL)
In this paper, we discuss some basics of boundary value problems. Secondly, we study boundary conditions involving derivatives and obtain finite difference approximations of the partial derivatives in boundary value problems. The last section is devoted to determining an approximate solution for boundary value ...
Polynomial approximation approach to transient heat conduction ...
African Journals Online (AJOL)
This work reports a polynomial approximation approach to transient heat conduction in a long slab, a long cylinder and a sphere with linear internal heat generation. It is shown that the polynomial approximation method is able to calculate the average temperature as a function of time for higher values of the Biot number.
Approximation algorithms for guarding holey polygons ...
African Journals Online (AJOL)
Guarding the edges of polygons is a version of the art gallery problem. The goal is to find the minimum number of guards needed to cover the edges of a polygon. This problem is NP-hard and, to our knowledge, approximation algorithms exist only for simple polygons. In this paper we present two approximation algorithms for guarding ...
Similarity based approximate reasoning: fuzzy control
Raha, S.; Hossain, A.; Ghosh, S.
2008-01-01
This paper presents an approach to similarity based approximate reasoning that elucidates the connection between similarity and existing approaches to inference in approximate reasoning methodology. A set of axioms is proposed to get a reasonable measure of similarity between two fuzzy sets. The
Improved Dutch Roll Approximation for Hypersonic Vehicle
Directory of Open Access Journals (Sweden)
Liang-Liang Yin
2014-06-01
An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which results in large errors for the usual practical approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show that the approximations work well, with errors below 10%.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
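For reference, the exact-error version of the minimization these embodiments approximate is ordinary conjugate gradient. A minimal dense-matrix sketch (the patent's ray-subset error approximation itself is not reproduced here; this uses the full residual):

```python
def conjugate_gradient(A, b, iters=50):
    """Plain conjugate-gradient minimization of (1/2) x^T A x - b^T x for a
    small symmetric positive-definite matrix A (lists of lists).  The patented
    variant would replace the full residual with one computed over a subset
    of rays; here the exact residual is used throughout."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:        # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
# Verify A x = b at the minimizer of the quadratic.
assert all(abs(sum(A[i][j] * x[j] for j in range(2)) - b[i]) < 1e-8
           for i in range(2))
```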
Verhulst , Adrien; Normand , Jean-Marie; Lombart , Cindy; Moreau , Guillaume
2017-01-01
In this paper we present an immersive virtual reality user study aimed at investigating how customers perceive, and whether they would purchase, non-standard (i.e. misshapen) fruits and vegetables (FaVs) in supermarkets and hypermarkets. Indeed, food waste is a major issue for the retail sector, and a recent trend is to reduce it by selling non-standard goods. An important question for retailers relates to the FaVs' "level of abnormality" that consumers would agree to buy. H...
The tendon approximator device in traumatic injuries.
Forootan, Kamal S; Karimi, Hamid; Forootan, Nazilla-Sadat S
2015-01-01
Precise and tension-free approximation of two tendon endings is the key predictor of outcomes following tendon lacerations and repairs. We evaluated the efficacy of a new tendon approximator device in tendon laceration repairs. In a comparative study, we used our new tendon approximator device in 99 consecutive patients with lacerations of 266 tendons who attended a university hospital, and evaluated the operative time to repair the tendons, surgeons' satisfaction, and patients' outcomes over a long-term follow-up. Data were compared with the data of control patients undergoing tendon repair by the conventional method. In total, 266 tendons were repaired with the approximator device and 199 tendons by the conventional technique. 78.7% of patients in the first group were male and 21.2% were female. In the approximator group, 38% of patients had secondary repair of cut tendons and 62% had primary repair. Patients were followed for a mean period of 3 years (range 14-60 months). The time required to repair each tendon was significantly reduced with the approximator device (2 min vs. 5.5 min). Outcomes of tendon repair were identical in the two groups and were not significantly different. 1% of tendons in group A and 1.2% in group B had rupture, which was not significantly different. The new tendon approximator device is cheap, feasible to use, and reduces the time of tendon repair with sustained outcomes comparable to those of the conventional methods.
Ayiomamitou, Ioli; Yiakoumetti, Androula
2017-01-01
Over the last 50 years, sociolinguistic research in settings in which a regional, social, or ethnic non-standard linguistic variety is used alongside the standard variety of the same language has steadily increased. The educational implications of the concomitant use of such varieties have also received a great deal of research attention. This study deals with regional linguistic variation and its implications for education by focusing on the Greek Cypriot educational context. This context is ideal for investigating the linguistic profiles of speakers of proximal varieties as the majority of Greek Cypriots are primarily educated in just one of their varieties: the standard educational variety. The aim of our study was to understand Greek Cypriot primary school pupils' sociolinguistic awareness via examination of their written production in their home variety [Cypriot Greek (CG) dialect]. Our assumption was that, because written production is less spontaneous than speech, it better reflects pupils' conscious awareness. Pupils were advised to produce texts that reflected their everyday language with family and friends (beyond school boundaries). As expected, students' texts included an abundance of mesolectal features and the following were the ten most frequent: (1) palato-alveolar consonants, (2) future particle [ená] and conditional [ítan na] + subjunctive, (3) consonant devoicing, (4) CG-specific verb stems, (5) final [n] retention, (6) [én/ éni] instead of [íne], (7) CG-specific verb endings, (8) [én/é] instead of [ðen], (9) elision of intervocalic fricative [ɣ], and (10) CG-specific adverbs. Importantly, in addition to the expected mesolectal features that reflect contemporary CG, students included a significant and unexpected number of basilectal features and instances of hyperdialectism (that are not representative of today's linguistic reality) which rendered their texts register-inappropriate. This led us to conclude that Greek Cypriot students
Ayiomamitou, Ioli; Yiakoumetti, Androula
2017-01-01
Over the last 50 years, sociolinguistic research in settings in which a regional, social, or ethnic non-standard linguistic variety is used alongside the standard variety of the same language has steadily increased. The educational implications of the concomitant use of such varieties have also received a great deal of research attention. This study deals with regional linguistic variation and its implications for education by focusing on the Greek Cypriot educational context. This context is ideal for investigating the linguistic profiles of speakers of proximal varieties as the majority of Greek Cypriots are primarily educated in just one of their varieties: the standard educational variety. The aim of our study was to understand Greek Cypriot primary school pupils’ sociolinguistic awareness via examination of their written production in their home variety [Cypriot Greek (CG) dialect]. Our assumption was that, because written production is less spontaneous than speech, it better reflects pupils’ conscious awareness. Pupils were advised to produce texts that reflected their everyday language with family and friends (beyond school boundaries). As expected, students’ texts included an abundance of mesolectal features and the following were the ten most frequent: (1) palato-alveolar consonants, (2) future particle [ená] and conditional [ítan na] + subjunctive, (3) consonant devoicing, (4) CG-specific verb stems, (5) final [n] retention, (6) [én/ éni] instead of [íne], (7) CG-specific verb endings, (8) [én/é] instead of [ðen], (9) elision of intervocalic fricative [ɣ], and (10) CG-specific adverbs. Importantly, in addition to the expected mesolectal features that reflect contemporary CG, students included a significant and unexpected number of basilectal features and instances of hyperdialectism (that are not representative of today’s linguistic reality) which rendered their texts register-inappropriate. This led us to conclude that Greek Cypriot
Hardness and Approximation for Network Flow Interdiction
Chestnut, Stephen R.; Zenklusen, Rico
2015-01-01
In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...
Regression with Sparse Approximations of Data
DEFF Research Database (Denmark)
Noorzad, Pardis; Sturm, Bob L.
2012-01-01
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \\(k\\)-nearest neighbors regression (\\(k\\)-NNR), and more generally, local polynomial kernel regression. Unlike \\(k\\)-NNR, however, SPARROW can adapt the number of regressors to use based...
Approximately Linear Phase IIR Digital Filter Banks
Directory of Open Access Journals (Sweden)
J. D. Ćertić
2013-11-01
Full Text Available In this paper, uniform and nonuniform digital filter banks based on approximately linear phase IIR filters and the frequency response masking (FRM) technique are presented. Both filter banks are realized as a connection of an interpolated half-band approximately linear phase IIR filter as a first stage of the FRM design and an appropriate number of masking filters. The masking filters are half-band IIR filters with an approximately linear phase. The resulting IIR filter banks are compared with linear-phase FIR filter banks exhibiting similar magnitude responses. The effects of coefficient quantization are analyzed.
Mathematical analysis, approximation theory and their applications
Gupta, Vijay
2016-01-01
Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.
Approximation of the semi-infinite interval
Directory of Open Access Journals (Sweden)
A. McD. Mercer
1980-01-01
Full Text Available The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szász has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szász's result to the case in which the approximating function is $\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}f\left(\frac{k\alpha}{u}\right)$. The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
Pion-nucleus cross sections approximation
International Nuclear Information System (INIS)
Barashenkov, V.S.; Polanski, A.; Sosnin, A.N.
1990-01-01
An analytical approximation of pion-nucleus elastic and inelastic interaction cross-sections is suggested, which can be applied in the energy range above several tens of MeV for nuclei heavier than beryllium. 3 refs.; 4 tabs
Square well approximation to the optical potential
International Nuclear Information System (INIS)
Jain, A.K.; Gupta, M.C.; Marwadi, P.R.
1976-01-01
Approximations for obtaining T-matrix elements for a sum of several potentials in terms of T-matrices for individual potentials are studied. Based on model calculations for the S-wave for a sum of two separable non-local potentials of Yukawa-type form factors and a sum of two delta-function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all the energy regions by the sum of T-matrices for the individual potentials. Based on this, an approximate method for finding the T-matrix for any local potential by approximating it by a sum of a suitable number of square wells is presented. This provides an interesting way to calculate the T-matrix for any arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to Saxon-Wood potentials and good agreement with exact results is found. (author)
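The first step of such a scheme — replacing a smooth local potential by a sum of square wells — can be sketched in a few lines. The snippet below (hypothetical parameter values; it builds only the step potential, not the T-matrix machinery of the paper) discretizes a Saxon-Wood-type well into concentric square wells:

```python
import numpy as np

def square_well_steps(V0=-50.0, R=4.0, a=0.5, n_wells=10, r_max=8.0):
    """Approximate a Saxon-Wood potential V(r) = V0 / (1 + exp((r - R)/a))
    by n_wells concentric square wells: on each radial shell the potential
    is replaced by its value at the shell midpoint. Parameters are
    illustrative (depth in MeV, radii in fm)."""
    edges = np.linspace(0.0, r_max, n_wells + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    depths = V0 / (1.0 + np.exp((mids - R) / a))
    return edges, depths  # step potential: depths[i] on [edges[i], edges[i+1])

edges, depths = square_well_steps()
# Deep inside the nucleus the steps sit near V0; far outside they vanish.
print(depths[0], depths[-1])
```

Each constant piece then admits a T-matrix in terms of Bessel functions, which is the point exploited in the abstract above.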
Seismic wave extrapolation using lowrank symbol approximation
Fomel, Sergey
2012-04-30
We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction combines Fourier transforms in space with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
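The core observation — that a smooth space-wavenumber propagator matrix has small numerical rank — can be demonstrated with a generic stand-in. The sketch below uses a truncated SVD on a toy oscillatory matrix cos(xk); the authors' method instead selects representative rows and columns, which this sketch does not reproduce.

```python
import numpy as np

# A smooth "space-wavenumber" matrix, here W[x, k] = cos(x * k) on small
# grids, is numerically low rank: a handful of SVD terms capture it.
x = np.linspace(0.0, 1.0, 80)
k = np.linspace(0.0, 5.0, 60)
W = np.cos(np.outer(x, k))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 6  # rank of the truncated approximation
W_r = (U[:, :r] * s[:r]) @ Vt[:r, :]  # best rank-r approximation of W

rel_err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
print(rel_err)  # small: applying W_r costs O(r(m+n)) instead of O(mn)
```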
Steepest descent approximations for accretive operator equations
International Nuclear Information System (INIS)
Chidume, C.E.
1993-03-01
A necessary and sufficient condition is established for the strong convergence of the steepest descent approximation to a solution of equations involving quasi-accretive operators defined on a uniformly smooth Banach space. (author). 49 refs
Degree of Approximation and Green Potential
Directory of Open Access Journals (Sweden)
M. Simkani
2009-03-01
Full Text Available We will relate the degree of rational approximation of a meromorphic function f to the minimum value, on the natural boundary of f, of the Green potential of the weak∗ limit of the normalized pole-counting measures.
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
Methods of Fourier analysis and approximation theory
Tikhonov, Sergey
2016-01-01
Different facets of interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in the approximation theory. The articles of this collection were originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.
APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION
Directory of Open Access Journals (Sweden)
Mădălina Roxana Buneci
2016-12-01
Full Text Available The purpose of this paper is to provide a set of Maple procedures to construct approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere.
An overview on Approximate Bayesian computation*
Directory of Open Access Journals (Sweden)
Baragatti Meïli
2014-01-01
Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are among the most satisfactory approaches to intractable likelihood problems. This overview presents recent results on these techniques since their introduction about ten years ago in population genetics.
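The basic ABC rejection scheme is simple enough to sketch directly: draw a parameter from the prior, simulate data, and keep the draw when a summary statistic of the simulated data lands close to the observed one. The toy example below (Gaussian model, uniform prior, all values illustrative) accepts draws whose simulated mean matches the observed mean.

```python
import random

def abc_rejection(observed_mean, n_draws=20000, tol=0.05, n_data=50):
    """Likelihood-free inference sketch: sample theta from the prior,
    simulate a dataset from the model, and accept theta when the simulated
    summary statistic (here the sample mean) is within tol of the observed
    one. The accepted draws approximate the posterior."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-2.0, 2.0)                 # prior on the mean
        sim = [random.gauss(theta, 1.0) for _ in range(n_data)]
        if abs(sum(sim) / n_data - observed_mean) < tol:  # distance check
            accepted.append(theta)
    return accepted

random.seed(0)
post = abc_rejection(observed_mean=0.8)
# The accepted sample concentrates near the observed mean of 0.8.
print(len(post), sum(post) / len(post))
```

Shrinking `tol` sharpens the approximation at the cost of a lower acceptance rate, which is the central trade-off of these methods.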
Approximation for the adjoint neutron spectrum
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2002-01-01
The proposal of this work is the determination of an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; after these approximations, for the case of the narrow resonances, the terms were substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared to those generated with the reference method, demonstrating good and precise results for the adjoint neutron flux in the narrow resonances. (author)
TMB: Automatic differentiation and laplace approximation
DEFF Research Database (Denmark)
Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte
2016-01-01
The package evaluates and maximizes the Laplace approximation of the marginal likelihood, where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R, e.g., reading in the data. The computations are designed to be fast for problems with many random effects (approximately 10^6) and parameters (approximately 10^3). Computation times using ADMB and TMB are compared on a suite of examples ranging from simple models to large spatial models where the random effects are a Gaussian random field. Speedups ranging from 1.5 to about 100 are obtained with increasing gains for large problems...
Approximations of Stochastic Partial Differential Equations
Di Nunno, Giulia; Zhang, Tusheng
2014-01-01
In this paper we show that solutions of stochastic partial differential equations driven by Brownian motion can be approximated by stochastic partial differential equations forced by pure jump noise/random kicks. Applications to stochastic Burgers equations are discussed.
Saddlepoint approximation methods in financial engineering
Kwok, Yue Kuen
2018-01-01
This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables. The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...
Approximative solutions of stochastic optimization problem
Czech Academy of Sciences Publication Activity Database
Lachout, Petr
2010-01-01
Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf
Nonlinear approximation with nonstationary Gabor frames
DEFF Research Database (Denmark)
Ottosen, Emil Solsbæk; Nielsen, Morten
2018-01-01
We consider sparseness properties of adaptive time-frequency representations obtained using nonstationary Gabor frames (NSGFs). NSGFs generalize classical Gabor frames by allowing for adaptivity in either time or frequency. It is known that the concept of painless nonorthogonal expansions...... resolution. Based on this characterization we prove an upper bound on the approximation error occurring when thresholding the coefficients of the corresponding frame expansions. We complement the theoretical results with numerical experiments, estimating the rate of approximation obtained from thresholding...
Lattice quantum chromodynamics with approximately chiral fermions
Energy Technology Data Exchange (ETDEWEB)
Hierl, Dieter
2008-05-15
In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)
On surface approximation using developable surfaces
DEFF Research Database (Denmark)
Chen, H. Y.; Lee, I. K.; Leopoldseder, S.
1999-01-01
We introduce a method for approximating a given surface by a developable surface. It will be either a G(1) surface consisting of pieces of cones or cylinders of revolution or a G(r) NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce...... robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding. (C) 1999 Academic Press....
On surface approximation using developable surfaces
DEFF Research Database (Denmark)
Chen, H. Y.; Lee, I. K.; Leopoldseder, S.
1998-01-01
We introduce a method for approximating a given surface by a developable surface. It will be either a G_1 surface consisting of pieces of cones or cylinders of revolution or a G_r NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce...... robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding....
Lattice quantum chromodynamics with approximately chiral fermions
International Nuclear Information System (INIS)
Hierl, Dieter
2008-05-01
In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ + pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)
Nonlinear Stochastic PDEs: Analysis and Approximations
2016-05-23
R. Mikulevicius and B. Rozovskii, "Approximation to Nonlinear SPDEs with Discrete Random Variables," SIAM J. Scientific Computing (08 2015): 1872. S. Lototsky and B. Rozovsky, Stochastic Partial Differential Equations (09 2015). B. Rozovsky and G.E. Karniadakis, "Adaptive Wick-Malliavin approximation to nonlinear SPDEs with discrete random variables," SIAM J. Sci. Comput., 37
Approximation properties of λ-Bernstein operators.
Cai, Qing-Bo; Lian, Bo-Yong; Zhou, Guorong
2018-01-01
In this paper, we introduce a new type of λ-Bernstein operators with parameter [Formula: see text]. We investigate a Korovkin-type approximation theorem, establish a local approximation theorem, give a convergence theorem for Lipschitz continuous functions, and obtain a Voronovskaja-type asymptotic formula. Finally, we give some graphs and numerical examples to show the convergence of [Formula: see text] to [Formula: see text], and we see that in some cases the errors are smaller than [Formula: see text] to f.
Rough Sets Approximations for Learning Outcomes
Encheva, Sylvia; Tumin, Sharil
Discovering dependencies between students' responses and their level of mastering of a particular skill is very important in the process of developing intelligent tutoring systems. This work is an approach to attain a higher level of certainty while following students' learning progress. Rough sets approximations are applied for assessing students understanding of a concept. Consecutive responses from each individual learner to automated tests are placed in corresponding rough sets approximations. The resulting path provides strong indication about the current level of learning outcomes.
Approximations for the Erlang Loss Function
DEFF Research Database (Denmark)
Mejlbro, Leif
1998-01-01
Theoretically, at least three formulae are needed for arbitrarily good approximations of the Erlang Loss Function. In the paper, for convenience, five formulae are presented guaranteeing a relative error < 1E-2, and methods are indicated for improving this bound.
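For context, the exact Erlang loss (Erlang B) probability is cheap to compute by the standard stable recursion; closed-form approximations such as those in the paper matter when, for instance, the formula must be inverted or manipulated analytically. A minimal sketch of the exact recursion:

```python
def erlang_b(servers, offered_load):
    """Exact Erlang loss (Erlang B) blocking probability via the
    standard stable recursion
        B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1)),
    where A is the offered load in erlangs and n counts servers."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Blocking probability for 10 lines offered 5 erlangs.
print(erlang_b(10, 5.0))  # ≈ 0.0184
```

Any candidate approximation formula can be checked against this recursion to verify the claimed relative-error bound.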
Approximating centrality in evolving graphs: toward sublinearity
Priest, Benjamin W.; Cybenko, George
2017-05-01
The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
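The degree-centrality case can be made concrete: a CountSketch summarizes an edge stream in memory independent of the number of edges and answers per-node degree queries approximately. The toy implementation below (simplified tuple-based hashing, illustrative sizes; not the authors' algorithm verbatim) shows the mechanism.

```python
import random

class CountSketch:
    """Streaming frequency sketch: each update hashes the item to one
    bucket per row with a random sign; a query takes the median of the
    signed bucket values across rows."""

    def __init__(self, depth=5, width=256, seed=1):
        rnd = random.Random(seed)
        # Per-row salts for the bucket hash and the sign hash.
        self.salts = [(rnd.randrange(1 << 30), rnd.randrange(1 << 30))
                      for _ in range(depth)]
        self.width = width
        self.table = [[0] * width for _ in range(depth)]

    def _hashes(self, item, row):
        a, b = self.salts[row]
        bucket = hash((a, item)) % self.width
        sign = 1 if hash((b, item)) % 2 == 0 else -1
        return bucket, sign

    def add(self, item, count=1):
        for row in range(len(self.table)):
            bucket, sign = self._hashes(item, row)
            self.table[row][bucket] += sign * count

    def estimate(self, item):
        ests = []
        for row in range(len(self.table)):
            bucket, sign = self._hashes(item, row)
            ests.append(sign * self.table[row][bucket])
        ests.sort()
        return ests[len(ests) // 2]  # median over rows resists collisions

sketch = CountSketch()
for u, v in [(0, 1), (0, 2), (0, 3), (1, 2)]:  # tiny edge stream
    sketch.add(u)
    sketch.add(v)
print(sketch.estimate(0))  # estimated degree of node 0 (true value 3)
```

The table size is fixed up front, so memory stays sublinear in the stream length; accuracy is traded against `depth` and `width`.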
Zamberlin, Šimun; Samaržija, Dubravka
2017-06-15
Classical and probiotic set yogurt were made using non-standard heat treatment of sheep's milk at 60°C/5 min. Physico-chemical properties, sensory characteristics, and the viability of bacteria that originated from cultures in classical and probiotic yogurt were analysed during 21 days of storage at 4°C. For the production of yogurt, a standard yogurt culture and a probiotic strain Lactobacillus rhamnosus GG were used. At the end of storage time of the classical and probiotic yogurt the totals of non-denatured whey proteins were 92.31 and 91.03%. The viability of yogurt culture bacteria and Lactobacillus rhamnosus GG were higher than 10^6 cfu/g. The total sensory score (maximum 20) was 18.49 for the classical and 18.53 for the probiotic. In nutritional and functional terms it is possible to produce classical and probiotic sheep's milk yogurt by using a non-standard temperature of heat treatment with a shelf life of 21 days. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sehnal, David; Svobodová Vařeková, Radka; Pravda, Lukáš; Ionescu, Crina-Maria; Geidl, Stanislav; Horský, Vladimír; Jaiswal, Deepti; Wimmerová, Michaela; Koča, Jaroslav
2015-01-01
Following the discovery of serious errors in the structure of biomacromolecules, structure validation has become a key topic of research, especially for ligands and non-standard residues. ValidatorDB (freely available at http://ncbr.muni.cz/ValidatorDB) offers a new step in this direction, in the form of a database of validation results for all ligands and non-standard residues from the Protein Data Bank (all molecules with seven or more heavy atoms). Model molecules from the wwPDB Chemical Component Dictionary are used as reference during validation. ValidatorDB covers the main aspects of validation of annotation, and additionally introduces several useful validation analyses. The most significant is the classification of chirality errors, allowing the user to distinguish between serious issues and minor inconsistencies. Other such analyses are able to report, for example, completely erroneous ligands, alternate conformations or complete identity with the model molecules. All results are systematically classified into categories, and statistical evaluations are performed. In addition to detailed validation reports for each molecule, ValidatorDB provides summaries of the validation results for the entire PDB, for sets of molecules sharing the same annotation (three-letter code) or the same PDB entry, and for user-defined selections of annotations or PDB entries. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
The Grammar of Approximating Number Pairs
Eriksson, Kimmo; Bailey, Drew H.; Geary, David C.
2009-01-01
We studied approximating pairs of numbers (a, b) used to estimate quantity in a single phrase (“two, three years ago”). Pollmann and Jansen (1996) found that only a few of the many possible pairs are actually used, suggesting an interaction between the ways in which people estimate quantity and their use of quantitative phrases in colloquial speech. They proposed a set of rules that describe which approximating pairs are used in Dutch phrases. We revisit this issue in an analysis of Swedish and American language corpora and in a series of three experiments in which Swedish and American adults rated the acceptability of various approximating pairs, and created approximating pairs of their own in response to various estimation tasks. We find evidence for Pollmann’s and Jansen’s rules in both Swedish and English phrases, but also identify additional rules and substantial individual and cross-language variation. We discuss implications for the origin of this loose “grammar” of approximating pairs. PMID:20234023
'LTE-diffusion approximation' for arc calculations
International Nuclear Information System (INIS)
Lowke, J J; Tanaka, M
2006-01-01
This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations which include diffusion of charges agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.
Approximate Bayesian evaluations of measurement uncertainty
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
Semiclassical initial value approximation for Green's function.
Kay, Kenneth G
2010-06-28
A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
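The Robbins-Monro scheme and the trajectory (Polyak-Ruppert) averaging it motivates can be sketched on a scalar toy problem. The snippet below (illustrative step-size schedule and target; not the SAMCMC algorithm of the paper) finds the root of h(x) = E[x - Z] with Z ~ N(1, 1) from noisy observations and keeps a running average of the iterates.

```python
import random

def robbins_monro_averaged(n_iter=5000, seed=7):
    """Stochastic approximation sketch: find the root x* = 1 of
    h(x) = E[x - Z], Z ~ N(1, 1), from noisy observations of h, with
    Polyak-Ruppert trajectory averaging of the iterates."""
    rng = random.Random(seed)
    x, avg = 0.0, 0.0
    for n in range(1, n_iter + 1):
        noisy_h = x - rng.gauss(1.0, 1.0)   # unbiased observation of h(x)
        x -= noisy_h / n ** 0.7             # slowly decaying step size
        avg += (x - avg) / n                # running trajectory average
    return x, avg

x_last, x_avg = robbins_monro_averaged()
print(x_last, x_avg)  # both near the root x* = 1; the average is less noisy
```

The averaged iterate attains the efficient asymptotic variance even with the slowly decaying step size, which is the phenomenon the paper extends to stochastic approximation MCMC.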
The grammar of approximating number pairs.
Eriksson, Kimmo; Bailey, Drew H; Geary, David C
2010-04-01
In the present article, we studied approximating pairs of numbers (a, b) that were used to estimate quantity in a single phrase ("two, three years ago"). Pollmann and Jansen (1996) found that only a few of the many possible pairs are actually used, suggesting an interaction between the ways in which people estimate quantity and their use of quantitative phrases in colloquial speech. They proposed a set of rules that describe which approximating pairs are used in Dutch phrases. We revisited this issue in an analysis of Swedish and American language corpora and in a series of three experiments in which Swedish and American adults rated the acceptability of various approximating pairs and created approximating pairs of their own in response to various estimation tasks. We found evidence for Pollmann and Jansen's rules in both Swedish and English phrases, but we also identified additional rules and substantial individual and cross-language variation. We will discuss implications for the origin of this loose "grammar" of approximating pairs.
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef
2017-06-30
Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
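As a single-level illustration of the sampling strategy described above, one can approximate a smooth function on [-1, 1] in a Chebyshev polynomial space using random samples drawn from the arcsine (Chebyshev) density, with weights that compensate for the sampling density. The function, degree, and sample count below are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.exp                 # function to approximate on [-1, 1]
deg, m = 5, 200            # polynomial degree and number of random samples

# Draw samples from the arcsine density rho(x) = 1/(pi*sqrt(1-x^2)),
# a common near-optimal choice for polynomial spaces on an interval.
x = np.cos(np.pi * rng.random(m))

# Row weights sqrt(1/rho(x)) so the discrete least-squares problem
# mimics the unweighted continuous one.
w = np.sqrt(np.pi * np.sqrt(1.0 - x**2))

V = np.polynomial.chebyshev.chebvander(x, deg)       # design matrix
coef, *_ = np.linalg.lstsq(w[:, None] * V, w * f(x), rcond=None)

xx = np.linspace(-1.0, 1.0, 1001)
max_err = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, coef) - f(xx)))
```

With noise-free samples and m much larger than the space dimension, the fit lands close to the best degree-5 approximation; the multilevel method of the paper addresses the case where each sample additionally carries a discretization error.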
17O(n,α)14C cross section from 25 meV to approximately 1 MeV
International Nuclear Information System (INIS)
Koehler, P.E.; Graff, S.M.
1991-01-01
We have measured the 17O(n,α)14C cross section from thermal energy to approximately 1 MeV. A bump in the data near 3 keV could be fitted by a state whose properties are consistent with a known subthreshold Jπ = 1− level at Ex = 8.039 MeV. The cause of the 1/v cross section near thermal energy could not be determined, although the known 2+ state at 8.213 MeV was found to be too narrow to contribute much to the thermal cross section. Our data are compared to measurements made via the inverse reaction. There are many differences between the two sets of data. The astrophysical reaction rate was calculated from the measured cross section. This reaction plays a role in the nucleosynthesis of heavy elements in nonstandard big-bang models. At big-bang temperatures, the experimental rate was found to be in fair agreement with the rate estimated from the previously known properties of states of 18O in this region. Furthermore, using the available information from experiments, it was estimated that the 17O(n,α)14C rate is approximately a factor of 10^3-10^4 times larger than the 17O(n,γ)18O rate at big-bang temperatures. As a result, there may be significant cycling between 14C and 17O, resulting in a reduction of heavy-element nucleosynthesis.
Numerical approximation of partial differential equations
Bartels, Sören
2016-01-01
Finite element methods for approximating partial differential equations have reached a high degree of maturity, and are an indispensable tool in science and technology. This textbook aims at providing a thorough introduction to the construction, analysis, and implementation of finite element methods for model problems arising in continuum mechanics. The first part of the book discusses elementary properties of linear partial differential equations along with their basic numerical approximation, the functional-analytical framework for rigorously establishing existence of solutions, and the construction and analysis of basic finite element methods. The second part is devoted to the optimal adaptive approximation of singularities and the fast iterative solution of linear systems of equations arising from finite element discretizations. In the third part, the mathematical framework for analyzing and discretizing saddle-point problems is formulated, corresponding finite element methods are analyzed, and particular ...
Fast wavelet based sparse approximate inverse preconditioner
Energy Technology Data Exchange (ETDEWEB)
Wan, W.L. [Univ. of California, Los Angeles, CA (United States)
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Grote and Huckle and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal, in the sense that the iteration count is not independent of the mesh size. One reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth way. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
Piecewise-Cubic Approximation in Autotracking Mode
Dikoussar, N D
2004-01-01
A method for piecewise-cubic approximation within the framework of four-point transforms is proposed. The knots of the segments are detected in autotracking mode using a digitized curve. A three-point cubic parametric spline (TPS) is used as the model of the local approximant. A free parameter $\theta$ (the coefficient of $x^{3}$) is found in a line-following mode using step-by-step averaging. A formula is derived that expresses the free parameter in terms of the segment length and the values of the function and its derivatives at the joining points. The $C^{1}$-smoothness depends on the accuracy of the $\theta$-estimate. The stability of the method with respect to input errors is also shown. The key parameters of the approximation are the parameters of the basic functions, the variance of the input errors, and the sampling step. The efficiency of the method is demonstrated by numerical calculations on test examples.
On transparent potentials: a Born approximation study
International Nuclear Information System (INIS)
Coudray, C.
1980-01-01
In the framework of the inverse scattering problem at fixed energy, a class of potentials transparent in the Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Among them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in the Born approximation are then stated, and bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy and for large values of the angular momentum, are found to be independent of the energy.
The adiabatic approximation in multichannel scattering
International Nuclear Information System (INIS)
Schulte, A.M.
1978-01-01
Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus the Coriolis coupling is neglected, and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus, based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)
The binary collision approximation: Background and introduction
International Nuclear Information System (INIS)
Robinson, M.T.
1992-08-01
The binary collision approximation (BCA) has long been used in computer simulations of the interactions of energetic atoms with solid targets, as well as being the basis of most analytical theory in this area. While mainly a high-energy approximation, the BCA retains qualitative significance at low energies and, with proper formulation, gives useful quantitative information as well. Moreover, computer simulations based on the BCA can achieve good statistics in many situations where those based on full classical dynamical models require the most advanced computer hardware or are even impracticable. The foundations of the BCA in classical scattering are reviewed, including methods of evaluating the scattering integrals, interaction potentials, and electron excitation effects. The explicit evaluation of time at significant points on particle trajectories is discussed, as are scheduling algorithms for ordering the collisions in a developing cascade. An approximate treatment of nearly simultaneous collisions is outlined and the searching algorithms used in MARLOWE are presented
Minimal entropy approximation for cellular automata
International Nuclear Information System (INIS)
Fukś, Henryk
2014-01-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)
APPROXIMATE INTEGRATION OF HIGHLY OSCILLATING FUNCTIONS
Directory of Open Access Journals (Sweden)
I. N. Melashko
2017-01-01
Full Text Available Elementary approximate formulae for the numerical integration of functions containing oscillating factors of a special form with a parameter are proposed in the paper. General quadrature formulae can be used here only at sufficiently small values of the parameter. It is therefore necessary to account in advance for the strongly oscillating factors in order to obtain formulae for numerical integration that remain suitable when the parameter varies within wide limits; this can be done by treating such factors as weight functions. Moreover, since the parameter can take values which cannot always be predicted in advance, approximate formulae for the calculation of such integrals should be constructed so that they contain the parameter symbolically and are suitable for calculation at any, and particularly large, values of the parameter. Computational rules with these properties are generally obtained by dividing the interval of integration into elementary intervals, successively approximating the integrand density on each elementary interval by polynomials of the first, second and third degrees, and taking the oscillating factors as weight functions. The paper considers the variant in which the density of the integrals on each elementary interval is approximated by a polynomial of degree zero, that is, a constant equal to the value of the density at the middle of the interval. An approximate formula is also constructed for the calculation of an improper integral, over an infinite interval, of a function with an oscillating factor of the special type. In this case it is assumed that the density of the improper integral goes to zero rather quickly as the modulus of the argument increases indefinitely; in other words, it is considered negligibly small outside some finite interval. Estimates of the errors of the approximate formulae, uniform in the parameter, have been obtained.
Stopping Rules for Linear Stochastic Approximation
Wada, Takayuki; Itani, Takamitsu; Fujisaki, Yasumasa
Stopping rules are developed for stochastic approximation, which is an iterative method for solving an unknown equation based on its consecutive residuals corrupted by additive random noise. It is assumed that the equation is linear and that the noise consists of independent and identically distributed random vectors with zero mean and a bounded covariance. Then, the number of iterations for achieving a given probabilistic accuracy of the resultant solution is derived, which gives a rigorous stopping rule for the stochastic approximation. This number is polynomial in the problem size.
Computational topology for approximations of knots
Directory of Open Access Journals (Sweden)
Ji Li
2014-10-01
• a sum of total curvature and derivative. High degree Bézier curves are often used as smooth representations, where computational efficiency is a practical concern. Subdivision can produce PL approximations for a given Bézier curve, fulfilling the above two conditions. The primary contributions are: (i) a priori bounds on the number of subdivision iterations sufficient to achieve a PL approximation that is ambient isotopic to the original Bézier curve, and (ii) improved iteration bounds over those previously established.
On the dipole approximation with error estimates
Boßmann, Lea; Grummt, Robert; Kolb, Martin
2018-01-01
The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.
Static correlation beyond the random phase approximation
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian Sommer
2014-01-01
derived from Hedin's equations (Random Phase Approximation (RPA), Time-dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly for intermediate binding distances. A Hubbard model for the dimer allows us to obtain exact analytical results for the various approximations, which are readily compared with the exact diagonalization of the model. Moreover, the model is shown to reproduce all the qualitative results from the ab initio calculations.
Faster and Simpler Approximation of Stable Matchings
Directory of Open Access Journals (Sweden)
Katarzyna Paluch
2014-04-01
Full Text Available We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previous most well-known algorithm, by McDermid, has the same approximation ratio but runs in O(n^{3/2} m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give the extension of the algorithm for computing stable many-to-many matchings.
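For background, the classical Gale-Shapley proposal algorithm computes an exact stable matching when preference lists are complete and strict; the approximation question above arises only for lists with ties and incompleteness, which this baseline sketch deliberately does not handle.

```python
def gale_shapley(men_prefs, women_prefs):
    """Classical proposal algorithm for complete, strict preference lists.
    men_prefs[i] lists women in man i's order of preference;
    women_prefs[j] lists men in woman j's order.
    Returns a dict mapping each woman to her matched man."""
    n = len(men_prefs)
    next_choice = [0] * n                  # next woman each free man proposes to
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_prefs]
    engaged_to = {}                        # woman -> man
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:  # w prefers m to her partner
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free_men.append(m)             # w rejects m; he stays free
    return engaged_to

# Tiny instance: both men prefer woman 0; woman 0 prefers man 0.
matching = gale_shapley([[0, 1], [0, 1]], [[0, 1], [0, 1]])
```

In the toy instance, man 1 is bumped from woman 0 when she receives man 0's proposal and ends up with woman 1, giving the stable matching {0: 0, 1: 1}.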
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate inference.
APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS
Directory of Open Access Journals (Sweden)
T. I. Aliev
2013-03-01
Full Text Available For probability distributions with coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with coefficient of variation taking any value in the range (0, 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
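A two-stage hypoexponential fit to a given mean and coefficient of variation can be written in closed form. Note the two-stage version only covers cv in [1/sqrt(2), 1); more stages are needed for smaller cv. The formula below is the standard two-moment fit, offered as an illustration rather than the specific construction used in the paper.

```python
import math

def hypoexp2_rates(mean, cv):
    """Rates (lam1, lam2) of a sum of two independent exponentials whose
    mean and coefficient of variation match the given values.
    Requires 1/sqrt(2) <= cv < 1."""
    s = math.sqrt(2.0 * cv**2 - 1.0)
    lam1 = 2.0 / (mean * (1.0 + s))
    lam2 = 2.0 / (mean * (1.0 - s))
    return lam1, lam2

lam1, lam2 = hypoexp2_rates(mean=1.0, cv=0.8)
m = 1.0 / lam1 + 1.0 / lam2                            # mean of the sum
cv_fit = math.sqrt(1.0 / lam1**2 + 1.0 / lam2**2) / m  # cv of the sum
```

Both moments are matched exactly: the sum of the two stage means reproduces the target mean, and the independent-stage variances add up to give the target coefficient of variation.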
Approximate Controllability of Fractional Integrodifferential Evolution Equations
Directory of Open Access Journals (Sweden)
R. Ganesh
2013-01-01
Full Text Available This paper addresses the issue of approximate controllability for a class of control systems represented by nonlinear fractional integrodifferential equations with nonlocal conditions. By using semigroup theory, p-mean continuity and fractional calculations, a set of sufficient conditions is formulated and proved for the nonlinear fractional control systems. More precisely, the results are established under the assumption that the corresponding linear system is approximately controllable and the functions satisfy non-Lipschitz conditions. The results generalize and improve some known results.
Approximated Fractional Order Chebyshev Lowpass Filters
Directory of Open Access Journals (Sweden)
Todd Freeborn
2015-01-01
Full Text Available We propose the use of nonlinear least squares optimization to approximate the passband ripple characteristics of traditional Chebyshev lowpass filters with fractional order steps in the stopband. MATLAB simulations of (1+α), (2+α), and (3+α) order lowpass filters with fractional steps from α = 0.1 to α = 0.9 are given as examples. SPICE simulations of 1.2, 1.5, and 1.8 order lowpass filters using approximated fractional order capacitors in a Tow-Thomas biquad circuit validate the implementation of these filter circuits.
Approximate Inference and Deep Generative Models
CERN. Geneva
2018-01-01
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.
Approximation result toward nearest neighbor heuristic
Directory of Open Access Journals (Sweden)
Monnot Jérôme
2002-01-01
Full Text Available In this paper, we revisit the famous heuristic called nearest neighbor (NN) for the traveling salesman problem under maximization and minimization goals. We deal with variants where the edge costs belong to the interval [a; ta] for a > 0 and t > 1, which certainly corresponds to practical cases of these problems. We prove that NN is a (t+1)/(2t)-approximation for max TSP[a; ta] and a 2/(t+1)-approximation for min TSP[a; ta] under the standard performance ratio. Moreover, we show that these ratios are tight for some instances.
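The NN heuristic itself is simple to state; the sketch below runs it on a toy instance whose edge costs are confined to [a, ta] with a = 1, t = 2 (the instance is made up for illustration, not taken from the paper) and compares against the optimum by brute force.

```python
from itertools import permutations

def nearest_neighbor_tour(dist, start=0):
    """Greedy tour: repeatedly move to the closest unvisited city."""
    n = len(dist)
    tour, seen = [start], {start}
    while len(tour) < n:
        cur = tour[-1]
        nxt = min((c for c in range(n) if c not in seen),
                  key=lambda c: dist[cur][c])
        tour.append(nxt)
        seen.add(nxt)
    return tour

def tour_length(dist, tour):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Symmetric costs confined to [1, 2], i.e. a = 1, t = 2.
dist = [[0, 1, 2, 2],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [2, 2, 1, 0]]
nn_len = tour_length(dist, nearest_neighbor_tour(dist))
opt_len = min(tour_length(dist, list(p)) for p in permutations(range(4)))
```

On this instance NN happens to find an optimal tour; the bounded edge-cost ratio t is what lets the paper prove worst-case guarantees for both the maximization and minimization variants.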
Radiation forces in the discrete dipole approximation
Hoekstra, A.G.; Frijlink, M.O.; Waters, L.B.F.M.; Sloot, P.M.A.
2001-01-01
The theory of the discrete-dipole approximation (DDA) for light scattering is extended to allow for the calculation of radiation forces on each dipole in the DDA model. Starting with the theory of Draine and Weingartner [Astrophys. J. 470, 551 (1996)] we derive an expression for the radiation force
Perturbation of operators and approximation of spectrum
Indian Academy of Sciences (India)
The pure linear algebraic approach is the main advantage of the results here. The paper is organized as follows. In §2, the approximation results are extended to the case of a one-parameter norm-continuous family of operators. In §3, the spectral gap prediction results are proved with some examples.
Isotopic Approximation of Implicit Curves and Surfaces
Plantinga, Simon; Vegter, Gert
2004-01-01
Implicit surfaces are defined as the zero set of a function F: R3 → R. Although several algorithms exist for generating piecewise linear approximations, most of them are based on a user-defined stepsize or bounds to indicate the precision, and therefore cannot guarantee topological correctness.
A rational approximation of the effectiveness factor
DEFF Research Database (Denmark)
Wedel, Stig; Luss, Dan
1980-01-01
A fast, approximate method of calculating the effectiveness factor for arbitrary rate expressions is presented. The method does not require any iterative or interpolative calculations. It utilizes the well known asymptotic behavior for small and large Thiele moduli to derive a rational function w...
Approximate solutions to variational inequalities and applications
Directory of Open Access Journals (Sweden)
M. Beatrice Lignola
1994-11-01
Full Text Available The aim of this paper is to investigate two concepts of approximate solutions to parametric variational inequalities in topological vector spaces for which the corresponding solution map is closed-graph and/or lower semicontinuous, and to apply the results to the stability of optimization problems with variational inequality constraints.
Nanostructures: Scattering beyond the Born approximation
Grigoriev, S.V.; Syromyatnikov, A. V.; Chumakov, A. P.; Grigoryeva, N.A.; Napolskii, K.S.; Roslyakov, I. V.; Eliseev, A.A.; Petukhov, A.V.; Eckerlebe, H.
2010-01-01
The neutron scattering on a two-dimensional ordered nanostructure with the third nonperiodic dimension can go beyond the Born approximation. In our model supported by the exact theoretical solution a well-correlated hexagonal porous structure of anodic aluminum oxide films acts as a peculiar
Large hierarchies from approximate R symmetries
International Nuclear Information System (INIS)
Kappl, Rolf; Ratz, Michael; Vaudrevange, Patrick K.S.
2008-12-01
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales. (orig.)
Approximability and Parameterized Complexity of Minmax Values
DEFF Research Database (Denmark)
Hansen, Kristoffer Arnsfelt; Hansen, Thomas Dueholm; Miltersen, Peter Bro
2008-01-01
We consider approximating the minmax value of a multi-player game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of ε log n digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand, approximating the value with a precision of c log log n digits (for any constant c ≥ 1) can be done in quasi-polynomial time. We consider the parameterized complexity of the problem, with the parameter being the number of pure strategies k of the player for which the minmax value is computed. We show that if there are three players, k = 2 and there are only two possible rational payoffs, the minmax value is a rational number and can be computed exactly in linear time. In the general case, we show that the value can be approximated with any polynomial number of digits of accuracy in time n^O(k). On the other hand, we...
An Approximate Bayesian Fundamental Frequency Estimator
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2012-01-01
Joint fundamental frequency and model order estimation is an important problem in several applications such as speech and music processing. In this paper, we develop an approximate estimation algorithm of these quantities using Bayesian inference. The inference about the fundamental frequency...
Uniform semiclassical approximation for absorptive scattering systems
International Nuclear Information System (INIS)
Hussein, M.S.; Pato, M.P.
1987-07-01
The uniform semiclassical approximation of the elastic scattering amplitude is generalized to absorptive systems. An integral equation is derived which connects the absorption-modified amplitude to the absorption-free one. Division of the amplitude into diffractive and refractive components is then made possible. (Author)
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-11-30
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
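The compressibility that H-matrices exploit can be seen on a tiny 1-D example: off-diagonal blocks of a smooth covariance matrix have rapidly decaying singular values, so a rank-k factorization stores them in O(kn) instead of O(n^2). This sketch uses a plain truncated SVD on an exponential (Matérn nu = 1/2) kernel; an actual H-matrix code would use cheaper low-rank constructors, and the kernel and sizes here are arbitrary choices for illustration.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
# Exponential covariance C(x, y) = exp(-|x - y| / 0.2)  (Matern nu = 1/2)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)

# Off-diagonal block coupling the left half of the points to the right half
B = C[: n // 2, n // 2 :]

# Rank-k truncated SVD, stored as two thin factors
k = 8
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_k = (U[:, :k] * s[:k]) @ Vt[:k]

rel_err = np.linalg.norm(B - B_k) / np.linalg.norm(B)  # Frobenius norm
```

In 1-D this particular kernel's off-diagonal block is in fact exactly rank one, since exp(-(y-x)/l) = exp(x/l) exp(-y/l) for y > x, so rel_err sits at rounding level; for genuinely multi-dimensional Matérn kernels the numerical rank of admissible blocks grows slowly, which is what yields the log-linear storage quoted above.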
Nonlinear approximation with dictionaries. II. Inverse Estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2006-01-01
In this paper, which is the sequel to [16], we study inverse estimates of the Bernstein type for nonlinear approximation with structured redundant dictionaries in a Banach space. The main results are for blockwise incoherent dictionaries in Hilbert spaces, which generalize the notion of joint block...
Nonlinear approximation with dictionaries,.. II: Inverse estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
In this paper we study inverse estimates of the Bernstein type for nonlinear approximation with structured redundant dictionaries in a Banach space. The main results are for separated decomposable dictionaries in Hilbert spaces, which generalize the notion of joint block-diagonal mutually...
Generalized Lower and Upper Approximations in Quantales
Directory of Open Access Journals (Sweden)
Qimei Xiao
2012-01-01
Full Text Available We introduce the concepts of set-valued homomorphism and strong set-valued homomorphism of a quantale which are the extended notions of congruence and complete congruence, respectively. The properties of generalized lower and upper approximations, constructed by a set-valued mapping, are discussed.
Uncertainty relations for approximation and estimation
Energy Technology Data Exchange (ETDEWEB)
Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)
2016-05-27
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
An approximate classical unimolecular reaction rate theory
Zhao, Meishan; Rice, Stuart A.
1992-05-01
We describe a classical theory of unimolecular reaction rates which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, an approximation which is similar to, but extends and improves, the separatrix approximations introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.
Markov operators, positive semigroups and approximation processes
Altomare, Francesco; Leonessa, Vita; Rasa, Ioan
2015-01-01
In recent years several investigations have been devoted to the study of large classes of (mainly degenerate) initial-boundary value evolution problems in connection with the possibility to obtain a constructive approximation of the associated positive C_0-semigroups. In this research monograph we present the main lines of a theory which finds its root in the above-mentioned research field.
Orthorhombic rational approximants for decagonal quasicrystals
Indian Academy of Sciences (India)
Unknown
An important exercise in the study of rational approximants is to derive their metric, especially in relation to the corresponding quasicrystal or the underlying clusters. Kuo's model has ..... the smaller diagonal of the fat rhombus in the Penrose tiling. This length scale is obtained by a section along a1 in the Penrose tiling and ...
Uncertainty relations for approximation and estimation
International Nuclear Information System (INIS)
Lee, Jaeha; Tsutsui, Izumi
2016-01-01
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
Approximation properties of fine hyperbolic graphs
Indian Academy of Sciences (India)
groups have invariant translation approximation property (ITAP, see Definition 2.2). He also pointed out that there was no serious difficulty in extending the main theorem to fine hyperbolic graphs, but he did not outline the proof. So in this paper, we first give a proof for this extension, see Theorem 1.1 below. Then we define ...
Approximate Dynamic Programming by Practical Examples
Mes, Martijn R.K.; Perez Rivera, Arturo Eduardo; Boucherie, Richard; van Dijk, Nico M.
2017-01-01
Computing the exact solution of an MDP model is generally difficult and possibly intractable for realistically sized problem instances. A powerful technique for solving large-scale discrete-time multistage stochastic control problems is Approximate Dynamic Programming (ADP). Although ADP is used
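The core ADP idea sketched in this abstract, replacing the exact value function with a fitted parametric approximation inside the Bellman recursion, can be illustrated on a toy problem. The five-state chain MDP, the quadratic feature set, and all numerical values below are illustrative assumptions, not taken from the book; this is a minimal fitted value iteration, checked against exact value iteration:

```python
def solve3(A, b):
    """Tiny Gaussian elimination for the 3x3 normal equations below."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

states = range(5)                                 # toy 5-state chain (assumption)
gamma = 0.9
feats = lambda s: [1.0, float(s), float(s * s)]   # linear value-function architecture
reward = lambda s: s / 4.0
step = lambda s, a: min(max(s + a, 0), 4)         # deterministic +/-1 moves, clipped

def fit(values):
    """Least-squares projection of state values onto the feature space."""
    A = [[sum(feats(s)[i] * feats(s)[j] for s in states) for j in range(3)]
         for i in range(3)]
    b = [sum(feats(s)[i] * values[s] for s in states) for i in range(3)]
    return solve3(A, b)

# Approximate value iteration: Bellman backup, then projection onto features
w = [0.0, 0.0, 0.0]
for _ in range(200):
    V = lambda s, w=w: sum(wi * f for wi, f in zip(w, feats(s)))
    w = fit([reward(s) + gamma * max(V(step(s, a)) for a in (-1, 1))
             for s in states])

# Exact value iteration on the same MDP, for reference
V_exact = [0.0] * 5
for _ in range(200):
    V_exact = [reward(s) + gamma * max(V_exact[step(s, a)] for a in (-1, 1))
               for s in states]
V_adp = [sum(wi * f for wi, f in zip(w, feats(s))) for s in states]
```

On this small chain the three quadratic features approximate the five exact values closely, so the projected Bellman iteration lands near the exact fixed point; on realistically sized instances the same projection step is what makes the recursion tractable.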
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
Quasilinear theory without the random phase approximation
International Nuclear Information System (INIS)
Weibel, E.S.; Vaclavik, J.
1980-08-01
The system of quasilinear equations is derived without making use of the random phase approximation. The fluctuating quantities are described by the autocorrelation function of the electric field using the techniques of Fourier analysis. The resulting equations possess the necessary conservation properties, but comprise new terms which hitherto have been lost in the conventional derivations
Statistical model semiquantitatively approximates arabinoxylooligosaccharides' structural diversity
DEFF Research Database (Denmark)
Dotsenko, Gleb; Nielsen, Michael Krogsgaard; Lange, Lene
2016-01-01
(wheat flour arabinoxylan (arabinose/xylose, A/X = 0.47); grass arabinoxylan (A/X = 0.24); wheat straw arabinoxylan (A/X = 0.15); and hydrothermally pretreated wheat straw arabinoxylan (A/X = 0.05)), is semiquantitatively approximated using the proposed model. The suggested approach can be applied...
Upper Bounds on Numerical Approximation Errors
DEFF Research Database (Denmark)
Raahauge, Peter
2004-01-01
This paper suggests a method for determining rigorous upper bounds on approximation errors of numerical solutions to infinite horizon dynamic programming models. Bounds are provided for approximations of the value function and the policy function as well as the derivatives of the value function...
Multi-Interpretation Operators and Approximate Classification
Engelfriet, J.; Treur, J.
2003-01-01
In this paper non-classical logical techniques are introduced to formalize the analysis of multi-interpretable observation information, in particular in approximate classification processes where information on attributes of an object is to be inferred on the basis of observable properties of the
Approximate Furthest Neighbor in High Dimensions
DEFF Research Database (Denmark)
Pagh, Rasmus; Silvestri, Francesco; Sivertsen, Johan von Tangen
2015-01-01
-dimensional Euclidean space. We build on the technique of Indyk (SODA 2003), storing random projections to provide sublinear query time for AFN. However, we introduce a different query algorithm, improving on Indyk’s approximation factor and reducing the running time by a logarithmic factor. We also present a variation...
Padé approximations and diophantine geometry.
Chudnovsky, D V; Chudnovsky, G V
1985-04-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves.
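The basic object in these entries, the Padé approximant itself, can be computed from Taylor coefficients by solving a small linear system for the denominator. The sketch below is this generic construction (not the diophantine machinery of the paper); for exp(x), the [2/2] approximant at x = 1 already beats the degree-4 Taylor polynomial built from the same five coefficients:

```python
from fractions import Fraction
from math import e, factorial

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator and denominator coefficient lists (with b[0] = 1)."""
    # Solve sum_{j=0..n} b_j * c_{k-j} = 0 for k = m+1 .. m+n, given b_0 = 1
    rows = []
    for k in range(m + 1, m + n + 1):
        rows.append([c[k - j] if k - j >= 0 else Fraction(0)
                     for j in range(1, n + 1)] + [-c[k]])
    # Gauss-Jordan elimination with partial pivoting over exact rationals
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(rows[r][i]))
        rows[i], rows[piv] = rows[piv], rows[i]
        for r in range(n):
            if r != i and rows[r][i] != 0:
                f = rows[r][i] / rows[i][i]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[i])]
    b = [Fraction(1)] + [rows[i][n] / rows[i][i] for i in range(n)]
    a = [sum(b[j] * c[i - j] for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return a, b

c = [Fraction(1, factorial(k)) for k in range(5)]    # Taylor coefficients of exp
a, b = pade(c, 2, 2)
x = Fraction(1)
num = sum(ai * x**i for i, ai in enumerate(a))
den = sum(bj * x**j for j, bj in enumerate(b))
pade_val = float(num / den)                          # [2/2] approximant at x = 1
taylor_val = float(sum(c[k] for k in range(5)))      # degree-4 Taylor at x = 1
```

For exp this yields the classical (1 + x/2 + x²/12)/(1 − x/2 + x²/12), whose value 19/7 at x = 1 is roughly twice as accurate as the truncated Taylor series of the same information content.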
e+e- → tt̄ in the soft-gluon approximation
Indian Academy of Sciences (India)
April 2002 physics pp. 575–590. QCD corrections to decay-lepton polar and azimuthal angular distributions in e+e- → tt̄ in the soft-gluon approximation. SAURABH D RINDANI ... accurate determination of its couplings will have to await the construction of a linear e+e- collider. ..... is the azimuthal angle of the l+ momentum.
Stability of approximate factorization with $\theta$-methods
W. Hundsdorfer (Willem)
1997-01-01
Approximate factorization seems for certain problems a viable alternative to time splitting. Since a splitting error is avoided, accuracy will in general be favourable compared to time splitting methods. However, it is not clear to what extent stability is affected by factorization.
Decision-theoretic troubleshooting: Hardness of approximation
Czech Academy of Sciences Publication Activity Database
Lín, Václav
2014-01-01
Roč. 55, č. 4 (2014), s. 977-988 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Decision-theoretic troubleshooting * Hardness of approximation * NP-completeness Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.451, year: 2014
Comparison of Two Approaches to Approximated Reasoning
van den Broek, P.M.; Wagenknecht, Michael; Hampel, Rainer
A comparison is made of two approaches to approximate reasoning: Mamdani's interpolation method and the implication method. Both approaches are variants of Zadeh's compositional rule of inference. It is shown that the approaches are not equivalent. A correspondence between the approaches is
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
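The flavour of such a closed-form approximation can be shown with a standard moment-matching construction: a sum of independent lognormal basic-event contributions (an OR-gate-like combination) is replaced by a single lognormal with the same mean and variance (the Fenton–Wilkinson approximation). This is a generic sketch under assumed parameter values, not necessarily the article's exact derivation, with a Monte Carlo cross-check:

```python
import math, random

def fw_lognormal(params):
    """Fenton-Wilkinson: approximate a sum of independent lognormals by a
    single lognormal with matched mean and variance.
    params: list of (mu, sigma) pairs, one per lognormal term."""
    mean = sum(math.exp(m + s * s / 2) for m, s in params)
    var = sum((math.exp(s * s) - 1) * math.exp(2 * m + s * s) for m, s in params)
    sigma2 = math.log(1 + var / mean ** 2)
    mu = math.log(mean) - sigma2 / 2
    return mu, math.sqrt(sigma2)

# Illustrative basic-event uncertainty parameters (assumptions, not from the paper)
params = [(math.log(1e-3), 0.5), (math.log(2e-3), 0.8), (math.log(5e-4), 1.0)]
mu, sigma = fw_lognormal(params)
approx_mean = math.exp(mu + sigma ** 2 / 2)

# Monte Carlo check of the matched first moment
rng = random.Random(42)
n = 100_000
samples = [sum(rng.lognormvariate(m, s) for m, s in params) for _ in range(n)]
mc_mean = sum(samples) / n
```

Once (mu, sigma) are in hand, percentiles of the top-event distribution come from the lognormal quantile function directly, avoiding the full sampling run; this mirrors the trade-off the article quantifies against Monte Carlo and Wilks-type bounds.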
Counting independent sets using the Bethe approximation
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Laboratory; Chandrasekaran, V [MIT; Gamarmik, D [MIT; Shah, D [MIT; Sin, J [MIT
2009-01-01
The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n²ε⁻⁴ log³(nε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the loop calculus approach recently developed by Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
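Plain BP for the hard-core model (not the convergent time-varying variant proposed by the authors) is easy to sketch; on a tree the Bethe fixed point is exact, so the BP marginals can be checked against brute-force enumeration of independent sets. The three-node path below is an illustrative example:

```python
from itertools import combinations

def bp_occupancy(adj, lam=1.0, iters=200):
    """Belief propagation for the hard-core (independent set) model.
    R[u][v] is the ratio message from u to v; returns occupancy marginals.
    On a tree the fixed point is exact (the Bethe approximation has zero error)."""
    R = {u: {v: 1.0 for v in adj[u]} for u in adj}
    for _ in range(iters):
        new = {u: {} for u in adj}
        for u in adj:
            for v in adj[u]:
                prod = 1.0
                for w in adj[u]:
                    if w != v:
                        prod *= 1.0 / (1.0 + R[w][u])
                new[u][v] = lam * prod
        R = new
    marg = {}
    for u in adj:
        Ru = lam
        for w in adj[u]:
            Ru *= 1.0 / (1.0 + R[w][u])
        marg[u] = Ru / (1.0 + Ru)
    return marg

# Path graph 0-1-2: its independent sets are {}, {0}, {1}, {2}, {0,2}
adj = {0: [1], 1: [0, 2], 2: [1]}
marg = bp_occupancy(adj)

# Brute-force occupancy marginals for comparison
nodes = list(adj)
indep_sets = []
for r in range(len(nodes) + 1):
    for S in combinations(nodes, r):
        if all(v not in adj[u] for u in S for v in S):
            indep_sets.append(set(S))
brute = {u: sum(u in S for S in indep_sets) / len(indep_sets) for u in nodes}
```

For the path, BP recovers the exact occupancy probabilities 2/5, 1/5, 2/5; on loopy graphs the same update only approximates them, which is where the girth-dependent error analysis of the paper enters.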
Energy Technology Data Exchange (ETDEWEB)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler, a method for iteratively adapting tolerance levels, local covariance estimate using scikit-learn's KDTree, modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel, output and restart files are backed up every iteration, user defined metric and simulation methods, a module for specifying heterogeneous parameter priors including non-standard prior PDFs, a module for specifying a constant, linear, log or exponential tolerance level, well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
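The "likelihood-free" principle behind samplers like astroABC can be shown with the simplest possible ABC scheme: rejection sampling with a distance on summary statistics. The toy below (plain rejection, a Gaussian forward model, a made-up tolerance; not astroABC's SMC sampler or API) accepts a parameter draw whenever its simulated summary lands near the observed one, so no likelihood is ever evaluated:

```python
import random, statistics

def abc_rejection(observed_summary, simulate, prior_draw, tol, n_accept, rng):
    """Minimal likelihood-free rejection ABC: keep parameter draws whose
    simulated summary statistic lands within `tol` of the observed summary."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_summary) < tol:
            accepted.append(theta)
    return accepted

rng = random.Random(0)
obs_mean = 2.0                                   # observed summary (assumption)
simulate = lambda th, r: statistics.fmean(r.gauss(th, 1.0) for _ in range(100))
prior = lambda r: r.uniform(-5.0, 5.0)           # flat prior (assumption)

posterior = abc_rejection(obs_mean, simulate, prior, tol=0.2, n_accept=300, rng=rng)
est = statistics.fmean(posterior)                # posterior mean, close to 2.0
```

SMC-style ABC, as in astroABC, improves on this by evolving a weighted particle population through a shrinking tolerance schedule instead of rejecting from the prior at a fixed tolerance.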
Analytical Ballistic Trajectories with Approximately Linear Drag
Directory of Open Access Journals (Sweden)
Giliam J. P. de Carpentier
2014-01-01
Full Text Available This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
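The kind of closed form such a linear-drag model admits can be sketched directly: with drag linear in the velocity relative to a constant wind, the ODE dv/dt = g + (w − v)/τ has terminal velocity v∞ = w + τg and an exponential solution, so position and velocity are analytic in t. The parameter values below are illustrative, and this is the generic linear-drag solution rather than the paper's exact parameterization; it is cross-checked against numerical integration:

```python
import math

def trajectory(p0, v0, v_inf, tau, t):
    """Closed-form state under linear drag dv/dt = (v_inf - v)/tau, where
    v_inf = wind + tau * gravity is the terminal velocity (2D tuples)."""
    decay = math.exp(-t / tau)
    v = tuple(vi + (v0i - vi) * decay for v0i, vi in zip(v0, v_inf))
    p = tuple(p0i + vi * t + tau * (v0i - vi) * (1.0 - decay)
              for p0i, v0i, vi in zip(p0, v0, v_inf))
    return p, v

# Illustrative parameters: gravity, wind, drag time constant tau
g, w, tau = (0.0, -9.81), (1.0, 0.0), 2.0
v_inf = tuple(wi + tau * gi for wi, gi in zip(w, g))
p0, v0 = (0.0, 0.0), (10.0, 10.0)
p_cf, v_cf = trajectory(p0, v0, v_inf, tau, 1.0)

# Cross-check with explicit Euler integration of the same ODE
dt, p, v = 1e-5, list(p0), list(v0)
for _ in range(int(1.0 / dt)):
    for i in range(2):
        p[i] += v[i] * dt
        v[i] += (v_inf[i] - v[i]) / tau * dt
```

Because position is analytic in t, planners can evaluate or invert the trajectory cheaply per frame, which is the property the paper exploits for its real-time planning algorithms.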
Analysing organic transistors based on interface approximation
International Nuclear Information System (INIS)
Akiyama, Yuto; Mori, Takehiko
2014-01-01
Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region
Nonlinear analysis approximation theory, optimization and applications
2014-01-01
Many of our daily-life problems can be written in the form of an optimization problem. Therefore, solution methods are needed to solve such problems. Due to the complexity of the problems, it is not always easy to find the exact solution. However, approximate solutions can be found. The theory of the best approximation is applicable in a variety of problems arising in nonlinear functional analysis and optimization. This book highlights interesting aspects of nonlinear analysis and optimization together with many applications in the areas of physical and social sciences including engineering. It is immensely helpful for young graduates and researchers who are pursuing research in this field, as it provides abundant research resources for researchers and post-doctoral fellows. This will be a valuable addition to the library of anyone who works in the field of applied mathematics, economics and engineering.
Simple Lie groups without the approximation property
DEFF Research Database (Denmark)
Haagerup, Uffe; de Laat, Tim
2013-01-01
For a locally compact group G, let A(G) denote its Fourier algebra, and let M0A(G) denote the space of completely bounded Fourier multipliers on G. The group G is said to have the Approximation Property (AP) if the constant function 1 can be approximated by a net in A(G) in the weak-∗ topology on the space M0A(G). Recently, Lafforgue and de la Salle proved that SL(3,R) does not have the AP, implying the first example of an exact discrete group without it, namely, SL(3,Z). In this paper we prove that Sp(2,R) does not have the AP. It follows that all connected simple Lie groups with finite center and real rank greater than or equal to two do not have the AP. This naturally gives rise to many examples of exact discrete groups without the AP.
The optimal XFEM approximation for fracture analysis
International Nuclear Information System (INIS)
Jiang Shouyan; Du Chengbin; Ying Zongquan
2010-01-01
The extended finite element method (XFEM) provides an effective tool for analyzing fracture mechanics problems. An XFEM approximation consists of standard finite elements, which are used in the major part of the domain, and enriched elements in the enriched sub-domain for capturing special solution properties such as discontinuities and singularities. However, two issues in the standard XFEM deserve special attention: efficient numerical integration methods and an appropriate construction of the blending elements. In this paper, an optimal XFEM approximation is proposed to overcome the disadvantages mentioned above in the standard XFEM. Modified enrichment functions are presented that can be reproduced exactly everywhere in the domain. The corresponding FORTRAN program is developed for fracture analysis. A classic problem of fracture mechanics is used to benchmark the program. The results indicate that the optimal XFEM can alleviate the errors and improve numerical precision.
Approximated solutions to Born-Infeld dynamics
Energy Technology Data Exchange (ETDEWEB)
Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Incomplete Sparse Approximate Inverses for Parallel Preconditioning
International Nuclear Information System (INIS)
Anzt, Hartwig; University of Tennessee, Knoxville, TN; Huckle, Thomas K.; Bräckle, Jürgen; Dongarra, Jack
2017-01-01
In this study, we propose a new preconditioning method that can be seen as a generalization of block-Jacobi methods, or as a simplification of the sparse approximate inverse (SAI) preconditioners. The "Incomplete Sparse Approximate Inverses" (ISAI) approach is particularly efficient in the solution of sparse triangular linear systems of equations. Those arise, for example, in the context of incomplete factorization preconditioning. ISAI preconditioners can be generated via an algorithm providing fine-grained parallelism, which makes them attractive for hardware with a high concurrency level. Finally, in a study covering a large number of matrices, we identify the ISAI preconditioner as an attractive alternative to exact triangular solves in the context of incomplete factorization preconditioning.
Approximate Solutions in Planted 3-SAT
Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji
2013-03-01
In many computational settings, there exist many instances where finding a solution requires computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization problems and cryptography in computer science, or in glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this problem as a problem in statistical physics at finite temperature, we examine the computational running time of finding approximate solutions to 3-satisfiability for randomly generated 3-SAT instances which are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first-order transition is found in the running time of these algorithms.
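A stochastic local search of the kind referred to above can be sketched as WalkSAT run on a planted instance: clauses are sampled so that a hidden assignment satisfies them all, which guarantees satisfiability. The instance sizes and noise parameter below are illustrative assumptions, not the paper's experimental settings:

```python
import random

def plant_instance(n, m, rng):
    """Random 3-SAT instance guaranteed satisfiable by a hidden assignment."""
    hidden = [rng.random() < 0.5 for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vs = rng.sample(range(n), 3)
        lits = [(v, rng.random() < 0.5) for v in vs]      # (variable, wanted sign)
        if any(hidden[v] == sign for v, sign in lits):    # keep satisfied clauses
            clauses.append(lits)
    return clauses

def walksat(clauses, n, rng, p=0.5, max_flips=100_000):
    """Stochastic local search: flip a variable from a random unsatisfied clause."""
    a = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(a[v] == s for v, s in c)]
        if not unsat:
            return a
        clause = rng.choice(unsat)
        if rng.random() < p:                              # noisy random-walk move
            v = rng.choice(clause)[0]
        else:                                             # greedy move: fewest broken
            def broken(var):
                a[var] = not a[var]
                b = sum(not any(a[u] == s for u, s in c) for c in clauses)
                a[var] = not a[var]
                return b
            v = min((v for v, _ in clause), key=broken)
        a[v] = not a[v]
    return None

rng = random.Random(1)
n, m = 12, 40
clauses = plant_instance(n, m, rng)
sol = walksat(clauses, n, rng)
```

Measuring how max_flips must scale with n and the clause density to keep the success probability high is exactly the kind of running-time observable in which the abstract reports a first-order transition.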
Time Stamps for Fixed-Point Approximation
DEFF Research Database (Denmark)
Damian, Daniela
2001-01-01
Time stamps were introduced in Shivers's PhD thesis for approximating the result of a control-flow analysis. We show them to be suitable for computing program analyses where the space of results (e.g., control-flow graphs) is large. We formalize time-stamping as a top-down, fixed-point approximation...
Traveltime approximations for inhomogeneous HTI media
Alkhalifah, Tariq Ali
2011-01-01
Traveltime information is convenient for parameter estimation, especially if the medium is described by an anisotropic set of parameters. This is especially true if we can relate traveltimes analytically to these medium parameters, which is generally hard to do in inhomogeneous media. As a result, I develop traveltime approximations for horizontally transversely isotropic (HTI) media as simplified and even linear functions of the anisotropic parameters. This is accomplished by perturbing the solution of the HTI eikonal equation with respect to η and the azimuthal symmetry direction (usually used to describe the fracture direction) from a generally inhomogeneous elliptically anisotropic background medium. The resulting approximations can provide a more accurate analytical description of the traveltime in a homogeneous background than other published moveout equations. These equations will allow us to readily extend the inhomogeneous background elliptical anisotropic model to an HTI model with variable, but smoothly varying, η and horizontal symmetry direction values. © 2011 Society of Exploration Geophysicists.
The approximability of the String Barcoding problem
Directory of Open Access Journals (Sweden)
Rizzi Romeo
2006-08-01
Full Text Available Abstract The String Barcoding (SBC) problem, introduced by Rash and Gusfield (RECOMB, 2002), consists in finding a minimum set of substrings that can be used to distinguish between all members of a set of given strings. In a computational biology context, the given strings represent a set of known viruses, while the substrings can be used as probes for a hybridization experiment via microarray. Eventually, one aims at the classification of new strings (unknown viruses) through the result of the hybridization experiment. In this paper we show that SBC is as hard to approximate as Set Cover. Furthermore, we show that the constrained version of SBC (with probes of bounded length) is also hard to approximate. These negative results are tight.
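The Set Cover connection in the hardness result also suggests the natural positive counterpart: the greedy set-cover heuristic, choosing at each step the probe that separates the most still-unresolved string pairs. The four "virus" strings and the probe-length cap below are made up for illustration; this greedy selection is not from the paper itself:

```python
from itertools import combinations

def greedy_barcode(strings, max_len=3):
    """Greedy probe selection for String Barcoding: repeatedly pick the
    substring that distinguishes the most still-unresolved string pairs
    (set-cover greedy, hence logarithmically approximate)."""
    candidates = {s[i:i + L] for s in strings
                  for L in range(1, max_len + 1)
                  for i in range(len(s) - L + 1)}
    unresolved = set(combinations(range(len(strings)), 2))
    probes = []
    while unresolved:
        def gain(probe):
            return sum((probe in strings[a]) != (probe in strings[b])
                       for a, b in unresolved)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break                  # remaining pairs cannot be separated
        probes.append(best)
        unresolved = {(a, b) for a, b in unresolved
                      if (best in strings[a]) == (best in strings[b])}
    return probes

viruses = ["acgtac", "ggttca", "acttgg", "tcgacc"]
probes = greedy_barcode(viruses)
# Each string's barcode: which probes it contains
signatures = [tuple(p in s for p in probes) for s in viruses]
```

The hardness result in the abstract says that, up to constants, this logarithmic approximation factor is the best any polynomial-time algorithm can guarantee.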
Approximations in the PE-method
DEFF Research Database (Denmark)
Arranz, Marta Galindo
1996-01-01
Two differenct sources of errors may occur in the implementation of the PE methods; a phase error introduced in the approximation of a pseudo-differential operator and an amplitude error generated from the starting field. First, the inherent phase errors introduced in the solution are analyzed...... for a case where the normal mode solution to the wave equation is valid, when the sound is propagated in a downward refracting atmosphere. The angular limitations for the different parabolic approximations are deduced, and calculations showing shifts in the starter as the second source of error...... is investigated. Numerical and analytical starters are compared for source locations close to the ground. The spectral properties of several starters are presented....
A Varifold Approach to Surface Approximation
Buet, Blanche; Leonardi, Gian Paolo; Masnou, Simon
2017-11-01
We show that the theory of varifolds can be suitably enriched to open the way to applications in the field of discrete and computational geometry. Using appropriate regularizations of the mass and of the first variation of a varifold we introduce the notion of approximate mean curvature and show various convergence results that hold, in particular, for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension. The variational nature of the approach also allows us to consider surfaces with singularities, and in that case the approximate mean curvature is consistent with the generalized mean curvature of the limit surface. A series of numerical tests are provided in order to illustrate the effectiveness and generality of the method.
Approximations and Solution Estimates in Optimization
2016-04-06
This paper quantifies the error in optimal values, optimal solutions, and near-optimal solutions when one optimization problem is approximated by another, with applications in machine learning and stochastic optimization. Bounding the solution of one problem in terms of that of a rather different problem is especially important in stochastic optimization, optimal control, and semi-infinite optimization.
Finite element approximation of the Isaacs equation
Salgado, Abner J.; Zhang, Wujun
2015-01-01
We propose and analyze a two-scale finite element method for the Isaacs equation. The fine scale is given by the mesh size $h$ whereas the coarse scale $\varepsilon$ is dictated by an integro-differential approximation of the partial differential equation. We show that the method satisfies the discrete maximum principle provided that the mesh is weakly acute. This, in conjunction with weak operator consistency of the finite element method, allows us to establish convergence of the numerical s...
Mean-field approximation minimizes relative entropy
International Nuclear Information System (INIS)
Bilbro, G.L.; Snyder, W.E.; Mann, R.C.
1991-01-01
The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach
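The claim that mean field minimizes relative entropy can be checked numerically on a two-spin Ising model: the self-consistent magnetizations m_i = tanh(J m_j + h) should coincide with the product distribution found by directly minimizing KL(q‖p). The coupling and field values below are arbitrary illustrative choices, not from the paper:

```python
import math
from itertools import product

# Two-spin Ising model: p(s) proportional to exp(J*s1*s2 + h*(s1 + s2)), s_i in {-1, +1}
J, h = 0.5, 0.3
states = list(product((-1, 1), repeat=2))
weights = [math.exp(J * s1 * s2 + h * (s1 + s2)) for s1, s2 in states]
Z = sum(weights)
p = [w / Z for w in weights]

def kl_product(m1, m2):
    """Relative entropy KL(q || p) of the product (mean-field) distribution
    with magnetizations (m1, m2) against the true two-spin distribution."""
    kl = 0.0
    for (s1, s2), pi in zip(states, p):
        q = (1 + s1 * m1) / 2 * (1 + s2 * m2) / 2
        if q > 0:
            kl += q * math.log(q / pi)
    return kl

# Mean-field self-consistency equations, iterated to a fixed point
m1 = m2 = 0.0
for _ in range(500):
    m1, m2 = math.tanh(J * m2 + h), math.tanh(J * m1 + h)

# Direct minimization of KL over symmetric product distributions (grid search)
grid = [i / 1000.0 for i in range(-999, 1000)]
best = min(grid, key=lambda m: kl_product(m, m))
```

The grid minimizer agrees with the tanh fixed point to grid resolution, illustrating that the mean-field equations are exactly the stationarity conditions of the relative entropy over product distributions.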
Fast Approximate Joint Diagonalization Incorporating Weight Matrices
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Yeredor, A.
2009-01-01
Roč. 57, č. 3 (2009), s. 878-891 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : autoregressive processes * blind source separation * nonstationary random processes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.212, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/tichavsky-fast approximate joint diagonalization incorporating weight matrices.pdf
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika
2013-02-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
Approximations of Two-Attribute Utility Functions
1976-09-01
This report studies von Neumann–Morgenstern utility functions u defined on two attributes from the viewpoint of mathematical approximation theory. It focuses on approximations v of u that ... von Neumann and Morgenstern (1947) or an equivalent system (Herstein and Milnor, 1953; Fishburn, 1970), so that there exists u: T → Re such that ...
Approximate Inverse Preconditioners with Adaptive Dropping
Czech Academy of Sciences Publication Activity Database
Kopal, J.; Rozložník, Miroslav; Tůma, Miroslav
2015-01-01
Roč. 84, June (2015), s. 13-20 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GAP108/11/0853; GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : approximate inverse * Gram-Schmidt orthogonalization * incomplete decomposition * preconditioned conjugate gradient method * algebraic preconditioning * pivoting Subject RIV: BA - General Mathematics Impact factor: 1.673, year: 2015
Solving Math Problems Approximately: A Developmental Perspective.
Directory of Open Access Journals (Sweden)
Dana Ganor-Stern
Full Text Available Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate, as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders', 6th graders' and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children's estimation skills in an effective manner.
Optical pulse propagation with minimal approximations
Kinsler, Paul
2008-01-01
Propagation equations for optical pulses are needed to assist in describing applications in ever more extreme situations -- including those in metamaterials with linear and nonlinear magnetic responses. Here I show how to derive a single first order propagation equation using a minimum of approximations and a straightforward "factorization" mathematical scheme. The approach generates exact coupled bi-directional equations, after which it is clear that the description can be reduced to a singl...
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-07
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
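As a rough illustration of the objects involved (a dense O(n^3) baseline, not the H-matrix algorithm of the abstract, which reaches the log-linear costs quoted above): the Matérn family with smoothness ν = 1/2 reduces to the exponential kernel, and the Cholesky factor then yields the log-determinant. The grid, length-scale and variance below are made up.

```python
import math

def matern_half(d, sigma2=1.0, ell=0.3):
    """Matern covariance with smoothness nu = 1/2, i.e. the exponential kernel."""
    return sigma2 * math.exp(-abs(d) / ell)

def cov_matrix(points, kernel):
    return [[kernel(x - y) for y in points] for x in points]

def cholesky(A):
    """Plain dense Cholesky, O(n^3); the H-matrix variant of the abstract
    brings such operations down to log-linear cost."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def log_det(L):
    """log det C from its Cholesky factor L (C = L L^T)."""
    return 2.0 * sum(math.log(L[i][i]) for i in range(len(L)))

grid = [i / 9 for i in range(10)]        # illustrative 1D locations
C = cov_matrix(grid, matern_half)
L = cholesky(C)
print(round(log_det(L), 4))
```

The same three operations (inverse, Cholesky, determinant) are the ones the abstract performs in H-format; kriging weights and optimal-design criteria are then built from them.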
Solving Math Problems Approximately: A Developmental Perspective.
Ganor-Stern, Dana
2016-01-01
Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children's estimation skills in an effective manner.
Markdown Optimization via Approximate Dynamic Programming
Directory of Open Access Journals (Sweden)
Coşgun
2013-02-01
Full Text Available We consider the markdown optimization problem faced by a leading apparel retail chain. Because of substitution among products, the markdown policy of one product affects the sales of other products. Therefore, markdown policies for product groups having significant cross-price elasticity with each other should be determined jointly. Since the state space of the problem is very large, we use approximate dynamic programming. Finally, we provide insights into how each product's price affects the markdown policy.
Factorized Approximate Inverses With Adaptive Dropping
Czech Academy of Sciences Publication Activity Database
Kopal, Jiří; Rozložník, Miroslav; Tůma, Miroslav
2016-01-01
Roč. 38, č. 3 (2016), A1807-A1820 ISSN 1064-8275 R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : approximate inverses * incomplete factorization * Gram–Schmidt orthogonalization * preconditioned iterative methods Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016
An analytical approximation for resonance integral
International Nuclear Information System (INIS)
Magalhaes, C.G. de; Martinez, A.S.
1985-01-01
A method is developed which allows an analytical solution for the resonance integral to be obtained. The formulation of the problem is completely theoretical and based on physical concepts of a general character. The analytical expression for the integral does not involve any empirical correlation or parameter. The results of the approximation are compared with benchmark values for each individual resonance and for the sum of all resonances. (M.C.K.) [pt
Fast algorithms for approximate circular string matching.
Barton, Carl; Iliopoulos, Costas S; Pissis, Solon P
2014-03-22
Circular string matching is a problem which naturally arises in many biological contexts. It consists in finding all occurrences of the rotations of a pattern of length m in a text of length n. There exist optimal average-case algorithms for exact circular string matching. Approximate circular string matching is a rather undeveloped area. In this article, we present a suboptimal average-case algorithm for exact circular string matching requiring time O(n). Based on our solution for the exact case, we present two fast average-case algorithms for approximate circular string matching with k-mismatches, under the Hamming distance model, requiring time O(n) for moderate values of k, that is k=O(m/logm). We show how the same results can be easily obtained under the edit distance model. The presented algorithms are also implemented as library functions. Experimental results demonstrate that the functions provided in this library accelerate the computations by more than three orders of magnitude compared to a naïve approach. We present two fast average-case algorithms for approximate circular string matching with k-mismatches; and show that they also perform very well in practice. The importance of our contribution is underlined by the fact that the provided functions may be seamlessly integrated into any biological pipeline. The source code of the library is freely available at http://www.inf.kcl.ac.uk/research/projects/asmf/.
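A naive baseline (quadratic-time, unlike the average-case algorithms of the article) makes the problem statement concrete: report every text position where some rotation of the pattern is within Hamming distance k. The strings below are illustrative.

```python
def circular_matches(text, pattern, k):
    """Positions i where some rotation of `pattern` matches text[i:i+m] with at
    most k mismatches (Hamming distance). Naive O(n * m^2) baseline, for clarity
    only -- the article's algorithms achieve average-case O(n)."""
    m, n = len(pattern), len(text)
    doubled = pattern + pattern                       # all rotations are m-length substrings
    rotations = {doubled[r:r + m] for r in range(m)}
    hits = []
    for i in range(n - m + 1):
        window = text[i:i + m]
        if any(sum(a != b for a, b in zip(window, rot)) <= k for rot in rotations):
            hits.append(i)
    return hits

print(circular_matches("gattacagat", "acag", 0))  # → [4, 5]
print(circular_matches("gattacagat", "acag", 1))  # → [3, 4, 5, 6]
```

Allowing one mismatch admits the windows "taca" (vs. rotation "gaca") and "agat" (vs. rotation "agac") in addition to the exact occurrences.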
Approximate particle number projection in hot nuclei
International Nuclear Information System (INIS)
Kosov, D.S.; Vdovin, A.I.
1995-01-01
Heated finite systems like, e.g., hot atomic nuclei have to be described by the canonical partition function. But this is a quite difficult technical problem and, as a rule, the grand canonical partition function is used in such studies. As a result, some shortcomings of the theoretical description appear because of the thermal fluctuations of the number of particles. Moreover, in nuclei with pairing correlations the quantum number fluctuations are introduced by some approximate methods (e.g., by the standard BCS method). The exact particle number projection is very cumbersome, and an approximate number projection method for T ≠ 0 based on the formalism of thermo field dynamics is proposed. The idea of the Lipkin-Nogami method to expand any operator as a series in powers of the number operator is used. The system of equations for the coefficients of this expansion is written down, and the solution of the system in the next approximation after the BCS one is obtained. The method, which is of the 'projection after variation' type, is applied to a degenerate single j-shell model. 14 refs., 1 tab
Feedforward Approximations to Dynamic Recurrent Network Architectures.
Muir, Dylan R
2018-02-01
Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input. Modern machine learning systems are designed to operate efficiently on feedforward architectures. We hypothesized that two-layer feedforward architectures with simple, deterministic dynamics could approximate the responses of single-layer recurrent network architectures. By identifying the fixed-point responses of a given recurrent network, we trained two-layer networks to directly approximate the fixed-point response to a given input. These feedforward networks then embodied useful computations, including competitive interactions, information transformations, and noise rejection. Our approach was able to find useful approximations to recurrent networks, which can then be evaluated in linear and deterministic time complexity.
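The fixed-point responses that the feedforward networks are trained to reproduce can be obtained by simple iteration when the recurrent dynamics are contractive. A minimal sketch with made-up weights and input (this is not the paper's training procedure, only the fixed-point computation it targets):

```python
import math

def recurrent_fixed_point(W, u, iters=500):
    """Iterate r <- tanh(W r + u); converges to the fixed-point response when
    the recurrent weights are contractive. A trained two-layer feedforward
    network would map u directly to this r* in a single pass."""
    n = len(u)
    r = [0.0] * n
    for _ in range(iters):
        r = [math.tanh(sum(W[i][j] * r[j] for j in range(n)) + u[i]) for i in range(n)]
    return r

# Two units with weak mutual inhibition (made-up weights and input).
W = [[0.0, -0.4], [-0.4, 0.0]]
u = [1.0, 0.2]
r_star = recurrent_fixed_point(W, u)
print([round(v, 3) for v in r_star])
```

The contrast drawn in the abstract is exactly this: the iteration above needs an input-dependent (and possibly unbounded) number of steps, while the feedforward approximation evaluates in fixed, deterministic time.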
Impulse approximation versus elementary particle method
International Nuclear Information System (INIS)
Klieb, L.
1982-01-01
Calculations are made for radiative muon capture in 3He, both in the impulse approximation and with the elementary particle method, and the results are compared. It is argued that a diagrammatic method which takes only a selected set of Feynman diagrams into account provides insufficient warrant that the effects not included are small. Therefore low-energy theorems are employed, as first given by Adler and Dothan, to determine the amplitude up to and including all terms linear in photon momentum and momentum transfer at the weak vertex. This amplitude is applied to radiative muon capture with the elementary particle method (EPM). The various form factors needed are discussed. It is shown that the results are particularly sensitive to the π-3He-3H coupling constant, of which many contradictory determinations have been described in the literature. The classification of the nuclear wave function employed in the impulse approximation (IA) is summarized. The β-decay of 3H and radiative muon capture in 3He are treated and numerical results are given. Next, pion photoproduction and radiative pion capture are considered. IA and EPM for radiative muon capture are compared more closely. It is concluded that two-step processes are inherently difficult; the elementary particle method has convergence problems, and unknown parameters are present. In the impulse approximation, which is perhaps conceptually more difficult, the two-step interaction for the nucleon is considered as effectively point-like with small non-local corrections. (Auth.)
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Kai Chen
2017-01-01
Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
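Of the two kernel approximations mentioned, random Fourier features are easy to sketch: an explicit low-dimensional feature map whose inner product approximates the RBF kernel, after which a linear ranker can stand in for the kernelized one. The dimensions, bandwidth and test points below are illustrative, not the paper's settings.

```python
import math
import random

def rff_map(x, Ws, bs):
    """Random Fourier feature map z(x) whose inner products approximate the
    RBF kernel exp(-||x - y||^2 / (2 sigma^2)) (Rahimi-Recht construction)."""
    D = len(Ws)
    scale = math.sqrt(2.0 / D)
    return [scale * math.cos(sum(w_j * x_j for w_j, x_j in zip(w, x)) + b)
            for w, b in zip(Ws, bs)]

def rbf(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

random.seed(0)
d, D, sigma = 3, 2000, 1.0                              # illustrative sizes
Ws = [[random.gauss(0.0, 1.0 / sigma) for _ in range(d)] for _ in range(D)]
bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x, y = [0.2, -0.1, 0.4], [0.0, 0.3, 0.1]
approx = sum(a * b for a, b in zip(rff_map(x, Ws, bs), rff_map(y, Ws, bs)))
print(abs(approx - rbf(x, y)))  # small Monte Carlo error, shrinking as 1/sqrt(D)
```

Mapping all training pairs through z(.) once removes the kernel matrix entirely, which is what lets the primal truncated Newton solver run at linear-model speed.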
CMB-lensing beyond the Born approximation
International Nuclear Information System (INIS)
Marozzi, Giovanni; Fanizza, Giuseppe; Durrer, Ruth; Dio, Enea Di
2016-01-01
We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation, to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback of being reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.
Function approximation of tasks by neural networks
International Nuclear Information System (INIS)
Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.
2008-01-01
For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have been recently seen as attractive tools for developing efficient solutions for many real world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real world processes. In a previous contribution, we have used a well known simplified architecture to show that it provides a reasonably efficient, practical and robust, multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.
2011-05-12
The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the movable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The gradient approximation for the central difference method uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational costs required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives comparable results to the steepest descent method at little computational cost; (2) the SPSA method can be used at little computational cost to estimate a large number of parameters.
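The two-evaluation gradient estimate that makes SPSA attractive for high-dimensional calibration can be sketched as follows. A toy quadratic stands in for the DCSM objective, and the step-size schedule is illustrative, not the study's settings.

```python
import random

def spsa_step(theta, loss, a, c):
    """One SPSA iteration: a simultaneous +/- perturbation of all parameters
    yields a gradient estimate from just two loss evaluations, independent of
    the number of parameters."""
    delta = [random.choice((-1.0, 1.0)) for _ in theta]        # Rademacher directions
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    diff = (loss(plus) - loss(minus)) / (2.0 * c)
    return [t - a * diff / d for t, d in zip(theta, delta)]

# Toy stand-in for the calibration objective (the real study adjusts bathymetry).
target = [1.0, -2.0, 0.5]
loss = lambda th: sum((t - s) ** 2 for t, s in zip(th, target))

random.seed(1)
theta = [0.0, 0.0, 0.0]
for k in range(300):
    theta = spsa_step(theta, loss, a=0.1 / (1 + k) ** 0.3, c=0.05)
print([round(t, 2) for t in theta])
```

A finite-difference gradient would have cost two evaluations per parameter per step; SPSA's fixed cost of two per step is what makes estimating a large bathymetry field feasible.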
The coupled-channel T-matrix: its lowest-order Born + Lanczos approximants
International Nuclear Information System (INIS)
Znojil, M.
1995-01-01
Three iterative methods of solution of the Lippmann-Schwinger equations (viz., the method of continued fractions by J. Horacek and T. Sasakawa), its Born-remainder modification and a coupled-channel matrix-continued-fraction generalization are all interpreted as special cases of a common iterative matrix prescription. Firstly, in terms of certain asymmetric projectors P ≠ P+, we re-derive the three particular older methods as different realizations of the well-known Lanczos inversion. Then, a generalized iteration method is proposed as a Born-like re-arrangement of any intermediate Lanczos iteration step. A maximal flexibility is achieved in the formalism, which might compete with the standard Pade re-summations in practice. Its first few truncations are therefore listed. 26 refs., 1 tab
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
Photoelectron spectroscopy and the dipole approximation
Energy Technology Data Exchange (ETDEWEB)
Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others]
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
Product-State Approximations to Quantum States
Brandão, Fernando G. S. L.; Harrow, Aram W.
2016-02-01
We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.
Dynamic system evolution and markov chain approximation
Directory of Open Access Journals (Sweden)
Roderick V. Nicholas Melnik
1998-01-01
Full Text Available In this paper computational aspects of the mathematical modelling of dynamic system evolution have been considered as a problem in information theory. The construction of mathematical models is treated as a decision making process with limited available information. The solution of the problem is associated with a computational model based on heuristics of a Markov Chain in a discrete space–time of events. A stable approximation of the chain has been derived and the limiting cases are discussed. An intrinsic interconnection of constructive, sequential, and evolutionary approaches in related optimization problems provides new challenges for future work.
The EH Interpolation Spline and Its Approximation
Directory of Open Access Journals (Sweden)
Jin Xie
2014-01-01
Full Text Available A new interpolation spline with two parameters, called the EH interpolation spline, is presented in this paper; it is an extension of the standard cubic Hermite interpolation spline and inherits the same properties. Given fixed interpolation conditions, the shape of the proposed splines can be adjusted by changing the values of the parameters. Also, the introduced spline can approximate the interpolated function better than the standard cubic Hermite interpolation spline and the quartic Hermite interpolation splines with a single parameter by a new algorithm.
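For reference, the standard cubic Hermite segment that the EH spline extends can be written down directly (the EH spline's two extra shape parameters are not modeled here; the endpoint data below are illustrative):

```python
def hermite_segment(t, p0, p1, m0, m1):
    """Standard cubic Hermite segment on [0, 1]: interpolates values p0, p1
    and derivatives m0, m1 at the two endpoints."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# The segment reproduces both endpoint values exactly:
print(hermite_segment(0.0, 1.0, 4.0, 0.5, -2.0))  # → 1.0
print(hermite_segment(1.0, 1.0, 4.0, 0.5, -2.0))  # → 4.0
```

The EH construction keeps these interpolation conditions fixed while the additional parameters reshape the interior of each segment, which is where the improved approximation comes from.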
On one approximation in quantum chromodynamics
International Nuclear Information System (INIS)
Alekseev, A.I.; Bajkov, V.A.; Boos, Eh.Eh.
1982-01-01
The form of the complete fermion propagator near the mass shell is investigated. A model of quantum chromodynamics (MQC) is considered in which the Bloch-Nordsieck approximation has been made in the fermion sector, i.e. u-numbers are substituted for the γ matrices. The model is investigated by means of the Schwinger-Dyson equation for the quark propagator in the infrared region. The Schwinger-Dyson equation can be reduced to a differential equation which is easily solved. The Green function is then conveniently represented as an integral transformation.
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information, leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for graduate students.
Shape theory categorical methods of approximation
Cordier, J M
2008-01-01
This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and
Mean field approximation to QCD, 1
International Nuclear Information System (INIS)
Tezuka, Hirokazu.
1987-09-01
We apply the mean field approximation to the gluon field in the equations of motion derived from the QCD lagrangian. The gluon mean field is restricted to the time-independent 0th component, and color exchange components are ignored. The equation of motion for the gluon mean field turns into a Poisson equation, and that for the quark field into a Dirac equation with a potential term. As an example, assuming a spherically symmetric box-like distribution and a Gauss-like distribution for the quarks, we try to solve these two equations simultaneously. (author)
Nonlinear higher quasiparticle random phase approximation
Smetana, Adam; Šimkovic, Fedor; Štefánik, Dušan; Krivoruchenko, Mikhail
2017-10-01
We develop a new approach to describe nuclear states of multiphonon origin, motivated by the necessity for a more accurate description of matrix elements of neutrinoless double-beta decay. Our approach is an extension of the Quasiparticle Random Phase Approximation (QRPA), in which nonlinear phonon operators play an essential role. Before applying the nonlinear higher QRPA (nhQRPA) to realistic problems, we test its efficiency with exactly solvable models. The first considered model is equivalent to a harmonic oscillator. The nhQRPA solutions follow from the standard QRPA equation, but for nonlinear phonon operators defined for each individual excited state separately. The second exactly solvable model is the proton-neutron Lipkin model, which successfully describes not only the energy spectra of nuclei but also beta-decay transitions. Again, we reproduce exactly the numerical solutions in the nhQRPA framework. We show in particular that truncation of the nonlinear phonon operators leads to an approximation similar to the self-consistent second QRPA, provided the phonon operators are defined with a constant term. The test results demonstrate that the proposed nhQRPA is a promising tool for realistic calculations of energy spectra and nuclear transitions.
Fast approximate hierarchical clustering using similarity heuristics
Directory of Open Access Journals (Sweden)
Kull Meelis
2008-09-01
Full Text Available Abstract Background Agglomerative hierarchical clustering (AHC is a common unsupervised data analysis technique used in several biological applications. Standard AHC methods require that all pairwise distances between data objects must be known. With ever-increasing data sizes this quadratic complexity poses problems that cannot be overcome by simply waiting for faster computers. Results We propose an approximate AHC algorithm HappieClust which can output a biologically meaningful clustering of a large dataset more than an order of magnitude faster than full AHC algorithms. The key to the algorithm is to limit the number of calculated pairwise distances to a carefully chosen subset of all possible distances. We choose distances using a similarity heuristic based on a small set of pivot objects. The heuristic efficiently finds pairs of similar objects and these help to mimic the greedy choices of full AHC. Quality of approximate AHC as compared to full AHC is studied with three measures. The first measure evaluates the global quality of the achieved clustering, while the second compares biological relevance using enrichment of biological functions in every subtree of the clusterings. The third measure studies how well the contents of subtrees are conserved between the clusterings. Conclusion The HappieClust algorithm is well suited for large-scale gene expression visualization and analysis both on personal computers as well as public online web applications. The software is available from the URL http://www.quretec.com/HappieClust
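The pivot heuristic can be sketched independently of the clustering itself: distances to a few pivot objects give, via the triangle inequality, a lower bound on any pairwise distance, so far-apart pairs can be discarded without computing their true distance. The pivot choice and threshold below are illustrative, not HappieClust's actual policy.

```python
import math
import random

def pivot_candidates(points, pivots, dist, threshold):
    """Pairs that survive the pivot filter: max_p |d(x, p) - d(y, p)| is a
    triangle-inequality lower bound on d(x, y), so pairs whose bound exceeds
    `threshold` are pruned without ever computing their true distance."""
    profiles = [[dist(x, p) for p in pivots] for x in points]
    keep = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            bound = max(abs(a - b) for a, b in zip(profiles[i], profiles[j]))
            if bound <= threshold:
                keep.append((i, j))
    return keep

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(100)]
pivots = pts[:3]                         # naive pivot choice, for illustration
cand = pivot_candidates(pts, pivots, math.dist, threshold=0.2)

# Soundness check: every truly close pair survives the filter.
close = [(i, j) for i in range(100) for j in range(i + 1, 100)
         if math.dist(pts[i], pts[j]) <= 0.2]
print(set(close) <= set(cand))  # → True
```

Restricting the expensive distance computations to the surviving candidate pairs is what lets an approximate AHC mimic the greedy merges of the full algorithm at a fraction of the quadratic cost.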
Traveling cluster approximation for uncorrelated amorphous systems
International Nuclear Information System (INIS)
Kaplan, T.; Sen, A.K.; Gray, L.J.; Mills, R.
1985-01-01
In this paper, the authors apply the TCA concepts to spatially disordered, uncorrelated systems (e.g., fluids or amorphous metals without short-range order). This is the first approximation scheme for amorphous systems that takes cluster effects into account while preserving the Herglotz property for any amount of disorder. They have performed some computer calculations for the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results are compared with exact calculations (which, in principle, take into account all cluster effects) and with the CPA, which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA, and yet, apparently, the pair approximation distorts some of the features of the exact results. They conclude that the effects of large clusters are much more important in an uncorrelated liquid metal than in a substitutional alloy. As a result, the pair TCA, which does quite a nice job for alloys, is not adequate for the liquid. Larger clusters must be treated exactly, and therefore an n-TCA with n > 2 must be used
APPROXIMATING INNOVATION POTENTIAL WITH NEUROFUZZY ROBUST MODEL
Directory of Open Access Journals (Sweden)
Kasa, Richard
2015-01-01
Full Text Available In a remarkably short time, economic globalisation has changed the world’s economic order, bringing new challenges and opportunities to SMEs. These processes have pushed the need to measure innovation capability, which has become a crucial issue for today’s economic and political decision makers. Companies cannot compete in this new environment unless they become more innovative and respond more effectively to consumers’ needs and preferences, as mentioned in the EU’s innovation strategy. Decision makers cannot make accurate and efficient decisions without knowing the innovation capability of companies in a sector or a region. This need is forcing economists to develop an integrated, unified and complete method of measuring, approximating and even forecasting innovation performance, not only at the macro but also at the micro level. In this article, a critical analysis of the literature on innovation potential approximation and prediction is given, showing its weaknesses and a possible alternative that eliminates the limitations and disadvantages of classical measuring and predictive methods.
Analytic approximate radiation effects due to Bremsstrahlung
Energy Technology Data Exchange (ETDEWEB)
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that can give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick, approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited: in these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
Approximate reversal of quantum Gaussian dynamics
Lami, Ludovico; Das, Siddhartha; Wilde, Mark M.
2018-03-01
Recently, there has been focus on determining the conditions under which the data processing inequality for quantum relative entropy is satisfied with approximate equality. The solution of the exact equality case is due to Petz, who showed that the quantum relative entropy between two quantum states stays the same after the action of a quantum channel if and only if there is a reversal channel that recovers the original states after the channel acts. Furthermore, this reversal channel can be constructed explicitly and is now called the Petz recovery map. Recent developments have shown that a variation of the Petz recovery map works well for recovery in the case of approximate equality of the data processing inequality. Our main contribution here is a proof that bosonic Gaussian states and channels possess a particular closure property, namely, that the Petz recovery map associated to a bosonic Gaussian state σ and a bosonic Gaussian channel N is itself a bosonic Gaussian channel. We furthermore give an explicit construction of the Petz recovery map in this case, in terms of the mean vector and covariance matrix of the state σ and the Gaussian specification of the channel N .
Approximating Markov Chains: What and why
International Nuclear Information System (INIS)
Pincus, S.
1996-01-01
Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical and analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques: the approximation of dynamical systems by suitable finite-state Markov chains. Steady-state distributions for these Markov chains, a straightforward calculation, will converge to the true dynamical-system steady-state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady-state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that, under very general conditions, steady-state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly, with system evolution. copyright 1996 American Institute of Physics
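A minimal illustration of the idea described above, under simplifying assumptions (a 1-D map on [0, 1] rather than the paper's general setting): partition the state space into cells, build a row-stochastic transition matrix by pushing sample points from each cell through the map, and read off the steady state of the resulting linear map by power iteration.

```python
import numpy as np

def markov_approximation(f, n_bins=50, samples_per_bin=200):
    """Approximate a 1-D map on [0, 1] by a finite-state Markov chain:
    partition [0, 1] into n_bins cells, push sample points from each
    cell through f, and record which cells they land in."""
    P = np.zeros((n_bins, n_bins))
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for i in range(n_bins):
        # Interior sample points of cell i.
        xs = np.linspace(edges[i], edges[i + 1], samples_per_bin + 2)[1:-1]
        targets = np.clip(np.searchsorted(edges, f(xs), side="right") - 1,
                          0, n_bins - 1)
        for j in targets:
            P[i, j] += 1.0
        P[i] /= samples_per_bin  # each row becomes a probability vector
    return P

def stationary_distribution(P, iterations=2000):
    """Steady state of the row-stochastic matrix P by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iterations):
        pi = pi @ P
    return pi

# Logistic map x -> 4x(1-x): its invariant density 1/(pi*sqrt(x(1-x)))
# piles mass near the endpoints, and the chain's stationary
# distribution reproduces that shape without any density machinery.
P = markov_approximation(lambda x: 4.0 * x * (1.0 - x))
pi = stationary_distribution(P)
```

Note how the steady state is obtained from a purely linear computation, exactly the point made in the abstract.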
Analytic approximate radiation effects due to Bremsstrahlung
International Nuclear Information System (INIS)
Ben-Zvi, I.
2012-01-01
The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to near-by magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R and D Energy Recovery Linac.
On some applications of diophantine approximations.
Chudnovsky, G V
1984-03-01
Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical to "almost all" numbers. In particular, any such number has the "2 + epsilon" exponent of irrationality: |Theta - p/q| > q^(-2-epsilon) for relatively prime rational integers p, q with q >= q_0(Theta, epsilon). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162].
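The graded Padé approximations mentioned above generalize the classical Padé construction. As a hedged sketch of the classical case only (not the paper's graded variant), the [m/n] Padé approximant p/q of a power series can be computed from its Taylor coefficients by solving a small linear system for the denominator and then convolving for the numerator:

```python
import math
import numpy as np

def pade_coefficients(c, m, n):
    """[m/n] Padé approximant of a series with Taylor coefficients
    c[0..m+n]: numerator p and denominator q (normalized so q[0] = 1)
    chosen so that the expansion of p/q matches c through order m+n."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=0}^{n} q[j] * c[k-j] = 0 for k = m+1, ..., m+n.
    A = np.array([[c[m + 1 + i - j] if m + 1 + i - j >= 0 else 0.0
                   for j in range(1, n + 1)] for i in range(n)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator: p[k] = sum_{j=0}^{min(k,n)} q[j] * c[k-j] for k = 0..m.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

# Diagonal [2/2] approximant of exp(x) from its Taylor coefficients 1/k!.
c = [1.0 / math.factorial(k) for k in range(5)]
p, q = pade_coefficients(c, 2, 2)
approx_e = np.polyval(p[::-1], 1.0) / np.polyval(q[::-1], 1.0)
```

At x = 1 the [2/2] approximant (19/7) is noticeably closer to e than the degree-4 Taylor polynomial built from the same coefficients, illustrating why rational approximants are the tool of choice here.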
Regularity and approximability of electronic wave functions
Yserentant, Harry
2010-01-01
The electronic Schrödinger equation describes the motion of N-electrons under Coulomb interaction forces in a field of clamped nuclei. The solutions of this equation, the electronic wave functions, depend on 3N variables, with three spatial dimensions for each electron. Approximating these solutions is thus inordinately challenging, and it is generally believed that a reduction to simplified models, such as those of the Hartree-Fock method or density functional theory, is the only tenable approach. This book seeks to show readers that this conventional wisdom need not be ironclad: the regularity of the solutions, which increases with the number of electrons, the decay behavior of their mixed derivatives, and the antisymmetry enforced by the Pauli principle contribute properties that allow these functions to be approximated with an order of complexity which comes arbitrarily close to that for a system of one or two electrons. The text is accessible to a mathematical audience at the beginning graduate level as...
Adaptive and Approximate Orthogonal Range Counting
DEFF Research Database (Denmark)
Chan, Timothy M.; Wilkinson, Bryan Thomas
2013-01-01
We present three new results on one of the most basic problems in geometric data structures, 2-D orthogonal range counting. All the results are in the w-bit word RAM model. •It is well known that there are linear-space data structures for 2-D orthogonal range counting with worst-case optimal query time O(log_w n). We give an O(n loglog n)-space adaptive data structure that improves the query time to O(loglog n + log_w k), where k is the output count. When k=O(1), our bounds match the state of the art for the 2-D orthogonal range emptiness problem [Chan, Larsen, and Pătraşcu, SoCG 2011]. •We give an O(n loglog n)-space data structure for approximate 2-D orthogonal range counting that can compute a (1+δ)-factor approximation to the count in O(loglog n) time for any fixed constant δ>0. Again, our bounds match the state of the art for the 2-D orthogonal range emptiness problem. •Lastly...
DEFF Research Database (Denmark)
Sadegh, Payman
1997-01-01
This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.
Hydromagnetic turbulence in the direct interaction approximation
International Nuclear Information System (INIS)
Nagarajan, S.
1975-01-01
The dissertation is concerned with the nature of turbulence in a medium with large electrical conductivity. Three distinct though inter-related questions are asked. Firstly, the evolution of a weak, random initial magnetic field in a highly conducting, isotropically turbulent fluid is discussed. This was first discussed in the paper 'Growth of Turbulent Magnetic Fields' by Kraichnan and Nagarajan, The Physics of Fluids, volume 10, number 4, 1967. Secondly, the direct interaction approximation for hydromagnetic turbulence maintained by stationary, isotropic, random stirring forces is formulated in the wave-number-frequency domain. Thirdly, the dynamical evolution of a weak, random magnetic excitation in a turbulent, electrically conducting fluid is examined under varying kinematic conditions. (G.T.H.)
Discrete Spectrum Reconstruction Using Integral Approximation Algorithm.
Sizikov, Valery; Sidorov, Denis
2017-07-01
An inverse problem in spectroscopy is considered. The objective is to restore the discrete spectrum from observed spectrum data, taking into account the spectrometer's line spread function. The problem is reduced to solution of a system of linear-nonlinear equations (SLNE) with respect to intensities and frequencies of the discrete spectral lines. The SLNE is linear with respect to lines' intensities and nonlinear with respect to the lines' frequencies. The integral approximation algorithm is proposed for the solution of this SLNE. The algorithm combines solution of linear integral equations with solution of a system of linear algebraic equations and avoids nonlinear equations. Numerical examples of the application of the technique, both to synthetic and experimental spectra, demonstrate the efficacy of the proposed approach in enabling an effective enhancement of the spectrometer's resolution.
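The linear half of the problem described above can be illustrated with a small sketch, under stated assumptions: a Gaussian line spread function and known line frequencies are illustrative choices, not the paper's algorithm (which also determines the frequencies via integral equations). With the frequencies fixed, the observed spectrum is linear in the line intensities, so they follow from least squares:

```python
import numpy as np

def line_spread(nu, width=0.05):
    """Gaussian instrument line spread function (illustrative choice)."""
    return np.exp(-0.5 * (nu / width) ** 2)

def recover_intensities(nu_grid, observed, line_freqs, width=0.05):
    """Linear half of the SLNE: with line frequencies fixed, the observed
    spectrum is a linear combination of shifted line spread functions,
    so the intensities follow from a linear least-squares solve."""
    A = np.column_stack([line_spread(nu_grid - f, width) for f in line_freqs])
    intensities, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return intensities

# Synthetic example: two discrete lines smeared by the spread function.
nu = np.linspace(0.0, 1.0, 400)
true_freqs, true_amps = [0.3, 0.7], [2.0, 1.0]
observed = sum(a * line_spread(nu - f) for a, f in zip(true_amps, true_freqs))
est = recover_intensities(nu, observed, true_freqs)
```

The nonlinear half, finding the frequencies themselves, is what the paper's integral approximation algorithm addresses and is not reproduced here.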
Nanostructures: Scattering beyond the Born approximation
Grigoriev, S. V.; Syromyatnikov, A. V.; Chumakov, A. P.; Grigoryeva, N. A.; Napolskii, K. S.; Roslyakov, I. V.; Eliseev, A. A.; Petukhov, A. V.; Eckerlebe, H.
2010-03-01
The neutron scattering on a two-dimensional ordered nanostructure with a third, nonperiodic dimension can go beyond the Born approximation. In our model, supported by an exact theoretical solution, a well-correlated hexagonal porous structure of anodic aluminum oxide films acts as a peculiar two-dimensional grating for the coherent neutron wave. The thickness of the film L (the length of the pores) plays an important role in the transition from the weak to the strong scattering regime. It is shown that the coherency of standard small-angle neutron scattering setups suits the geometry of the studied objects and often affects the intensity of scattering. The proposed theoretical solution can be applied in small-angle neutron diffraction experiments with flux lines in superconductors and periodic arrays of magnetic or superconducting nanowires, as well as in small-angle diffraction experiments with synchrotron radiation.
UFBoot2: Improving the Ultrafast Bootstrap Approximation.
Hoang, Diep Thi; Chernomor, Olga; von Haeseler, Arndt; Minh, Bui Quang; Vinh, Le Sy
2018-02-01
The standard bootstrap (SBS), despite being computationally intensive, is widely used in maximum likelihood phylogenetic analyses. We recently proposed the ultrafast bootstrap approximation (UFBoot) to reduce computing time while achieving more unbiased branch supports than SBS under mild model violations. UFBoot has been steadily adopted as an efficient alternative to SBS and other bootstrap approaches. Here, we present UFBoot2, which substantially accelerates UFBoot and reduces the risk of overestimating branch supports due to polytomies or severe model violations. Additionally, UFBoot2 provides suitable bootstrap resampling strategies for phylogenomic data. UFBoot2 is 778 times (median) faster than SBS and 8.4 times (median) faster than RAxML rapid bootstrap on tested data sets. UFBoot2 is implemented in the IQ-TREE software package version 1.6 and freely available at http://www.iqtree.org. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Approximation by max-product type operators
Bede, Barnabás; Gal, Sorin G
2016-01-01
This monograph presents a broad treatment of developments in an area of constructive approximation involving the so-called "max-product" type operators. The exposition highlights the max-product operators as those which allow one to obtain, in many cases, more valuable estimates than those obtained by classical approaches. The text considers a wide variety of operators which are studied for a number of interesting problems such as quantitative estimates, convergence, saturation results, localization, to name several. Additionally, the book discusses the perfect analogies between the probabilistic approaches of the classical Bernstein type operators and of the classical convolution operators (non-periodic and periodic cases), and the possibilistic approaches of the max-product variants of these operators. These approaches allow for two natural interpretations of the max-product Bernstein type operators and convolution type operators: firstly, as possibilistic expectations of some fuzzy variables, and secondly,...
PROX: Approximated Summarization of Data Provenance.
Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B; Deutch, Daniel; Milo, Tova
2016-03-01
Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data.
Exact and Approximate Probabilistic Symbolic Execution
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
Approximate analytical modeling of leptospirosis infection
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents or through their bites, and indirectly via water contaminated with urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by the recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
Analytical approximations for wide and narrow resonances
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2005-01-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated, and the analytical procedure gave satisfactory results as compared with the reference solution for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Analytical approximations for wide and narrow resonances
Energy Technology Data Exchange (ETDEWEB)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2005-07-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated, and the analytical procedure gave satisfactory results as compared with the reference solution for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Evaluating methods for approximating stochastic differential equations.
Brown, Scott D; Ratcliff, Roger; Smith, Philip L
2006-08-01
Models of decision making and response time (RT) are often formulated using stochastic differential equations (SDEs). Researchers often investigate these models using a simple Monte Carlo method based on Euler's method for solving ordinary differential equations. The accuracy of Euler's method is investigated and compared to the performance of more complex simulation methods. The more complex methods for solving SDEs yielded no improvement in accuracy over the Euler method. However, the matrix method proposed by Diederich and Busemeyer (2003) yielded significant improvements. The accuracy of all methods depended critically on the size of the approximating time step. The large (∼10 ms) step sizes often used by psychological researchers resulted in large and systematic errors in evaluating RT distributions.
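The Euler (Euler-Maruyama) scheme evaluated in this study can be sketched for a simple two-boundary diffusion decision model. The model, names, and parameter values here are illustrative assumptions, not those of the paper; the dt parameter is where the step-size effects discussed in the abstract enter.

```python
import numpy as np

def simulate_ddm(v, a, s=1.0, dt=0.001, max_t=10.0, n_trials=2000, seed=1):
    """Euler-Maruyama simulation of the diffusion process
    dX = v*dt + s*dW, starting at 0 and absorbed at +a or -a.
    Returns first-passage times and choices (+1 upper, -1 lower)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)
    t = np.zeros(n_trials)
    choice = np.zeros(n_trials)
    done = np.zeros(n_trials, dtype=bool)
    sqrt_dt = np.sqrt(dt)  # noise scales with sqrt of the time step
    for step in range(int(max_t / dt)):
        active = ~done
        if not active.any():
            break
        x[active] += v * dt + s * sqrt_dt * rng.standard_normal(active.sum())
        hit = active & (np.abs(x) >= a)
        t[hit] = (step + 1) * dt
        choice[hit] = np.sign(x[hit])
        done |= hit
    return t[done], choice[done]

rt, choice = simulate_ddm(v=0.5, a=1.0)
```

Rerunning this with a coarse step such as dt=0.01 (the ~10 ms regime criticized in the abstract) shifts the simulated RT distribution, which is exactly the systematic error the authors quantify.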
Efficient Approximate OLAP Querying Over Time Series
DEFF Research Database (Denmark)
Perera, Kasun Baruhupolage Don Kasun Sanjeewa; Hahmann, Martin; Lehner, Wolfgang
2016-01-01
The ongoing trend for data gathering not only produces larger volumes of data, but also increases the variety of recorded data types. Out of these, especially time series, e.g. various sensor readings, have attracted attention in the domains of business intelligence and decision making. As OLAP queries play a major role in these domains, it is desirable to also execute them on time series data. While this is not a problem on the conceptual level, it can become a bottleneck with regards to query run-time. In general, processing OLAP queries gets more computationally intensive as the volume ... are either costly or require continuous maintenance. In this paper we propose an approach for approximate OLAP querying of time series that offers constant latency and is maintenance-free. To achieve this, we identify similarities between aggregation cuboids and propose algorithms that eliminate...
Approximate truncation robust computed tomography—ATRACT
International Nuclear Information System (INIS)
Dennerlein, Frank; Maier, Andreas
2013-01-01
We present an approximate truncation-robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)
Approximate Sensory Data Collection: A Survey
Directory of Open Access Journals (Sweden)
Siyao Cheng
2017-03-01
Full Text Available With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings many troubles and challenges for data collection, a primary operation in IoTs and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
Approximate Sensory Data Collection: A Survey.
Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong
2017-03-10
With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings many troubles and challenges for data collection, a primary operation in IoTs and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
The Bloch Approximation in Periodically Perforated Media
International Nuclear Information System (INIS)
Conca, C.; Gomez, D.; Lobo, M.; Perez, E.
2005-01-01
We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω_ε (Ω_ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω
Coated sphere scattering by geometric optics approximation.
Mengran, Zhai; Qieni, Lü; Hongxia, Zhang; Yinxin, Zhang
2014-10-01
A new geometric optics model has been developed for the calculation of light scattering by a coated sphere, and the analytic expression for scattering is presented according to whether rays hit the core or not. The ray of the various geometric optics approximation (GOA) terms is parameterized by the number of reflections at the coating/core interface, the number of reflections at the coating/medium interface, and the number of chords in the core, with the degenerate-path and repeated-path terms considered for rays striking the core, which simplifies the calculation. For rays missing the core, the various GOA terms are dealt with as for a homogeneous sphere. The scattering intensity of coated particles is calculated and then compared with that of the Debye series and Aden-Kerker theory. The consistency of the results proves the validity of the method proposed in this work.
Rights, Jason D; Sterba, Sonya K
2016-11-01
Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
DEFF Research Database (Denmark)
Sadegh, Payman; Spall, J. C.
1998-01-01
The simultaneous perturbation stochastic approximation (SPSA) algorithm has attracted considerable attention for challenging optimization problems where it is difficult or impossible to obtain a direct gradient of the objective (say, loss) function. The approach is based on a highly efficient simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo process. The objective is to minimize the mean square error of the estimate. The authors also consider maximization of the likelihood that the estimate be confined within a bounded symmetric region of the true parameter. The optimal distribution for the components of the simultaneous perturbation vector...
DEFF Research Database (Denmark)
Sadegh, Payman; Spall, J. C.
1997-01-01
The simultaneous perturbation stochastic approximation (SPSA) algorithm has recently attracted considerable attention for optimization problems where it is difficult or impossible to obtain a direct gradient of the objective (say, loss) function. The approach is based on a highly efficient...... simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...... process. The objective is to minimize the mean square error of the estimate. We also consider maximization of the likelihood that the estimate be confined within a bounded symmetric region of the true parameter. The optimal distribution for the components of the simultaneous perturbation vector is found...
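The two records above describe the core SPSA idea: every component of the gradient is estimated from just two loss measurements taken along one random simultaneous perturbation. A minimal sketch of that estimator follows, using the common Rademacher (±1) perturbation rather than the optimal distribution derived in the papers; the gain values and the quadratic test function are illustrative only.

```python
import numpy as np

def spsa_gradient(loss, theta, c=0.1, rng=None):
    """One SPSA gradient estimate from two loss measurements.

    The Rademacher (+/-1) perturbation is a standard choice satisfying
    the SPSA regularity conditions; the papers derive which perturbation
    distribution is optimal, which this sketch does not implement.
    """
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    y_plus = loss(theta + c * delta)
    y_minus = loss(theta - c * delta)
    # the same two measurements estimate every gradient component
    return (y_plus - y_minus) / (2.0 * c * delta)

# usage: drive a simple quadratic toward its minimum
theta = np.array([2.0, -3.0])
loss = lambda t: float(np.sum(t ** 2))
rng = np.random.default_rng(0)
for k in range(200):
    a_k = 0.1 / (k + 1) ** 0.602  # standard SPSA gain decay
    theta = theta - a_k * spsa_gradient(loss, theta, c=0.1, rng=rng)
```

Note that the cost of one gradient estimate is two function evaluations regardless of the dimension of theta, which is the source of SPSA's efficiency for high-dimensional problems.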
Some properties of dual and approximate dual of fusion frames
Arefijamaal, Ali Akbar; Neyshaburi, Fahimeh Arabyani
2016-01-01
In this paper we extend the notion of approximate dual to fusion frames and present some approaches to obtain dual and approximate alternate dual fusion frames. Also, we study the stability of dual and approximate alternate dual fusion frames.
Approximate Model for Turbulent Stagnation Point Flow.
Energy Technology Data Exchange (ETDEWEB)
Dechant, Lawrence [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-01-01
Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free stream enhanced laminar flow suggests that, rather than an enhancement of laminar flow behavior, free stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels of free stream turbulence, e.g. 5%. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.
Configuring Airspace Sectors with Approximate Dynamic Programming
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
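The rollout algorithm evaluated above can be sketched generically: for each candidate configuration, add its immediate cost to the cost of completing the horizon with the base heuristic, and keep the cheapest. The toy workload numbers, unit reconfiguration cost, and function names below are illustrative, not values from the paper.

```python
def rollout_action(config, t, horizon, actions, step_cost, base_policy):
    """Return the action whose rollout (one step + heuristic completion) is cheapest."""
    best_a, best_cost = None, float("inf")
    for a in actions(config, t):
        cost, s = step_cost(config, a, t), a
        # complete the remaining horizon with the base heuristic
        for u in range(t + 1, horizon):
            b = base_policy(s, u)
            cost += step_cost(s, b, u)
            s = b
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

# toy instance: two configurations, time-varying workload, unit reconfiguration cost
workload = {1: [3, 1, 3], 2: [1, 3, 1]}

def step_cost(prev, a, t):
    return workload[a][t] + (1 if a != prev else 0)

def myopic(prev, t):  # the greedy base heuristic
    return min((1, 2), key=lambda a: step_cost(prev, a, t))

choice = rollout_action(1, 0, 3, lambda s, t: (1, 2), step_cost, myopic)
```

A standard property of rollout, consistent with the 2% gap reported above, is that its cost never exceeds that of simulating the base heuristic alone.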
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-02-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short time uncertainty quantification, but also allows one to apply several data assimilation methods that can be used to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected, according to whether the dynamics is driven by Brownian motion and to the near-Gaussianity of the measure to be updated, respectively. © 2013 Elsevier Inc.
Rainbows: Mie computations and the Airy approximation.
Wang, R T; van de Hulst, H C
1991-01-01
Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work.
Approximate von Neumann entropy for directed graphs.
Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R
2014-05-01
In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks.
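For the undirected case that the paper generalizes, the simplified degree-based approximation of the von Neumann entropy can be sketched as follows. The paper's directed-graph expressions, written in terms of in- and out-degree statistics, differ in detail; this sketch shows only the undirected analogue, and ignores isolated nodes.

```python
def approx_vn_entropy(edges):
    """Approximate von Neumann entropy of an undirected graph.

    Implements the simplified degree-based form
        S ~= 1 - 1/N - (1/N^2) * sum over edges (u, v) of 1 / (d_u * d_v),
    which replaces the Laplacian eigenvalue computation with simple
    degree statistics. Nodes with no incident edges are not counted.
    """
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    n = len(degree)
    s = sum(1.0 / (degree[u] * degree[v]) for u, v in edges)
    return 1.0 - 1.0 / n - s / n ** 2
```

The practical appeal mirrors the paper's point: the approximation costs one pass over the edge list rather than an eigendecomposition of the Laplacian.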
Bond selective chemistry beyond the adiabatic approximation
Energy Technology Data Exchange (ETDEWEB)
Butler, L.J. [Univ. of Chicago, IL (United States)
1993-12-01
One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e. the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.
Hydration thermodynamics beyond the linear response approximation.
Raineri, Fernando O
2016-10-19
The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A [Formula: see text] B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute
Coronal Loops: Evolving Beyond the Isothermal Approximation
Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.
2002-05-01
Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest instead that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.
Kim, SungKun; Lee, Hunpyo
2017-06-01
Via a dynamical cluster approximation with N c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that the d-wave superconducting order parameters appear even in the highly doped region, unlike the results of the CDMFT+CTQMC approach. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) approach are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is an initial study; the SCA can manage long-range spatial fluctuations in feasible computational times, beyond the reach of the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on the fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.
Energy Technology Data Exchange (ETDEWEB)
Pedicini, Piernicola, E-mail: ppiern@libero.it [Service of Medical Physics, IRCCS Regional Cancer Hospital (C.R.O.B.), Rionero in Vulture (Italy); Caivano, Rocchina [Service of Medical Physics, IRCCS Regional Cancer Hospital (C.R.O.B.), Rionero in Vulture (Italy); Fiorentino, Alba [U.O. of Radiotherapy, IRCCS Regional Cancer Hospital (C.R.O.B.), Rionero in Vulture (Italy); Strigari, Lidia [Laboratory of Medical Physics and Expert Systems, Regina Elena National Cancer Institute, Rome (Italy); Califano, Giorgia [Service of Medical Physics, IRCCS Regional Cancer Hospital (C.R.O.B.), Rionero in Vulture (Italy); Barbieri, Viviana; Sanpaolo, Piero; Castaldo, Giovanni [U.O. of Radiotherapy, IRCCS Regional Cancer Hospital (C.R.O.B.), Rionero in Vulture (Italy); Benassi, Marcello [Service of Medical Physics, Scientific Institute of Tumors of Romagna IRST, Meldola (Italy); Fusco, Vincenzo [U.O. of Radiotherapy, IRCCS Regional Cancer Hospital (C.R.O.B.), Rionero in Vulture (Italy)
2012-01-01
To evaluate a nonstandard RapidArc (RA) modality as an alternative to high-dose-rate brachytherapy (HDR-BRT) or IMRT treatments of the vaginal vault in patients with gynecological cancer (GC). Nonstandard (with vaginal applicator) and standard (without vaginal applicator) RapidArc plans for 27 women with GC were developed for comparison with HDR-BRT and IMRT. Dosimetric and radiobiological comparisons were performed by means of dose-volume histograms and equivalent uniform dose (EUD) for the planning target volume (PTV) and organs at risk (OARs). In addition, the integral dose and the overall treatment times were evaluated. RA, as well as IMRT, results in a highly uniform dose on the PTV compared with HDR-BRT. However, the average EUD for HDR-BRT was significantly higher than that for RA and IMRT. With respect to the OARs, standard RA was equivalent to IMRT but inferior to HDR-BRT. Furthermore, nonstandard RA was comparable with IMRT for bladder and sigmoid and better than HDR-BRT for the rectum because of a significant reduction of d{sub 2cc}, d{sub 1cc}, and d{sub max} (p < 0.01). Integral doses were always higher than HDR-BRT, although the values were very low. Delivery times were about the same for IMRT and RA, and more than double for HDR-BRT. In conclusion, the boost of dose on the vaginal vault in patients affected by GC delivered by a nonstandard RA technique was a reasonable alternative to conventional HDR-BRT because of a reduction of delivery time and rectal dose at substantially comparable doses for the bladder and sigmoid. However, HDR-BRT provides better performance in terms of PTV coverage as evidenced by a greater EUD.
Perito, E R; Braun, H J; Dodge, J L; Rhee, S; Roberts, J P
2017-08-01
Nonstandard exception requests (NSERs), for which transplant centers provide patient-specific narratives to support a higher Model for End-stage Liver Disease/Pediatric End-stage Liver Disease score, are made for >30% of pediatric liver transplant candidates. We describe the justifications used in pediatric NSER narratives 2009-2014 and identify justifications associated with NSER denial, waitlist mortality, and transplant. Using United Network for Organ Sharing data, 1272 NSER narratives from 1138 children with NSERs were coded for analysis. The most common NSER justifications were failure-to-thrive (48%) and risk of death (40%); both were associated with approval. Varices, involvement of another organ, impaired quality of life, and encephalopathy were justifications used more often in denied NSERs. Of the 25 most prevalent justifications, 60% were not associated with approval or denial. Waitlist mortality risk was increased when fluid overload or "posttransplant complication outside standard criteria" were cited and decreased when liver-related infection was noted. Transplant probability was increased when the narrative mentioned liver-related infections and fluid overload for pediatric candidates. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
Approximate solutions of the Wei Hua oscillator using the Pekeris ...
Indian Academy of Sciences (India)
Exact analytical solutions of the wave equation with some exponential-type potentials are impossible for l ≠ 0 states. So, the next best thing to do is to find approximate analytical solutions of a given potential by appropriate approximation techniques. Therefore, approximate schemes like the Pekeris approximation [6–8] ...
Approximate Implicitization of Parametric Curves Using Cubic Algebraic Splines
Directory of Open Access Journals (Sweden)
Xiaolei Zhang
2009-01-01
This paper presents an algorithm to solve the approximate implicitization of planar parametric curves using cubic algebraic splines. It applies piecewise cubic algebraic curves to give a global G2 continuity approximation to planar parametric curves. An approximation error bound for the approximate implicitization of rational curves is given. Several examples are provided to demonstrate that the proposed method is flexible and efficient.
Operational method for the particle slowing down problem
International Nuclear Information System (INIS)
El Wakil, S.A.; Machali, H.M.; Madkour, M.; Saied, E.A.
1986-07-01
The direct operational method is used to transform the collision integral in the transport equation to a polynomial in derivatives with respect to lethargy. This polynomial is approximated by the Pade approximation technique. Different orders of Pade approximation give the well-known synthetic kernels. This procedure reduces the integro-differential form of the transport equation to differential form. It also makes it possible to consider energy-dependent cross-sections and to obtain the solution without using the integral transform method. We shall consider here the solution for an infinite homogeneous medium and calculate the energy deposition factor for different orders of the Pade approximant. (author)
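The Pade step in the procedure above, turning a truncated power series into a rational approximant, can be illustrated with the textbook construction: match the first m+n+1 Taylor coefficients with a ratio of polynomials of degrees m and n. This is the generic [m/n] computation, not the specific synthetic-kernel orders of the paper.

```python
import numpy as np
from math import factorial

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].

    Returns numerator coefficients a[0..m] and denominator coefficients
    b[0..n] with b[0] = 1 (assumes m >= n - 1 so all indices are valid).
    """
    c = np.asarray(c, dtype=float)
    # Linear system for b[1..n]:  sum_j b[j] * c[m+k-j] = -c[m+k],  k = 1..n
    A = np.array([[c[m + k - j] for j in range(1, n + 1)] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1 : m + n + 1])))
    # Numerator follows by convolving the series with the denominator
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1)) for i in range(m + 1)])
    return a, b

# usage: [2/2] Pade of exp(x) from its Taylor coefficients 1/k!
coeff = [1.0 / factorial(k) for k in range(5)]
a, b = pade(coeff, 2, 2)
approx = np.polyval(a[::-1], 1.0) / np.polyval(b[::-1], 1.0)  # rational estimate of e
```

The [2/2] approximant of exp reproduces the familiar (1 + x/2 + x²/12)/(1 - x/2 + x²/12), which at x = 1 is already within about 0.004 of e.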
Adams, Kim D; Cook, Albert M
2017-07-01
Purpose To examine how using a Lego robot controlled via a speech-generating device (SGD) can contribute to how students with physical and communication impairments perform hands-on and communicative mathematics measurement activities. This study was a follow-up to a previous study. Method Three students with cerebral palsy used the robot to measure objects using non-standard units, such as straws, and then compared and ordered the objects using the resulting measurement. Their performance was assessed, and the manipulation and communication events were observed. Teachers and education assistants were interviewed regarding robot use. Results Similar benefits to the previous study were found in this study. Gaps in student procedural knowledge were identified such as knowing to place measurement units tip-to-tip, and students' reporting revealed gaps in conceptual understanding. However, performance improved with repeated practice. Stakeholders identified that some robot tasks took too long or were too difficult to perform. Conclusions Having access to both their SGD and a robot gave the students multiple ways to show their understanding of the measurement concepts. Though they could participate actively in the new mathematics activities, robot use is most appropriate in short tasks requiring reasonable operational skill. Implications for Rehabilitation Lego robots controlled via speech-generating devices (SGDs) can help students to engage in the mathematics pedagogy of performing hands-on activities while communicating about concepts. Students can "show what they know" using the Lego robots, and report and reflect on concepts using the SGD. Level 1 and Level 2 mathematics measurement activities have been adapted to be accomplished by the Lego robot. Other activities can likely be accomplished with similar robot adaptations (e.g., gripper, pen). It is not recommended to use the robot to measure items that are long, or perform measurements that require high
Cheon, Sooyoung
2013-02-16
Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.
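The abstract contrasts SAMCIS with plain importance sampling. As a baseline for that comparison, a minimal self-normalized importance sampling estimator can be sketched as follows; this is the textbook method with a fixed proposal, not the adaptive SAMCIS procedure, and all names and distributions are illustrative.

```python
import math
import random

def snis_mean(h, log_p, sample_q, log_q, n=10000, seed=0):
    """Self-normalized importance sampling estimate of E_p[h(X)].

    Draws come from a fixed proposal q; unnormalized weights are fine
    because the normalizing constant cancels in the ratio.
    """
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = sample_q(rng)
        w = math.exp(log_p(x) - log_q(x))  # unnormalized importance weight
        num += w * h(x)
        den += w
    return num / den

# usage: mean of N(1, 1) estimated with draws from N(0, 1)
est = snis_mean(
    h=lambda x: x,
    log_p=lambda x: -0.5 * (x - 1.0) ** 2,  # constants cancel in the ratio
    sample_q=lambda rng: rng.gauss(0.0, 1.0),
    log_q=lambda x: -0.5 * x ** 2,
)
```

When the proposal is a poor match for the target, the weights degenerate, which is precisely the failure mode that adaptive schemes such as SAMCIS are designed to mitigate.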