WorldWideScience

Sample records for unit normal approximation

  1. Auction analysis by normal form game approximation

    NARCIS (Netherlands)

    Kaisers, Michael; Tuyls, Karl; Thuijsman, Frank; Parsons, Simon

    2008-01-01

    Auctions are pervasive in today's society and provide a variety of real markets. This article facilitates a strategic choice between a set of available trading strategies by introducing a methodology to approximate heuristic payoff tables by normal form games. An example from the auction domain

  2. Normal and Feature Approximations from Noisy Point Clouds

    Science.gov (United States)

    2005-02-01

    Normal and Feature Approximations from Noisy Point Clouds. Tamal K. Dey, Jian Sun. Abstract: We consider the problem of approximating normal and ... normal and, in particular, feature size approximations for noisy point clouds. In the noise-free case the choice of the Delaunay balls is not an issue ... an algorithm that approximates the medial axis from noisy point clouds exists [7]. This algorithm approximates the medial axis with Voronoi faces under a stringent uniform sampling
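A common baseline for the normal-estimation problem this record addresses is local principal component analysis: the normal at a point is taken as the eigenvector of the neighbourhood covariance with the smallest eigenvalue. The sketch below shows only that baseline, not the Delaunay-ball method of the record; the test data and function names are illustrative.

```python
import numpy as np

def estimate_normal(points, idx, k=12):
    """Estimate the surface normal at points[idx] as the eigenvector of
    the covariance of its k nearest neighbours with the smallest
    eigenvalue (plain PCA baseline, not the record's method)."""
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]]          # k nearest neighbours (incl. p)
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)                # eigenvalues in ascending order
    return v[:, 0]                            # direction of least variance

# noisy samples of the plane z = 0, so the true normal is +/- e_z
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 300),
                       rng.uniform(-1, 1, 300),
                       0.01 * rng.standard_normal(300)])
normal = estimate_normal(pts, 0)
```

With noise that is small relative to the neighbourhood extent, the recovered direction is close to the true plane normal.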

  3. Approximate Nearest Neighbor Search for a Dataset of Normalized Vectors

    Science.gov (United States)

    Terasawa, Kengo; Tanaka, Yuzuru

    This paper describes a novel algorithm for approximate nearest neighbor searching. For solving this problem, especially in high dimensional spaces, one of the best-known algorithms is Locality-Sensitive Hashing (LSH). This paper presents a variant of the LSH algorithm that outperforms previously proposed methods when the dataset consists of vectors normalized to unit length, which is often the case in pattern recognition. The LSH scheme is based on a family of hash functions that preserves the locality of points. This paper points out that for this special case we can design efficient hash functions that map a point on the hypersphere into the closest vertex of a randomly rotated regular polytope. The computational analysis confirmed that the proposed method improves the exponent ρ, the main indicator of the performance of the LSH algorithm. Practical experiments also supported the efficiency of our algorithm both in time and in space.
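The hashing idea in this record — map a unit vector to the nearest vertex of a randomly rotated polytope — can be sketched for the cross-polytope case as follows. This is a simplified illustration, not the authors' exact scheme: `polytope_hash` is an invented name, and an unnormalized Gaussian matrix stands in for a proper uniform random rotation.

```python
import numpy as np

def polytope_hash(x, R):
    """Hash a unit vector to the nearest cross-polytope vertex after a
    random rotation: the vertex +/- e_i where coordinate i of R @ x
    has the largest magnitude."""
    y = R @ x
    i = int(np.argmax(np.abs(y)))
    return (i, 1 if y[i] > 0 else -1)

rng = np.random.default_rng(0)
d = 8
R = rng.standard_normal((d, d))   # stand-in for a uniform random rotation

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
perturbed = x + 0.05 * rng.standard_normal(d)
perturbed /= np.linalg.norm(perturbed)

# nearby unit vectors tend to fall into the same bucket
print(polytope_hash(x, R), polytope_hash(perturbed, R))
```

Several independent hashes of this kind would be combined into tables, as in any LSH scheme.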

  4. Normal approximations for descents and inversions of permutations of multisets

    OpenAIRE

    Conger, Mark; Viswanath, D.

    2005-01-01

    Normal approximations for descents and inversions of permutations of the set $\\{1,2,...,n\\}$ are well known. A number of sequences that occur in practice, such as the human genome and other genomes, contain many repeated elements. Motivated by such examples, we consider the number of inversions of a permutation $\\pi(1), \\pi(2),...,\\pi(n)$ of a multiset with $n$ elements, which is the number of pairs $(i,j)$ with $1\\leq i < j \\leq n$ and $\\pi(i) > \\pi(j)$. The number of descents is the number of...
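The two statistics in this record are straightforward to compute directly for a multiset permutation; note that ties contribute neither inversions nor descents. A minimal sketch (function names are illustrative):

```python
def inversions(pi):
    """Pairs (i, j) with i < j and pi[i] > pi[j]; equal values don't count."""
    n = len(pi)
    return sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])

def descents(pi):
    """Positions i with pi[i] > pi[i+1]."""
    return sum(1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1])

print(inversions([2, 1, 1, 3, 2]))  # 3 inversions
print(descents([2, 1, 1, 3, 2]))    # 2 descents
```

For large multisets one would sample many random permutations of the multiset and compare the empirical distribution of these counts to the normal approximation.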

  5. Polynomial approximations of the Normal to Weibull Distribution transformation

    Directory of Open Access Journals (Sweden)

    Andrés Feijóo

    2014-09-01

    Full Text Available Some of the tools that are generally employed in power system analysis need approaches based on statistical distributions for simulating the cumulative behavior of the different system devices; one example is the probabilistic load flow. The presence of wind farms in power systems has increased the use of Weibull and Rayleigh distributions among these tools. Not only the distributions themselves, but also the satisfaction of certain constraints, such as correlation between series of data or even autocorrelation, can be of importance in the simulation. Correlated Weibull or Rayleigh distributions can be obtained by transforming correlated Normal distributions, and it can be observed that certain statistical values such as the means and the standard deviations tend to be retained under such transformations, although why this happens is not evident. The objective of this paper is to analyse the consequences of using such transformations. The methodology consists of comparing the results obtained by means of a direct transformation with those obtained by means of approximations based on first- and second-degree polynomials. Simulations have been carried out with series of data which can be interpreted as wind speeds. The polynomial approximations give accurate results in comparison with direct transformations and provide an approach that helps explain why the statistical values are retained during the transformations.
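Under standard assumptions, the direct Normal-to-Weibull transformation referred to here is the probability integral transform w = c * (-ln(1 - Phi(z)))**(1/k), where Phi is the standard normal CDF and k, c are the Weibull shape and scale. The sketch below applies it to two correlated normal series; the parameter values are invented for illustration.

```python
import math
import random

def normal_to_weibull(z, shape_k, scale_c):
    """Map a standard normal sample z to a Weibull sample via the
    probability integral transform: w = c * (-ln(1 - Phi(z)))**(1/k)."""
    u = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z)
    u = min(u, 1.0 - 1e-15)                         # guard against log(0)
    return scale_c * (-math.log(1.0 - u)) ** (1.0 / shape_k)

# two correlated standard normal series (correlation rho)
rng = random.Random(42)
rho = 0.8
z1 = [rng.gauss(0, 1) for _ in range(20000)]
z2 = [rho * a + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1) for a in z1]

# transform both; k = 2 gives a Rayleigh-type distribution, as for wind speeds
w1 = [normal_to_weibull(z, 2.0, 8.0) for z in z1]
w2 = [normal_to_weibull(z, 2.0, 8.0) for z in z2]
```

The transformed series remain positively correlated, which is the property the paper's polynomial approximations are designed to analyse.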

  6. Normal modes of relativistic systems in post Newtonian approximation

    CERN Document Server

    Sobouti, Y

    1998-01-01

    We use the post-Newtonian (pn) order of Liouville's equation (pnl) to study the normal modes of oscillation of a relativistic system. In addition to the classical modes, we are able to isolate a new class of oscillations that arise from perturbations of the space-time metric. In the first pn order: a) their frequency is an order q smaller than the classical frequencies, where q is a pn expansion parameter; b) they are not damped, for there is no gravitational wave radiation in this order; c) they are not coupled with the classical modes in q order; d) in a spherically symmetric system, they are designated by a pair of angular momentum eigennumbers, (j,m), of a pair of phase space angular momentum operators (J^2, J_z). The hydrodynamical behaviour of these new modes is also investigated: a) they do not disturb the equilibrium of the classical fluid; b) they generate macroscopic toroidal motions that in the classical case would be neutral; c) they give rise to an oscillatory g_{0i} component of the metric tensor that otherwi...

  7. A note on the normal approximation error for randomly weighted self-normalized sums

    CERN Document Server

    Hoermann, Siegfried

    2011-01-01

    Let $\\bX=\\{X_n\\}_{n\\geq 1}$ and $\\bY=\\{Y_n\\}_{n\\geq 1}$ be two independent random sequences. We obtain rates of convergence to the normal law of randomly weighted self-normalized sums $$ \\psi_n(\\bX,\\bY)=\\sum_{i=1}^nX_iY_i/V_n,\\quad V_n=\\sqrt{Y_1^2+...+Y_n^2}. $$ These rates are seen to hold for the convergence of a number of important statistics, such as for instance Student's $t$-statistic or the empirical correlation coefficient.
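The self-normalized sum in this record is easy to simulate. The sketch below uses Rademacher weights X_i (an arbitrary choice for illustration) and positive Y_i, and checks that psi_n is approximately standard normal, which is the limit whose convergence rate the record studies.

```python
import math
import random

def psi(x, y):
    """Randomly weighted self-normalized sum:
    sum(x_i * y_i) / sqrt(sum(y_i^2))."""
    v = math.sqrt(sum(t * t for t in y))
    return sum(a * b for a, b in zip(x, y)) / v

rng = random.Random(1)
n, reps = 50, 5000
samples = []
for _ in range(reps):
    x = [rng.choice((-1.0, 1.0)) for _ in range(n)]     # Rademacher weights
    y = [abs(rng.gauss(0, 1)) + 0.1 for _ in range(n)]  # positive weights
    samples.append(psi(x, y))

mean = sum(samples) / reps
var = sum((s - mean) ** 2 for s in samples) / reps
```

Conditionally on Y, psi_n here has mean 0 and variance exactly 1, so the empirical moments should be close to those of N(0, 1).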

  8. Perception of rate-altered sentential approximations by normal and aphasic children.

    Science.gov (United States)

    Rudnick, K J; Berry, R C

    1975-06-01

    Twenty-five four-word, first-order, and second-order sentential approximations were presented to 18 aphasic and 18 normal children. The material was taped and altered to represent 5 speaking rates: 140 (normal); 75 and 105 (expanded); and 180 and 205 (compressed) words per minute. Order of presentation was randomized. The major difference between the groups was that the second-order material was perceived best by the normal children regardless of rate, while the aphasic children showed this preference only at the normal rate.

  9. A simple approximation to the bivariate normal distribution with large correlation coefficient

    NARCIS (Netherlands)

    Albers, Willem; Kallenberg, Wilbert C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the er

  11. Parallel Preconditioned Conjugate Gradient Square Method Based on Normalized Approximate Inverses

    Directory of Open Access Journals (Sweden)

    George A. Gravvanis

    2005-01-01

    Full Text Available A new class of normalized explicit approximate inverse matrix techniques, based on normalized approximate factorization procedures, is introduced for solving sparse linear systems resulting from the finite difference discretization of partial differential equations in three space variables. A new parallel normalized explicit preconditioned conjugate gradient square method, in conjunction with normalized approximate inverse matrix techniques, for solving sparse linear systems efficiently on distributed memory systems using the Message Passing Interface (MPI) communication library is also presented, along with theoretical estimates on speedups and efficiency. The implementation and performance on a distributed memory MIMD machine using MPI are also investigated. Applications to characteristic initial/boundary value problems in three dimensions are discussed and numerical results are given.
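As a structural illustration only: the sketch below is a plain conjugate gradient iteration with a diagonal (Jacobi) preconditioner on a 1-D finite-difference system. It is not the normalized approximate inverse preconditioner, nor the conjugate gradient *squared* variant, of the record; it only shows where an (approximate) inverse enters the iteration.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD A, with the
    preconditioner applied as an elementwise (diagonal) inverse —
    a simple stand-in for an approximate inverse matrix."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r            # apply the (approximate) inverse
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson finite-difference matrix (tridiagonal 2, -1)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

A richer approximate inverse (closer to A^-1 than the diagonal) reduces the iteration count, which is the trade-off the record's normalized techniques address.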

  12. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.

  13. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    Full Text Available This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform a selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.

  14. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    Science.gov (United States)

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/
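The setting of this record — counting word-pattern occurrences in reads sampled from a background sequence — can be illustrated with a toy simulation. This is only the sampling setup, not the authors' probabilistic model or their compound Poisson bounds; the pattern, read length, and uniform iid background are invented for illustration.

```python
import random

def count_pattern(seq, pat):
    """Overlapping occurrences of pat in seq."""
    k = len(pat)
    return sum(1 for i in range(len(seq) - k + 1) if seq[i:i + k] == pat)

rng = random.Random(7)
genome = "".join(rng.choice("ACGT") for _ in range(100000))  # iid background
read_len, n_reads, pat = 100, 2000, "ACGT"

# sample reads uniformly from the genome, as in shotgun NGS
counts = []
for _ in range(n_reads):
    start = rng.randrange(len(genome) - read_len + 1)
    counts.append(count_pattern(genome[start:start + read_len], pat))

mean = sum(counts) / n_reads
# expected occurrences per read under the iid uniform background
expected = (read_len - len(pat) + 1) * 0.25 ** len(pat)
```

The record's contribution is to approximate the full distribution of such counts (normal and compound Poisson, with error bounds), accounting for the randomness of both the background sequence and the read sampling.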

  15. Saddlepoint approximation based structural reliability analysis with non-normal random variables

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The saddlepoint approximation (SA) can directly estimate the probability distribution of linear performance function in non-normal variables space. Based on the property of SA, three SA based methods are developed for the structural system reliability analysis. The first method is SA based reliability bounds theory (RBT), in which SA is employed to estimate failure probability and equivalent normal reliability index for each failure mode firstly, and then RBT is employed to obtain the upper and the lower bounds of system failure probability. The second method is SA based Nataf approximation, in which SA is used to estimate the probability density function (PDF) and cumulative distribution function (CDF) for the approximately linearized performance function of each failure mode. After the PDF of each failure mode and the correlation coefficients among approximately linearized performance functions are estimated, Nataf distribution is employed to approximate the joint PDF of multiple structural system performance functions, and then the system failure probability can be estimated directly by numerical simulation using the joint PDF. The third method is SA based line sampling (LS). The standardization transformation is needed to eliminate the dimensions of variables firstly in this case. Then LS method can express the system failure probability as an arithmetic average of a set of failure probabilities of the linear performance functions, and the probabilities of the linear performance functions can be estimated by the SA in the non-normal variables space. By comparing basic concepts, implementations and results of illustrations, the following conclusions can be drawn: (1) The first method can only obtain the bounds of system failure probability and it is only acceptable for the linear limit state function; (2) the second method can give the estimation of system failure probability, and its error mostly results from the approximation of Nataf distribution for the

  16. Modelling of the toe trajectory during normal gait using circle-fit approximation.

    Science.gov (United States)

    Fang, Juan; Hunt, Kenneth J; Xie, Le; Yang, Guo-Yuan

    2016-10-01

    This work aimed to validate the approach of using a circle to fit the toe trajectory relative to the hip and to investigate linear regression models for describing such toe trajectories in normal gait. Twenty-four subjects walked at seven speeds. Best-fit circle algorithms were developed to approximate the relative toe trajectory with a circle. The mean approximation error between the toe trajectory and its best-fit circle was less than 4%. Across the best-fit circles for the toe trajectories from all subjects, the normalised radius was constant, while the normalised centre offset decreased as walking cadence increased; the curve range generally had a positive linear relationship with walking cadence. The regression functions of the circle radius, the centre offset and the curve range with leg length and walking cadence were definitively defined. This study demonstrated that circle-fit approximation of the relative toe trajectories is generally applicable in normal gait. The functions provide a quantitative description of the relative toe trajectories. These results have potential application in the design of gait rehabilitation technologies.
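A best-fit circle can be computed with the classical algebraic (Kåsa) least-squares fit; the record does not state which algorithm the authors used, so treat this as one plausible choice. The synthetic "toe path" below is invented for illustration.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in the
    least-squares sense, then recovers centre and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# noisy points on an arc of a circle of radius 0.9 m centred at
# (0.1, -0.8), loosely mimicking a toe path relative to the hip
rng = np.random.default_rng(3)
t = np.linspace(0.5, 2.5, 50)
x = 0.1 + 0.9 * np.cos(t) + 0.005 * rng.standard_normal(50)
y = -0.8 + 0.9 * np.sin(t) + 0.005 * rng.standard_normal(50)
cx, cy, r = fit_circle(x, y)
```

The Kåsa fit is linear and fast; geometric fits (e.g. Levenberg-Marquardt on radial residuals) are more accurate on short arcs but iterative.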

  17. Normal Approximation to a Sum of Geometric Random Variables with Application to Ammunition Stockpile Planning

    Directory of Open Access Journals (Sweden)

    W.J. Hurley

    2007-09-01

    Full Text Available The normal approximation for a sum of geometric random variables has been examined. This approximation is relevant to the determination of direct-fire ammunition stockpile levels in a defence setting. Among the methodologies available for this assessment, one is a target-oriented methodology. This approach calculates the number of rounds necessary to destroy a given fraction of the enemy force and infrastructure. The difficulty is that the number of rounds required cannot be determined analytically. An obvious numeric approach is Monte Carlo simulation. Another is the approximation approach, which has several advantages: it is easy to implement and is accurate even when the number of targets is low.
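The approximation in this record can be checked by simulation: if each round destroys the current target with probability p, the total rounds needed for n targets is a sum of n geometric random variables, with mean n/p and variance n(1-p)/p^2. The numbers below are invented for illustration.

```python
import math
import random

def rounds_needed(n_targets, p_kill, rng):
    """Total rounds fired to destroy n_targets when each round kills
    the current target with probability p_kill (a sum of geometric
    random variables, counting trials up to and including success)."""
    total = 0
    for _ in range(n_targets):
        while True:
            total += 1
            if rng.random() < p_kill:
                break
    return total

rng = random.Random(0)
n, p = 30, 0.25
sims = [rounds_needed(n, p, rng) for _ in range(4000)]
sim_mean = sum(sims) / len(sims)

# normal approximation: mean n/p, variance n(1-p)/p^2
mu = n / p                              # 120 rounds on average
sigma = math.sqrt(n * (1 - p)) / p
```

A stockpile sized at mu + z * sigma then covers the demand with the confidence level implied by the normal quantile z.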

  18. Piecewise log-normal approximation of size distributions for aerosol modelling

    Directory of Open Access Journals (Sweden)

    K. von Salzen

    2006-01-01

    Full Text Available An efficient and accurate method for the representation of particle size distributions in atmospheric models is proposed. The method can be applied to, but is not necessarily restricted to, aerosol mass and number size distributions. A piecewise log-normal approximation of the number size distribution within sections of the particle size spectrum is used. Two of the free parameters of the log-normal approximation are obtained from the integrated number and mass concentration in each section. The remaining free parameter is prescribed. The method is efficient in the sense that only relatively few calculations are required for applications of the method in atmospheric models. Applications of the method in simulations of particle growth by condensation and simulations with a single column model for nucleation, condensation, gravitational settling, wet deposition, and mixing are described. The results are compared to results from simulations employing single- and double-moment bin methods that are frequently used in aerosol modelling. According to these comparisons, the accuracy of the method is noticeably higher than the accuracy of the other methods.
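The core inversion — recovering a log-normal mode's free parameter from integrated number and mass with the geometric standard deviation prescribed — follows from the Hatch-Choate moment relations for spherical particles. The round-trip sketch below illustrates that step only; the full PLA method fits parameters piecewise per size section, which this sketch does not attempt, and the numbers are invented.

```python
import math

def lognormal_median_from_moments(N, M, sigma_g, rho=1000.0):
    """Given number concentration N (m^-3), mass concentration M
    (kg m^-3), a prescribed geometric standard deviation sigma_g, and
    particle density rho (kg m^-3), recover the count median diameter
    of a log-normal mode from the Hatch-Choate relation
        M = N * (pi/6) * rho * dg^3 * exp(4.5 * ln(sigma_g)^2)."""
    ln2 = math.log(sigma_g) ** 2
    dg3 = M / (N * (math.pi / 6.0) * rho * math.exp(4.5 * ln2))
    return dg3 ** (1.0 / 3.0)

# round-trip check: build M from a known dg, then recover it
dg_true, sigma_g, N, rho = 0.2e-6, 1.8, 1.0e9, 1000.0
M = N * (math.pi / 6.0) * rho * dg_true ** 3 * math.exp(4.5 * math.log(sigma_g) ** 2)
dg = lognormal_median_from_moments(N, M, sigma_g, rho)
```

In a sectional scheme the same algebra is applied with integrals truncated to each section's size bounds.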

  19. Design of reciprocal unit based on the Newton-Raphson approximation

    DEFF Research Database (Denmark)

    Gundersen, Anders Torp; Winther-Almstrup, Rasmus; Boesen, Michael

    A design of a reciprocal unit based on Newton-Raphson approximation is described and implemented. We present two different designs for single precision, one of which is extremely fast but with a trade-off of an increase in area. The solution behind the fast design is that the design is fully
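The Newton-Raphson reciprocal iteration referenced here is x <- x * (2 - a*x), which roughly doubles the number of correct bits per step. A software sketch of the scheme a hardware unit implements (the exponent handling and initial guess are the standard textbook choices, not necessarily this paper's):

```python
import math

def reciprocal(a, iters=5):
    """Approximate 1/a via Newton-Raphson: x <- x * (2 - m*x).
    a is first scaled to a mantissa m in [0.5, 1), mimicking what a
    hardware unit does with the floating-point exponent."""
    assert a > 0
    e = math.floor(math.log2(a)) + 1
    m = a / 2.0 ** e                      # m in [0.5, 1)
    x = 48.0 / 17.0 - 32.0 / 17.0 * m     # standard linear initial guess
    for _ in range(iters):
        x = x * (2.0 - m * x)             # error squares each iteration
    return x / 2.0 ** e                   # undo the exponent scaling

print(reciprocal(3.0))  # close to 1/3
```

With this initial guess the relative error after each step squares, so a handful of iterations reaches single or double precision; fast hardware designs unroll these steps and trade area for latency.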

  20. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    Science.gov (United States)

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
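The question "accurate to what n?" can be explored numerically: the exact null distribution of the signed-rank statistic W+ for sample size n is obtained by dynamic programming over subset sums of {1..n}, and compared with the continuity-corrected normal approximation (mean n(n+1)/4, variance n(n+1)(2n+1)/24). A sketch:

```python
import math

def wilcoxon_exact_cdf(n):
    """Exact null CDF of the signed-rank statistic W+ for sample size n:
    counts[s] = number of subsets of {1..n} summing to s, via DP."""
    max_w = n * (n + 1) // 2
    counts = [0] * (max_w + 1)
    counts[0] = 1
    for rank in range(1, n + 1):
        for s in range(max_w, rank - 1, -1):
            counts[s] += counts[s - rank]
    total = 2 ** n
    cdf, acc = [], 0
    for c in counts:
        acc += c
        cdf.append(acc / total)
    return cdf

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n = 10
cdf = wilcoxon_exact_cdf(n)
mu = n * (n + 1) / 4.0
sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
# worst-case gap between exact and continuity-corrected normal CDFs
max_err = max(abs(cdf[w] - normal_cdf((w + 0.5 - mu) / sd))
              for w in range(len(cdf)))
```

Plotting `max_err` against n reproduces the kind of graphical insight the record describes.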

  1. New approximate solutions per unit of time for periodically checked systems with different lifetime distributions

    Directory of Open Access Journals (Sweden)

    J. Rodrigues Dias

    2006-11-01

    Full Text Available Systems with different lifetime distributions, associated with increasing, decreasing, constant, and bathtub-shaped hazard rates, are examined in this paper. It is assumed that a failure is only detected if systems are inspected. New approximate solutions for the inspection period and for the expected duration of hidden faults are presented, on the basis of the assumption that only periodic and perfect inspections are carried out. By minimizing total expected cost per unit of time, on the basis of numerical results and a range of comparisons, the conclusion is drawn that these new approximate solutions are extremely useful and simple to put into practice.
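The structure of the optimization can be illustrated with a generic textbook-style cost model, not the paper's formulation: inspections cost c_i each and occur every T time units, hidden faults arrive at rate lam and wait on average T/2 until detection at cost rate c_d. All symbols and numbers below are invented for illustration.

```python
import math

def cost_per_unit_time(T, c_i, c_d, lam):
    """Toy expected cost rate under periodic, perfect inspections:
    inspection cost c_i / T, plus downtime cost for hidden faults that
    occur at rate lam and remain hidden for T/2 on average."""
    return c_i / T + c_d * lam * T / 2.0

c_i, c_d, lam = 5.0, 20.0, 0.1

# closed-form minimizer of the toy model: T* = sqrt(2 c_i / (c_d lam))
T_star = math.sqrt(2.0 * c_i / (c_d * lam))

# numeric check by grid search over candidate inspection periods
grid = [0.01 * k for k in range(1, 2000)]
T_best = min(grid, key=lambda T: cost_per_unit_time(T, c_i, c_d, lam))
```

The paper's contribution is approximate closed forms of this kind that remain accurate across increasing, decreasing, constant, and bathtub-shaped hazard rates, where no simple minimizer exists.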

  2. Approximation Algorithms for the Connected Dominating Set Problem in Unit Disk Graphs

    Institute of Scientific and Technical Information of China (English)

    Gang Lu; Ming-Tian Zhou; Yong Tang; Ming-Yuan Zhao; Xin-Zheng Niu; Kun She

    2009-01-01

    The connected dominating set (CDS) problem, which consists of finding a smallest connected dominating set of a graph, is NP-hard in unit disk graphs (UDGs). This paper focuses on the CDS problem in wireless networks. Investigation of some properties of independent sets (IS) in UDGs shows that geometric features of the node distribution, such as angle and area, can be used to design efficient heuristics for the approximation algorithms. Several constant-factor approximation algorithms are presented for the CDS problem in UDGs. Simulation results show that the proposed algorithms perform better than some known ones.

  3. Characterizing the complexity of spontaneous motor unit patterns of amyotrophic lateral sclerosis using approximate entropy

    Science.gov (United States)

    Zhou, Ping; Barkhaus, Paul E.; Zhang, Xu; Zev Rymer, William

    2011-10-01

    This paper presents a novel application of the approximate entropy (ApEn) measurement for characterizing spontaneous motor unit activity of amyotrophic lateral sclerosis (ALS) patients. High-density surface electromyography (EMG) was used to record spontaneous motor unit activity bilaterally from the thenar muscles of nine ALS subjects. Three distinct patterns of spontaneous motor unit activity (sporadic spikes, tonic spikes and high-frequency repetitive spikes) were observed. For each pattern, complexity was characterized by calculating the ApEn values of the representative signal segments. A sliding window over each segment was also introduced to quantify the dynamic changes in complexity for the different spontaneous motor unit patterns. We found that the ApEn values for the sporadic spikes were the highest, while those of the high-frequency repetitive spikes were the lowest. There is a significant difference in mean ApEn values between two arbitrary groups of the three spontaneous motor unit patterns (P < 0.001). The dynamic ApEn curve from the sliding window analysis is capable of tracking variations in EMG activity, thus providing a vivid, distinctive description for different patterns of spontaneous motor unit action potentials in terms of their complexity. These findings expand the existing knowledge of spontaneous motor unit activity in ALS beyond what was previously obtained using conventional linear methods such as firing rate or inter-spike interval statistics.
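Approximate entropy itself is compact to implement from Pincus's definition: ApEn(m, r) = Phi(m) - Phi(m+1), where Phi(m) is the average log frequency of m-length template matches within Chebyshev tolerance r. The toy signals below are illustrative, not EMG data.

```python
import math
import random

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a sequence u:
    Phi(m) - Phi(m+1), with Phi the average log template-match
    frequency under the Chebyshev (max-abs) distance."""
    def phi(m):
        n = len(u) - m + 1
        templates = [u[i:i + m] for i in range(n)]
        total = 0.0
        for a in templates:
            matches = sum(1 for b in templates
                          if max(abs(p - q) for p, q in zip(a, b)) <= r)
            total += math.log(matches / n)   # self-match keeps this > 0
        return total / n
    return phi(m) - phi(m + 1)

rng = random.Random(5)
regular = [float(i % 2) for i in range(60)]      # perfectly periodic
irregular = [rng.random() for _ in range(60)]    # noisy signal

# more irregular signals yield larger ApEn
print(approx_entropy(regular), approx_entropy(irregular))
```

In the record, low ApEn corresponds to the stereotyped high-frequency repetitive discharges, and high ApEn to the sporadic spikes.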

  4. Design of maneuvers based on new normal form approximations: The case study of the CPRTBP

    CERN Document Server

    Paez, Rocio Isabel

    2015-01-01

    In this work, we study the motions in the region around the equilateral Lagrangian equilibrium points L4 and L5, in the framework of the Circular Planar Restricted Three-Body Problem (hereafter, CPRTBP). We design a semi-analytic approach based on some ideas by Garfinkel in [4]: the Hamiltonian is expanded in Poincaré-Delaunay coordinates and a suitable average is performed. This allows us to construct (quasi) invariant tori that are moderately far from the Lagrangian points L4-L5 and approximate wide tadpole orbits. This construction provides the tools for studying optimal transfers in the neighborhood of the equilateral points, when instantaneous impulses are considered. We show some applications of the new averaged Hamiltonian for the Earth-Moon system, applied to the setting-up of transfers which allow entry into the stability region filled by tadpole orbits.

  5. Almost sure convergence and asymptotical normality of a generalization of Kesten's stochastic approximation algorithm for multidimensional case

    CERN Document Server

    Cruz, Pedro

    2011-01-01

    We show the almost sure convergence and asymptotic normality of a generalization of Kesten's stochastic approximation algorithm for the multidimensional case. In this generalization, the step increases or decreases according to whether the scalar product of two subsequent increments of the estimates is positive or negative. This rule is intended to accelerate the entrance into the 'stochastic behaviour' regime when initial conditions cause the algorithm to behave in a 'deterministic fashion' for the starting iterations.
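A simplified, decrease-only variant of the Kesten-style rule described here can be sketched as follows: the step a0/k is shrunk only when two successive increments have negative inner product, i.e. when the iterate has started to oscillate around the root. All names and the noisy test problem are invented for illustration.

```python
import random

def kesten_sgd(grad, x0, a0=1.0, n_iter=2000, rng=None):
    """Robbins-Monro iteration with a Kesten-style adaptive step:
    the step a0/k shrinks only when successive increments point in
    opposite directions (negative inner product)."""
    rng = rng or random.Random(0)
    x = list(x0)
    k = 1                       # count of detected oscillations
    prev_inc = None
    for _ in range(n_iter):
        g = grad(x, rng)
        step = a0 / k
        inc = [-step * gi for gi in g]
        if prev_inc is not None:
            dot = sum(a * b for a, b in zip(inc, prev_inc))
            if dot < 0:
                k += 1          # oscillation detected: decrease step
        x = [xi + di for xi, di in zip(x, inc)]
        prev_inc = inc
    return x

# noisy gradient of f(x) = ||x - (1, -2)||^2 / 2
def noisy_grad(x, rng):
    return [x[0] - 1.0 + 0.1 * rng.gauss(0, 1),
            x[1] + 2.0 + 0.1 * rng.gauss(0, 1)]

x = kesten_sgd(noisy_grad, [5.0, 5.0])
```

Far from the root the increments stay aligned, so the step stays large ("deterministic fashion"); near the root the inner products turn negative and the step decays, which is the accelerated entrance into the stochastic regime that the record analyzes.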

  6. 77 FR 38857 - Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal...

    Science.gov (United States)

    2012-06-29

    ... COMMISSION ... entitled ``Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal Atmosphere Cleanup Systems''...

  7. Simulation of mineral dust aerosol with piecewise log-normal approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2011-09-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Module (CanAM4-PAM). The total simulated annual mean dust burden is 37.8 mg m−2 for year 2000, which is consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with several satellite observations and shows good agreement. The model yields a dust AOD of 0.042 and a total AOD of 0.126 for the year 2000. The simulated aerosol direct radiative forcings (ADRF) of dust and total aerosol over ocean are −1.24 W m−2 and −4.76 W m−2 respectively, which show good consistency with satellite estimates for the year 2001.

  8. Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2012-08-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr−1, and the dust mass load is 19.3 Tg for year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes. Biases in long-range transport also contribute. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m−2, which show good consistency with model estimates from other studies.

  9. Motor unit changes in normal aging: a brief review.

    Science.gov (United States)

    Tudoraşcu, Iulia; Sfredel, Veronica; Riza, Anca Lelia; Dănciulescu Miulescu, Rucsandra; Ianoşi, Simona Laura; Dănoiu, Suzana

    2014-01-01

    Aging is explored by multiple lines of research in a pursuit of understanding this natural process. The motor response is usually the main dependent variable in studies regarding physical or cognitive decline in aging. It is therefore critical to understand how motor function changes with age. The present review aims at presenting briefly some of the most recently published works in the field, focusing on the three key components of the motor unit. With aging, the skeletal muscle undergoes sarcopenia, alteration of fiber-type distribution, and also intimate metabolic transformations. The neuromuscular junction suffers at the cellular and molecular level, with possible implications of various cell components, mediators, and oxidative stress. Motoneuron loss and changes in their physiological properties accompany remodeling of the motor units. The applicability of knowledge in this field lies in possible interventions intended to counteract these age-related losses.

  10. Borders and border representations: Comparative approximations among the United States and Latin America

    Directory of Open Access Journals (Sweden)

    Marcos Cueva Perus

    2005-01-01

    Full Text Available This article uses a comparative approach regarding frontier symbols and myths among the United States, Latin America and the Caribbean. Although wars fought over frontiers have greatly diminished throughout the world, the conception of the frontier still held by the United States is that of a nationalist myth which embodies a semi-religious faith in the free market and democracy. On the other hand, Latin American and Caribbean countries, whose frontiers are far more complex, have shown extraordinary stability for several decades. This paper points out the risks involved in the spread of the United States' notion of the frontier which, in addition, goes hand-in-hand with the problem of multicultural segmentation. Although Latin American and Caribbean frontiers may be stable, they are vulnerable to the infiltration of foreign frontier representations.

  11. Environmental assessment: Transfer of normal and low-enriched uranium billets to the United Kingdom, Hanford Site, Richland, Washington

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-11-01

    Under the auspices of an agreement between the U.S. and the United Kingdom, the U.S. Department of Energy (DOE) has an opportunity to transfer approximately 710,000 kilograms (1,562,000 pounds) of unneeded normal and low-enriched uranium (LEU) to the United Kingdom, thus reducing long-term surveillance and maintenance burdens at the Hanford Site. The material, in the form of billets, is controlled by DOE's Defense Programs, and is presently stored as surplus material in the 300 Area of the Hanford Site. The United Kingdom has expressed a need for the billets. The surplus uranium billets are currently stored in wooden shipping containers in secured facilities in the 300 Area at the Hanford Site (the 303-B and 303-G storage facilities). There are 482 billets at an enrichment level (based on uranium-235 content) of 0.71 weight-percent. This enrichment level is normal uranium; that is, uranium having 0.711 as the percentage by weight of uranium-235 as occurring in nature. There are 3,242 billets at an enrichment level of 0.95 weight-percent (i.e., low-enriched uranium). This inventory represents a total of approximately 532 curies. The facilities are routinely monitored. The dose rate on contact of a uranium billet is approximately 8 millirem per hour. The dose rate on contact of a wooden shipping container containing 4 billets is approximately 4 millirem per hour. The dose rate at the exterior of the storage facilities is indistinguishable from background levels.

  12. Environmental assessment: Transfer of normal and low-enriched uranium billets to the United Kingdom, Hanford Site, Richland, Washington

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-11-01

    Under the auspices of an agreement between the U.S. and the United Kingdom, the U.S. Department of Energy (DOE) has an opportunity to transfer approximately 710,000 kilograms (1,562,000 pounds) of unneeded normal and low-enriched uranium (LEU) to the United Kingdom; thus, reducing long-term surveillance and maintenance burdens at the Hanford Site. The material, in the form of billets, is controlled by DOE`s Defense Programs, and is presently stored as surplus material in the 300 Area of the Hanford Site. The United Kingdom has expressed a need for the billets. The surplus uranium billets are currently stored in wooden shipping containers in secured facilities in the 300 Area at the Hanford Site (the 303-B and 303-G storage facilities). There are 482 billets at an enrichment level (based on uranium-235 content) of 0.71 weight-percent. This enrichment level is normal uranium; that is, uranium having 0.711 as the percentage by weight of uranium-235 as occurring in nature. There are 3,242 billets at an enrichment level of 0.95 weight-percent (i.e., low-enriched uranium). This inventory represents a total of approximately 532 curies. The facilities are routinely monitored. The dose rate on contact of a uranium billet is approximately 8 millirem per hour. The dose rate on contact of a wooden shipping container containing 4 billets is approximately 4 millirem per hour. The dose rate at the exterior of the storage facilities is indistinguishable from background levels.

  13. Midwives' experiences of facilitating normal birth in an obstetric-led unit: a feminist perspective.

    LENUS (Irish Health Repository)

    Keating, Annette

    2012-01-31

    OBJECTIVE: to explore midwives\\' experiences of facilitating normal birth in an obstetric-led unit. DESIGN: a feminist approach using semi-structured interviews focusing on midwives\\' perceptions of normal birth and their ability to facilitate this birth option in an obstetric-led unit. SETTING: Ireland. PARTICIPATION: a purposeful sample of 10 midwives with 6-30 years of midwifery experience. All participants had worked for a minimum of 6 years in a labour ward setting, and had been in their current setting for the previous 2 years. FINDINGS: the midwives\\' narratives related to the following four concepts of patriarchy: \\'hierarchical thinking\\

  14. Approximate, non-relativistic scattering phase shifts, bound state energies, and wave function normalization factors for a screened Coulomb potential of the Hulthen type

    Energy Technology Data Exchange (ETDEWEB)

    Buehring, W.

    1983-03-01

    Non-relativistic scattering phase shifts, bound state energies, and wave function normalization factors for a screened Coulomb potential of the Hulthen type are presented in the form of relatively simple analytic expressions. These formulae have been obtained by a suitable renormalization procedure applied to the quantities derived from an approximate Schroedinger equation which contains the exact Hulthen potential together with an approximate angular momentum term. When the screening exponent vanishes, our formulae reduce to the exact Coulomb expresions. The interrelation between our formulae and Pratt's analytic perturbation theory for screened Coulomb potentials' is discussed.

  15. The application of the piecewise linear approximation to the spectral neighborhood of soil line for the analysis of the quality of normalization of remote sensing materials

    Science.gov (United States)

    Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.

    2017-04-01

    The concept of soil line can be to describe the temporal distribution of spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply temporal soil line (TSL). In order to create TSL for 8000 regular lattice points for the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014 after their certain transformation. As Landsat images are the matrices of the values of spectral brightness, this transformation is the normalization of matrices. There are several methods of normalization that move, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of soil line in order to assess the quality of normalization mathematically. This approach allowed us to range normalization methods according to their quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create the maps of the distribution of a and b coefficients of the TSL. The map of b coefficient is characterized by the high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).

  16. Approximating Multivariate Normal Orthant Probabilities

    Science.gov (United States)

    1990-06-01

    limno. ?,oUburgX. PA 15268 !Z8 W,v,.tfcrr Co.ri Departmet of Prycboeot 7:,ms Rxter %J 09753 ftQ3 E. Ditive SL. Dr. Bert Green Champaign. IL 61820 l’oon...249 Batttmom SID .1218 icnerniv Part University of Colorsidoi Pstbirgn. PA 15213 Boulder. CO sw94Z49 Mi’chael Habon DOR%IER GMBH Dr MI lto. S. Katz...UnnerniY of lAno College of Education Layn&K s4 Dr. Ratna Niandatumar Li oria(ia Educaional Studies oaCe r TomJ b Willard Hall. Room Z13E !,a ot A522

  17. The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss.

    Science.gov (United States)

    Bull, Rebecca; Marshark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A

    2017-08-29

    Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM of CHL were significantly poorer compared to CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study adds? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children. 
The study addresses recent issues regarding the contradictory findings of the relationship of the approximate number system to math ability

  18. Hemichannels in the neurovascular unit and white matter under normal and inflamed conditions.

    Science.gov (United States)

    Orellana, Juan A; Figueroa, Xavier F; Sánchez, Helmuth A; Contreras-Duarte, Susana; Velarde, Victoria; Sáez, Juan C

    2011-05-01

    In the normal brain, cellular types that compose the neurovascular unit, including neurons, astrocytes and endothelial cells express pannexins and connexins, which are protein subunits of two families that form plasma membrane channels. Most available evidence in mammals indicated that endogenously expressed pannexins only form hemichannels, and connexins form both gap junction channels and hemichannels. While gap junction channels connect the cytoplasm of contacting cells and coordinate electrical and metabolic activities, hemichannels communicate intra- and extracellular compartments and serve as diffusional pathways for ions and small molecules. Here, evidence supporting the functional role of hemichannels in the neurovascular unit and white matter under physiological and pathological conditions are reviewed. A sub-threshold acute pathological threatening condition (e.g., stroke and brain infection) leads to glial cell activation, which maintains an active defense and restores the normal function of the neurovascular unit. However, if the stimulus is deleterious, microglia and the endothelium become overactivated, both releasing bioactive molecules (e.g., glutamate, cytokines, prostaglandins and ATP) that increase the activity of astroglial hemichannels, reducing the astrocyte neuroprotective functions, and further reducing neuronal cell viability. Moreover, ATP is known to contribute to myelin degeneration of axons. Consequently, hemichannels might play a relevant role in the excitotoxic response of oligodendrocytes observed in ischemia and encephalomyelitis. Regulated changes in hemichannel permeability in healthy brain cells can have positive consequences in terms of paracrine/autocrine signaling, whereas persistent changes in cells affected by neurological disorders can be detrimental. Therefore, blocking hemichannels expressed by glial cells and/or neurons of the inflamed central nervous system might prevent neurovascular unit dysfunction and

  19. Pore size determination using normalized J-function for different hydraulic flow units

    Directory of Open Access Journals (Sweden)

    Ali Abedini

    2015-06-01

    Full Text Available Pore size determination of hydrocarbon reservoirs is one of the main challenging areas in reservoir studies. Precise estimation of this parameter leads to enhance the reservoir simulation, process evaluation, and further forecasting of reservoir behavior. Hence, it is of great importance to estimate the pore size of reservoir rocks with an appropriate accuracy. In the present study, a modified J-function was developed and applied to determine the pore radius in one of the hydrocarbon reservoir rocks located in the Middle East. The capillary pressure data vs. water saturation (Pc–Sw as well as routine reservoir core analysis include porosity (φ and permeability (k were used to develop the J-function. First, the normalized porosity (φz, the rock quality index (RQI, and the flow zone indicator (FZI concepts were used to categorize all data into discrete hydraulic flow units (HFU containing unique pore geometry and bedding characteristics. Thereafter, the modified J-function was used to normalize all capillary pressure curves corresponding to each of predetermined HFU. The results showed that the reservoir rock was classified into five separate rock types with the definite HFU and reservoir pore geometry. Eventually, the pore radius for each of these HFUs was determined using a developed equation obtained by normalized J-function corresponding to each HFU. The proposed equation is a function of reservoir rock characteristics including φz, FZI, lithology index (J*, and pore size distribution index (ɛ. This methodology used, the reservoir under study was classified into five discrete HFU with unique equations for permeability, normalized J-function and pore size. The proposed technique is able to apply on any reservoir to determine the pore size of the reservoir rock, specially the one with high range of heterogeneity in the reservoir rock properties.

  20. Rheology and orientational distributions of rodlike particles with magnetic moment normal to the particle axis for semi-dense dispersions (analysis by means of mean field approximation).

    Science.gov (United States)

    Satoh, Akira; Sakuda, Yasuhiro

    2007-04-15

    We have considered a semi-dense dispersion composed of ferromagnetic rodlike particles with a magnetic moment normal to the particle axis to investigate the rheological properties and particle orientational distribution in a simple shear flow as well as an external magnetic field. We have adopted the mean field approximation to take into account magnetic particle-particle interactions. The basic equation of the orientational distribution function has been derived from the balance of the torques and solved numerically. The results obtained here are summarized as follows. For a very strong magnetic field, the magnetic moment of the rodlike particle is strongly restricted in the field direction, so that the particle points to directions normal to the flow direction (and also to the magnetic field direction). This characteristic of the particle orientational distribution is also valid for the case of a strong particle-particle interaction, as in the strong magnetic field case. To the contrary, for a weak interaction among particles, the particle orientational distribution is governed by a shear flow as well as an applied magnetic field. When the magnetic particle-particle interaction is strong under circumstances of an applied magnetic field, the magnetic moment has a tendency to incline to the magnetic field direction more strongly. This leads to the characteristic that the viscosity decreases with decreasing the distance between particles, and this tendency becomes more significant for a stronger particle-particle interaction. These characteristics concerning the viscosity are quite different from those for a semi-dense dispersion composed of rodlike particles with a magnetic moment along the particle direction.

  1. Motor unit firing intervals and other parameters of electrical activity in normal and pathological muscle

    DEFF Research Database (Denmark)

    Fuglsang-Frederiksen, Anders; Smith, T; Høgenhaven, H

    1987-01-01

    The analysis of the firing intervals of motor units has been suggested as a diagnostic tool in patients with neuromuscular disorders. Part of the increase in number of turns seen in patients with myopathy could be secondary to the decrease in motor unit firing intervals at threshold force...... of the motor units, as noted in previous studies. In the brachial biceps muscle we have studied the firing intervals of 164 motor units in 14 controls, 140 motor units in 13 patients with myopathy and 86 motor units in 8 patients with neurogenic disorders, and related the findings to those of the turns...... analysis and the analysis of properties of individual motor unit potentials. To ensure comparable conditions we have examined motor unit firing intervals and turns at a force of 10% of maximum. The average of motor unit firing intervals and of interval variability was the same in controls and in patients...

  2. Development of Normalization Factors for Canada and the United States and Comparison with European Factors

    DEFF Research Database (Denmark)

    Lautier, Anne; Rosenbaum, Ralph K.; Margni, Manuele

    2010-01-01

    showed that normalized profiles are highly dependent on the selected reference due to differences in the industrial and economic activities. To meet practitioners' needs, Canadian normalization factors have been calculated using the characterization factors from LUCAS (Canadian), IMPACT 2002+ (European...

  3. Motor unit firing intervals and other parameters of electrical activity in normal and pathological muscle

    DEFF Research Database (Denmark)

    Fuglsang-Frederiksen, Anders; Smith, T; Høgenhaven, H

    1987-01-01

    analysis and the analysis of properties of individual motor unit potentials. To ensure comparable conditions we have examined motor unit firing intervals and turns at a force of 10% of maximum. The average of motor unit firing intervals and of interval variability was the same in controls and in patients......, and the diagnostic yield of the motor unit firing intervals analysis was none. Although the number of turns increased with decreasing motor unit firing intervals, this relation was physiological rather than pathophysiological. In patients with neurogenic disorders, interval variability indicated unstable firing...

  4. A note on self-normalized Dickey-Fuller test for unit root in autoregressive time series with GARCH errors

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-rong; ZHANG Li-xin

    2008-01-01

    In this article, the unit root test for AR (p) model with GARCH errors is considered. The Dickey-Fuller test statistics are rewritten in the form of self-normalized sums, and the asymptotic distribution of the test statistics is derived under the weak conditions.

  5. Currently used dosage regimens of vancomycin fail to achieve therapeutic levels in approximately 40% of intensive care unit patients

    Science.gov (United States)

    Obara, Vitor Yuzo; Zacas, Carolina Petrus; Carrilho, Claudia Maria Dantas de Maio; Delfino, Vinicius Daher Alvares

    2016-01-01

    Objective This study aimed to assess whether currently used dosages of vancomycin for treatment of serious gram-positive bacterial infections in intensive care unit patients provided initial therapeutic vancomycin trough levels and to examine possible factors associated with the presence of adequate initial vancomycin trough levels in these patients. Methods A prospective descriptive study with convenience sampling was performed. Nursing note and medical record data were collected from September 2013 to July 2014 for patients who met inclusion criteria. Eighty-three patients were included. Initial vancomycin trough levels were obtained immediately before vancomycin fourth dose. Acute kidney injury was defined as an increase of at least 0.3mg/dL in serum creatinine within 48 hours. Results Considering vancomycin trough levels recommended for serious gram-positive infection treatment (15 - 20µg/mL), patients were categorized as presenting with low, adequate, and high vancomycin trough levels (35 [42.2%], 18 [21.7%], and 30 [36.1%] patients, respectively). Acute kidney injury patients had significantly greater vancomycin trough levels (p = 0.0055, with significance for a trend, p = 0.0023). Conclusion Surprisingly, more than 40% of the patients did not reach an effective initial vancomycin trough level. Studies on pharmacokinetic and dosage regimens of vancomycin in intensive care unit patients are necessary to circumvent this high proportion of failures to obtain adequate initial vancomycin trough levels. Vancomycin use without trough serum level monitoring in critically ill patients should be discouraged. PMID:28099635

  6. Development of Normalization Factors for Canada and the United States and Comparison with European Factors

    Science.gov (United States)

    In Life Cycle Assessment (LCA), normalization calculates the magnitude of an impact (midpoint or endpoint) relative to the total effect of a given reference. Using a country or a continent as a reference system is a first step towards global normalization. The goal of this wor...

  7. 关于样本均值的抽样分布能否作正态近似的探讨%The Argument whether Sampling Distribution of Expectation of Samples Approximately Obey Normal Distribution

    Institute of Scientific and Technical Information of China (English)

    王学民

    2005-01-01

    Based on the concepts of skewness and kurtosis, the paper points out a kind of method to judge whether the sampling distribution of sample mean can be normally approximated. Besides, the paper also provides the discussion of how to make judgement uinder the following conditions: 1. Finite population with outlier; 2. Unknown population distribution.

  8. On the cost of approximating and recognizing a noise perturbed straight line or a quadratic curve segment in the plane. [central processing units

    Science.gov (United States)

    Cooper, D. B.; Yalabik, N.

    1975-01-01

    Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.

  9. Building an Orthonormal Basis from a 3D Unit Vector Without Normalization

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall

    2012-01-01

    I present two tools that save the computation of a dot product and a reciprocal square root in operations that are used frequently in the core of many rendering programs. The first tool is a formula for rotating a direction sampled around the z -axis to a direction sampled around an arbitrary uni...... extracted from the first formula, namely a faster way of building an orthonormal basis from a 3D unit vector. These tools require fewer arithmetic operations than other methods I am aware of, and a performance test of the more general tool confirms that it is faster.......I present two tools that save the computation of a dot product and a reciprocal square root in operations that are used frequently in the core of many rendering programs. The first tool is a formula for rotating a direction sampled around the z -axis to a direction sampled around an arbitrary unit...... vector. This is useful in Monte Carlo rendering techniques, such as path tracing, where directions are usually sampled in spherical coordinates and then transformed to a Cartesian unit vector in a local coordinate system where the zenith direction is the z -axis. The second tool is a more general result...

  10. Theory of approximation

    CERN Document Server

    Achieser, N I

    2004-01-01

    A pioneer of many modern developments in approximation theory, N. I. Achieser designed this graduate-level text from the standpoint of functional analysis. The first two chapters address approximation problems in linear normalized spaces and the ideas of P. L. Tchebysheff. Chapter III examines the elements of harmonic analysis, and Chapter IV, integral transcendental functions of the exponential type. The final two chapters explore the best harmonic approximation of functions and Wiener's theorem on approximation. Professor Achieser concludes this exemplary text with an extensive section of pr

  11. EnviroAtlas - Average Direct Normal Solar resources kWh/m2/Day by 12-Digit HUC for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — The annual average direct normal solar resources by 12-Digit Hydrologic Unit (HUC) was estimated from maps produced by the National Renewable Energy Laboratory for...

  12. Approximation by Multivariate Singular Integrals

    CERN Document Server

    Anastassiou, George A

    2011-01-01

    Approximation by Multivariate Singular Integrals is the first monograph to illustrate the approximation of multivariate singular integrals to the identity-unit operator. The basic approximation properties of the general multivariate singular integral operators is presented quantitatively, particularly special cases such as the multivariate Picard, Gauss-Weierstrass, Poisson-Cauchy and trigonometric singular integral operators are examined thoroughly. This book studies the rate of convergence of these operators to the unit operator as well as the related simultaneous approximation. The last cha

  13. Approximate Representations and Approximate Homomorphisms

    CERN Document Server

    Moore, Cristopher

    2010-01-01

    Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f:G -> U_d such that Pr[f(xy) = f(x) f(y)] is large, or more generally Exp_{x,y} ||f(xy) - f(x)f(y)||^2$ is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities in terms of the ratio d / d_min where d_min is the dimension of the smallest nontrivial representation of G. As an application, we bound the extent to which a function f : G -> H can be an approximate homomorphism where H is another finite group. We show that if H's representations are significantly smaller than G's, no such f can be much more homomorphic than a random function. We interpret these results as showing that if G is quasirandom, that is, if d_min is large, then G cannot be embedded in a small number of dimensi...

  14. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows for one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  15. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian systems by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for comput

  16. Particle Swarm Optimization Algorithm for B-spline Curve Approximation with Normal Constraint%PSO求解带法向约束的B样条曲线逼近问题

    Institute of Scientific and Technical Information of China (English)

    胡良臣; 寿华好

    2016-01-01

    若是 B 样条拟合曲线的节点向量与控制顶点均为变量,则该问题变为一个带约束的多维多变量高度非线性的优化问题,反求方程系统的方法已经难以求得最优解。针对该类问题,提出一种带有法向约束的粒子群优化算法(PSO)求解曲线逼近问题的方法,首先将带有法向约束的非线性最优化问题以罚函数的方法转化为无约束的最优化问题,建立一个与数据点和法向同时相关且比较合适的适应度函数(误差函数),然后以PSO调节节点向量,并使用最小二乘法求解在该节点向量下的最优拟合曲线,通过判断适应度函数值的优劣循环迭代,直到达到终止条件或者产生令人满意(误差容忍值)的拟合曲线为止。将文中算法产生的拟合曲线通过实验数据的对比与说明,突出了该方法的优越性,表明其用于解决带法向约束的逼近问题切实可行。%If the knot vector and control points of a B-spline curve are variable, the B-spline curve approxi-mation with normal constraint problem becomes a multidimensional, multivariate and highly nonlinear op-timization problem with normal constraints, the conventional method of inverse equation system is difficult to obtain the optimal solution. Aiming at this kind of problem, a particle swarm optimization (PSO) method is introduced to solve the curve approximation problem with normal constraints. Firstly, the penalty function method is used to transform the constrained optimization problem into an unconstrained optimization prob-lem. Secondly, a suitable fitness function which is closely related to both data points and normal constraints is constructed. Finally, PSO is applied to adjust the knot vector, and at the same time, the least square method is used to solve the optimal control points, do loop iteration until the best B-spline curve approxima-tion is produced. By a comparison with existing methods, the superiority of

  17. Attenuation of Lg waves in the New Madrid seismic zone of the central United States using the coda normalization method

    Science.gov (United States)

    Nazemi, Nima; Pezeshk, Shahram; Sedaghati, Farhad

    2017-08-01

    Unique properties of coda waves are employed to evaluate the frequency dependent quality factor of Lg waves using the coda normalization method in the New Madrid seismic zone of the central United States. Instrument and site responses are eliminated and source functions are isolated to construct the inversion problem. For this purpose, we used 121 seismograms from 37 events with moment magnitudes, M, ranging from 2.5 to 5.2 and hypocentral distances from 120 to 440 km recorded by 11 broadband stations. A singular value decomposition (SVD) algorithm is used to extract Q values from the data, while the geometric spreading exponent is assumed to be a constant. Inversion results are then fitted with a power law equation from 3 to 12 Hz to derive the frequency dependent quality factor function. The final results of the analysis are QVLg (f) = (410 ± 38) f0.49 ± 0.05 for the vertical component and QHLg (f) = (390 ± 26) f0.56 ± 0.04 for the horizontal component, where the term after ± sign represents one standard error. For stations within the Mississippi embayment with an average sediment depth of 1 km around the Memphis metropolitan area, estimation of quality factor using the coda normalization method is not well-constrained at low frequencies (f < 3 Hz). There may be several reasons contributing to this issue, such as low frequency surface wave contamination, site effects, or even a change in coda wave scattering regime which can exacerbate the scatter of the data.

  18. THE FEATURES OF CONNEXINS EXPRESSION IN THE CELLS OF NEUROVASCLAR UNIT IN NORMAL CONDITIONS AND HYPOXIA IN VITRO

    Directory of Open Access Journals (Sweden)

    A. V. Morgun

    2014-01-01

    Full Text Available The aim of this research was to assess a role of connexin 43 (Cx43 and associated molecule CD38 in the regulation of cell-cell interactions in the neurovascular unit (NVU in vitro in physiological conditions and in hypoxia.Materials and methods. The study was done using the original neurovascular unit model in vitro. The NVU consisted of three cell types: neurons, astrocytes, and cerebral endothelial cells derived from rats. Hypoxia was induced by incubating cells with sodium iodoacetate for 30 min at37 °C in standard culture conditions.Results. We investigated the role of connexin 43 in the regulation of cell interactions within the NVU in normal and hypoxic injury in vitro. We found that astrocytes were characterized by high levels of expression of Cx43 and low level of CD38 expression, neurons demonstrated high levels of CD38 and low levels of Cx43. In hypoxic conditions, the expression of Cx43 and CD38 in astrocytes markedly increased while CD38 expression in neurons decreased, however no changes were found in endothelial cells. Suppression of Cx43 activity resulted in down-regulation of CD38 in NVU cells, both in physiological conditions and at chemical hypoxia.Conclusion. Thus, the Cx-regulated intercellular NAD+-dependent communication and secretory phenotype of astroglial cells that are the part of the blood-brain barrier is markedly changed in hypoxia.

  19. Diophantine approximations on fractals

    CERN Document Server

    Einsiedler, Manfred; Shapira, Uri

    2009-01-01

    We exploit dynamical properties of diagonal actions to derive results in Diophantine approximations. In particular, we prove that the continued fraction expansion of almost any point on the middle third Cantor set (with respect to the natural measure) contains all finite patterns (hence is well approximable). Similarly, we show that for a variety of fractals in [0,1]^2, possessing some symmetry, almost any point is not Dirichlet improvable (hence is well approximable) and has property C (after Cassels). We then settle by similar methods a conjecture of M. Boshernitzan saying that there are no irrational numbers x in the unit interval such that the continued fraction expansions of {nx mod 1 : n is a natural number} are uniformly eventually bounded.
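For readers unfamiliar with the objects involved: the partial quotients of a continued fraction expansion can be generated by iterating the Gauss map. A small illustrative sketch, not part of the paper's dynamical methods:

```python
import math

def continued_fraction(x, terms=8):
    """Partial quotients [a0; a1, a2, ...] of x via repeated floor-and-invert."""
    digits = []
    for _ in range(terms):
        a = math.floor(x)
        digits.append(a)
        frac = x - a
        if frac < 1e-12:   # rational (up to float precision): expansion ends
            break
        x = 1.0 / frac
    return digits

# The golden ratio has the expansion [1; 1, 1, 1, ...]
phi = (1 + 5 ** 0.5) / 2
print(continued_fraction(phi, 6))  # -> [1, 1, 1, 1, 1, 1]
```

"Uniformly eventually bounded" in the conjecture refers to a uniform bound on all but finitely many of these partial quotients across the whole family {nx mod 1}.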

  20. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  1. Summary of Time Period-Based and Other Approximation Methods for Determining the Capacity Value of Wind and Solar in the United States: September 2010 - February 2012

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, J.; Porter, K.

    2012-03-01

    This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak - sometimes over a period of months or the average of multiple years.

  2. Relativistic calculation of nuclear magnetic shielding tensor using the regular approximation to the normalized elimination of the small component. III. Introduction of gauge-including atomic orbitals and a finite-size nuclear model

    Science.gov (United States)

    Hamaya, S.; Maeda, H.; Funaki, M.; Fukui, H.

    2008-12-01

    The relativistic calculation of nuclear magnetic shielding tensors in hydrogen halides is performed using the second-order regular approximation to the normalized elimination of the small component (SORA-NESC) method with the inclusion of the perturbation terms from the metric operator. This computational scheme is denoted as SORA-Met. The SORA-Met calculation yields anisotropies, Δσ = σ∥ - σ⊥, for the halogen nuclei in hydrogen halides that are too small. In the NESC theory, the small component of the spinor is coupled to the large component via the operator σ⃗·π⃗U/2c, in which π⃗ = p⃗ + A⃗, U is a nonunitary transformation operator, and c ≅ 137.036 a.u. is the velocity of light. The operator U depends on the vector potential A⃗ (i.e., the magnetic perturbations in the system) at leading order c^-2, and the magnetic perturbation terms of U contribute to the Hamiltonian and metric operators of the system at leading order c^-4. It is shown that the small Δσ for halogen nuclei found in our previous studies is related to the neglect of the U(0,1) perturbation operator of U, which is independent of the external magnetic field and of first order with respect to the nuclear magnetic dipole moment. Introduction of gauge-including atomic orbitals and a finite-size nuclear model is also discussed.

  3. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  4. The proximal hamstring muscle–tendon–bone unit: A review of the normal anatomy, biomechanics, and pathophysiology

    Energy Technology Data Exchange (ETDEWEB)

    Beltran, Luis, E-mail: luisbeltran@mac.com [Department of Radiology, Hospital for Joint Diseases, NYU, New York, NY (United States); Ghazikhanian, Varand, E-mail: varandg@aol.com [Department of Radiology, Maimonides Medical Center, Brooklyn, NY (United States); Padron, Mario, E-mail: mario.padron@cemtro.es [Clinica CEMTRO, Avenida del Ventisquero de la Condesa 42, 28035 Madrid (Spain); Beltran, Javier, E-mail: Jbeltran46@msn.com [Department of Radiology, Maimonides Medical Center, Brooklyn, NY (United States)

    2012-12-15

    Proximal hamstring injuries occur during eccentric contraction with the hip flexed and the knee extended; hence they are relatively frequent lesions in specific sports such as water skiing and hurdle jumping. Additionally, the trend toward increasing activity and fitness training in the general population has resulted in similar injuries. Myotendinous strains are more frequent than avulsion injuries. Discrimination between the two types of lesions is relevant for patient management, since the former is treated conservatively and the latter surgically. MRI and ultrasonography are both well-suited techniques for the diagnosis and evaluation of hamstring tendon injuries; each has its advantages and disadvantages. The purpose of this article is to provide a comprehensive review of the anatomy and biomechanics of the proximal hamstring muscle–tendon–bone unit and the varied imaging appearances of hamstring injury, which is vital for optimizing patient care. This will enable the musculoskeletal radiologist to contribute accurate and useful information in the treatment of athletes at all levels of participation.

  5. Elevated CK-MB with a normal troponin does not predict 30-day adverse cardiac events in emergency department chest pain observation unit patients.

    Science.gov (United States)

    Safdar, Basmah; Bezek, Sarah K; Sinusas, Albert J; Russell, Raymond R; Klein, Matthew R; Dziura, James D; D'Onofrio, Gail

    2014-03-01

    Prior studies indicate that an elevated creatine kinase (CK)-MB imparts a poor prognosis in patients with acute coronary syndrome despite a normal troponin. Its prognostic value in the undifferentiated chest pain observation unit (CPU) population remains undefined. Objective: to compare rates and predictors of 30-day adverse cardiac events in 2 cohorts (CK ±/MB+ vs. normal [CK ±/MB-]) of low-to-moderate-risk CPU patients. Consecutive CPU patients were followed in a retrospective cohort study for the primary outcome (acute coronary syndrome, percutaneous transluminal coronary angioplasty, coronary artery bypass graft, abnormal stress test, cardiac hospitalization, or death within 30 days) by using standardized chart reviews and the national death registry. Exclusions were: age 30 years or younger, positive troponin, ischemic electrocardiogram, hemodynamic instability, heart failure, or dialysis. Between January 2006 and April 2009, 2979 patients were eligible, of whom 350 were excluded and 2629 analyzed. MB+ patients, compared with normal patients, were more likely to be: older (mean, 53.4 ± 14 vs. 51.5 ± 12 years; P = 0.04); male (71% vs. 40%; P = 0.01); renally insufficient (5% vs. 2%; P = 0.01); hypertensive (50% vs. 44%; P = 0.04); dyslipidemic (44% vs. 33%; P = 0.01); obese (55% vs. 43%; P = 0.01); and with known coronary artery disease (14% vs. 5%). Adverse event rates did not differ for MB+ vs. normal (9.1% vs. 8.0%; odds ratio, 1.1 [0.7-1.9]) or serial MB+ vs. normal (7.5% vs. 7.4%; odds ratio, 1.0 [0.5-1.8]). In a multiple logistic regression model, male sex, diabetes, and prior CAD predicted adverse events, whereas CK-MB, along with race, hypertension, smoking, dyslipidemia, family history, and obesity, did not. Elevated CK-MB does not add value to serial troponin testing in low-to-moderate-risk CPU patients.
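The odds ratios quoted above follow the standard 2x2-table computation with a Woolf (log-scale) confidence interval. A sketch with hypothetical counts, since the abstract does not report the raw table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table [[a, b], [c, d]] (Woolf log method).

    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (NOT from the study): 20/220 events in MB+, 180/2400 in MB-
or_, lo, hi = odds_ratio_ci(20, 200, 180, 2220)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A confidence interval that straddles 1, as in the study's reported 1.1 [0.7-1.9], is exactly why MB+ status added no predictive value.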

  6. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    Let (X,d) be a metric space and (Omega, d) a compact subspace of X which supports a non-atomic finite measure m. We consider `natural' classes of badly approximable subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension.

  7. Optimal Belief Approximation

    CERN Document Server

    Leike, Reimar H

    2016-01-01

    In Bayesian statistics probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically and approximations of beliefs are needed. We seek a ranking function that quantifies how "embarrassing" it is to communicate a given approximation. We show that there is only one ranking under the requirements that (1) the best ranked approximation is the non-approximated belief and (2) that the ranking judges approximations only by their predictions for actual outcomes. We find that this ranking is equivalent to the Kullback-Leibler divergence that is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. We hope that our elementary derivation settles the apparent confusion. We show for example that when approximating beliefs with Gaussian distributions the optimal approximation is given by moment matching. This is in contrast to many su...
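The claim above, that moment matching is the optimal Gaussian approximation under KL(p || q), can be checked numerically. A sketch using an assumed two-component mixture as the target belief p and simple grid integration:

```python
import math

def gauss(x, mu, s):
    """Gaussian density N(x; mu, s^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def p(x):
    """Target belief: a two-component Gaussian mixture (illustrative choice)."""
    return 0.5 * gauss(x, -1.0, 0.5) + 0.5 * gauss(x, 1.5, 0.8)

xs = [-8 + 0.001 * i for i in range(16001)]  # grid on [-8, 8]
dx = 0.001

# Moments of p by numerical integration
mean = sum(x * p(x) for x in xs) * dx
var = sum((x - mean) ** 2 * p(x) for x in xs) * dx

def kl_p_q(mu, s):
    """KL(p || q) for Gaussian q = N(mu, s^2), integrated on the grid."""
    total = 0.0
    for x in xs:
        px = p(x)
        if px > 1e-300:
            total += px * math.log(px / gauss(x, mu, s)) * dx
    return total

kl_matched = kl_p_q(mean, math.sqrt(var))        # moment-matched Gaussian
kl_other = kl_p_q(mean + 0.3, math.sqrt(var))    # same variance, shifted mean
print(kl_matched < kl_other)  # moment matching gives the smaller divergence
```

Note the argument order: the approximated belief appears in the second slot of KL(p || q), which is precisely the ordering the paper argues is correct.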

  8. Covariate adjusted weighted normal spatial scan statistics with applications to study geographic clustering of obesity and lung cancer mortality in the United States.

    Science.gov (United States)

    Huang, Lan; Tiwari, Ram C; Pickle, Linda W; Zou, Zhaohui

    2010-10-15

    In the field of cluster detection, a weighted normal model-based scan statistic was recently developed to analyze regional continuous data and to evaluate the clustering pattern of pre-defined cells (such as state, county, tract, school, or hospital) that include many individuals. The continuous measures of interest are, for example, the survival rate, mortality rate, length of physical activity, or an obesity measure, namely, body mass index, at the cell level with an uncertainty measure for each cell. In this paper, we extend the method to search for clusters of cells after adjusting for single or multiple categorical or continuous covariates. We apply the proposed method to 1999-2003 obesity data in the United States (US) collected by CDC's Behavioral Risk Factor Surveillance System, with adjustment for age and race, and to 1999-2003 lung cancer age-adjusted mortality data by gender in the United States from the Surveillance, Epidemiology, and End Results (SEER) Program, with adjustment for smoking and income.

  9. Approximate flavor symmetries

    OpenAIRE

    Rašin, Andrija

    1994-01-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  10. On Element SDD Approximability

    CERN Document Server

    Avron, Haim; Toledo, Sivan

    2009-01-01

    This short communication shows that in some cases scalar elliptic finite element matrices cannot be approximated well by an SDD matrix. We also give a theoretical analysis of a simple heuristic method for approximating an element by an SDD matrix.

  11. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  12. Approximation of distributed delays

    CERN Document Server

    Lu, Hao; Eberard, Damien; Simon, Jean-Pierre

    2010-01-01

    We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernels having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximants is described, and a constructive approximation is proposed. Analysis in the time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulation results show the effectiveness of the proposed methodology.

  14. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  15. Graphics processing unit-accelerated solving of the simplified spherical harmonics approximation model

    Institute of Scientific and Technical Information of China (English)

    贺小伟; 陈政; 侯榆青; 郭红波

    2016-01-01

    As a high-order approximation to the radiative transfer equation, the simplified spherical harmonics (SPN) approximation has become a hot research topic in optical molecular imaging. However, low computational efficiency restricts its wide application. This paper presents a graphics processing unit (GPU) parallel acceleration strategy for solving the SPN model. The proposed strategy adopts the compute unified device architecture (CUDA) parallel processing architecture introduced by NVIDIA to build parallel accelerations of the two most time-consuming modules: generation of the stiffness matrix and solving of the linear equations. Based on the features of CUDA, the strategy optimizes the parallel computation in three respects: task distribution, use of memory units, and data preprocessing. Simulations on a phantom and a digital mouse model were designed to evaluate the acceleration effect by comparing the system matrix generation time and the average time of each iteration step. Experimental results show that the overall speedup ratio is around 30 times, which demonstrates the advantage and potential of the proposed strategy in optical molecular imaging.
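The solve stage that the abstract above offloads to the GPU is typically a Krylov iteration over the sparse FEM system. As a language-agnostic illustration of that stage (a CPU sketch on a tiny dense system, not the paper's CUDA code), a plain conjugate-gradient solver:

```python
def cg(A, b, iters=50, tol=1e-20):
    """Plain conjugate gradient for a dense SPD system Ax = b."""
    n = len(b)
    x = [0.0] * n
    r = list(b)            # residual r = b - Ax with x = 0
    p = list(r)            # search direction
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# A small SPD "stiffness-like" system standing in for the FEM matrix
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = cg(A, b)
res = [b[i] - sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(max(abs(v) for v in res) < 1e-8)  # residual at machine precision
```

On the GPU, each matrix-vector product and each vector update in this loop becomes a parallel kernel, which is where the reported 30x speedup comes from.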

  16. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you''re looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik''s years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  17. Approximate formulas for the number of transfer units of U-shaped fin-tube heat exchangers

    Institute of Scientific and Technical Information of China (English)

    吴小舟; 赵加宁

    2012-01-01

    To obtain formulas for the number of transfer units (NTU) of U-shaped fin-tube heat exchangers, a U-shaped fin-tube heat exchanger is taken as the research object. Approximate NTU formulas are derived by establishing a heat transfer model of the U-shaped fin-tube for both counterflow and parallel-flow arrangements. Subsequently, the heat transfer coefficients of the fin-tube calculated by the effectiveness-NTU method (ε-NTU method) and by the logarithmic mean temperature difference method (LMTD method) are compared for inlet water temperatures from 45 °C to 60 °C and water flow rates from 30 kg/h to 110 kg/h. The results show that the difference between the heat transfer coefficients calculated by the ε-NTU and LMTD methods is very small, so the derived NTU formulas are valid.
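The agreement between the ε-NTU and LMTD methods reported above can be verified directly, since for a simple counterflow exchanger the two are algebraically equivalent. A sketch with hypothetical operating conditions (not taken from the paper):

```python
import math

def effectiveness_counterflow(ntu, cr):
    """Standard epsilon-NTU relation for a counterflow exchanger (Cr = Cmin/Cmax != 1)."""
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Hypothetical operating point: hot stream carries Cmin
c_hot, c_cold = 120.0, 300.0          # heat-capacity rates, W/K
ua = 180.0                            # overall conductance UA, W/K
t_hot_in, t_cold_in = 55.0, 20.0      # inlet temperatures, deg C

cr = c_hot / c_cold
eps = effectiveness_counterflow(ua / c_hot, cr)
q_entu = eps * c_hot * (t_hot_in - t_cold_in)      # duty via epsilon-NTU

# Same duty recomputed from the outlet temperatures via LMTD
t_hot_out = t_hot_in - q_entu / c_hot
t_cold_out = t_cold_in + q_entu / c_cold
dt1 = t_hot_in - t_cold_out
dt2 = t_hot_out - t_cold_in
q_lmtd = ua * (dt1 - dt2) / math.log(dt1 / dt2)

print(abs(q_entu - q_lmtd) < 1e-6 * q_entu)  # the two methods agree
```

The paper's point is the converse direction: having derived NTU formulas for the U-shaped geometry, the authors confirm them by checking that ε-NTU and LMTD give matching heat transfer coefficients.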

  18. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  19. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those who want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  20. Approximate Modified Policy Iteration

    CERN Document Server

    Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu

    2012-01-01

    Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in a sense subsumes) the analyses of the other presented AMPI algorithms. An interesting observation is that the MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...

  1. Approximate calculation of integrals

    CERN Document Server

    Krylov, V I

    2006-01-01

    A systematic introduction to the principal ideas and results of the contemporary theory of approximate integration, this volume approaches its subject from the viewpoint of functional analysis. In addition, it offers a useful reference for practical computations. Its primary focus lies in the problem of approximate integration of functions of a single variable, rather than the more difficult problem of approximate integration of functions of more than one variable.The three-part treatment begins with concepts and theorems encountered in the theory of quadrature. The second part is devoted to t

  2. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  3. Approximating Stationary Statistical Properties

    Institute of Scientific and Technical Information of China (English)

    Xiaoming WANG

    2009-01-01

    It is well known that physical laws for large chaotic dynamical systems are revealed statistically. Many times these statistical properties of the system must be approximated numerically. The main contribution of this manuscript is to provide simple and natural criteria on numerical methods (temporal and spatial discretization) that are able to capture the stationary statistical properties of the underlying dissipative chaotic dynamical systems asymptotically. The result on temporal approximation is a recent finding of the author, and the result on spatial approximation is a new one. Applications to the infinite Prandtl number model for convection and the barotropic quasi-geostrophic model are also discussed.

  4. Approximation of irrationals

    Directory of Open Access Journals (Sweden)

    Malvina Baica

    1985-01-01

    Full Text Available The author uses a new modification of the Jacobi-Perron Algorithm which holds for complex fields of any degree (abbr. ACF), and defines it as the Generalized Euclidean Algorithm (abbr. GEA) to approximate irrationals.

  5. Approximations in Inspection Planning

    DEFF Research Database (Denmark)

    Engelund, S.; Sørensen, John Dalsgaard; Faber, M. H.

    2000-01-01

    Planning of inspections of civil engineering structures may be performed within the framework of Bayesian decision analysis. The effort involved in a full Bayesian decision analysis is relatively large. Therefore, the actual inspection planning is usually performed using a number of approximations. One of the more important of these approximations is the assumption that all inspections will reveal no defects. Using this approximation the optimal inspection plan may be determined on the basis of conditional probabilities, i.e. the probability of failure given no defects have been found by the inspection. In this paper the quality of this approximation is investigated. The inspection planning is formulated both as a full Bayesian decision problem and on the basis of the assumption that the inspection will reveal no defects.

  6. The Karlqvist approximation revisited

    CERN Document Server

    Tannous, C

    2015-01-01

    The Karlqvist approximation, signaling the historical beginning of magnetic recording head theory, is reviewed and compared to various approaches, progressing from Green's function and Fourier methods to conformal mapping, which obeys the Sommerfeld edge condition at angular points and leads to exact results.

  8. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  9. Approximation Behooves Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel; Poulsen, Rolf

    2013-01-01

    Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.

  10. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.

  11. Approximation and supposition

    Directory of Open Access Journals (Sweden)

    Maksim Duškin

    2015-11-01

    This article compares exponents of approximation (expressions like Russian около, примерно, приблизительно, более, свыше) and words expressing supposition (for example Russian скорее всего, наверное, возможно). These words are often confused in research; in particular, researchers often cite exponents of supposition as examples of approximation. Such an approach raises some objections. The author intends to demonstrate in this article the notional difference between approximation and supposition, and hence the difference between the exponents of these two notions. This difference can be described by specifying the different attitudes of approximation and supposition toward the notion of knowledge. Supposition implies the speaker's ignorance of the exact number, while approximation does not imply such ignorance. The article offers examples proving this point of view.

  12. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
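    Schematically, an estimator of this kind combines one expensive exact evaluation with many cheap approximate ones, and the bias of the approximation cancels in expectation. The toy below is my illustration of that cancellation with a deliberately biased cheap observable, not the authors' lattice-QCD setup.

```python
import numpy as np

# Toy illustration of approximation averaging: `exact` is the expensive
# observable, `cheap` a biased approximation. The estimator
#   est = (exact - cheap)(one source) + mean of cheap over many sources
# removes the bias of `cheap` while paying for only one exact evaluation.
rng = np.random.default_rng(0)

def exact(x):
    return np.sin(x)

def cheap(x):                 # approximation with a constant bias
    return np.sin(x) - 0.05

ama, naive = [], []
for _ in range(500):          # 500 independent "configurations"
    xs = rng.normal(size=200)                     # sources on this config
    correction = exact(xs[0]) - cheap(xs[0])      # one exact solve
    ama.append(correction + cheap(xs).mean())     # bias-corrected average
    naive.append(cheap(xs).mean())                # biased cheap average

# E[sin(Z)] = 0 for Z ~ N(0,1): ama is centered there, naive is not.
```

In the real method the cheap observable is the relaxed-stopping-condition solve and the averaging runs over symmetry transformations of the lattice.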

  13. Monotone Boolean approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
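    The best monotone bounds mentioned above have a simple closed form on {0,1}^n: the tightest monotone increasing lower bound of f takes the minimum of f over all points above x, and the tightest upper bound takes the maximum over all points below x. A brute-force sketch (illustrative only; the report's algorithms work on Boolean formulas rather than enumerating all 2^n points):

```python
from itertools import product

def monotone_bounds(f, n):
    """Tightest monotone increasing lower/upper bounds of a Boolean f:
      lower(x) = min{ f(y) : y >= x }   (still below f, now monotone)
      upper(x) = max{ f(y) : y <= x }   (still above f, now monotone)"""
    pts = list(product((0, 1), repeat=n))
    leq = lambda u, v: all(a <= b for a, b in zip(u, v))
    lower = {x: min(f(y) for y in pts if leq(x, y)) for x in pts}
    upper = {x: max(f(y) for y in pts if leq(y, x)) for x in pts}
    return lower, upper

# XOR is a classic non-monotone (noncoherent) structure function.
lower, upper = monotone_bounds(lambda x: x[0] ^ x[1], 2)
```

For XOR the bounds collapse to the constant 0 below and the OR function above, bracketing the noncoherent function as the report describes.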

  14. On Convex Quadratic Approximation

    NARCIS (Netherlands)

    den Hertog, D.; de Klerk, E.; Roos, J.

    2000-01-01

    In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of

  15. Local spline approximants

    OpenAIRE

    Norton, Andrew H.

    1991-01-01

    Local spline approximants offer a means for constructing finite difference formulae for numerical solution of PDEs. These formulae seem particularly well suited to situations in which the use of conventional formulae leads to non-linear computational instability of the time integration. This is explained in terms of frequency responses of the FDF.

  17. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  18. Shared values and normality

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wen-hua; PANG Xue-cheng

    2006-01-01

    This paper investigates the relationship between normality and shared values for meromorphic functions on the unit disc △. Based on Marty's normality criterion and through a detailed analysis of the meromorphic functions, it is shown that if for every f ∈ F, f and f^(k) share a and b on △ and the zeros of f(z) − a are of multiplicity k ≥ 3, then F is normal on △, where F is a family of meromorphic functions on the unit disc △, and a and b are distinct values.

  19. Topology, calculus and approximation

    CERN Document Server

    Komornik, Vilmos

    2017-01-01

    Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...

  20. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  1. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  2. Topics in Metric Approximation

    Science.gov (United States)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

  3. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Normalized Atmospheric Deposition for 2002, Total Inorganic Nitrogen

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set represents the average normalized atmospheric (wet) deposition, in kilograms, of Total Inorganic Nitrogen for the year 2002 compiled for every...

  4. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Normalized Atmospheric Deposition for 2002, Nitrate (NO3)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set represents the average normalized atmospheric (wet) deposition, in kilograms, of Nitrate (NO3) for the year 2002 compiled for every catchment of...

  5. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Normalized Atmospheric Deposition for 2002, Ammonium (NH4)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set represents the average normalized atmospheric (wet) deposition, in kilograms, of Ammonium (NH4) for the year 2002 compiled for every catchment of...

  6. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Normalized Atmospheric Deposition for 2002, Ammonium (NH4)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the average normalized (wet) deposition, in kilograms per square kilometer multiplied by 100, of ammonium (NH4) for the year 2002...

  7. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Normalized Atmospheric Deposition for 2002, Total Inorganic Nitrogen

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the average normalized atmospheric (wet) deposition, in kilograms per square kilometer multiplied by 100, of Total Inorganic...

  8. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Normalized Atmospheric Deposition for 2002, Nitrate (NO3)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the average normalized (wet) deposition, in kilograms per square kilometer multiplied by 100, of Nitrate (NO3) for the year 2002...

  9. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  10. Approximate option pricing

    Energy Technology Data Exchange (ETDEWEB)

    Chalasani, P.; Saias, I. [Los Alamos National Lab., NM (United States); Jha, S. [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods, with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
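    For context, the binomial model referred to above can be sketched in a few lines. The textbook Cox–Ross–Rubinstein parameterization below prices an n-period American put by backward induction with an early-exercise check; it illustrates the model, not the authors' approximation algorithms, and the parameter values are illustrative.

```python
import math

def american_put_binomial(S0, K, r, sigma, T, n):
    """n-period Cox-Ross-Rubinstein binomial tree for an American put
    (textbook sketch; conventional parameter names)."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))          # up factor
    d = 1.0 / u                                  # down factor
    p = (math.exp(r * dt) - d) / (u - d)         # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs: j up-moves out of n
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction, comparing continuation with immediate exercise
    for step in range(n - 1, -1, -1):
        values = [
            max(max(K - S0 * u**j * d**(step - j), 0.0),
                disc * (p * values[j + 1] + (1 - p) * values[j]))
            for j in range(step + 1)
        ]
    return values[0]

price = american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
```

Each backward step costs O(step), so the whole tree is O(n²); the paper's point is that path-dependent payoffs break this simple recursion because the node value no longer summarizes the path.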

  11. On-line EM algorithm for the normalized gaussian network.

    Science.gov (United States)

    Sato, M; Ishii, S

    2000-02-01

    A normalized gaussian network (NGnet) (Moody & Darken, 1989) is a network of local linear regression units. The model softly partitions the input space by normalized gaussian functions, and each local unit linearly approximates the output within the partition. In this article, we propose a new on-line EM algorithm for the NGnet, which is derived from the batch EM algorithm (Xu, Jordan, & Hinton, 1995) by introducing a discount factor. We show that the on-line EM algorithm is equivalent to the batch EM algorithm if a specific scheduling of the discount factor is employed. In addition, we show that the on-line EM algorithm can be considered as a stochastic approximation method to find the maximum likelihood estimator. A new regularization method is proposed in order to deal with a singular input distribution. In order to manage dynamic environments, where the input-output distribution of data changes over time, unit manipulation mechanisms such as unit production, unit deletion, and unit division are also introduced based on a probabilistic interpretation. Experimental results show that our approach is suitable for function approximation problems in dynamic environments. We also apply our on-line EM algorithm to robot dynamics problems and compare our algorithm with the mixtures-of-experts family.
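    The soft partition described above can be illustrated with a minimal forward pass. This is a schematic with scalar outputs and isotropic widths; the variable names are mine, and the real NGnet also learns full covariances via EM.

```python
import numpy as np

def ngnet_predict(x, centers, widths, W, b):
    """Normalized Gaussian network, scalar-output forward pass
    (illustrative sketch): unit i fires g_i(x) = N_i(x) / sum_j N_j(x)
    and contributes its local linear model W[i] @ x + b[i] with that
    weight."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    act = np.exp(-d2 / (2.0 * widths ** 2))      # Gaussian activations
    g = act / act.sum()                          # soft partition of unity
    local = W @ x + b                            # per-unit linear predictions
    return g @ local

# Two units approximating y = |x| on the line: one per half-axis.
centers = np.array([[-1.0], [1.0]])
widths = np.array([0.5, 0.5])
W = np.array([[-1.0], [1.0]])                    # slopes -1 and +1
b = np.zeros(2)
pred = ngnet_predict(np.array([2.0]), centers, widths, W, b)
```

Far from a boundary between units, one partition weight dominates and the output follows that unit's linear model, which is why two units suffice for the absolute-value toy.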

  12. Finite elements and approximation

    CERN Document Server

    Zienkiewicz, O C

    2006-01-01

    A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o

  13. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive, or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
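    The simplest member of the ABC class is rejection sampling: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands within a tolerance of the observed summary, never evaluating a likelihood. The toy below (a normal mean with a flat prior; the setup and names are my illustration) sketches this.

```python
import numpy as np

def abc_rejection(s_obs, prior, simulate, summary, eps, n, rng):
    """ABC rejection sampler (illustrative sketch): accept theta when
    the summary of data simulated under theta is within eps of the
    observed summary; no likelihood evaluation is needed."""
    kept = []
    for _ in range(n):
        theta = prior(rng)
        if abs(summary(simulate(theta, rng)) - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)            # "observed" data
post = abc_rejection(
    s_obs=data.mean(),
    prior=lambda r: r.uniform(-5, 5),            # flat prior on the mean
    simulate=lambda th, r: r.normal(th, 1.0, size=100),
    summary=np.mean,
    eps=0.1, n=20000, rng=rng,
)
```

The accepted draws approximate the posterior; shrinking `eps` improves the approximation at the cost of a lower acceptance rate, which is exactly the trade-off the abstract's caveat about assessing approximations refers to.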

  14. Diophantine approximations and Diophantine equations

    CERN Document Server

    Schmidt, Wolfgang M

    1991-01-01

    "This book by a leading researcher and masterly expositor of the subject studies diophantine approximations to algebraic numbers and their applications to diophantine equations. The methods are classical, and the results stressed can be obtained without much background in algebraic geometry. In particular, Thue equations, norm form equations and S-unit equations, with emphasis on recent explicit bounds on the number of solutions, are included. The book will be useful for graduate students and researchers." (L'Enseignement Mathematique) "The rich Bibliography includes more than hundred references. The book is easy to read, it may be a useful piece of reading not only for experts but for students as well." Acta Scientiarum Mathematicarum

  15. Approximate equivalence in von Neumann algebras

    Institute of Scientific and Technical Information of China (English)

    DING Huiru; Don Hadwin

    2005-01-01

    One formulation of D. Voiculescu's theorem on approximate unitary equivalence is that two unital representations π and ρ of a separable C*-algebra are approximately unitarily equivalent if and only if rank ∘ π = rank ∘ ρ. We study the analog when the ranges of π and ρ are contained in a von Neumann algebra R, the unitaries inducing the approximate equivalence must come from R, and "rank" is replaced with "R-rank" (defined as the Murray-von Neumann equivalence of the range projection).

  16. Approximate strip exchanging.

    Science.gov (United States)

    Roy, Swapnoneel; Thakur, Ashok Kumar

    2008-01-01

    Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such primitive, the strip exchange: a strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips. The strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss an application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.

  17. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...... projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points...... in the projection within a tolerance given by the reference curve, and the rulings are lines perpendicular to the projection plane. Application of the method in ship design is given....

  18. S-Approximation: A New Approach to Algebraic Approximation

    Directory of Open Access Journals (Sweden)

    M. R. Hooshmandasl

    2014-01-01

    We intend to study a new class of algebraic approximations, called S-approximations, and their properties. We show that S-approximations can be used for applied problems which cannot be modeled by inclusion-based approximations. Also, in this work, we study a subclass of S-approximations, called Sℳ-approximations, and show that this subclass preserves most of the properties of inclusion-based approximations but is not necessarily inclusion-based. The paper concludes by studying some basic operations on S-approximations and counting the number of S-min functions.

  19. Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks

    OpenAIRE

    Montufar, Guido

    2015-01-01

    We establish upper bounds for the minimal number of hidden units for which a binary stochastic feedforward network with sigmoid activation probabilities and a single hidden layer is a universal approximator of Markov kernels. We show that each possible probabilistic assignment of the states of $n$ output units, given the states of $k\\geq1$ input units, can be approximated arbitrarily well by a network with $2^{k-1}(2^{n-1}-1)$ hidden units.

  20. Normalized information distance

    NARCIS (Netherlands)

    Vitányi, P.M.B.; Balbach, F.J.; Cilibrasi, R.L.; Li, M.; Emmert-Streib, F.; Dehmer, M.

    2009-01-01

    The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string
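    A standard way to make the distance practical, as the abstract notes, is to replace Kolmogorov complexity with the length of a real compressor's output, giving the normalized compression distance NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch using zlib (the choice of compressor and the test strings are illustrative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance (illustrative sketch):
    approximate the uncomputable Kolmogorov complexity C(.) by
    compressed length under zlib."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

english = b"the quick brown fox jumps over the lazy dog " * 20
similar = b"the quick brown fox jumped over the lazy dogs " * 20
noise = bytes([(i * 197 + 89) % 256 for i in range(1000)])
```

Similar objects compress well together, so their NCD is small; unrelated objects gain nothing from concatenation, pushing the NCD toward 1. Real compressors are imperfect, so values can slightly exceed 1.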

  1. Normalized information distance

    NARCIS (Netherlands)

    Vitányi, P.M.B.; Balbach, F.J.; Cilibrasi, R.L.; Li, M.

    2008-01-01

    The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string

  2. Ratios of Normal Variables

    Directory of Open Access Journals (Sweden)

    George Marsaglia

    2006-05-01

    This article extends and amplifies on results from a paper of over forty years ago. It provides software for evaluating the density and distribution functions of the ratio z/w for any two jointly normal variates z, w, and provides details on methods for transforming a general ratio z/w into a standard form, (a+x)/(b+y), with x and y independent standard normal and a, b non-negative constants. It discusses handling general ratios when, in theory, none of the moments exist yet practical considerations suggest there should be approximations whose adequacy can be verified by means of the included software. These approximations show that many of the ratios of normal variates encountered in practice can themselves be taken as normally distributed. A practical rule is developed: If a < 2.256 and 4 < b then the ratio (a+x)/(b+y) is itself approximately normally distributed with mean μ = a/(1.01b − .2713) and variance σ² = (a² + 1)/(b² + .108b − 3.795) − μ².
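    The practical rule is easy to check by simulation. The sketch below compares the rule's mean and variance against one million sampled ratios; the parameter values a = 1.5, b = 6 are my illustrative choice within the rule's stated range.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.5, 6.0                      # within the rule: a < 2.256, 4 < b
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)
r = (a + x) / (b + y)                # ratio in the standard form

mu = a / (1.01 * b - 0.2713)                           # rule's mean
var = (a**2 + 1) / (b**2 + 0.108 * b - 3.795) - mu**2  # rule's variance
```

With b this far from zero the denominator almost never comes near 0, so the empirical mean and variance of `r` sit close to the rule's `mu` and `var`.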

  3. Ratios of Normal Variables

    Directory of Open Access Journals (Sweden)

    George Marsaglia

    2006-05-01

    This article extends and amplifies on results from a paper of over forty years ago. It provides software for evaluating the density and distribution functions of the ratio z/w for any two jointly normal variates z, w, and provides details on methods for transforming a general ratio z/w into a standard form, (a+x)/(b+y), with x and y independent standard normal and a, b non-negative constants. It discusses handling general ratios when, in theory, none of the moments exist yet practical considerations suggest there should be approximations whose adequacy can be verified by means of the included software. These approximations show that many of the ratios of normal variates encountered in practice can themselves be taken as normally distributed. A practical rule is developed: If a < 2.256 and 4 < b then the ratio (a+x)/(b+y) is itself approximately normally distributed with mean μ = a/(1.01b − .2713) and variance σ² = (a² + 1)/(b² + .108b − 3.795) − μ².

  4. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2012-05-01

    Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle, and thus derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms retain limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities, and a further expansion over reflection angle, requiring the source and receiver velocities to differ, yields singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.

  5. Polynomial approximation and cubature at approximate Fekete and Leja points of the cylinder

    CERN Document Server

    De Marchi, Stefano

    2011-01-01

    The paper deals with polynomial interpolation, least-square approximation and cubature of functions defined on the rectangular cylinder, $K=D\\times [-1,1]$, with $D$ the unit disk. The nodes used for these processes are the {\\it Approximate Fekete Points} (AFP) and the {\\it Discrete Leja Points} (DLP) extracted from suitable {\\it Weakly Admissible Meshes} (WAMs) of the cylinder. From the analysis of the growth of the Lebesgue constants, approximation and cubature errors, we show that the AFP and the DLP extracted from WAM are good points for polynomial approximation and numerical integration of functions defined on the cylinder.

  6. Approximation methods in gravitational-radiation theory

    Science.gov (United States)

    Will, C. M.

    1986-02-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  7. Operators of Approximations and Approximate Power Set Spaces

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xian-yong; MO Zhi-wen; SHU Lan

    2004-01-01

    Boundary inner and outer operators are introduced, and the union, intersection, and complement operators of approximations are redefined. The approximation operators have the good property of preserving union, intersection, and complement operators, so rough set theory is enriched from both the operator-oriented and set-oriented views. Approximate power set spaces are defined, and it is proved that the approximation operators are epimorphisms from the power set space to approximate power set spaces. Some basic properties of approximate power set spaces are obtained via these epimorphisms, in contrast to the power set space.
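    The lower and upper approximation operators underlying this construction are simple to state: given a partition of the universe into indiscernibility blocks, the lower approximation of X unions the blocks contained in X, and the upper approximation unions the blocks meeting X. A minimal sketch of the classical operators (illustrative, not the paper's operator algebra on power set spaces):

```python
def rough_approximations(blocks, X):
    """Classical rough-set operators for a partition into blocks:
    lower = union of blocks entirely inside X (certainly in X),
    upper = union of blocks intersecting X (possibly in X)."""
    X = set(X)
    lower, upper = set(), set()
    for b in map(set, blocks):
        if b <= X:
            lower |= b
        if b & X:
            upper |= b
    return lower, upper

universe = {1, 2, 3, 4, 5}
blocks = [{1, 2}, {3, 4}, {5}]       # indiscernibility classes
lower, upper = rough_approximations(blocks, {1, 2, 3})
```

The difference upper − lower is the boundary region, and the duality lower(X) = U \ upper(U \ X) is one of the complement properties the paper's redefined operators preserve.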

  8. A systematic sequence of relativistic approximations.

    Science.gov (United States)

    Dyall, Kenneth G

    2002-06-01

    An approach to the development of a systematic sequence of relativistic approximations is reviewed. The approach depends on the atomically localized nature of relativistic effects, and is based on the normalized elimination of the small component in the matrix modified Dirac equation. Errors in the approximations are assessed relative to four-component Dirac-Hartree-Fock calculations or other reference points. Projection onto the positive energy states of the isolated atoms provides an approximation in which the energy-dependent parts of the matrices can be evaluated in separate atomic calculations and implemented in terms of two sets of contraction coefficients. The errors in this approximation are extremely small, of the order of 0.001 pm in bond lengths and tens of microhartrees in absolute energies. From this approximation it is possible to partition the atoms into relativistic and nonrelativistic groups and to treat the latter with the standard operators of nonrelativistic quantum mechanics. This partitioning is shared with the relativistic effective core potential approximation. For atoms in the second period, errors in the approximation are of the order of a few hundredths of a picometer in bond lengths and less than 1 kJ mol(-1) in dissociation energies; for atoms in the third period, errors are a few tenths of a picometer and a few kilojoule/mole, respectively. A third approximation for scalar relativistic effects replaces the relativistic two-electron integrals with the nonrelativistic integrals evaluated with the atomic Foldy-Wouthuysen coefficients as contraction coefficients. It is similar to the Douglas-Kroll-Hess approximation, and is accurate to about 0.1 pm and a few tenths of a kilojoule/mole. The integrals in all the approximations are no more complicated than the integrals in the full relativistic methods, and their derivatives are correspondingly easy to formulate and evaluate.

  9. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
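As a hedged illustration of the rank-structured idea, here is only the matrix (order-2 tensor) special case: a best rank-1 approximation computed by alternating power iteration. The hierarchical tensor formats and statistical learning methods of the talk are far more general; this sketch shows just the basic low-rank building block.

```python
def rank1_approx(A, iters=50):
    """Approximate A (a list of rows) by the outer product u v^T of its
    dominant singular pair, found by alternating power iteration."""
    n, m = len(A), len(A[0])
    v = [1.0] * m
    for _ in range(iters):
        # u <- A v, normalized to unit length
        u = [sum(A[i][j] * v[j] for j in range(m)) for i in range(n)]
        norm_u = sum(x * x for x in u) ** 0.5
        u = [x / norm_u for x in u]
        # v <- A^T u (its length carries the singular value)
        v = [sum(A[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * v[j] for j in range(m)] for i in range(n)]

# A rank-1 matrix is reproduced exactly (up to floating point):
A = [[3.0, 4.0, 5.0], [6.0, 8.0, 10.0]]
print(rank1_approx(A))
```

Higher ranks, and ultimately hierarchical formats, are built by recursing on the residual or on matricizations of a tensor.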

  10. Rational offset approximation of rational Bézier curves

    Institute of Scientific and Technical Information of China (English)

    CHENG Min; WANG Guo-jin

    2006-01-01

    The problem of parametric speed approximation of a rational curve is raised in this paper. Offset curves are widely used in various applications. As for the reason that in most cases the offset curves do not preserve the same polynomial or rational polynomial representations, it arouses difficulty in applications. Thus approximation methods have been introduced to solve this problem. In this paper, it has been pointed out that the crux of offset curve approximation lies in the approximation of parametric speed. Based on the Jacobi polynomial approximation theory with endpoints interpolation, an algebraic rational approximation algorithm of offset curve, which preserves the direction of normal, is presented.

  11. Nonlinear Approximation Using Gaussian Kernels

    CERN Document Server

    Hangelbroek, Thomas

    2009-01-01

It is well known that nonlinear approximation has an advantage over linear schemes in the sense that it provides comparable approximation rates to those of the linear schemes, but for a larger class of approximands. This was established for spline approximations and for wavelet approximations, and more recently for homogeneous radial basis function (surface spline) approximations. However, no such results are known for the Gaussian function. The crux of the difficulty lies in the necessity of varying the tension parameter in the Gaussian function spatially according to local information about the approximand: error analysis of Gaussian approximation schemes with varying tension is, by and large, an elusive target for approximators. In this paper we introduce and analyze a new algorithm for approximating functions using translates of Gaussian functions with varying tension parameters. Our scheme is sophisticated in that it employs Gaussians whose tensions vary even locally, and it resolves local ...

  12. Comparison of Muscle Activities Using a Pressure Biofeedback Unit during Abdominal Muscle Training Performed by Normal Adults in the Standing and Supine Positions.

    Science.gov (United States)

    Jung, Da-Eun; Kim, Kyoung; Lee, Su-Kyoung

    2014-02-01

    [Purpose] The purpose of this study was to assess the effects of draw-in exercise on abdominal muscle activity in the standing and supine positions. [Methods] Twenty healthy women participated in this study. The subjects were required to complete two draw-in exercises (standing and supine positions) using a biofeedback pressure unit. The root mean square (RMS) values of the EMG data were expressed as a percentage of the resting contraction. The data were analyzed using the independent t-test. [Results] According to the changes in the activities of the abdominal muscles, the draw-in exercise in the standing position produced the most significant increase in the activities of the rectus abdominis, the transverse abdominis, the internal oblique, and the external oblique muscles. [Conclusion] The activities of the trunk stability muscles (rectus abdominis, transverse abdominis, internal oblique, and external oblique) increased more in the standing than in the supine position, enabling the subjects to overcome gravity. Therefore, to strengthen the activation of the abdominal muscles, a standing position seems to be more effective than a supine position for draw-in exercises.

  13. The StreamCat Dataset: Accumulated Attributes for NHDPlusV2 (Version 2.1) Catchments for the Conterminous United States: PRISM Normals Data

    Science.gov (United States)

    This dataset represents climate observations within individual, local NHDPlusV2 catchments and upstream, contributing watersheds. Attributes of the landscape layer were calculated for every local NHDPlusV2 catchment and accumulated to provide watershed-level metrics. (See Supplementary Info for Glossary of Terms) PRISM is a set of monthly, yearly, and single-event gridded data products of mean temperature and precipitation, max/min temperatures, and dewpoints, primarily for the United States. In-situ point measurements are ingested into the PRISM (Parameter elevation Regression on Independent Slopes Model) statistical mapping system. The PRISM products use a weighted regression scheme to account for complex climate regimes associated with orography, rain shadows, temperature inversions, slope aspect, coastal proximity, and other factors. (see Data Sources for links to NHDPlusV2 data and USGS Data) These data are summarized by local catchment and by watershed to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description).

  14. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  15. Approximations of fractional Brownian motion

    CERN Document Server

    Li, Yuqiang; 10.3150/10-BEJ319

    2012-01-01

    Approximations of fractional Brownian motion using Poisson processes whose parameter sets have the same dimensions as the approximated processes have been studied in the literature. In this paper, a special approximation to the one-parameter fractional Brownian motion is constructed using a two-parameter Poisson process. The proof involves the tightness and identification of finite-dimensional distributions.

  16. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  17. Nonlinear Stochastic PDEs: Analysis and Approximations

    Science.gov (United States)

    2016-05-23

Publications reported include: "Distribution-Free Skorokhod-Malliavin Calculus", Stochastic and Partial Differential Equations: Analysis and Computations (06 2016): 319; Z. Zhang, X. Wang, Boris Rozovskii, "The Wick-Malliavin Approximation on Elliptic Problems with Log-Normal Random Coefficients", SIAM J. Scientific Computing (10 2013): 2370; Z. Zhang, M.V. Tretyakov, B. Rozovskii, G.E. Karniadakis, "A Recursive Sparse Grid Collocation Method for Differential ..."

  18. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  19. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  20. On Born approximation in black hole scattering

    Energy Technology Data Exchange (ETDEWEB)

    Batic, D. [University of West Indies, Department of Mathematics, Kingston (Jamaica); Kelkar, N.G.; Nowakowski, M. [Universidad de los Andes, Departamento de Fisica, Bogota (Colombia)

    2011-12-15

A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordstroem and Reissner-Nordstroem-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes. (orig.)

  1. Normalized Information Distance

    CERN Document Server

    Vitanyi, Paul M B; Cilibrasi, Rudi L; Li, Ming

    2008-01-01

The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string representation. Second, for names and abstract concepts, page count statistics from the World Wide Web can be used. These practical realizations of the normalized information distance can then be applied to machine learning tasks, especially clustering, to perform feature-free and parameter-free data mining. This chapter discusses the theoretical foundations of the normalized information distance and both practical realizations. It presents numerous examples of successful real-world applications based on these distance measures, ranging from bioinformatics to music clustering to machine translation.
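The compression-based realization described in this abstract can be sketched directly: a real compressor (here zlib) stands in for the uncomputable Kolmogorov complexity, giving the normalized compression distance (NCD). The sample strings below are illustrative, not from the chapter.

```python
import zlib

def C(b: bytes) -> int:
    """Compressed length as a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar objects,
    near 1 for unrelated ones."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox leaps over the lazy cat " * 20
c = bytes(range(256)) * 4
print(ncd(a, b) < ncd(a, c))  # similar texts score closer than unrelated bytes
```

Feature-free clustering then amounts to building the pairwise NCD matrix and feeding it to any distance-based clustering method.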

  2. Approximation algorithm for multiprocessor parallel job scheduling

    Institute of Scientific and Technical Information of China (English)

    陈松乔; 黄金贵; 陈建二

    2002-01-01

Pk|fix|Cmax is a new scheduling problem based on multiprocessor parallel jobs, and it is proved to be NP-hard when k≥3. This paper focuses on the case k=3. Some new observations and new techniques for the P3|fix|Cmax problem are offered. The concept of semi-normal schedules is introduced, and a very simple linear-time algorithm, the Semi-normal Algorithm, for constructing semi-normal schedules is developed. Using the method of classical Graham list scheduling, a thorough analysis of the optimal schedule on a special instance is provided, which shows that the algorithm is an approximation algorithm with ratio 9/8 for any instance of the P3|fix|Cmax problem, improving the previous best ratio of 7/6 due to M.X. Goemans.
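For orientation, classical Graham list scheduling, on which the analysis above builds, can be sketched for the simpler P||Cmax setting. This is not the paper's Semi-normal Algorithm, and it ignores the fixed processor-set constraints of P3|fix|Cmax; the job lengths are illustrative.

```python
import heapq

def list_schedule(jobs, m):
    """Graham list scheduling for P||Cmax: give each job, in list order,
    to the currently least-loaded machine. Returns (makespan, assignment)."""
    loads = [(0, i) for i in range(m)]  # (current load, machine id)
    heapq.heapify(loads)
    assignment = []
    for p in jobs:
        load, i = heapq.heappop(loads)
        assignment.append(i)
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads), assignment

makespan, _ = list_schedule([2, 3, 4, 6, 2, 2], 3)
print(makespan)  # 8 for this instance, while the optimum is 7
```

The gap on this instance (8 versus 7) illustrates why the classical bound for list scheduling is a ratio of 2 - 1/m, and why tighter problem-specific analyses such as the 9/8 bound above require more structure.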

  3. RECENT PROGRESS ON SPHERICAL HARMONIC APPROXIMATION MADE BY BNU RESEARCH GROUP-In memory of Professor Sun Yongsheng

    Institute of Scientific and Technical Information of China (English)

    Kunyang Wang; Feng Dai

    2007-01-01

As early as 1990, Professor Sun Yongsheng suggested that his students at Beijing Normal University consider research problems on the unit sphere. Under his guidance and encouragement his students began research on spherical harmonic analysis and approximation. In this paper, we give a partial overview of the main achievements in this area obtained by our group and related researchers during the recent five years (2001-2005). Keywords: Fourier-Laplace series; smoothness and K-functionals; Kolmogorov and linear widths.

  4. BDD Minimization for Approximate Computing

    OpenAIRE

    Soeken, Mathias; Grosse, Daniel; Chandrasekharan, Arun; Drechsler, Rolf

    2016-01-01

    We present Approximate BDD Minimization (ABM) as a problem that has application in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation but meets a threshold w.r.t. an error metric. We present operators to derive approximated functions and present algorithms to exactly compute the error metrics directly on the BDD representation. An experimental evaluation demonstrates the app...

  5. Tree wavelet approximations with applications

    Institute of Scientific and Technical Information of China (English)

    XU Yuesheng; ZOU Qingsong

    2005-01-01

    We construct a tree wavelet approximation by using a constructive greedy scheme(CGS). We define a function class which contains the functions whose piecewise polynomial approximations generated by the CGS have a prescribed global convergence rate and establish embedding properties of this class. We provide sufficient conditions on a tree index set and on bi-orthogonal wavelet bases which ensure optimal order of convergence for the wavelet approximations encoded on the tree index set using the bi-orthogonal wavelet bases. We then show that if we use the tree index set associated with the partition generated by the CGS to encode a wavelet approximation, it gives optimal order of convergence.

  6. Diophantine approximation and automorphic spectrum

    CERN Document Server

    Ghosh, Anish; Nevo, Amos

    2010-01-01

The present paper establishes quantitative estimates on the rate of Diophantine approximation in homogeneous varieties of semisimple algebraic groups. The estimates established generalize and improve previous ones, and are sharp in a number of cases. We show that the rate of Diophantine approximation is controlled by the spectrum of the automorphic representation, and is thus subject to the generalised Ramanujan conjectures.

  7. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

the basic concepts on which the papers build. Among other things, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered...

  8. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...

  9. Uniform approximation by (quantum) polynomials

    NARCIS (Netherlands)

    Drucker, A.; de Wolf, R.

    2011-01-01

    We show that quantum algorithms can be used to re-prove a classical theorem in approximation theory, Jackson's Theorem, which gives a nearly-optimal quantitative version of Weierstrass's Theorem on uniform approximation of continuous functions by polynomials. We provide two proofs, based respectivel

  10. Global approximation of convex functions

    CERN Document Server

    Azagra, D

    2011-01-01

We show that for every (not necessarily bounded) open convex subset $U$ of $\R^n$, every (not necessarily Lipschitz or strongly) convex function $f:U\to\R$ can be approximated by real analytic convex functions, uniformly on all of $U$. In doing so we provide a technique which transfers results on uniform approximation on bounded sets to results on uniform approximation on unbounded sets, in such a way that not only convexity and $C^k$ smoothness, but also local Lipschitz constants, minimizers, order, and strict or strong convexity, are preserved. This transfer method is quite general and it can also be used to obtain new results on approximation of convex functions defined on Riemannian manifolds or Banach spaces. We also provide a characterization of the class of convex functions which can be uniformly approximated on $\R^n$ by strongly convex functions.

  11. Approximate circuits for increased reliability

    Energy Technology Data Exchange (ETDEWEB)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
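The voting scheme described here can be sketched as a bitwise majority function. The three "approximate circuits" below are stand-in bit vectors, not from the patent; each is wrong in a different position, so the voted output still matches the reference circuit on every input.

```python
from collections import Counter

def voter(outputs):
    """Bitwise majority vote over the output vectors of several circuits."""
    return [Counter(bits).most_common(1)[0][0] for bits in zip(*outputs)]

# Hypothetical reference truth table and three approximate variants,
# each deviating from the reference on a different input:
reference = [0, 1, 1, 0, 1]
circuit_a = [0, 1, 1, 0, 0]   # wrong in the last position
circuit_b = [0, 1, 0, 0, 1]   # wrong in the third position
circuit_c = [1, 1, 1, 0, 1]   # wrong in the first position
print(voter([circuit_a, circuit_b, circuit_c]) == reference)  # True: every bit recovered
```

The design trade-off the patent exploits is that each approximate circuit can be smaller or more reliable than the reference, as long as no input makes a majority of them wrong simultaneously.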

  12. Approximate circuits for increased reliability

    Energy Technology Data Exchange (ETDEWEB)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  13. Cylindrical Helix Spline Approximation of Spatial Curves

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

In this paper, we present a new method for approximating spatial curves by a G1 cylindrical helix spline within a prescribed tolerance. We deduce the general formulation of a cylindrical helix, which has 11 degrees of freedom; this means that 11 constraints are needed to determine a cylindrical helix. Given a spatial parametric curve segment, including the start point and the end point of the segment and the tangent and the principal normal at the start point, we can always find a cylindrical helix segment that interpolates the given direction and position vectors. In order to approximate the given parametric curve within the prescribed tolerance, we adopt a step-by-step trial method. First, we ensure that the helix segment interpolates the given two end points and matches the principal normal and tangent at the start point; then we keep the deviation between the cylindrical helix segment and the given curve segment within the prescribed tolerance everywhere. After the first segment has been formed, we construct the next segment. Proceeding in this way, we construct a G1 cylindrical helix spline approximating the whole spatial parametric curve within the prescribed tolerance. Several examples are given to show the efficiency of this method.

  14. Weighted Approximation for Jackson-Matsuoka Polynomials on the Sphere

    Directory of Open Access Journals (Sweden)

    Guo Feng

    2012-01-01

We consider the best approximation by Jackson-Matsuoka polynomials in the weighted L^p space on the unit sphere of R^d. Using the relation between K-functionals and the modulus of smoothness on the sphere, we obtain direct and inverse estimates of approximation by these polynomials for the h-spherical harmonics.

  15. Approximate Preservers on Banach Algebras and C*-Algebras

    Directory of Open Access Journals (Sweden)

    M. Burgos

    2013-01-01

The aim of the present paper is to give approximate versions of Hua’s theorem and other related results for Banach algebras and C*-algebras. We also study linear maps approximately preserving the conorm between unital C*-algebras.

  16. Flow past a porous approximate spherical shell

    Science.gov (United States)

    Srinivasacharya, D.

    2007-07-01

In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equations. The flow within the porous annulus region of the shell is governed by Darcy’s law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers and Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained, and the drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.
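For reference, the Beavers and Joseph slip condition mentioned in the abstract is commonly written in the following standard form (the paper's exact notation may differ):

```latex
\left.\frac{\partial u_f}{\partial y}\right|_{\text{interface}}
  = \frac{\alpha}{\sqrt{K}}\,\bigl(u_f - u_D\bigr),
```

where $u_f$ is the tangential velocity of the free fluid at the interface, $u_D$ is the Darcy (seepage) velocity in the porous medium, $K$ is the permeability, and $\alpha$ is a dimensionless slip coefficient depending on the structure of the porous surface.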

  17. Rytov approximation in electron scattering

    Science.gov (United States)

    Krehl, Jonas; Lubk, Axel

    2017-06-01

    In this work we introduce the Rytov approximation in the scope of high-energy electron scattering with the motivation of developing better linear models for electron scattering. Such linear models play an important role in tomography and similar reconstruction techniques. Conventional linear models, such as the phase grating approximation, have reached their limits in current and foreseeable applications, most importantly in achieving three-dimensional atomic resolution using electron holographic tomography. The Rytov approximation incorporates propagation effects which are the most pressing limitation of conventional models. While predominately used in the weak-scattering regime of light microscopy, we show that the Rytov approximation can give reasonable results in the inherently strong-scattering regime of transmission electron microscopy.
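The contrast between a Born-type linear model and the Rytov approximation can be summarized in standard scattering-theory notation (not taken from the paper itself): Born corrects the field additively, while Rytov corrects the complex phase, which is what lets it capture propagation effects.

```latex
% Born: additive correction to the incident field u_0,
% with Green's function G and scattering potential V
u \approx u_0 + u_1, \qquad
u_1(\mathbf r) = \int G(\mathbf r,\mathbf r')\,V(\mathbf r')\,u_0(\mathbf r')\,d\mathbf r'.

% Rytov: multiplicative (phase) correction; to first order the
% Rytov phase is the Born correction divided by the incident field
u = e^{\psi}, \qquad \psi \approx \psi_0 + \psi_1, \qquad
\psi_1 = \frac{u_1}{u_0}.
```

The relation $\psi_1 = u_1/u_0$ is why the two approximations agree in the weak-scattering limit but diverge when accumulated phase becomes large, as in thick specimens in transmission electron microscopy.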

  18. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  19. Approximate common divisors via lattices

    CERN Document Server

    Cohn, Henry

    2011-01-01

    We analyze the multivariate generalization of Howgrave-Graham's algorithm for the approximate common divisor problem. In the m-variable case with modulus N and approximate common divisor of size N^beta, this improves the size of the error tolerated from N^(beta^2) to N^(beta^((m+1)/m)), under a commonly used heuristic assumption. This gives a more detailed analysis of the hardness assumption underlying the recent fully homomorphic cryptosystem of van Dijk, Gentry, Halevi, and Vaikuntanathan. While these results do not challenge the suggested parameters, a 2^sqrt(n) approximation algorithm for lattice basis reduction in n dimensions could be used to break these parameters. We have implemented our algorithm, and it performs better in practice than the theoretical analysis suggests. Our results fit into a broader context of analogies between cryptanalysis and coding theory. The multivariate approximate common divisor problem is the number-theoretic analogue of noisy multivariate polynomial interpolation, and we ...

  20. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  1. Binary nucleation beyond capillarity approximation

    NARCIS (Netherlands)

    Kalikmanov, V.I.

    2010-01-01

    Large discrepancies between binary classical nucleation theory (BCNT) and experiments result from adsorption effects and inability of BCNT, based on the phenomenological capillarity approximation, to treat small clusters. We propose a model aimed at eliminating both of these deficiencies. Adsorption

  2. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form $w^n P_n$. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics in some $L^p$ extremal problems on the real line with exponential weights, which, for the case $p=2$, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some $L^p$ extremal problems with varying weights. Applications are given relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials, and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  3. Nonlinear approximation with redundant dictionaries

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, M.; Gribonval, R.

    2005-01-01

In this paper we study nonlinear approximation and data representation with redundant function dictionaries. In particular, approximation with redundant wavelet bi-frame systems is studied in detail. Several results for orthonormal wavelets are generalized to the redundant case. In general, for a wavelet bi-frame system the approximation properties are limited by the number of vanishing moments of the system. In some cases this can be overcome by oversampling, but at a price of replacing the canonical expansion by another linear expansion. Moreover, for special non-oversampled wavelet bi-frames we can obtain good approximation properties not restricted by the number of vanishing moments, but again without using the canonical expansion.

  4. Mathematical algorithms for approximate reasoning

    Science.gov (United States)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contain a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. A group of mathematically rigorous algorithms for approximate reasoning are focused on that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
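Of the techniques surveyed above, certainty factors admit the most compact sketch. The combination rule below is the standard MYCIN-style formula for merging two rules' conclusions about the same hypothesis; the sample values are illustrative, not from the paper.

```python
def cf_combine(cf1, cf2):
    """Combine two certainty factors in [-1, 1] for the same hypothesis
    (MYCIN-style): confirming evidence reinforces, disconfirming evidence
    reinforces negatively, and mixed evidence partially cancels."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(cf_combine(0.6, 0.5))   # two confirming rules reinforce: 0.8
print(cf_combine(0.6, -0.4))  # mixed evidence: 0.2 / 0.6 = 0.333...
```

Note the rule is commutative and keeps results inside [-1, 1], which is exactly the kind of algebraic property the paper argues a mathematically rigorous environment should verify rather than assume.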

  5. Improved Approximations for Multiprocessor Scheduling Under Uncertainty

    CERN Document Server

    Crutchfield, Christopher; Fineman, Jeremy T; Karger, David R; Scott, Jacob

    2008-01-01

    This paper presents improved approximation algorithms for the problem of multiprocessor scheduling under uncertainty, or SUU, in which the execution of each job may fail probabilistically. This problem is motivated by the increasing use of distributed computing to handle large, computationally intensive tasks. In the SUU problem we are given n unit-length jobs and m machines, a directed acyclic graph G of precedence constraints among jobs, and unrelated failure probabilities q_{ij} for each job j when executed on machine i for a single timestep. Our goal is to find a schedule that minimizes the expected makespan, which is the expected time at which all jobs complete. Lin and Rajaraman gave the first approximations for this NP-hard problem for the special cases of independent jobs, precedence constraints forming disjoint chains, and precedence constraints forming trees. In this paper, we present asymptotically better approximation algorithms. In particular, we give an O(loglog min(m,n))-approximation for indep...

  6. Approximate Analysis of Production Systems

    NARCIS (Netherlands)

    M.B.M. de Koster (René)

    1988-01-01

    In this paper complex production systems are studied where a single product is manufactured and where each production unit stores its output in at most one buffer and receives its input from at most one buffer. The production units and the buffers may be connected nearly arbitrarily. The

  8. Twisted inhomogeneous Diophantine approximation and badly approximable sets

    CERN Document Server

    Harrap, Stephen

    2010-01-01

    For any real pair i, j ≥ 0 with i + j = 1, let Bad(i, j) denote the set of (i, j)-badly approximable pairs. That is, Bad(i, j) consists of irrational vectors x := (x_1, x_2) in R^2 for which there exists a positive constant c(x) such that max{||q x_1||^(1/i), ||q x_2||^(1/j)} > c(x)/q for all q in N. Building on a result of Kurzweil, a new characterization of the set Bad(i, j) in terms of `well-approximable' vectors in the area of `twisted' inhomogeneous Diophantine approximation is established. In addition, it is shown that Bad^x(i, j), the `twisted' inhomogeneous analogue of Bad(i, j), has full Hausdorff dimension 2 when x is chosen from the set Bad(i, j).
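    As a quick numeric illustration of the defining inequality (my own sketch, not from the paper), the following computes the smallest value of q * max(||q x_1||^(1/i), ||q x_2||^(1/j)) over a finite range of q; for an (i, j)-badly approximable pair this quantity stays bounded away from zero as the range grows. The example pair (√2 - 1, √3 - 1) is purely illustrative; membership in Bad(1/2, 1/2) is not claimed.

```python
from math import sqrt

def dist_to_int(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

def bad_constant(x1, x2, i, j, Q=20000):
    """Smallest value of q * max(||q*x1||^(1/i), ||q*x2||^(1/j)) over
    1 <= q <= Q. A pair is (i, j)-badly approximable exactly when this
    quantity stays bounded away from zero as Q grows."""
    return min(q * max(dist_to_int(q * x1) ** (1.0 / i),
                       dist_to_int(q * x2) ** (1.0 / j))
               for q in range(1, Q + 1))
```

    For i = j = 1/2 the exponents become 2, matching the symmetric case of the definition above.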

  9. Approximately isometric lifting in quasidiagonal extensions

    Institute of Scientific and Technical Information of China (English)

    FANG XiaoChun; ZHAO YiLe

    2009-01-01

    Let 0→I→A→A/I→0 be a short exact sequence of C*-algebras with A unital. Suppose that the extension 0→I→A→A/I→0 is quasidiagonal. Then it is shown that any positive element (projection, partial isometry, unitary element, respectively) in A/I has a lifting with the same form which commutes with some quasicentral approximate unit of I consisting of projections. Furthermore, it is shown that for any given positive number ε, two positive elements (projections, …). As an application, it is shown that for any positive number ε and ū in U(A/I)₀, there exists u in U(A)₀ which is a lifting of ū such that cel(u) < cel(ū) + ε.

  10. Reinforcement Learning via AIXI Approximation

    CERN Document Server

    Veness, Joel; Hutter, Marcus; Silver, David

    2010-01-01

    This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains.

  11. Approximate Matching of Hierarchical Data

    DEFF Research Database (Denmark)

    Augsten, Nikolaus

    The goal of this thesis is to design, develop, and evaluate new methods for the approximate matching of hierarchical data represented as labeled trees. In approximate matching scenarios two items should be matched if they are similar. Computing the similarity between labeled trees is hard … as in addition to the data values also the structure must be considered. A well-known measure for comparing trees is the tree edit distance. It is computationally expensive and leads to a prohibitively high run time. Our solution for the approximate matching of hierarchical data is pq-grams. The pq… We formally prove that the pq-gram index can be incrementally updated based on the log of edit operations without reconstructing intermediate tree versions. The incremental update is independent of the data size and scales to a large number of changes in the data. We introduce windowed pq…

  12. Concept Approximation between Fuzzy Ontologies

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Fuzzy ontologies are efficient tools to handle fuzzy and uncertain knowledge on the semantic web; but there are heterogeneity problems when gaining interoperability among different fuzzy ontologies. This paper uses concept approximation between fuzzy ontologies based on instances to solve the heterogeneity problems. It first proposes an instance selection technology based on instance clustering and weighting to unify the fuzzy interpretation of different ontologies and reduce the number of instances to increase efficiency. The paper then reduces the problem of computing the approximations of concepts to the problem of computing the least upper approximations of atom concepts. It optimizes the search strategies by extending atom concept sets and defining the least upper bounds of concepts to reduce the search space of the problem. An efficient algorithm for searching the least upper bounds of concepts is given.

  13. Approximating Graphic TSP by Matchings

    CERN Document Server

    Mömke, Tobias

    2011-01-01

    We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.

  14. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  15. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  16. Approximate Sparse Regularized Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Chengzhi Deng

    2014-01-01

    Sparse regression based unmixing has recently been proposed to estimate the abundance of materials present in hyperspectral image pixels. In this paper, a novel sparse unmixing optimization model based on approximate sparsity, namely approximate sparse unmixing (ASU), is first proposed to perform the unmixing task for hyperspectral remote sensing imagery. Then, a variable splitting and augmented Lagrangian algorithm is introduced to tackle the optimization problem. In ASU, approximate sparsity is used as a regularizer for sparse unmixing; it is sparser than the l1 regularizer and much easier to solve than the l0 regularizer. Three simulated and one real hyperspectral image were used to evaluate the performance of the proposed algorithm in comparison to the l1 regularizer. Experimental results demonstrate that the proposed algorithm is more effective and accurate for hyperspectral unmixing than the state-of-the-art l1 regularizer.
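    The l1-regularized baseline that ASU is compared against can be sketched with a standard iterative soft-thresholding (ISTA) loop. This is a generic illustration of sparse unmixing, not the paper's variable-splitting and augmented Lagrangian algorithm, and all names in it are mine.

```python
import numpy as np

def ista_unmix(A, y, lam=1e-3, iters=3000):
    """Nonnegative l1 unmixing, min_x 0.5*||Ax - y||^2 + lam*||x||_1 with
    x >= 0, by iterative soft-thresholding (ISTA). A is the endmember
    library (bands x materials), y a pixel spectrum. Generic baseline
    sketch, not the paper's ASU algorithm."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                       # gradient of smooth part
        x = np.maximum(0.0, x - step * g - step * lam)  # nonneg. shrinkage
    return x
```

    With a random 30x5 library and true abundances (0.7, 0, 0.3, 0, 0), the loop recovers the two active materials up to the small bias introduced by the l1 term.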

  17. Transfinite Approximation of Hindman's Theorem

    CERN Document Server

    Beiglböck, Mathias

    2010-01-01

    Hindman's Theorem states that in any finite coloring of the integers, there is an infinite set all of whose finite sums belong to the same color. This is much stronger than the corresponding finite form, stating that in any finite coloring of the integers there are arbitrarily long finite sets with the same property. We extend the finite form of Hindman's Theorem to a "transfinite" version for each countable ordinal, and show that Hindman's Theorem is equivalent to the appropriate transfinite approximation holding for every countable ordinal. We then give a proof of Hindman's Theorem by directly proving these transfinite approximations.

  19. Normality concerning shared values

    Institute of Scientific and Technical Information of China (English)

    CHANG JianMing

    2009-01-01

    Let F be a family of meromorphic functions in a plane domain D, and let a and b be finite non-zero complex values such that a/b ∈ N \ {1}. If for every f ∈ F, f(z) = a ⇒ f'(z) = a and f'(z) = b ⇒ f''(z) = b, then F is normal. We also construct a non-normal family F of meromorphic functions in the unit disk Δ = {|z| < 1} such that for every f ∈ F, f(z) = m + 1 ⇒ f'(z) = m + 1 and f'(z) = 1 ⇒ f''(z) = 1 in Δ, where m is a given positive integer. This answers Problem 5.1 in the works of Gu, Pang and Fang.

  20. Tree wavelet approximations with applications

    Institute of Scientific and Technical Information of China (English)

    2005-01-01


  1. On polyhedral approximations in an n-dimensional space

    Science.gov (United States)

    Balashov, M. V.

    2016-10-01

    The polyhedral approximation of a positively homogeneous (and, in general, nonconvex) function on a unit sphere is investigated. Such a function is presupporting (i.e., its convex hull is the supporting function) for a convex compact subset of R^n. The considered polyhedral approximation of this function provides a polyhedral approximation of this convex compact set. The best possible estimate for the error of the considered approximation is obtained in terms of the modulus of uniform continuous subdifferentiability in the class of a priori grids of given step in the Hausdorff metric.
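    The construction can be illustrated for a convex body whose support function is known in closed form. The sketch below is my own (the ellipse and the uniform direction grid are assumptions, not from the paper): it samples the support function on a grid of unit directions, which defines the outer polyhedral approximation by supporting half-planes, and for the unit circle the Hausdorff error of that outer polygon has the closed form 1/cos(π/N) - 1, decaying quadratically in the grid step.

```python
import math

def support_ellipse(a, b, ux, uy):
    """Support function h(u) = max_{x in E} <x, u> of the ellipse
    x^2/a^2 + y^2/b^2 <= 1 at the unit direction u = (ux, uy)."""
    return math.sqrt(a * a * ux * ux + b * b * uy * uy)

def polyhedral_values(a, b, N):
    """Sample h on a uniform grid of N unit directions; the supporting
    half-planes <x, u_k> <= h(u_k) cut out the outer polyhedral
    approximation of the ellipse."""
    dirs = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
            for k in range(N)]
    return dirs, [support_ellipse(a, b, ux, uy) for ux, uy in dirs]

def outer_error_circle(N):
    """Hausdorff error of the outer N-gon around the unit circle: its
    vertices sit at distance 1/cos(pi/N) from the center."""
    return 1.0 / math.cos(math.pi / N) - 1.0
```

    Doubling the number of grid directions roughly quarters the error, consistent with the quadratic dependence on the grid step.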

  2. WKB Approximation in Noncommutative Gravity

    Directory of Open Access Journals (Sweden)

    Maja Buric

    2007-12-01

    We consider the quasi-commutative approximation to a noncommutative geometry defined as a generalization of the moving frame formalism. The relation which exists between noncommutativity and geometry is used to study the properties of the high-frequency waves on the flat background.

  3. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Abstract. Background: Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results: It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n^2 - n)/2) for n haplotypes, yet not approximable within (1 - ε) ln(n/2) for any ε > 0 unless NP ⊆ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m - p + 1)(n^2 - n)/2) ≤ O(m(n^2 - n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion: The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
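    The ln-type upper bound comes from viewing tagging as a set-cover problem: each SNP "covers" the haplotype pairs it distinguishes. A minimal greedy sketch under that interpretation (my own illustration, not the authors' implementation):

```python
from itertools import combinations

def greedy_tag_snps(haplotypes):
    """Greedy tagging-SNP selection: repeatedly pick the SNP that
    distinguishes the most still-unresolved haplotype pairs, until every
    pair of distinct haplotypes differs on some chosen SNP. haplotypes is
    a list of equal-length strings over {'0', '1'}."""
    n, m = len(haplotypes), len(haplotypes[0])
    uncovered = {(i, j) for i, j in combinations(range(n), 2)
                 if haplotypes[i] != haplotypes[j]}
    chosen = []
    while uncovered:
        best = max(range(m), key=lambda s: sum(
            haplotypes[i][s] != haplotypes[j][s] for i, j in uncovered))
        chosen.append(best)
        uncovered = {(i, j) for i, j in uncovered
                     if haplotypes[i][best] == haplotypes[j][best]}
    return chosen
```

    The greedy choice of the most-separating SNP at each step is exactly what yields the 1 + ln((n^2 - n)/2) guarantee, since there are at most (n^2 - n)/2 pairs to cover.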

  4. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsikas, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...

  5. Approximate Reasoning with Fuzzy Booleans

    NARCIS (Netherlands)

    Broek, van den P.M.; Noppen, J.A.R.

    2004-01-01

    This paper introduces, in analogy to the concept of fuzzy numbers, the concept of fuzzy booleans, and examines approximate reasoning with the compositional rule of inference using fuzzy booleans. It is shown that each set of fuzzy rules is equivalent to a set of fuzzy rules with singleton crisp ante

  6. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt

    2013-01-01

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  7. On badly approximable complex numbers

    DEFF Research Database (Denmark)

    Esdahl-Schou, Rune; Kristensen, S.

    We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...

  8. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
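    The reduction described above, multiplying through by the denominator so that the interval conditions become linear in the coefficients, can be sketched as a feasibility problem. This is an illustrative simplification: it uses a plain zero-objective LP (via scipy.optimize.linprog, assumed available) with the normalization q(x_i) ≥ 1, so it returns some intersecting rational function rather than the unique one singled out by the paper's strictly convex quadratic program.

```python
import numpy as np
from scipy.optimize import linprog   # assumed available

def rational_through_intervals(x, lo, hi, dp=1, dq=1):
    """Find a rational function p/q (deg p = dp, deg q = dq) with
    lo[i] <= p(x[i])/q(x[i]) <= hi[i] for all i. Multiplying through by q,
    normalized so that q(x[i]) >= 1 > 0, makes every constraint linear in
    the coefficients; a zero-objective LP then returns any feasible (p, q)."""
    x = np.asarray(x, dtype=float)
    Vp = np.vander(x, dp + 1)   # rows [x^dp, ..., 1] for p's coefficients
    Vq = np.vander(x, dq + 1)   # rows for q's coefficients
    k = (dp + 1) + (dq + 1)
    rows, rhs = [], []
    for i in range(len(x)):
        rows.append(np.concatenate([Vp[i], -hi[i] * Vq[i]]))     # p <= hi*q
        rhs.append(0.0)
        rows.append(np.concatenate([-Vp[i], lo[i] * Vq[i]]))     # lo*q <= p
        rhs.append(0.0)
        rows.append(np.concatenate([np.zeros(dp + 1), -Vq[i]]))  # q >= 1
        rhs.append(-1.0)
    res = linprog(np.zeros(k), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * k, method="highs")
    if not res.success:
        return None
    return res.x[:dp + 1], res.x[dp + 1:]
```

    For three points with intervals around the line y = x + 1, any returned p/q passes through all three intervals.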

  9. Approximation on the complex sphere

    OpenAIRE

    Alsaud, Huda; Kushpel, Alexander; Levesley, Jeremy

    2012-01-01

    We develop new elements of harmonic analysis on the complex sphere on the basis of which Bernstein's, Jackson's and Kolmogorov's inequalities are established. We apply these results to get order sharp estimates of $m$-term approximations. The results obtained is a synthesis of new results on classical orthogonal polynomials, harmonic analysis on manifolds and geometric properties of Euclidean spaces.

  11. Pythagorean Approximations and Continued Fractions

    Science.gov (United States)

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…

  12. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  13. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  14. Low Rank Approximation in $G_0W_0$ Approximation

    CERN Document Server

    Shao, Meiyue; Yang, Chao; Liu, Fang; da Jornada, Felipe H; Deslippe, Jack; Louie, Steven G

    2016-01-01

    The single particle energies obtained in a Kohn--Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The $G_0W_0$ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function ($G_0$) and a screened Coulomb interaction ($W_0$) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating $W_0$ at multiple frequencies. In this paper, we discuss how the cos...

  15. Micropilot Plant for the Study of Steam-Cracking Feedstocks. Example of a Mixture of Normal Paraffins

    Directory of Open Access Journals (Sweden)

    Billaud F.

    2006-11-01

    The thermal decomposition of a mixture of normal paraffins (trademark Solpar, by British Petroleum) has been studied in a micropilot plant in a temperature range of 640 to 820°C. The main products, determined by gas chromatography, are hydrogen, methane, ethylene, propene, 1-butene, 1-pentene, 1-hexene, 1-heptene, 1-octene and 1-nonene. One of the important results of the present work is the mechanistic description of heavy hydrocarbon pyrolysis, so that the primary formation of these principal products can be interpreted. Moreover, the advantage of using high-temperature steam cracking and a short residence time for the selective production of light olefins, thus minimizing the production of aromatics, is experimentally demonstrated.

  16. Discussion on the Initiatives that the Geological Exploration Unit Should Take for Its Growth and Development under the "New Normal" Economy

    Institute of Scientific and Technical Information of China (English)

    宫瑞峰; 孙延宗; 宫正

    2015-01-01

    Under the new normal economy, the ten-year golden development period of the geological exploration industry has ended, and profits have decreased in the short term. However, in the medium and long term, the new normal economy provides an opportunity for the geological exploration industry to reform and build a modern enterprise system. At the same time, the downturn in the international mining market offers good opportunities for Chinese enterprises abroad. Furthermore, traditional resource-based geology is changing its pattern, and green mining has become a new growth point. On this basis, this paper argues that in order to grow and develop, geological exploration units should have a keen awareness of the new normal and further enhance risk awareness; strengthen top-level design and overall planning to promote the modernization of geological work; carry out cross-sectoral operation, with emphasis on the financial sector; and accelerate scientific and technological innovation while upgrading the technical qualifications of workers and staff. Through hedging, they can protect profits when mineral prices fall. In addition, they should set up a development fund to provide protection for major investment projects, and set up assistance and rescue funds to address the concerns of unit staff.

  17. APPROXIMATE MODELS FOR FLOOD ROUTING

    African Journals Online (AJOL)

    For rapid calculation of the downstream effects of the propagation of floods ... kinematic model and a nonlinear convection-diffusion model are extracted ... immensely to the development of this area of study. ... Journal of Science and Technology, volume 23, no. 1, 2003 ... of change of flow rate RD (ratio of final normal flow.

  18. Approximate Inference for Wireless Communications

    DEFF Research Database (Denmark)

    Hansen, Morten

    This thesis investigates signal processing techniques for wireless communication receivers. The aim is to improve the performance or reduce the computational complexity of these, where the primary focus area is cellular systems such as Global System for Mobile communications (GSM) (and extensions) … complexity can potentially lead to limited power consumption, which translates into longer battery lifetime in the handsets. The scope of the thesis is more specifically to investigate approximate (near-optimal) detection methods that can reduce the computational complexity significantly compared … to the optimal one, which usually requires an unacceptably high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum…

  19. Hydrogen Beyond the Classic Approximation

    CERN Document Server

    Scivetti, I

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e., we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.

  20. Validity of the eikonal approximation

    CERN Document Server

    Kabat, D

    1992-01-01

    We summarize results on the reliability of the eikonal approximation in obtaining the high energy behavior of a two particle forward scattering amplitude. Reliability depends on the spin of the exchanged field. For scalar fields the eikonal fails at eighth order in perturbation theory, when it misses the leading behavior of the exchange-type diagrams. In a vector theory the eikonal gets the exchange diagrams correctly, but fails by ignoring certain non-exchange graphs which dominate the asymptotic behavior of the full amplitude. For spin-2 tensor fields the eikonal captures the leading behavior of each order in perturbation theory, but the sum of eikonal terms is subdominant to graphs neglected by the approximation. We also comment on the eikonal for Yang-Mills vector exchange, where the additional complexities of the non-abelian theory may be absorbed into Regge-type modifications of the gauge boson propagators.

  1. Approximate Privacy: Foundations and Quantification

    CERN Document Server

    Feigenbaum, Joan; Schapira, Michael

    2009-01-01

    Increasing use of computers and networks in business, government, recreation, and almost all aspects of daily life has led to a proliferation of online sensitive data about individuals and organizations. Consequently, concern about the privacy of these data has become a top priority, particularly those data that are created and used in electronic commerce. There have been many formulations of privacy and, unfortunately, many negative results about the feasibility of maintaining privacy of sensitive data in realistic networked environments. We formulate communication-complexity-based definitions, both worst-case and average-case, of a problem's privacy-approximation ratio. We use our definitions to investigate the extent to which approximate privacy is achievable in two standard problems: the second-price Vickrey auction and the millionaires problem of Yao. For both the second-price Vickrey auction and the millionaires problem, we show that not only is perfect privacy impossible or infeasibly costly to achieve...

  3. Approximate Counting of Graphical Realizations.

    Science.gov (United States)

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem: besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
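The sampler referred to above is the classic degree-preserving double edge swap chain. As a rough illustration only (this is not the authors' construction, which also handles forbidden edges and the directed and bipartite cases; function names are ours), a minimal sketch:

```python
import random
from collections import Counter

def degree_sequence(edges):
    """Degree of every vertex in an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def double_edge_swap_chain(edges, n_steps=300, seed=7):
    """Run a degree-preserving double edge swap chain on a simple graph.

    Each step picks two distinct edges {u,v}, {x,y} and proposes rewiring
    them to {u,x}, {v,y} (or {u,y}, {v,x}); proposals that would create a
    loop or a multi-edge are rejected, so every visited state is a simple
    realization of the same degree sequence.
    """
    rng = random.Random(seed)
    state = {frozenset(e) for e in edges}
    for _ in range(n_steps):
        e1, e2 = rng.sample(sorted(state, key=sorted), 2)
        u, v = sorted(e1)
        x, y = sorted(e2)
        new1, new2 = (frozenset((u, x)), frozenset((v, y))) \
            if rng.random() < 0.5 else (frozenset((u, y)), frozenset((v, x)))
        if len(new1) < 2 or len(new2) < 2:      # would create a loop
            continue
        if new1 in state or new2 in state:      # would create a multi-edge
            continue
        state -= {e1, e2}
        state |= {new1, new2}
    return [tuple(sorted(e)) for e in state]
```

Every state of the chain has the same degree sequence as the input, which is the invariant the rapid-mixing results are about.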

  4. Approximate Counting of Graphical Realizations.

    Directory of Open Access Journals (Sweden)

    Péter L Erdős

    Full Text Available In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem: besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.

  5. Many Faces of Boussinesq Approximations

    CERN Document Server

    Vladimirov, Vladimir A

    2016-01-01

    The \\emph{equations of Boussinesq approximation} (EBA) for an incompressible fluid with inhomogeneous density are analyzed from the viewpoint of asymptotic theory. A systematic scaling shows that there is an infinite number of related asymptotic models. We have divided them into three classes: `poor', `reasonable' and `good' Boussinesq approximations. Each model can be characterized by two parameters $q$ and $k$, where $q =1, 2, 3, \\dots$ and $k=0, \\pm 1, \\pm 2,\\dots$. Parameter $q$ is related to the `quality' of approximation, while $k$ gives us an infinite set of possible scales of velocity, time, viscosity, \\emph{etc.} Increasing $q$ improves the quality of a model, but narrows the limits of its applicability. Parameter $k$ allows us to vary the scales of time, velocity and viscosity and gives us the possibility to consider any initial and boundary conditions. In general, we discover and classify a rich variety of possibilities and restrictions, which are hidden behind the routine use of the Boussinesq...

  6. SU-C-BRC-01: A Monte Carlo Study of Out-Of-Field Doses From Cobalt-60 Teletherapy Units Intended for Historical Correlations of Dose to Normal Tissue

    Energy Technology Data Exchange (ETDEWEB)

    Petroccia, H [University of Florida, Gainesville, FL (United States); Olguin, E [Gainesville, FL (United States); Culberson, W [University of Wisconsin Madison, Madison, WI (United States); Bednarz, B [University of Wisconsin, Madison, WI (United States); Mendenhall, N [UF Health Proton Therapy Institute, Jacksonville, FL (United States); Bolch, W [University Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: Innovations in radiotherapy treatments, such as dynamic IMRT, VMAT, and SBRT/SRS, result in larger proportions of low-dose regions where normal tissues are exposed to low dose levels. Low doses of radiation have been linked to secondary cancers and cardiac toxicities. AAPM Task Group No. 158, entitled 'Measurements and Calculations of Doses outside the Treatment Volume from External-Beam Radiation Therapy', has been formed to review the dosimetry of non-target and out-of-field exposures using experimental and computational approaches. Studies on historical patients can provide comprehensive information about secondary effects from out-of-field doses when combined with long-term patient follow-up, thus providing significant insight into projecting future outcomes of patients undergoing modern-day treatments. Methods: We present a Monte Carlo model of a Theratron-1000 cobalt-60 teletherapy unit, which historically treated patients at the University of Florida, as a means of determining doses outside the primary beam. Experimental data for a similar Theratron-1000 were obtained at the University of Wisconsin's ADCL to benchmark the model for out-of-field dosimetry. An Exradin A12 ion chamber and TLD100 chips were used to measure doses in an extended water phantom to 60 cm outside the primary field at 5 and 10 cm depths. Results: Comparison between simulated and experimental measurements of PDDs and lateral profiles shows good agreement for in-field and out-of-field doses. At 10 cm away from the edge of a 6×6, 10×10, and 20×20 cm2 field, relative out-of-field doses were measured in the range of 0.5% to 3% of the dose measured at 5 cm depth along the CAX. Conclusion: Out-of-field doses can be as high as 90 to 180 cGy assuming historical prescription doses of 30 to 60 Gy and should be considered when correlating late effects with normal tissue dose.

  7. Approximation of conditional densities by smooth mixtures of regressions

    CERN Document Server

    Norets, Andriy

    2010-01-01

    This paper shows that large nonparametric classes of conditional multivariate densities can be approximated in the Kullback--Leibler distance by different specifications of finite mixtures of normal regressions in which normal means and variances and mixing probabilities can depend on variables in the conditioning set (covariates). These models are a special case of models known as "mixtures of experts" in statistics and computer science literature. Flexible specifications include models in which only mixing probabilities, modeled by multinomial logit, depend on the covariates and, in the univariate case, models in which only means of the mixed normals depend flexibly on the covariates. Modeling the variance of the mixed normals by flexible functions of the covariates can weaken restrictions on the class of the approximable densities. Obtained results can be generalized to mixtures of general location scale densities. Rates of convergence and easy to interpret bounds are also obtained for different model spec...

  8. Rollout Sampling Approximate Policy Iteration

    CERN Document Server

    Dimitrakakis, Christos

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem of evaluating a policy through simulation as a multi-armed bandit problem. The resulting algorithm offers performance comparable to the previous algorithm, achieved, however, with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
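The bandit view of rollout allocation can be illustrated with UCB1, a standard bandit rule (not necessarily the exact variant used in the paper). In this sketch the "reward" of a pull is a deterministic stand-in for one simulated rollout return, which keeps the example reproducible; names and values are ours:

```python
import math

def ucb1_rollout_allocation(action_values, budget=2000):
    """Allocate a rollout budget across candidate actions with UCB1.

    Each 'pull' stands in for one simulated rollout of an action. UCB1
    plays the arm maximizing  empirical mean + sqrt(2 ln t / n_i), so the
    budget concentrates on promising actions while weaker ones still get
    logarithmically many evaluations.
    """
    k = len(action_values)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, budget + 1):
        if t <= k:
            arm = t - 1                     # initialization: play each arm once
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = action_values[arm]         # deterministic stand-in for a rollout
        counts[arm] += 1
        sums[arm] += reward
    return counts
```

With values [0.2, 0.8] and a budget of 2000, almost all pulls go to the better action; the weaker one is sampled only often enough to keep its confidence bound below the leader.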

  9. Approximate Deconvolution Reduced Order Modeling

    CERN Document Server

    Xie, Xuping; Wang, Zhu; Iliescu, Traian

    2015-01-01

    This paper proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (10^{-3}).

  10. Approximation for Bayesian Ability Estimation.

    Science.gov (United States)

    1987-02-18

    [The abstract is garbled in the source. Recoverable fragments indicate that the marginal posterior pdfs of the ability and item parameters are expressed as integrals of the joint posterior, and that, under regularity conditions, the marginal posterior pdf of the ability parameter is approximated using the inverse Hessian of the log posterior. Recoverable citations: Tsutakawa and Lin; a paper on two-way contingency tables, Journal of Educational Statistics, 11, 33-56; Lindley, D.V. (1980), Approximate Bayesian methods, Trabajos de Estadística, 31.]

  11. Plasma Physics Approximations in Ares

    Energy Technology Data Exchange (ETDEWEB)

    Managan, R. A.

    2015-01-08

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_n(μ/θ), the chemical potential, μ or ζ = ln(1+e^{μ/θ}), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^{-μ/θ})F_{1/2}(μ/θ), F'_{1/2}/F_{1/2}, F_{cα}, and F_{cβ}. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
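The Fermi-Dirac integrals behind these fits can of course be evaluated directly by quadrature, which is exactly what is too expensive in a hydro-code inner loop and motivates cheap rational fits. A minimal sketch (our own normalization F_n(η) = ∫₀^∞ t^n/(1+e^{t−η}) dt, with ad hoc quadrature parameters), checked against the two limits the fits are designed to preserve:

```python
import math

def fermi_dirac_half(eta, t_max=60.0, n=20000):
    """F_{1/2}(eta) = integral from 0 to infinity of sqrt(t)/(1 + exp(t - eta)) dt,
    evaluated with a plain trapezoid rule on a truncated interval."""
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0     # trapezoid end-point weights
        total += w * math.sqrt(t) / (1.0 + math.exp(t - eta))
    return h * total

# Limiting behavior the fits must reproduce exactly:
#   non-degenerate (eta -> -inf): F_{1/2}(eta) -> Gamma(3/2) * e^eta
#   degenerate     (eta -> +inf): F_{1/2}(eta) -> (2/3) * eta^(3/2)
nondeg_ratio = fermi_dirac_half(-5.0) / (math.gamma(1.5) * math.exp(-5.0))
deg_ratio = fermi_dirac_half(20.0) / ((2.0 / 3.0) * 20.0 ** 1.5)
```

Both ratios come out within about a percent of unity, illustrating why preserving those limits anchors a fit that is accurate over many orders of magnitude.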

  12. Rational approximations to fluid properties

    Science.gov (United States)

    Kincaid, J. M.

    1990-05-01

    The purpose of this report is to summarize some results that were presented at the Spring AIChE meeting in Orlando, Florida (20 March 1990). We report on recent attempts to develop a systematic method, based on the technique of rational approximation, for creating mathematical models of real-fluid equations of state and related properties. Equation-of-state models for real fluids are usually created by selecting a function p̃(T,ρ) that contains a set of parameters {γ_i}; the γ_i are chosen such that p̃(T,ρ) provides a good fit to the experimental data. (Here p is the pressure, T the temperature and ρ the density.) In most cases, a nonlinear least-squares numerical method is used to determine the γ_i. There are several drawbacks to this method: one has essentially to guess what p̃(T,ρ) should be; the critical region is seldom fit very well; and nonlinear numerical methods are time consuming and sometimes not very stable. The rational approximation approach we describe may eliminate all of these drawbacks. In particular, it lets the data choose the function p̃(T,ρ), and its numerical implementation involves only linear algorithms.
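The "only linear algorithms" point can be seen by multiplying the candidate rational form through by its denominator: the fit conditions then become linear in the unknown coefficients. A toy sketch with our own example function (exact rational arithmetic just to keep the algebra transparent):

```python
from fractions import Fraction

def solve_linear(A, b):
    """Gauss-Jordan elimination over exact Fractions."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # partial pivot
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def fit_rational(f, xs):
    """Interpolate f by (a0 + a1*x) / (1 + b1*x) through three nodes.

    Multiplying through by the denominator turns the fit conditions into
    the *linear* system  a0 + a1*x - f(x)*b1*x = f(x).
    """
    A = [[Fraction(1), Fraction(x), -f(x) * x] for x in xs]
    rhs = [f(x) for x in xs]
    a0, a1, b1 = solve_linear(A, rhs)
    return lambda x: (a0 + a1 * x) / (1 + b1 * x)

approx = fit_rational(lambda x: Fraction(1, 1 + x), [1, 2, 3])
```

Because the target here happens to be rational of the assumed form, the fit recovers it exactly; for real equation-of-state data one would solve the analogous overdetermined linear least-squares system instead.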

  13. Uniform approximation from symbol calculus on a spherical phase space

    Energy Technology Data Exchange (ETDEWEB)

    Yu Liang, E-mail: liangyu@wigner.berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2011-12-16

    We use symbol correspondence and quantum normal form theory to develop a more general method for finding uniform asymptotic approximations. We then apply this method to derive a result we announced in an earlier paper, namely the uniform approximation of the 6j-symbol in terms of the rotation matrices. The derivation is based on the Stratonovich-Weyl symbol correspondence between matrix operators and functions on a spherical phase space. The resulting approximation depends on a canonical, or area-preserving, map between two pairs of intersecting level sets on the spherical phase space. (paper)

  14. Normal Pressure Hydrocephalus (NPH)

    Science.gov (United States)

    Normal pressure hydrocephalus occurs when excess cerebrospinal ...

  15. SFU-Driven Transparent Approximation Acceleration on GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ang; Song, Shuaiwen; Wijtvliet, Mark; Kumar, Akash; Corporaal, Henk

    2016-06-01

    Approximate computing, the technique that sacrifices a certain amount of accuracy in exchange for a substantial performance boost or power reduction, is one of the most promising solutions to enable power control and performance scaling towards exascale. Although most existing approximation designs target the emerging data-intensive applications that are comparatively more error-tolerant, there is still high demand for the acceleration of traditional scientific applications (e.g., weather and nuclear simulation), which often comprise intensive transcendental function calls and are very sensitive to accuracy loss. To address this challenge, we focus on a very important but often ignored approximation unit on GPUs: the special function unit (SFU).
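A CPU-side caricature of the trade-off such a special function unit makes: replace a transcendental call with a short polynomial, gaining speed at a bounded, measurable accuracy loss. This sketch is ours (degree and interval chosen arbitrarily), not the paper's hardware mechanism:

```python
import math

def fast_exp(x):
    """Degree-4 Taylor polynomial for exp on [0, 1], in Horner form.

    A handful of multiply-adds instead of a full-precision exp call; the
    price is a worst-case absolute error of about 1e-2 near the right
    endpoint of the interval.
    """
    return 1.0 + x * (1.0 + x * (0.5 + x * (1.0 / 6.0 + x * (1.0 / 24.0))))

# Measure the accuracy loss over the target interval.
max_err = max(abs(fast_exp(i / 100.0) - math.exp(i / 100.0)) for i in range(101))
```

The error is nonzero but bounded and known in advance, which is precisely the property an approximate-computing design has to reason about when deciding whether a kernel can tolerate the cheaper unit.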

  16. Dodgson's Rule Approximations and Absurdity

    CERN Document Server

    McCabe-Dansted, John C

    2010-01-01

    With the Dodgson rule, cloning the electorate can change the winner, which Young (1977) considers an "absurdity". Removing this absurdity results in a new rule (Fishburn, 1977) for which we can compute the winner in polynomial time (Rothe et al., 2003), unlike the traditional Dodgson rule. We call this rule DC and introduce two new related rules (DR and D&). Dodgson did not explicitly propose the "Dodgson rule" (Tideman, 1987); we argue that DC and DR are better realizations of the principle behind the Dodgson rule than the traditional rule itself. These rules, especially D&, are also effective approximations to the traditional Dodgson rule. We show that, unlike the rules we have considered previously, the DC, DR and D& scores differ from the Dodgson score by no more than a fixed amount given a fixed number of alternatives, and thus these new rules converge to Dodgson under any reasonable assumption on voter behaviour, including the Impartial Anonymous Culture assumption.

  17. Approximation by double Walsh polynomials

    Directory of Open Access Journals (Sweden)

    Ferenc Móricz

    1992-01-01

    Full Text Available We study the rate of approximation by rectangular partial sums, Cesàro means, and de la Vallée Poussin means of double Walsh-Fourier series of a function in a homogeneous Banach space X. In particular, X may be L^p(I^2), where 1≦p<∞ and I^2=[0,1]×[0,1], or C_W(I^2), the latter being the collection of uniformly W-continuous functions on I^2. We extend the results by Watari, Fine, Yano, Jastrebova, Bljumin, Esfahanizadeh and Siddiqi from univariate to multivariate cases. As by-products, we deduce sufficient conditions for convergence in L^p(I^2)-norm and uniform convergence on I^2 as well as characterizations of Lipschitz classes of functions. At the end, we raise three problems.
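In the discrete setting, the partial-sum idea above amounts to keeping only the first m Walsh coefficients and transforming back. A 1-D sketch using the fast Walsh-Hadamard transform in natural (Hadamard) order rather than the sequency order of the Walsh-Fourier setting; this is an illustration of the mechanism, not the paper's two-dimensional Cesàro/de la Vallée Poussin machinery:

```python
def fwht(a):
    """Fast Walsh-Hadamard transform, natural order, unnormalized.
    The transform is its own inverse up to a factor of len(a)."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def walsh_partial_sum(samples, m):
    """Approximate `samples` (length a power of two) by keeping only its
    first m Walsh-Fourier coefficients."""
    n = len(samples)
    coeffs = [c / n for c in fwht(samples)]   # Walsh coefficients
    truncated = coeffs[:m] + [0.0] * (n - m)
    return fwht(truncated)                    # second fwht inverts (1/n applied above)
```

Keeping all n coefficients reconstructs the samples exactly; keeping only the first coefficient yields the constant function equal to their mean, the crudest partial sum.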

  18. Interplay of approximate planning strategies.

    Science.gov (United States)

    Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

    2015-03-10

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options."

  19. Approximate reduction of dynamical systems

    CERN Document Server

    Tabuada, Paulo; Julius, Agung; Pappas, George J

    2007-01-01

    The reduction of dynamical systems has a rich history, with many important applications related to stability, control and verification. Reduction of nonlinear systems is typically performed in an exact manner, as is the case with mechanical systems with symmetry, which unfortunately limits the type of systems to which it can be applied. The goal of this paper is to consider a more general form of reduction, termed approximate reduction, in order to extend the class of systems that can be reduced. Using notions related to incremental stability, we give conditions on when a dynamical system can be projected to a lower dimensional space while providing hard bounds on the induced errors, i.e., when it is behaviorally similar to a dynamical system on a lower dimensional space. These concepts are illustrated on a series of examples.

  20. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare maximization in this setting. With m being the number of alternatives, we exhibit a randomized truthful-in-expectation ordinal mechanism implementing an outcome whose expected social welfare is at least an Omega(m^{-3/4}) fraction of the social welfare of the socially optimal alternative. On the other hand, we show that for sufficiently many agents and any truthful-in-expectation ordinal mechanism, there is a valuation profile where the mechanism achieves at most an O(m^{-2/3}) fraction of the optimal social welfare in expectation. We get tighter bounds for the natural special case of m = 3...

  1. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points in the projection within a tolerance given by the reference curve, and the rulings are lines perpendicular to the projection plane. An application of the method in ship design is given.

  2. Analytical approximations for spiral waves

    Energy Technology Data Exchange (ETDEWEB)

    Löber, Jakob, E-mail: jakob@physik.tu-berlin.de; Engel, Harald [Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstrasse 36, EW 7-1, 10623 Berlin (Germany)

    2013-12-15

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R{sub 0}. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R{sub +}) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R{sub +} with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  3. On quantum and approximate privacy

    CERN Document Server

    Klauck, H

    2001-01-01

    This paper studies privacy in communication complexity. The focus is on quantum versions of the model and on protocols with only approximate privacy against honest players. We show that the privacy loss (the minimum divulged information) in computing a function can be decreased exponentially by using quantum protocols, while the class of privately computable functions (i.e., those with privacy loss 0) is not increased by quantum protocols. Quantum communication combined with small information leakage on the other hand makes certain functions computable (almost) privately which are not computable using quantum communication without leakage or using classical communication with leakage. We also give an example of an exponential reduction of the communication complexity of a function by allowing a privacy loss of o(1) instead of privacy loss 0.

  4. IONIS: Approximate atomic photoionization intensities

    Science.gov (United States)

    Heinäsmäki, Sami

    2012-02-01

    A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large.
    Program summary
    Program title: IONIS
    Catalogue identifier: AEKK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1149
    No. of bytes in distributed program, including test data, etc.: 12 877
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: Workstations
    Operating system: GNU/Linux, Unix
    Classification: 2.2, 2.5
    Nature of problem: Photoionization intensities for atoms.
    Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2] to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states.
    Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state.
    Running time: Few seconds for a

  5. Approximate analytic solutions to the NPDD: Short exposure approximations

    Science.gov (United States)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  6. Meromorphic approximants to complex Cauchy transforms with polar singularities

    Science.gov (United States)

    Baratchart, Laurent; Yattselev, Maxim L.

    2009-10-01

    We study AAK-type meromorphic approximants to functions of the form \\displaystyle F(z)=\\int\\frac{d\\lambda(t)}{z-t}+R(z), where R is a rational function and \\lambda is a complex measure with compact regular support included in (-1,1), whose argument has bounded variation on the support. The approximation is understood in the L^p-norm of the unit circle, p\\geq2. We dwell on the fact that the denominators of such approximants satisfy certain non-Hermitian orthogonal relations with varying weights. They resemble the orthogonality relations that arise in the study of multipoint Padé approximants. However, the varying part of the weight implicitly depends on the orthogonal polynomials themselves, which constitutes the main novelty and the main difficulty of the undertaken analysis. We obtain that the counting measures of poles of the approximants converge to the Green equilibrium distribution on the support of \\lambda relative to the unit disc, that the approximants themselves converge in capacity to F, and that the poles of R attract at least as many poles of the approximants as their multiplicity and not much more. Bibliography: 35 titles.

  7. Randomized approximate nearest neighbors algorithm.

    Science.gov (United States)

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each of x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
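The flavor of one iteration of such a scheme can be caricatured with a single random 1-D projection followed by a local scan; this simplification is ours, not the authors' algorithm, which uses full random rotations, T iterations, and a graph search:

```python
import math
import random

def approx_nearest(points, query, window=3, seed=0):
    """One round of project-sort-scan approximate nearest neighbor search.

    Project all points onto a random direction, locate the query among the
    sorted projections, and compute exact distances only for the `window`
    positions on each side. Returns the index of the best candidate found;
    repeating with fresh directions raises the hit probability.
    """
    rng = random.Random(seed)
    d = len(query)
    direction = [rng.gauss(0.0, 1.0) for _ in range(d)]
    proj = lambda p: sum(pi * di for pi, di in zip(p, direction))
    order = sorted(range(len(points)), key=lambda i: proj(points[i]))
    qp = proj(query)
    rank = sum(1 for i in order if proj(points[i]) < qp)
    lo, hi = max(0, rank - window), min(len(points), rank + window)
    return min(order[lo:hi], key=lambda i: math.dist(points[i], query))
```

When the window covers the whole point set the scan degenerates to exact brute force, which makes the trade-off explicit: a smaller window means fewer exact distance evaluations but a chance of missing the true neighbor.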

  8. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    As we know, normalization is a pre-processing stage for many types of problem statement. Normalization plays an especially important role in fields such as soft computing and cloud computing, where data are manipulated, for example scaled down or scaled up in range, before being used at a further stage. There are many normalization techniques, namely min-max normalization, Z-score normalization, and decimal scaling normalization. By referring to these normalization techniques we ...
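The three techniques named in the abstract can each be stated in a few lines. A sketch (our own minimal versions; conventions vary, e.g. the z-score below uses the population standard deviation):

```python
import math

def min_max(xs, lo=0.0, hi=1.0):
    """Min-max normalization: linearly rescale the data into [lo, hi]."""
    mn, mx = min(xs), max(xs)
    return [lo + (x - mn) * (hi - lo) / (mx - mn) for x in xs]

def z_score(xs):
    """Z-score normalization: zero mean, unit (population) standard deviation."""
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return [(x - mu) / sigma for x in xs]

def decimal_scaling(xs):
    """Decimal scaling: divide by the power of ten that brings max |x| to
    at most 1 (a value that is an exact power of ten scales to 1.0 here)."""
    j = math.ceil(math.log10(max(abs(x) for x in xs)))
    return [x / 10 ** j for x in xs]
```

Min-max is sensitive to outliers (they define the range), z-score is not bounded to a fixed interval, and decimal scaling preserves the relative ordering and digit structure; which one is appropriate depends on the downstream stage.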

  9. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
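For reference, one standard "full" normalization factor used with associated Legendre functions in gravity modeling (conventions differ across formulations, e.g. in the Condon-Shortley sign, so this is one common choice rather than the paper's general scheme) can be computed directly:

```python
import math

def full_normalization(n, m):
    """Full-normalization factor for associated Legendre functions:

        N(n, m) = sqrt((2 - delta_{0m}) * (2n + 1) * (n - m)! / (n + m)!)

    Multiplying P_{n,m} by N keeps the normalized ALF magnitudes near
    unity even at high degree and order, avoiding the overflow that the
    unnormalized functions suffer in recursions.
    """
    delta = 1 if m == 0 else 0
    return math.sqrt((2 - delta) * (2 * n + 1)
                     * math.factorial(n - m) / math.factorial(n + m))
```

For example, N(2, 0) = sqrt(5) while N(2, 2) = sqrt(5/12); the factorial ratio shrinks the high-order terms that would otherwise grow without bound.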

  10. Obtaining exact value by approximate computations

    Institute of Scientific and Technical Information of China (English)

    Jing-zhong ZHANG; Yong FENG

    2007-01-01

    Numerical approximate computations can solve large and complex problems fast. They have the advantage of high efficiency. However, they only give approximate results, whereas we need exact results in some fields. There is a gap between approximate computations and exact results. In this paper, we build a bridge by which exact results can be obtained by numerical approximate computations.
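One concrete bridge of this kind (our illustration, not necessarily the paper's construction) is rational-number reconstruction: if the exact answer is known to be a rational with a bounded denominator, a good enough floating-point approximation determines it uniquely via continued fractions. A sketch using the standard library; the denominator bound is an assumption supplied by the user:

```python
from fractions import Fraction

def exact_from_approx(x, max_denominator=10**6):
    """Recover the exact rational that a floating-point result approximates,
    assuming its true denominator is at most max_denominator."""
    return Fraction(x).limit_denominator(max_denominator)

noisy = 0.1 + 0.2                 # not exactly 0.3 in IEEE doubles
exact = exact_from_approx(noisy)  # snaps back to the exact rational 3/10
```

The approximate computation is cheap, and the final snap-to-rational restores exactness, provided the denominator bound and the accuracy of the approximation are compatible.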

  11. Fuzzy Set Approximations in Fuzzy Formal Contexts

    Institute of Scientific and Technical Information of China (English)

    Mingwen Shao; Shiqing Fan

    2006-01-01

    In this paper, a kind of multi-level formal concept is introduced. Based on the proposed multi-level formal concept, we present a pair of rough fuzzy set approximations within fuzzy formal contexts. With the proposed rough fuzzy set approximations, we can approximate a fuzzy set at different precision levels. We discuss the properties of the proposed approximation operators in detail.
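Setting aside the multi-level refinement, the basic pair of rough fuzzy approximation operators, in their common min/max form (the paper's operators may differ in detail), looks like this:

```python
def lower_approximation(R, A):
    """Lower rough approximation of fuzzy set A under fuzzy relation R:
    lower(x) = min over y of max(1 - R[x][y], A[y]),
    the degree to which everything related to x surely belongs to A."""
    return {x: min(max(1.0 - R[x][y], A[y]) for y in A) for x in R}

def upper_approximation(R, A):
    """Upper rough approximation:
    upper(x) = max over y of min(R[x][y], A[y]),
    the degree to which something related to x possibly belongs to A."""
    return {x: max(min(R[x][y], A[y]) for y in A) for x in R}

# A reflexive, symmetric fuzzy similarity relation on the universe {a, b},
# and a fuzzy set A to approximate (membership degrees are illustrative).
R = {"a": {"a": 1.0, "b": 0.4}, "b": {"a": 0.4, "b": 1.0}}
A = {"a": 0.9, "b": 0.2}
```

For a reflexive relation the pair brackets the set pointwise, lower ≤ A ≤ upper, which is the sense in which the two operators approximate A from inside and outside.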

  13. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation...... with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove...

  14. Nonlinear approximation with dictionaries, I: Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    We study various approximation classes associated with $m$-term approximation by elements from a (possibly redundant) dictionary in a Banach space. The standard approximation class associated with the best $m$-term approximation is compared to new classes defined by considering $m......$-term approximation with algorithmic constraints: thresholding and Chebychev approximation classes are studied respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space...

  15. Testing for normality

    CERN Document Server

    Thode, Henry C

    2002-01-01

    Describes the selection, design, theory, and application of tests for normality. Covers robust estimation, test power, and univariate and multivariate normality. Contains tests for multivariate normality and both coordinate-dependent and invariant approaches.

  16. Renormalization of the frozen Gaussian approximation to the quantum propagator.

    Science.gov (United States)

    Tatchen, Jörg; Pollak, Eli; Tao, Guohua; Miller, William H

    2011-04-07

    The frozen Gaussian approximation to the quantum propagator may be a viable method for obtaining "on the fly" quantum dynamical information on systems with many degrees of freedom. However, it has two severe limitations: it rapidly loses normalization, and one needs to know the Gaussian averaged potential, so it is not a purely local theory in the force field. These limitations are in principle remedied by using the Herman-Kluk (HK) form for the semiclassical propagator. The HK propagator approximately conserves unitarity for relatively long times and depends only locally on the bare potential and its second derivatives. However, the HK propagator involves a much more expensive computation due to the need for evaluating the monodromy matrix elements. In this paper, we (a) derive a new formula for the normalization integral based on a prefactor-free HK propagator which is amenable to "on the fly" computations; (b) show that a frozen Gaussian version of the normalization integral is not readily computable "on the fly"; (c) provide new insight into how the HK prefactor leads to approximate unitarity; and (d) show how one may construct a prefactor-free approximation which combines the advantages of the frozen Gaussian and HK propagators. The theoretical developments are backed by numerical examples on a Morse oscillator and a quartic double-well potential.

  17. Differential Inequalities, Normality and Quasi-Normality

    CERN Document Server

    Liu, Xiaojun; Pang, Xuecheng

    2011-01-01

    We prove that if D is a domain in C, alpha > 1 and c > 0, then the family F of functions meromorphic in D such that |f'(z)|/(1+|f(z)|^alpha) > c for every z in D is normal in D. For alpha = 1, the same assumptions imply quasi-normality but not necessarily normality.

  18. Normal Functions Concerning Shared Values

    Institute of Scientific and Technical Information of China (English)

    WANG XIAO-JING

    2009-01-01

    In this paper we discuss normal functions concerning shared values. We obtain the following result. Let F be a family of meromorphic functions in the unit disc △ and let a be a nonzero finite complex number. If, for any f ∈ F, the zeros of f are multiple and f and f' share a, then there exists a positive number M such that for any f ∈ F, (1-|z|^2)|f'(z)|/(1+|f(z)|^2) ≤ M.

  19. APPROXIMATE SAMPLING THEOREM FOR BIVARIATE CONTINUOUS FUNCTION

    Institute of Scientific and Technical Information of China (English)

    杨守志; 程正兴; 唐远炎

    2003-01-01

    An approximate solution of the refinement equation is given in terms of its mask, and the approximate sampling theorem for bivariate continuous functions is proved by applying this approximate solution. The approximate sampling function, defined uniquely by the mask of the refinement equation, is the approximate solution of the equation, a piecewise linear function, and possesses an explicit computation formula. The mask of the refinement equation can therefore be selected according to one's requirements, so as to control the decay speed of the approximate sampling function.

  20. Bernstein-type approximations of smooth functions

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    Full Text Available The Bernstein-type approximation for smooth functions is proposed and studied. We propose the Bernstein-type approximation with definitions that directly apply the binomial distribution and the multivariate binomial distribution. The Bernstein-type approximations generalize the corresponding Bernstein polynomials, by considering definitions that depend on a convenient approximation coefficient in linear kernels. In the Bernstein-type approximations, we study the uniform convergence and the degree of approximation. The Bernstein-type estimators of smooth functions of population means are also proposed and studied.
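    For reference, the classical Bernstein polynomial that these Bernstein-type approximations generalize can be sketched directly from its definition; this is a minimal illustration of the univariate case, not the authors' generalized construction:

```python
from math import comb

def bernstein(f, n, x):
    """Classical degree-n Bernstein polynomial of f on [0, 1]:
        B_n(f; x) = sum_k f(k/n) * C(n, k) * x**k * (1 - x)**(n - k).
    For continuous f, B_n(f; .) converges uniformly to f as n grows."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Linear functions are reproduced exactly; for f(x) = x**2 the error
# at x is x*(1 - x)/n, illustrating the slow O(1/n) convergence that
# motivates refined Bernstein-type constructions.
err = abs(bernstein(lambda t: t * t, 100, 0.5) - 0.25)  # 0.25/100 = 0.0025
```

    The "approximation coefficient in linear kernels" of the abstract plays the role of tuning this basic binomial construction.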

  1. Mechanical stratigraphy and normal faulting

    Science.gov (United States)

    Ferrill, David A.; Morris, Alan P.; McGinnis, Ronald N.; Smart, Kevin J.; Wigginton, Sarah S.; Hill, Nicola J.

    2017-01-01

    Mechanical stratigraphy encompasses the mechanical properties, thicknesses, and interface properties of rock units. Although mechanical stratigraphy often relates directly to lithostratigraphy, lithologic description alone does not adequately describe mechanical behavior. Analyses of normal faults with displacements of millimeters to tens of kilometers in mechanically layered rocks reveal that mechanical stratigraphy influences nucleation, failure mode, fault geometry, displacement gradient, displacement distribution, fault core and damage zone characteristics, and fault zone deformation processes. The relationship between normal faulting and mechanical stratigraphy can be used either to predict structural style using knowledge of mechanical stratigraphy, or conversely to interpret mechanical stratigraphy based on characterization of the structural style. This review paper explores a range of mechanical stratigraphic controls on normal faulting illustrated by natural and modeled examples.

  2. Groupoid normalizers of tensor products

    CERN Document Server

    Fang, Junsheng; White, Stuart A; Wiggins, Alan D

    2008-01-01

    We consider an inclusion $B\subseteq M$ of finite von Neumann algebras satisfying $B'\cap M\subseteq B$. A partial isometry $v\in M$ is called a groupoid normalizer if $vBv^*, v^*Bv\subseteq B$. Given two such inclusions $B_i\subseteq M_i$, $i=1,2$, we find approximations to the groupoid normalizers of $B_1 \vnotimes B_2$ in $M_1\vnotimes M_2$, from which we deduce that the von Neumann algebra generated by the groupoid normalizers of the tensor product is equal to the tensor product of the von Neumann algebras generated by the groupoid normalizers. Examples are given to show that this can fail without the hypothesis $B_i'\cap M_i\subseteq B_i$, $i=1,2$. We also prove a parallel result where the groupoid normalizers are replaced by the intertwiners, those partial isometries $v\in M$ satisfying $vBv^*\subseteq B$ and $v^*v, vv^*\in B$.

  3. Gutzwiller approximation in strongly correlated electron systems

    Science.gov (United States)

    Li, Chunhua

    concepts and techniques are developed to study the Mott transition in inhomogeneous electronic superstructures. The latter is termed "SuperMottness", which is shown to be a general framework that unifies the two paradigms in the physics of strong electronic correlation: Mott transition and Wigner crystallization. A cluster Gutzwiller approximation (CGA) approach is developed that treats the local (U) and extended (V) Coulomb interactions on an equal footing. It is shown with explicit calculations that the Mott-Wigner metal-insulator transition can take place far away from half-filling. The mechanism by which a superlattice potential enhances the correlation effects and the tendency towards local moment formation is investigated, and the results reveal a deeper connection among the strongly correlated inhomogeneous electronic states, the Wigner-Mott physics, and the multiorbital Mott physics that can all be united under the notion of SuperMottness. It is proposed that doping into a superMott insulator can lead to coexistence of local moment and itinerant carriers. The last part of the thesis studies the possible Kondo effect that couples the local moment and the itinerant carriers. In connection to the sodium rich phases of the cobaltates, a new Kondo lattice model is proposed where the itinerant carriers form a Stoner ferromagnet. The competition between the Kondo screening and the Stoner ferromagnetism is investigated when the conduction band is both at and away from half-filling.

  4. Origin and quantification of differences between normal and tumor tissues observed by terahertz spectroscopy

    Science.gov (United States)

    Yamaguchi, Sayuri; Fukushi, Yasuko; Kubota, Oichi; Itsuji, Takeaki; Ouchi, Toshihiko; Yamamoto, Seiji

    2016-09-01

    The origin of the differences in the refractive index observed between normal and tumor tissues using terahertz spectroscopy has been described quantitatively. To estimate water content differences in tissues, we prepared fresh and paraffin-embedded samples from rats. An approximately 5% increase of water content in tumor tissues relative to normal tissues was calculated from terahertz time domain spectroscopy measurements. A greater than 15% increase in the percentage of cell nuclei per unit area in tumor tissues was observed in hematoxylin- and eosin-stained samples, which generates a higher refractive index of biological components other than water. Together, the higher water content and higher cell density raise the refractive index of tumor tissues by approximately 0.05. It is predicted that terahertz spectroscopy can also be used to detect brain tumors in human tissue due to the same underlying mechanism as in rats.

  5. Applications of Discrepancy Theory in Multiobjective Approximation

    CERN Document Server

    Glaßer, Christian; Witek, Maximilian

    2011-01-01

    We apply a multi-color extension of the Beck-Fiala theorem to show that the multiobjective maximum traveling salesman problem is randomized 1/2-approximable on directed graphs and randomized 2/3-approximable on undirected graphs. Using the same technique, we show that the multiobjective maximum satisfiability problem is 1/2-approximable.

  6. Fractal Trigonometric Polynomials for Restricted Range Approximation

    Science.gov (United States)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

  7. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. We prove that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations that produce the same operators, and we then characterize IVF rough approximation operators by these axioms.

  8. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. A useful estimate relating entropy and approximation numbers of noncompact maps is also given.

  10. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  11. Advanced Concepts and Methods of Approximate Reasoning

    Science.gov (United States)

    1989-12-01

    E. Trillas and L. Valverde. On mode and implication in approximate reasoning. In M.M. Gupta, A. Kandel, W. Bandler, J.B. Kiszka, editors, Approximate Reasoning and...190, 1981.

  12. NONLINEAR APPROXIMATION WITH GENERAL WAVE PACKETS

    Institute of Scientific and Technical Information of China (English)

    L. Borup; M. Nielsen

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete characterization of the approximation spaces is derived.

  13. Approximate Nearest Neighbor Queries among Parallel Segments

    DEFF Research Database (Denmark)

    Emiris, Ioannis Z.; Malamatos, Theocharis; Tsigaridas, Elias

    2010-01-01

    We develop a data structure for answering efficiently approximate nearest neighbor queries over a set of parallel segments in three dimensions. We connect this problem to approximate nearest neighbor searching under weight constraints and approximate nearest neighbor searching on historical data...

  14. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...... characterization of the approximation spaces is derived....

  15. Nonlinear approximation with bi-framelets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten; Gribonval, Rémi

    2005-01-01

    We study the approximation in Lebesgue spaces of wavelet bi-frame systems given by translations and dilations of a finite set of generators. A complete characterization of the approximation spaces associated with best m-term approximation of wavelet bi-framelet systems is given...

  16. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    Benyin Fu

    2016-05-01

    In this paper, we propose a definition of an approximation property, called the metric invariant translation approximation property, for a countable discrete metric space. Moreover, we use Ozawa's techniques to prove that a fine hyperbolic graph has the metric invariant translation approximation property.

  17. Symmetries of th-Order Approximate Stochastic Ordinary Differential Equations

    Directory of Open Access Journals (Sweden)

    E. Fredericks

    2012-01-01

    Full Text Available Symmetries of th-order approximate stochastic ordinary differential equations (SODEs are studied. The determining equations of these SODEs are derived in an Itô calculus context. These determining equations are not stochastic in nature. SODEs are normally used to model nature (e.g., earthquakes or for testing the safety and reliability of models in construction engineering when looking at the impact of random perturbations.

  18. Resonant-state expansion Born Approximation

    CERN Document Server

    Doost, M B

    2015-01-01

    The Born Approximation is a fundamental formula in physics: it allows the calculation of weak scattering via the Fourier transform of the scattering potential. I extend the Born Approximation by including in the formula the Fourier transform of a truncated basis of the infinite number of appropriately normalised resonant states. This extension of the Born Approximation is named the Resonant-State Expansion Born Approximation, or RSE Born Approximation. The resonant states of the system can be calculated using the recently discovered RSE perturbation theory for electrodynamics and normalised correctly, so as to appear in spectral Green's functions, via the flux volume normalisation.
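    For orientation, the textbook first Born approximation that this abstract starts from writes the scattering amplitude as a Fourier transform of the potential (standard nonrelativistic form; the paper's resonant-state expansion augments this formula with resonant-state terms):

```latex
f^{\mathrm{Born}}(\mathbf{q}) \;=\; -\frac{m}{2\pi\hbar^{2}}
    \int V(\mathbf{r})\, e^{-i\mathbf{q}\cdot\mathbf{r}}\, d^{3}r ,
\qquad \mathbf{q} = \mathbf{k}' - \mathbf{k},
```

    where k and k' are the incident and scattered wave vectors, so the amplitude for momentum transfer q is the Fourier component of V at q.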

  19. Canonical Sets of Best L1-Approximation

    Directory of Open Access Journals (Sweden)

    Dimiter Dryanov

    2012-01-01

    Full Text Available In mathematics, the term approximation usually means either interpolation on a point set or approximation with respect to a given distance. There is a concept, which joins the two approaches together, and this is the concept of characterization of the best approximants via interpolation. It turns out that for some large classes of functions the best approximants with respect to a certain distance can be constructed by interpolation on a point set that does not depend on the choice of the function to be approximated. Such point sets are called canonical sets of best approximation. The present paper summarizes results on canonical sets of best L1-approximation with emphasis on multivariate interpolation and best L1-approximation by blending functions. The best L1-approximants are characterized as transfinite interpolants on canonical sets. The notion of a Haar-Chebyshev system in the multivariate case is discussed also. In this context, it is shown that some multivariate interpolation spaces share properties of univariate Haar-Chebyshev systems. We study also the problem of best one-sided multivariate L1-approximation by sums of univariate functions. Explicit constructions of best one-sided L1-approximants give rise to well-known and new inequalities.

  20. Finite-part singular integral approximations in Hilbert spaces

    Directory of Open Access Journals (Sweden)

    E. G. Ladopoulos

    2004-01-01

    Full Text Available Some new approximation methods are proposed for the numerical evaluation of the finite-part singular integral equations defined on Hilbert spaces when their singularity consists of a homeomorphism of the integration interval, which is a unit circle, onto itself. Some existence theorems are proved for the solutions of the finite-part singular integral equations, approximated by several systems of linear algebraic equations. The method is further extended to prove the existence of solutions for systems of finite-part singular integral equations defined on Hilbert spaces, when their singularity consists of a system of diffeomorphisms of the integration interval, which is a unit circle, onto itself.

  1. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  2. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  3. On Galerkin approximations for the quasigeostrophic equations

    CERN Document Server

    Rocha, Cesar B; Grooms, Ian

    2015-01-01

    We study the representation of approximate solutions of the three-dimensional quasigeostrophic (QG) equations using Galerkin series with standard vertical modes. In particular, we show that standard modes are compatible with nonzero buoyancy at the surfaces and can be used to solve the Eady problem. We extend two existing Galerkin approaches (A and B) and develop a new Galerkin approximation (C). Approximation A, due to Flierl (1978), represents the streamfunction as a truncated Galerkin series and defines the potential vorticity (PV) that satisfies the inversion problem exactly. Approximation B, due to Tulloch and Smith (2009b), represents the PV as a truncated Galerkin series and calculates the streamfunction that satisfies the inversion problem exactly. Approximation C, the true Galerkin approximation for the QG equations, represents both streamfunction and PV as truncated Galerkin series, but does not satisfy the inversion equation exactly. The three approximations are fundamentally different unless the b...

  4. Approximate dynamic programming solving the curses of dimensionality

    CERN Document Server

    Powell, Warren B

    2007-01-01

    Warren B. Powell, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored over 100 refereed publications on stochastic optimization, approximate dynamic programming, and dynamic resource management.

  5. On stochastic approximation algorithms for classes of PAC learning problems

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V.; Uppuluri, V.R.R.; Oblow, E.M.

    1994-03-01

    The classical stochastic approximation methods are shown to yield algorithms to solve several formulations of the PAC learning problem defined on the domain [0,1]^d. Under some assumptions on the differentiability of the probability measure functions, simple algorithms to solve some PAC learning problems are proposed based on networks of non-polynomial units (e.g. artificial neural networks). Conditions on the sample sizes required to ensure the error bounds are derived using martingale inequalities.

  6. Development of Geological Prospecting Units under the New Normal Based on SWOT Analysis

    Institute of Scientific and Technical Information of China (English)

    曹敏

    2016-01-01

    In this paper, we use the SWOT analysis model to analyze the strengths, weaknesses, opportunities, and threats presented to geological prospecting units against the background of the economic new normal, and we make some suggestions for the further development of these units: actively promote the reform of geological prospecting units, foster new industries and new growth drivers, integrate resources to optimize the geological prospecting industrial structure, and make the best use of capital markets to broaden financing channels.

  7. APPROXIMATE REPRESENTATION OF THE p-NORM DISTRIBUTION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In surveying data processing, we generally suppose that the observational errors are normally distributed. In this case the method of least squares gives the minimum variance unbiased estimation of the parameters. The method of least squares is not robust, however, so it becomes unsuitable when a few measurements containing gross errors are mixed with the others. We can instead use robust estimation methods that avoid the influence of gross errors; with this kind of method there is no need to know the exact distribution of the observations, but it causes other difficulties, such as hypothesis testing for the estimated parameters when the sample size is not large. For non-normally distributed measurements we can suppose they obey the p-norm distribution law. The p-norm distribution is a distributional class which includes the most frequently used distributions, such as the Laplace, normal and rectangular ones. This distribution is symmetric and has a kurtosis between 3 and -6/5 when p is larger than 1. In using the p-norm distribution to describe the statistical character of the errors, the only assumption is that the error distribution is a symmetric and unimodal curve.
    This method possesses a kind of self-adapting property, but the density function of the p-norm distribution is so complex that it makes theoretical analysis more difficult, and the troublesome calculation also makes the method unsuitable for practice. The research of this paper indicates that the p-norm distribution can be approximately represented by a linear combination of the Laplace and normal distributions, or by a linear combination of the normal and rectangular distributions. Which representation is taken depends on whether the parameter p lies between 1 and 2 or is larger than 2. The approximate distribution has the same first four moments as the exact one.
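    The density of the p-norm (exponential-power) family described above can be written in closed form. Below is a minimal sketch assuming the standard exponential-power parameterization with scale a; the paper's own scale convention may differ:

```python
import math

def pnorm_pdf(x, p, a=1.0, mu=0.0):
    """Exponential-power ("p-norm") density:
        f(x) = exp(-(|x - mu| / a)**p) / (2 * a * Gamma(1 + 1/p)).
    p = 1 gives the Laplace density, p = 2 a normal density with
    variance a**2 / 2, and p -> infinity tends to the rectangular
    (uniform) density on [mu - a, mu + a]."""
    return math.exp(-(abs(x - mu) / a) ** p) / (2 * a * math.gamma(1 + 1 / p))

# Sanity checks against the named special cases:
laplace_at_0 = pnorm_pdf(0.0, p=1)  # Laplace peak: 1/2
normal_at_0 = pnorm_pdf(0.0, p=2)   # Gaussian peak: 1/sqrt(pi)
```

    The single shape parameter p interpolating between these named members is what gives the family the "self-adapting" character noted in the abstract.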

  8. Improving biconnectivity approximation via local optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ka Wong Chong; Tak Wah Lam [Univ. of Hong Kong (Hong Kong)

    1996-12-31

    The problem of finding the minimum biconnected spanning subgraph of an undirected graph is NP-hard. A lot of effort has been made to find biconnected spanning subgraphs that approximate the minimum one as closely as possible. Recently, new polynomial-time (sequential) approximation algorithms have been devised to improve the approximation factor from 2 to 5/3 and then 3/2, while NC algorithms have been known to achieve 7/4 + ε. This paper presents a new technique which can be used to further improve parallel approximation factors to 5/3 + ε. In the sequential context, the technique yields an algorithm with a factor of α + 1/5, where α is the approximation factor of any 2-edge-connectivity approximation algorithm.

  9. Frankenstein's Glue: Transition functions for approximate solutions

    CERN Document Server

    Yunes, N

    2006-01-01

    Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the...

  10. Floating-Point $L^2$-Approximations

    OpenAIRE

    Brisebarre, Nicolas; Hanrot, Guillaume

    2007-01-01

    International audience; Computing good polynomial approximations to usual functions is an important topic for the computer evaluation of those functions. These approximations can be good under several criteria, the most desirable being probably that the relative error is as small as possible in the $L^{\\infty}$ sense, i.e. everywhere on the interval under study. In the present paper, we investigate a simpler criterion, the $L^2$ case. Though finding a best polynomial $L^2$-approximation with ...

  11. Metric Diophantine approximation on homogeneous varieties

    CERN Document Server

    Ghosh, Anish; Nevo, Amos

    2012-01-01

    We develop the metric theory of Diophantine approximation on homogeneous varieties of semisimple algebraic groups and prove results analogous to the classical Khinchin and Jarnik theorems. In full generality our results establish simultaneous Diophantine approximation with respect to several completions, and Diophantine approximation over general number fields using S-algebraic integers. In several important examples, the metric results we obtain are optimal. The proof uses quantitative equidistribution properties of suitable averaging operators, which are derived from spectral bounds in automorphic representations.

  12. Approximately linear phase IIR digital filter banks

    OpenAIRE

    J. D. Ćertić; M. D. Lutovac; L. D. Milić

    2013-01-01

    In this paper, uniform and nonuniform digital filter banks based on approximately linear phase IIR filters and frequency response masking technique (FRM) are presented. Both filter banks are realized as a connection of an interpolated half-band approximately linear phase IIR filter as a first stage of the FRM design and an appropriate number of masking filters. The masking filters are half-band IIR filters with an approximately linear phase. The resulting IIR filter banks are compared with li...

  13. A Note on Generalized Approximation Property

    Directory of Open Access Journals (Sweden)

    Antara Bhar

    2013-01-01

    Full Text Available We introduce a notion of generalized approximation property, which we refer to as --AP possessed by a Banach space , corresponding to an arbitrary Banach sequence space and a convex subset of , the class of bounded linear operators on . This property includes approximation property studied by Grothendieck, -approximation property considered by Sinha and Karn and Delgado et al., and also approximation property studied by Lissitsin et al. We characterize a Banach space having --AP with the help of -compact operators, -nuclear operators, and quasi--nuclear operators. A particular case for ( has also been characterized.

  14. Upper Bounds on Numerical Approximation Errors

    DEFF Research Database (Denmark)

    Raahauge, Peter

    2004-01-01

    This paper suggests a method for determining rigorous upper bounds on approximation errors of numerical solutions to infinite horizon dynamic programming models. Bounds are provided for approximations of the value function and the policy function, as well as the derivatives of the value function. The bounds apply to more general problems than existing bounding methods do. For instance, since strict concavity is not required, linear models and piecewise linear approximations can be dealt with. Despite the generality, the bounds perform well in comparison with existing methods, even when applied to approximations of a standard (strictly concave) growth model. KEYWORDS: Numerical approximation errors, Bellman contractions, Error bounds

  15. TMB: Automatic differentiation and laplace approximation

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte

    2016-01-01

    computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R, e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood, where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (approximately 10^6) and parameters (approximately 10...
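
    The Laplace approximation underlying TMB replaces an integral over the random effects by a Gaussian integral around the mode. A one-dimensional sketch with a hand-picked integrand (not TMB's API) shows the idea:

```python
import numpy as np

def laplace_approx(g, g_pp, u_hat):
    """Laplace approximation of the integral of exp(-g(u)) du around the
    mode u_hat:  exp(-g(u_hat)) * sqrt(2*pi / g''(u_hat))."""
    return float(np.exp(-g(u_hat)) * np.sqrt(2.0 * np.pi / g_pp(u_hat)))

# Example: integral of exp(-cosh(u)) du; the mode is u = 0, and g'' = cosh too
approx = laplace_approx(np.cosh, np.cosh, 0.0)

# Brute-force reference value by a fine Riemann sum
u = np.linspace(-10.0, 10.0, 200001)
du = u[1] - u[0]
reference = float(np.sum(np.exp(-np.cosh(u))) * du)
```

Here the approximation is within about 10% of the true value; for the nearly Gaussian posteriors of many random-effect models it is far more accurate, which is what makes it usable inside an optimization loop.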

  16. Inversion and approximation of Laplace transforms

    Science.gov (United States)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  17. Computing Functions by Approximating the Input

    Science.gov (United States)

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  18. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its associated sampling error. We show that the famous Kalman update formula is a particular case of this update.
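
    For intuition, the linear-Gaussian special case mentioned above is the scalar Kalman update; a minimal sketch (generic textbook formulas, not the paper's polynomial chaos construction):

```python
def kalman_update(m_prior, v_prior, y, v_obs):
    """Scalar Kalman (linear-Gaussian Bayes) update of a state estimate
    given one observation y with noise variance v_obs."""
    gain = v_prior / (v_prior + v_obs)        # Kalman gain
    m_post = m_prior + gain * (y - m_prior)   # mean shifts toward the data
    v_post = (1.0 - gain) * v_prior           # variance always shrinks
    return m_post, v_post

# Prior N(0, 4), observation y = 2 with noise variance 1
m_post, v_post = kalman_update(0.0, 4.0, 2.0, 1.0)
```

The nonlinear update of the paper generalizes the affine map above while operating on expansion coefficients rather than samples.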

  19. Random Attractors of Stochastic Modified Boussinesq Approximation

    Institute of Scientific and Technical Information of China (English)

    郭春晓

    2011-01-01

    The Boussinesq approximation is a reasonable model for describing processes in planetary interiors. We refer to [1] and [2] for a derivation of the Boussinesq approximation, and to [3] for some related results on the existence and uniqueness of solutions.

  20. Approximating a harmonizable isotropic random field

    Directory of Open Access Journals (Sweden)

    Randall J. Swift

    2001-01-01

    Full Text Available The class of harmonizable fields is a natural extension of the class of stationary fields. This paper considers a stochastic series approximation of a harmonizable isotropic random field. This approximation is useful for numerical simulation of such a field.

  1. On approximating multi-criteria TSP

    NARCIS (Netherlands)

    Manthey, Bodo; Albers, S.; Marion, J.-Y.

    2009-01-01

    We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP), whose performances are independent of the number $k$ of criteria and come close to the approximation ratios obtained for TSP with a single objective function. We present randomized app

  2. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by...

  3. A case where BO Approximation breaks down

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The Born-Oppenheimer (BO) approximation is ubiquitous in molecular physics, quantum physics and quantum chemistry. However, CAS researchers recently observed a breakdown of the approximation in the reaction of fluorine with deuterium atoms. The result has been published in the August 24 issue of Science.

  4. Two Point Pade Approximants and Duality

    CERN Document Server

    Banks, Tom

    2013-01-01

    We propose the use of two point Pade approximants to find expressions valid uniformly in coupling constant for theories with both weak and strong coupling expansions. In particular, one can use these approximants in models with a strong/weak duality, when the symmetries do not determine exact expressions for some quantity.

  5. Function Approximation Using Probabilistic Fuzzy Systems

    NARCIS (Netherlands)

    J.H. van den Berg (Jan); U. Kaymak (Uzay); R.J. Almeida e Santos Nogueira (Rui Jorge)

    2011-01-01

    We consider function approximation by fuzzy systems. Fuzzy systems are typically used for approximating deterministic functions, in which the stochastic uncertainty is ignored. We propose probabilistic fuzzy systems in which the probabilistic nature of uncertainty is taken into account.

  6. Approximation of the Inverse -Frame Operator

    Indian Academy of Sciences (India)

    M R Abdollahpour; A Najati

    2011-05-01

    In this paper, we introduce the concept of (strong) projection method for -frames which works for all conditional -Riesz frames. We also derive a method for approximation of the inverse -frame operator which is efficient for all -frames. We show how the inverse of -frame operator can be approximated as close as we like using finite-dimensional linear algebra.

  7. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove...

  8. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are use

  9. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
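
    The pitfall the article describes is easy to reproduce: the two-term form ln n! ≈ n ln n − n carries an error that grows like ½ ln(2πn), which the next Stirling term removes. (A quick check; the value n = 60 is an arbitrary choice.)

```python
import math

def stirling_two_term(n):
    """Naive Stirling: ln n! ~ n ln n - n."""
    return n * math.log(n) - n

def stirling_with_correction(n):
    """Stirling including the 0.5 * ln(2 pi n) correction term."""
    return n * math.log(n) - n + 0.5 * math.log(2.0 * math.pi * n)

n = 60
exact = math.lgamma(n + 1)                    # ln(60!) to machine precision
err_naive = exact - stirling_two_term(n)      # grows like 0.5 * ln(2 pi n)
err_corrected = exact - stirling_with_correction(n)   # shrinks like 1/(12 n)
```

The naive form is off by about 3 here, a discrepancy that does not vanish as n grows, which is exactly the kind of term that can silently corrupt an entropy derivation.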

  10. INVARIANT RANDOM APPROXIMATION IN NONCONVEX DOMAIN

    Directory of Open Access Journals (Sweden)

    R. Shrivastava

    2012-05-01

    Full Text Available Random fixed point results in the setup of compact and weakly compact domains of Banach spaces which are not necessarily starshaped have been obtained in the present work. Invariant random approximation results have also been determined as its application. In this way, random versions of the invariant approximation results due to Mukherjee and Som [13] and Singh [17] have been given.

  11. Approximability and Parameterized Complexity of Minmax Values

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Hansen, Thomas Dueholm; Miltersen, Peter Bro;

    2008-01-01

    We consider approximating the minmax value of a multi player game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of ε log n digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand...

  12. Hardness of approximation for strip packing

    DEFF Research Database (Denmark)

    Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin

    2017-01-01

    [SODA 2016] have recently proposed a (1.4 + ϵ)-approximation algorithm for this variant, thus showing that strip packing with polynomially bounded data can be approximated better than when exponentially large values are allowed in the input. Their result has subsequently been improved to a (4/3 + ϵ...

  13. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem; Kallenberg, Wilbert C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are use

  14. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, R.; Albers, W.; Kallenberg, W.C.M.

    2003-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are use

  15. Lifetime of the Nonlinear Geometric Optics Approximation

    DEFF Research Database (Denmark)

    Binzer, Knud Andreas

    The subject of the thesis is to study a certain approximation method for highly oscillatory solutions to nonlinear partial differential equations.

  16. Simple Lie groups without the approximation property

    DEFF Research Database (Denmark)

    Haagerup, Uffe; de Laat, Tim

    2013-01-01

    For a locally compact group G, let A(G) denote its Fourier algebra, and let M0A(G) denote the space of completely bounded Fourier multipliers on G. The group G is said to have the Approximation Property (AP) if the constant function 1 can be approximated by a net in A(G) in the weak-∗ topology...

  17. An improved proximity force approximation for electrostatics

    CERN Document Server

    Fosco, C D; Mazzitelli, F D

    2012-01-01

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called "proximity force approximation" the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces between opposite pairs of patches, the contributions of which are approximated as those between pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful to discuss the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction i...
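
    The patch-summation idea is easy to sketch for a conducting sphere above a grounded plane: tile the sphere's projection into annuli, treat each annulus as a parallel-plate capacitor at the local gap, and add the pressures. (Illustrative geometry and values; the paper's derivative expansion goes beyond this zeroth-order scheme.)

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def pfa_sphere_plane_force(R, d, V, n=20000):
    """PFA estimate of the electrostatic force between a conducting sphere
    (radius R, potential V) and a grounded plane at closest gap d: sum the
    parallel-plate pressures eps0 V^2 / (2 h^2) over annular patches, where
    h is the local sphere-plane separation."""
    force = 0.0
    dr = R / n
    for i in range(n):
        r = (i + 0.5) * dr                      # annulus midpoint radius
        h = d + R - math.sqrt(R * R - r * r)    # local gap under the sphere
        force += EPS0 * V * V / (2.0 * h * h) * 2.0 * math.pi * r * dr
    return force

# For gap << radius the result approaches pi * eps0 * R * V^2 / d
R, D, V = 1e-2, 1e-5, 1.0
force = pfa_sphere_plane_force(R, D, V)
```

With `d/R = 1e-3` the patch sum already agrees with the small-gap closed form to well under one percent, illustrating why the PFA is so effective in the near-contact regime.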

  18. Approximate Furthest Neighbor in High Dimensions

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Silvestri, Francesco; Sivertsen, Johan von Tangen;

    2015-01-01

    Much recent work has been devoted to approximate nearest neighbor queries. Motivated by applications in recommender systems, we consider approximate furthest neighbor (AFN) queries. We present a simple, fast, and highly practical data structure for answering AFN queries in high-dimensional Euclidean space. We build on the technique of Indyk (SODA 2003), storing random projections to provide sublinear query time for AFN. However, we introduce a different query algorithm, improving on Indyk’s approximation factor and reducing the running time by a logarithmic factor. We also present a variation...

  19. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has become an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
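
    Trajectory averaging can be sketched on the original Robbins-Monro root-finding problem: average the noisy iterates, and the average converges faster than the last iterate. (A toy sketch with an arbitrary step-size exponent and target, not the SAMCMC setting of the paper.)

```python
import random

def robbins_monro(noisy_h, theta0, n_iters=20000, seed=0):
    """Robbins-Monro iteration  theta <- theta - a_k * noisy_h(theta),
    with Polyak-Ruppert (trajectory) averaging of the iterates."""
    rng = random.Random(seed)
    theta, avg = theta0, 0.0
    for k in range(1, n_iters + 1):
        a_k = k ** -0.7                 # slowly decreasing step sizes
        theta -= a_k * noisy_h(theta, rng)
        avg += (theta - avg) / k        # running mean of the trajectory
    return theta, avg

# Solve h(theta) = theta - 2 = 0 from noisy evaluations of h
noisy = lambda th, rng: (th - 2.0) + rng.gauss(0.0, 1.0)
last_iterate, averaged = robbins_monro(noisy, theta0=10.0)
```

The averaged estimate has variance close to the asymptotically optimal rate, which is the efficiency property the paper establishes in the MCMC setting.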

  20. Approximating maximum clique with a Hopfield network.

    Science.gov (United States)

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theoretical problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
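
    One greedy heuristic of the kind such dynamics emulate can be stated in a few lines: repeatedly add the vertex with the most remaining neighbors that stays adjacent to everything chosen so far. (A generic sketch; the Hopfield energy-descent formulation is in the paper.)

```python
def greedy_clique(adj):
    """Greedy MAX-CLIQUE heuristic. adj maps each vertex to its set of
    neighbors; returns a maximal (not necessarily maximum) clique."""
    candidates = set(adj)
    clique = set()
    while candidates:
        # pick the candidate adjacent to the most remaining candidates
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        candidates &= adj[v]   # keep only vertices adjacent to everything chosen
    return clique

# Triangle {0, 1, 2} plus a pendant vertex 3 attached to vertex 1
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
```

On this toy graph the heuristic recovers the triangle; on adversarial graphs it can be far from optimal, which motivates the annealed dynamics compared in the paper.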

  1. Approximate reflection coefficients for a thin VTI layer

    KAUST Repository

    Hao, Qi

    2017-09-18

    We present an approximate method to derive simple expressions for the reflection coefficients of P- and SV-waves for a thin transversely isotropic layer with a vertical symmetry axis (VTI) embedded in a homogeneous VTI background. The layer thickness is assumed to be much smaller than the wavelengths of the P- and SV-waves inside it. The exact reflection and transmission coefficients are derived by the propagator matrix method. In the case of normal incidence, the exact reflection and transmission coefficients are expressed in terms of the impedances of vertically propagating P- and S-waves. For subcritical incidence, the approximate reflection coefficients are expressed in terms of the contrast in the VTI parameters between the layer and the background. Numerical examples are designed to analyze the reflection coefficients at normal and oblique incidence, and to investigate the influence of transverse isotropy on the reflection coefficients. Although they introduce numerical errors, the approximate formulae are sufficiently simple to qualitatively analyze the variation of the reflection coefficients with the angle of incidence.
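
    At normal incidence, reflection between two half-spaces reduces to the familiar impedance-contrast form, which can be sketched directly (illustrative rock properties, not the paper's thin-layer VTI expressions):

```python
def normal_incidence_reflection(rho1, v1, rho2, v2):
    """Normal-incidence PP reflection coefficient between two media,
    in terms of vertical impedances Z = rho * v:
    R = (Z2 - Z1) / (Z2 + Z1)."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Shale (2400 kg/m^3, 2700 m/s) over sandstone (2500 kg/m^3, 3300 m/s)
r_pp = normal_incidence_reflection(2400.0, 2700.0, 2500.0, 3300.0)
```

The thin-layer coefficients of the paper generalize this contrast formula with frequency-dependent terms in the layer thickness and the VTI parameters.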

  2. Normalization in econometrics

    OpenAIRE

    Hamilton, James D.; Daniel F. Waggoner; Zha, Tao

    2004-01-01

    The issue of normalization arises whenever two different values for a vector of unknown parameters imply the identical economic model. A normalization does not just imply a rule for selecting which point, among equivalent ones, to call the maximum likelihood estimator (MLE). It also governs the topography of the set of points that go into a small-sample confidence interval associated with that MLE. A poor normalization can lead to multimodal distributions, disjoint confidence intervals, and v...

  3. Normal cognitive aging.

    Science.gov (United States)

    Harada, Caroline N; Natelson Love, Marissa C; Triebel, Kristen L

    2013-11-01

    Even those who do not experience dementia or mild cognitive impairment may experience subtle cognitive changes associated with aging. Normal cognitive changes can affect an older adult's everyday function and quality of life, and a better understanding of this process may help clinicians distinguish normal from disease states. This article describes the neurocognitive changes observed in normal aging, followed by a description of the structural and functional alterations seen in aging brains. Practical implications of normal cognitive aging are then discussed, followed by a discussion of what is known about factors that may mitigate age-associated cognitive decline.

  4. Normalizers of Irreducible Subfactors

    CERN Document Server

    Smith, Roger R; Wiggins, Alan D

    2007-01-01

    We consider normalizers of an irreducible inclusion $N\\subseteq M$ of $\\mathrm{II}_1$ factors. In the infinite index setting an inclusion $uNu^*\\subseteq N$ can be strict, forcing us to also investigate the semigroup of one-sided normalizers. We relate these normalizers of $N$ in $M$ to projections in the basic construction and show that every trace one projection in the relative commutant $N'\\cap $ is of the form $u^*e_Nu$ for some unitary $u\\in M$ with $uNu^*\\subseteq N$. This enables us to identify the normalizers and the algebras they generate in several situations. In particular each normalizer of a tensor product of irreducible subfactors is a tensor product of normalizers modulo a unitary. We also examine normalizers of irreducible subfactors arising from subgroup--group inclusions $H\\subseteq G$. Here the normalizers are the normalizing group elements modulo a unitary from $L(H)$. We are also able to identify the finite trace $L(H)$-bimodules in $\\ell^2(G)$ as double cosets which are also finite union...

  5. Saddlepoint approximations for studentized compound Poisson sums with no moment conditions in audit sampling

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Saddlepoint approximations for the studentized compound Poisson sums with no moment conditions in audit sampling are derived. This result not only provides a very accurate approximation for studentized compound Poisson sums, but also can be applied much more widely in statistical inference of the error amount in an audit population of accounts to check the validity of financial statements of a firm. Some numerical illustrations and comparison with the normal approximation method are presented.
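
    The normal-approximation baseline the paper compares against is easy to sketch: a compound Poisson sum S = X_1 + ... + X_N with N ~ Poisson(lam) has mean lam*E[X] and variance lam*E[X^2], so tail probabilities can be approximated by the CLT. (A toy check with Exp(1) claim sizes, not the audit-sampling data of the paper.)

```python
import math
import random

def normal_tail_compound_poisson(lam, ex, ex2, s):
    """Normal (CLT) approximation to P(S > s) for a compound Poisson sum:
    mean lam*E[X], variance lam*E[X^2]."""
    mu, sigma = lam * ex, math.sqrt(lam * ex2)
    z = (s - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # upper normal tail

def simulated_tail(lam, s, n_sims=20000, seed=1):
    """Monte Carlo reference with Exp(1) claim sizes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # sample N ~ Poisson(lam) via exponential inter-arrival times
        n, t = 0, -math.log(1.0 - rng.random())
        while t < lam:
            n += 1
            t += -math.log(1.0 - rng.random())
        total = sum(-math.log(1.0 - rng.random()) for _ in range(n))
        hits += total > s
    return hits / n_sims

# Exp(1) claims: E[X] = 1, E[X^2] = 2
approx = normal_tail_compound_poisson(50.0, 1.0, 2.0, 60.0)
mc = simulated_tail(50.0, 60.0)
```

For moderately skewed populations the two agree to a few percent; the saddlepoint method of the paper is designed for the heavily skewed cases where this normal approximation deteriorates.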

  6. Frankenstein's glue: transition functions for approximate solutions

    Science.gov (United States)

    Yunes, Nicolás

    2007-09-01

    Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.

  7. The tendon approximator device in traumatic injuries.

    Science.gov (United States)

    Forootan, Kamal S; Karimi, Hamid; Forootan, Nazilla-Sadat S

    2015-01-01

    Precise and tension-free approximation of two tendon endings is the key predictor of outcomes following tendon lacerations and repairs. We evaluate the efficacy of a new tendon approximator device in tendon laceration repairs. In a comparative study, we used our new tendon approximator device in 99 consecutive patients with lacerations of 266 tendons who attended a university hospital, and evaluated the operative time to repair the tendons, surgeons' satisfaction, as well as patients' outcomes over a long-term follow-up. Data were compared with the data of control patients undergoing tendon repair by the conventional method. In total, 266 tendons were repaired with the approximator device and 199 tendons by the conventional technique. 78.7% of patients in the first group were male and 21.2% were female. In the approximator group, 38% of patients had secondary repair of cut tendons and 62% had primary repair. Patients were followed for a mean period of 3 years (14-60 months). The time required to repair each tendon was significantly reduced with the approximator device (2 min vs. 5.5 min). Outcomes of tendon repair were identical in the two groups and were not significantly different. 1% of tendons in group A and 1.2% in group B had rupture, a difference that was not significant. The new tendon approximator device is cheap, feasible to use, and reduces the time of tendon repair, with sustained outcomes comparable to the conventional methods.

  8. Entanglement in the Born-Oppenheimer Approximation

    CERN Document Server

    Izmaylov, Artur F

    2016-01-01

    The role of electron-nuclear entanglement on the validity of the Born-Oppenheimer (BO) approximation is investigated. While nonadiabatic couplings generally lead to entanglement and to a failure of the BO approximation, surprisingly the degree of electron-nuclear entanglement is found to be uncorrelated with the degree of validity of the BO approximation. This is because while the degree of entanglement of BO states is determined by their deviation from the corresponding states in the crude BO approximation, the accuracy of the BO approximation is dictated, instead, by the deviation of the BO states from the exact electron-nuclear states. In fact, in the context of a minimal avoided crossing model, extreme cases are identified where an adequate BO state is seen to be maximally entangled, and where the BO approximation fails but the associated BO state remains approximately unentangled. Further, the BO states are found to not preserve the entanglement properties of the exact electron-nuclear eigenstates, and t...

  9. k-Edge-Connectivity: Approximation and LP Relaxation

    CERN Document Server

    Pritchard, David

    2010-01-01

    This paper's focus is the following family of problems, denoted k-ECSS, where k denotes a positive integer: given a graph (V, E) and costs for each edge, find a minimum-cost subset F of E such that (V, F) is k-edge-connected. For k=1 it is the spanning tree problem which is in P; for every other k it is APX-hard and has a 2-approximation. Moreover, assuming P != NP, it is known that for unit costs, the best possible approximation ratio is 1 + Theta(1/k) for k>1. Our first main result is to determine the analogous asymptotic ratio for general costs: we show there is a constant eps>0 so that for all k>1, finding a (1+eps)-approximation for k-ECSS is NP-hard. Thus we establish a gap between the unit-cost and general-cost versions, for large enough k. Next, we consider the multi-subgraph cousin of k-ECSS, in which we are allowed to buy arbitrarily many copies of any edges (i.e., F is now a multi-subset of E, with parallel copies having the same cost as the original edge). Not so much is known about this natural v...

  10. DIFFERENCE SCHEMES BASING ON COEFFICIENT APPROXIMATION

    Institute of Scientific and Technical Information of China (English)

    MOU Zong-ze; LONG Yong-xing; QU Wen-xiao

    2005-01-01

    For variable coefficient differential equations, schemes based on approximating the coefficient functions are more accurate than those that freeze the coefficient as a constant on every discrete subinterval. Difference schemes constructed from a Taylor expansion approximation of the solution usually do not suit solutions with sharp features. By introducing local bases combined with coefficient function approximation, the resulting difference schemes can depict more complex physical phenomena, for example boundary layers as well as highly oscillatory behavior, with sharp variation. A numerical test shows the method is more effective than the traditional one.

  11. Approximation of free-discontinuity problems

    CERN Document Server

    Braides, Andrea

    1998-01-01

    Functionals involving both volume and surface energies have a number of applications ranging from Computer Vision to Fracture Mechanics. In order to tackle numerical and dynamical problems linked to such functionals, many approximations by functionals defined on smooth functions have been proposed (using high-order singular perturbations, finite-difference or non-local energies, etc.). The purpose of this book is to present a global approach to these approximations using the theory of gamma-convergence and of special functions of bounded variation. The book is directed to PhD students and researchers in calculus of variations, interested in approximation problems with possible applications.

  12. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  13. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \(k\)-nearest neighbors regression (\(k\)-NNR), and more generally, local polynomial kernel regression. Unlike \(k\)-NNR, however, SPARROW can adapt the number of regressors to use based...
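
    The \(k\)-NNR baseline that SPARROW generalizes fits in a few lines: average the regressands of the \(k\) regressors closest to the query point. (A one-dimensional sketch with made-up data, not the SPARROW selection rule itself.)

```python
import numpy as np

def knn_regress(x_query, X, y, k=3):
    """k-nearest-neighbor regression: average the regressands of the k
    regressors closest to the query point."""
    idx = np.argsort(np.abs(X - x_query))[:k]   # indices of the k nearest
    return float(np.mean(y[idx]))

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = X ** 2
est = knn_regress(2.1, X, y, k=3)   # averages y at x = 2, 3, 1
```

Where \(k\)-NNR weights the \(k\) nearest regressands equally, SPARROW lets a sparse approximation of the query decide both which regressands enter and with what weights.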

  14. Orthorhombic rational approximants for decagonal quasicrystals

    Indian Academy of Sciences (India)

    S Ranganathan; Anandh Subramaniam

    2003-10-01

    An important exercise in the study of rational approximants is to derive their metric, especially in relation to the corresponding quasicrystal or the underlying clusters. Kuo’s model has been the widely accepted model to calculate the metric of the decagonal approximants. Using an alternate model, the metric of the approximants and other complex structures with the icosahedral cluster are explained elsewhere. In this work a comparison is made between the two models bringing out their equivalence. Further, using the concept of average lattices, a modified model is proposed.

  15. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    Full Text Available The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szász has shown that there are analogous approximations on the interval [0,∞ based on the Poisson distribution. Recently R. Mohapatra has generalized Szász' result to the case in which the approximating function is $\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}f\left(\frac{k\alpha}{u}\right)$. The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
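
    With α = β = 1 and N = 0, the operator above reduces to the classical Szász-Mirakyan operator, which is easy to evaluate numerically (a quick sketch; the series is truncated once the Poisson weights are negligible):

```python
import math

def szasz(f, u, x, n_terms=300):
    """Classical Szasz-Mirakyan operator (the alpha = beta = 1, N = 0 case):
    S_u f(x) = exp(-u x) * sum_{k>=0} (u x)^k / k! * f(k / u)."""
    total = 0.0
    weight = math.exp(-u * x)          # Poisson(u x) weight at k = 0
    for k in range(n_terms):
        total += weight * f(k / u)
        weight *= u * x / (k + 1)      # recurrence to the weight at k + 1
    return total

# S_u reproduces linear functions exactly and maps t^2 to x^2 + x/u
approx_sq = szasz(lambda t: t * t, 50.0, 1.0)
```

The residual `x/u` in the image of `t^2` shrinks as `u` grows, mirroring how Bernstein polynomials converge as the degree increases.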

  16. Monitoring the normal body

    DEFF Research Database (Denmark)

    Nissen, Nina Konstantin; Holm, Lotte; Baarts, Charlotte

    2015-01-01

    recruited by strategic sampling based on self-reported BMI 18.5-29.9 kg/m2 and socio-demographic factors. Inductive analysis was conducted. Results : Normal-weight and moderately overweight people have clear ideals for their body size. Despite being normal weight or close to this, they construct a variety...

  17. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approach to intractable likelihood problems. This overview presents recent results since its introduction about ten years ago in population genetics.

  18. Trigonometric Approximations for Some Bessel Functions

    OpenAIRE

    Muhammad Taher Abuelma'atti

    1999-01-01

    Formulas are obtained for approximating the tabulated Bessel functions Jn(x), n = 0–9 in terms of trigonometric functions. These formulas can be easily integrated and differentiated and are convenient for personal computers and pocket calculators.

  19. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  20. Asynchronous stochastic approximation with differential inclusions

    Directory of Open Access Journals (Sweden)

    David S. Leslie

    2012-01-01

    Full Text Available The asymptotic pseudo-trajectory approach to stochastic approximation of Benaïm, Hofbauer and Sorin is extended for asynchronous stochastic approximations with a set-valued mean field. The asynchronicity of the process is incorporated into the mean field to produce convergence results which remain similar to those of an equivalent synchronous process. In addition, this allows many of the restrictive assumptions previously associated with asynchronous stochastic approximation to be removed. The framework is extended for a coupled asynchronous stochastic approximation process with set-valued mean fields. Two-timescales arguments are used here in a similar manner to the original work in this area by Borkar. The applicability of this approach is demonstrated through learning in a Markov decision process.

  1. An approximate Expression for Viscosity of Nanosuspensions

    CERN Document Server

    Domostroeva, N G

    2009-01-01

    We consider liquid suspensions with dispersed nanoparticles. Using two-points Pade approximants and combining results of both hydrodynamic and molecular dynamics methods, we obtain the effective viscosity for any diameters of nanoparticles

  2. On Approximating Four Covering and Packing Problems

    CERN Document Server

    Ashley, Mary; Berman, Piotr; Chaovalitwongse, Wanpracha; DasGupta, Bhaskar; Kao, Ming-Yang; 10.1016/j.jcss.2009.01.002

    2011-01-01

    In this paper, we consider approximability issues of the following four problems: triangle packing, full sibling reconstruction, maximum profit coverage and 2-coverage. All of them are generalized or specialized versions of set-cover and have applications in biology ranging from full-sibling reconstructions in wild populations to biomolecular clusterings; however, as this paper shows, their approximability properties differ considerably. Our inapproximability constant for the triangle packing problem improves upon the previous results; this is done by directly transforming the inapproximability gap of Haastad for the problem of maximizing the number of satisfied equations for a set of equations over GF(2) and is interesting in its own right. Our approximability results on the full siblings reconstruction problems answers questions originally posed by Berger-Wolf et al. and our results on the maximum profit coverage problem provides almost matching upper and lower bounds on the approximation ratio, answering a...

  3. Staying thermal with Hartree ensemble approximations

    Energy Technology Data Exchange (ETDEWEB)

    Salle, Mischa E-mail: msalle@science.uva.nl; Smit, Jan E-mail: jsmit@science.uva.nl; Vink, Jeroen C. E-mail: jcvink@science.uva.nl

    2002-03-25

    We study thermal behavior of a recently introduced Hartree ensemble approximation, which allows for non-perturbative inhomogeneous field configurations as well as for approximate thermalization, in the phi (cursive,open) Greek{sup 4} model in 1+1 dimensions. Using ensembles with a free field thermal distribution as out-of-equilibrium initial conditions we determine thermalization time scales. The time scale for which the system stays in approximate quantum thermal equilibrium is an indication of the time scales for which the approximation method stays reasonable. This time scale turns out to be two orders of magnitude larger than the time scale for thermalization, in the range of couplings and temperatures studied. We also discuss simplifications of our method which are numerically more efficient and make a comparison with classical dynamics.

  4. Approximations of solutions to retarded integrodifferential equations

    Directory of Open Access Journals (Sweden)

    Dhirendra Bahuguna

    2004-11-01

    Full Text Available In this paper we consider a retarded integrodifferential equation and prove existence, uniqueness and convergence of approximate solutions. We also give some examples to illustrate the applications of the abstract results.

  5. APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION

    Directory of Open Access Journals (Sweden)

    Mădălina Roxana Buneci

    2016-12-01

    Full Text Available The purpose of this paper is provide a set of Maple procedures to construct approximate developments of a general surface of revolution generalizing the well-known gore method for sphere

  6. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in the approximation theory. The articles of this collection were originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  7. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with the help of a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.

  8. Approximate Flavor Symmetry in Supersymmetric Model

    OpenAIRE

    Tao, Zhijian

    1998-01-01

    We investigate the maximal approximate flavor symmetry in the framework of generic minimal supersymmetric standard model. We consider the low energy effective theory of the flavor physics with all the possible operators included. Spontaneous flavor symmetry breaking leads to the approximate flavor symmetry in Yukawa sector and the supersymmetry breaking sector. Fermion mass and mixing hierachies are the results of the hierachy of the flavor symmetry breaking. It is found that in this theory i...

  9. Pointwise approximation by elementary complete contractions

    CERN Document Server

    Magajna, Bojan

    2009-01-01

    A complete contraction on a C*-algebra A, which preserves all closed two sided ideals J, can be approximated pointwise by elementary complete contractions if and only if the induced map on the tensor product of B with A/J is contractive for every C*-algebra B, ideal J in A and C*-tensor norm on the tensor product. A lifting obstruction for such an approximation is also obtained.

  10. Polynomial approximation of functions in Sobolev spaces

    Science.gov (United States)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  11. Parallel local approximation MCMC for expensive models

    OpenAIRE

    Conrad, Patrick; Davis, Andrew; Marzouk, Youssef; Pillai, Natesh; Smith, Aaron

    2016-01-01

    Performing Bayesian inference via Markov chain Monte Carlo (MCMC) can be exceedingly expensive when posterior evaluations invoke the evaluation of a computationally expensive model, such as a system of partial differential equations. In recent work [Conrad et al. JASA 2015, arXiv:1402.1694] we described a framework for constructing and refining local approximations of such models during an MCMC simulation. These posterior--adapted approximations harness regularity of the model to reduce the c...

  12. The Actinide Transition Revisited by Gutzwiller Approximation

    Science.gov (United States)

    Xu, Wenhu; Lanata, Nicola; Yao, Yongxin; Kotliar, Gabriel

    2015-03-01

    We revisit the problem of the actinide transition using the Gutzwiller approximation (GA) in combination with the local density approximation (LDA). In particular, we compute the equilibrium volumes of the actinide series and reproduce the abrupt change of density found experimentally near plutonium as a function of the atomic number. We discuss how this behavior relates with the electron correlations in the 5 f states, the lattice structure, and the spin-orbit interaction. Our results are in good agreement with the experiments.

  13. Intuitionistic Fuzzy Automaton for Approximate String Matching

    Directory of Open Access Journals (Sweden)

    K.M. Ravi

    2014-03-01

    Full Text Available This paper introduces an intuitionistic fuzzy automaton model for computing the similarity between pairs of strings. The model details the possible edit operations needed to transform any input (observed string into a target (pattern string by providing a membership and non-membership value between them. In the end, an algorithm is given for approximate string matching and the proposed model computes the similarity and dissimilarity between the pair of strings leading to better approximation.

  14. Approximations for the Erlang Loss Function

    DEFF Research Database (Denmark)

    Mejlbro, Leif

    1998-01-01

    Theoretically, at least three formulae are needed for arbitrarily good approximates of the Erlang Loss Function. In the paper, for convenience five formulae are presented guaranteeing a relative error <1E-2, and methods are indicated for improving this bound.......Theoretically, at least three formulae are needed for arbitrarily good approximates of the Erlang Loss Function. In the paper, for convenience five formulae are presented guaranteeing a relative error

  15. Staying Thermal with Hartree Ensemble Approximations

    CERN Document Server

    Salle, M; Vink, Jeroen C

    2000-01-01

    Using Hartree ensemble approximations to compute the real time dynamics of scalar fields in 1+1 dimension, we find that with suitable initial conditions, approximate thermalization is achieved much faster than found in our previous work. At large times, depending on the interaction strength and temperature, the particle distribution slowly changes: the Bose-Einstein distribution of the particle densities develops classical features. We also discuss variations of our method which are numerically more efficient.

  16. Lattice quantum chromodynamics with approximately chiral fermions

    Energy Technology Data Exchange (ETDEWEB)

    Hierl, Dieter

    2008-05-15

    In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the {theta}{sup +} pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  17. Nonlinear approximation in alpha-modulation spaces

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2006-01-01

    The α-modulation spaces are a family of spaces that contain the Besov and modulation spaces as special cases. In this paper we prove that brushlet bases can be constructed to form unconditional and even greedy bases for the α-modulation spaces. We study m -term nonlinear approximation with brushlet...... bases, and give complete characterizations of the associated approximation spaces in terms of α-modulation spaces....

  18. On surface approximation using developable surfaces

    DEFF Research Database (Denmark)

    Chen, H. Y.; Lee, I. K.; Leopoldseder, s.

    1999-01-01

    We introduce a method for approximating a given surface by a developable surface. It will be either a G(1) surface consisting of pieces of cones or cylinders of revolution or a G(r) NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce...... robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding. (C) 1999 Academic Press....

  19. On surface approximation using developable surfaces

    DEFF Research Database (Denmark)

    Chen, H. Y.; Lee, I. K.; Leopoldseder, S.

    1998-01-01

    We introduce a method for approximating a given surface by a developable surface. It will be either a G_1 surface consisting of pieces of cones or cylinders of revolution or a G_r NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce...... robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding....

  20. Approximating perfection a mathematician's journey into the world of mechanics

    CERN Document Server

    Lebedev, Leonid P

    2004-01-01

    This is a book for those who enjoy thinking about how and why Nature can be described using mathematical tools. Approximating Perfection considers the background behind mechanics as well as the mathematical ideas that play key roles in mechanical applications. Concentrating on the models of applied mechanics, the book engages the reader in the types of nuts-and-bolts considerations that are normally avoided in formal engineering courses: how and why models remain imperfect, and the factors that motivated their development. The opening chapter reviews and reconsiders the basics of c

  1. Theory of Casimir Forces without the Proximity-Force Approximation.

    Science.gov (United States)

    Lapas, Luciano C; Pérez-Madrid, Agustín; Rubí, J Miguel

    2016-03-18

    We analyze both the attractive and repulsive Casimir-Lifshitz forces recently reported in experimental investigations. By using a kinetic approach, we obtain the Casimir forces from the power absorbed by the materials. We consider collective material excitations through a set of relaxation times distributed in frequency according to a log-normal function. A generalized expression for these forces for arbitrary values of temperature is obtained. We compare our results with experimental measurements and conclude that the model goes beyond the proximity-force approximation.

  2. Validity of the local approximation in iron pnictides and chalcogenides

    Science.gov (United States)

    Sémon, Patrick; Haule, Kristjan; Kotliar, Gabriel

    2017-05-01

    We introduce a methodology to treat different degrees of freedom at different levels of approximation. We use cluster DMFT (dynamical mean field theory) for the t2 g electrons and single site DMFT for the eg electrons to study the normal state of the iron pnictides and chalcogenides. In the regime of moderate mass renormalizations, the self-energy is very local, justifying the success of single site DMFT for these materials and for other Hunds metals. We solve the corresponding impurity model with CTQMC (continuous time quantum Monte Carlo) and find that the minus sign problem is not severe in regimes of moderate mass renormalization.

  3. Differential geometry of proteins. Helical approximations.

    Science.gov (United States)

    Louie, A H; Somorjai, R L

    1983-07-25

    We regard a protein molecule as a geometric object, and in a first approximation represent it as a regular parametrized space curve passing through its alpha-carbon atoms (the backbone). In an earlier paper we argued that the regular patterns of secondary structures of proteins (morphons) correspond to geodesics on minimal surfaces. In this paper we discuss methods of recognizing these morphons on space curves that represent the protein backbone conformation. The mathematical tool we employ is the differential geometry of curves and surfaces. We introduce a natural approximation of backbone space curves in terms of helical approximating elements and present a computer algorithm to implement the approximation. Simple recognition criteria are given for the various morphons of proteins. These are incorporated into our helical approximation algorithm, together with more non-local criteria for the recognition of beta-sheet topologies. The method and the algorithm are illustrated with several examples of representative proteins. Generalizations of the helical approximation method are considered and their possible implications for protein energetics are sketched.

  4. The Fine Structure of Dyadically Badly Approximable Numbers

    CERN Document Server

    Nilsson, Johan

    2010-01-01

    We consider badly approximable numbers in the case of dyadic diophantine approximation. For the unit circle $\\mathbb{S}$ and the smallest distance to an integer $\\|\\cdot\\|$ we give elementary proofs that the set $F(c) = \\{x \\in \\mathbb{S}: \\|2^nx\\| \\geq c, n\\geq 0\\}$ is a fractal set whose Hausdorff dimension depends continuously on $c$, is constant on intervals which form a set of Lebesgue measure 1 and is self-similar. Hence it has a fractal graph. Moreover, the dimension of $F(c)$ is zero if and only if $c\\geq 1-2\\tau$, where $\\tau$ is the Thue-Morse constant. We completely characterise the intervals where the dimension remains unchanged. As a consequence we can completely describe the graph of $ c\\mapsto \\dim_H \\{x\\in[0,1]: \\|x-\\frac{m}{2^n}\\|< \\frac{c}{2^n} \\textnormal{finitely often}\\}$.

  5. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions...

  6. Normal families related to shared sets

    Institute of Scientific and Technical Information of China (English)

    LV Feng-jiao; LI Jiang-tao

    2008-01-01

    We studied the normality criterion for families of meromorphic functions related to shared sets. Let F be a family of meromorphic functions on the unit disc (, a and b be distinct non-zero values, S={a,b}, and k be a positive integer. If for every f∈F, i) the zeros of f(z) have a multiplicity of at least k+1, and ii) , then F is normal on (. At the same time, the corresponding results of normal function are also proved.

  7. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    Science.gov (United States)

    Ito, K.

    1984-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A charactristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  8. Legendre-tau approximation for functional differential equations. III - Eigenvalue approximations and uniform stability

    Science.gov (United States)

    Ito, K.

    1985-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  9. Normalized Compression Distance of Multiples

    CERN Document Server

    Cohen, Andrew R

    2012-01-01

    Normalized compression distance (NCD) is a parameter-free similarity measure based on compression. The NCD between pairs of objects is not sufficient for all applications. We propose an NCD of finite multisets (multiples) of objacts that is metric and is better for many applications. Previously, attempts to obtain such an NCD failed. We use the theoretical notion of Kolmogorov complexity that for practical purposes is approximated from above by the length of the compressed version of the file involved, using a real-world compression program. We applied the new NCD for multiples to retinal progenitor cell questions that were earlier treated with the pairwise NCD. Here we get significantly better results. We also applied the NCD for multiples to synthetic time sequence data. The preliminary results are as good as nearest neighbor Euclidean classifier.

  10. Liver fibrosis progression in HIV/hepatitis C virus coinfected patients with normal aminotransferases levels

    Directory of Open Access Journals (Sweden)

    Fábio Heleno de Lima Pace

    2012-08-01

    Full Text Available INTRODUCTION: Approximately 30% of hepatitis C virus (HCV monoinfected patients present persistently normal alanine aminotransferase (ALT levels. Most of these patients have a slow progression of liver fibrosis. Studies have demonstrated the rate of liver fibrosis progression in hepatitis C virus-human immunodeficiency virus (HCV-HIV coinfected patients is faster than in patients infected only by HCV. Few studies have evaluated the histological features of chronic hepatitis C in HIV-infected patients with normal ALT levels. METHODS: HCV-HIV coinfected patients (HCV-RNA and anti-HIV positive with known time of HCV infection (intravenous drugs users were selected. Patients with hepatitis B surface antigen (HBsAg positive or hepatitis C treatment before liver biopsy were excluded. Patients were considered to have a normal ALT levels if they had at least 3 normal determinations in the previous 6 months prior to liver biopsy. All patients were submitted to liver biopsy and METAVIR scale was used. RESULTS: Of 50 studied patients 40 (80% were males. All patients were treated with antiretroviral therapy. The ALT levels were normal in 13 (26% patients. HCV-HIV co-infected patients with normal ALT levels had presented means of the liver fibrosis stages (0.77±0.44 versus 1.86±1.38; p<0.001 periportal inflammatory activity (0.62±0.77 versus 2.24±1.35; p<0.001 and liver fibrosis progression rate (0.058±0.043 fibrosis unit/year versus 0.118±0.102 fibrosis unit/year significantly lower as compared to those with elevated ALT. CONCLUSIONS: HCV-HIV coinfected patients with persistently normal ALTs showed slower progression of liver fibrosis. In these patients the development of liver cirrhosis is improbable.

  11. Normal pressure hydrocephalus

    Science.gov (United States)

    Hydrocephalus - occult; Hydrocephalus - idiopathic; Hydrocephalus - adult; Hydrocephalus - communicating; Dementia - hydrocephalus; NPH ... Ferri FF. Normal pressure hydrocephalus. In: Ferri FF, ed. ... Elsevier; 2016:chap 648. Rosenberg GA. Brain edema and disorders ...

  12. Normality in analytical psychology.

    Science.gov (United States)

    Myers, Steve

    2013-12-01

    Although C.G. Jung's interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault's criticism, had Foucault chosen to review Jung's work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault's own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung's disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity.

  13. Normal Functioning Family

    Science.gov (United States)

    ... Spread the Word Shop AAP Find a Pediatrician Family Life Medical Home Family Dynamics Adoption & Foster Care ... Español Text Size Email Print Share Normal Functioning Family Page Content Article Body Is there any way ...

  14. Idiopathic Normal Pressure Hydrocephalus

    OpenAIRE

    2016-01-01

    Idiopathic normal pressure hydrocephalus (iNPH) is a potentially reversible neurodegenerative disease commonly characterized by a triad of dementia, gait, and urinary disturbance. Advancements in diagnosis and treatment have aided in properly identifying and improving symptoms in patients. However, a large proportion of iNPH patients remain either undiagnosed or misdiagnosed. Using PubMed search engine of keywords “normal pressure hydrocephalus,” “diagnosis,” “shunt treatment,” “biomarkers,” ...

  15. Tree-fold loop approximation of AMD

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Akira [Tohoku Univ., Sendai (Japan). Faculty of Science

    1997-05-01

    AMD (antisymmetrized molecular dynamics) is a frame work for describing a wave function of nucleon multi-body system by Slater determinant of Gaussian wave flux, and a theory for integrally describing a wide range of nuclear reactions such as intermittent energy heavy ion reaction, nucleon incident reaction and so forth. The aim of this study is induction on approximation equation of expected value, {nu}, in correlation capable of calculation with time proportional A (exp 3) (or lower), and to make AMD applicable to the heavier system such as Au+Au. As it must be avoided to break characteristics of AMD, it needs not to be anxious only by approximating the {nu}-value. However, in order to give this approximation any meaning, error of this approximation will have to be sufficiently small in comparison with bond energy of atomic nucleus and smaller than 1 MeV/nucleon. As the absolute expected value in correlation may be larger than 50 MeV/nucleon, the approximation is required to have a high accuracy within 2 percent. (G.K.)

  16. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  17. Approximation of Bivariate Functions via Smooth Extensions

    Science.gov (United States)

    Zhang, Zhihua

    2014-01-01

    For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316

  18. Approximation Limits of Linear Programs (Beyond Hierarchies)

    CERN Document Server

    Braun, Gábor; Pokutta, Sebastian; Steurer, David

    2012-01-01

    We develop a framework for approximation limits of polynomial-size linear programs from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any linear program as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^{1/2-eps})-approximations for CLIQUE require linear programs of size 2^{n^\\Omega(eps)}. (This lower bound applies to linear programs using a certain encoding of CLIQUE as a linear optimization problem.) Moreover, we establish a similar result for approximations of semidefinite programs by linear programs. Our main ingredient is a quantitative improvement of Razborov's rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of certain perturbations of the unique disjointness matrix.

  19. Discontinuous Galerkin Methods with Trefftz Approximation

    CERN Document Server

    Kretzschmar, Fritz; Tsukerman, Igor; Weiland, Thomas

    2013-01-01

    We present a novel Discontinuous Galerkin Finite Element Method for wave propagation problems. The method employs space-time Trefftz-type basis functions that satisfy the underlying partial differential equations and the respective interface boundary conditions exactly in an element-wise fashion. The basis functions can be of arbitrarily high order, and we demonstrate spectral convergence in the $L_2$-norm. In this context, spectral convergence is obtained with respect to the approximation error in the entire space-time domain of interest, i.e. in space and time simultaneously. Formulating the approximation in terms of a space-time Trefftz basis makes high order time integration an inherent property of the method and clearly sets it apart from methods that employ a high order approximation in space only.

  20. Approximating light rays in the Schwarzschild field

    CERN Document Server

    Semerak, Oldrich

    2014-01-01

    A short formula is suggested which approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely with those following from exact formulas for small $M$, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behaviour is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable, and very accurate, for practical solving of the ray-deflection exercise.

  1. On the approximate zero of Newton method

    Institute of Scientific and Technical Information of China (English)

    黄正达

    2003-01-01

    A judgment criterion to guarantee a point to be a Chen's approximate zero of Newton's method for solving a nonlinear equation is sought by dominating sequence techniques. The criterion is based on the fact that the dominating function may have only one simple positive zero, assuming that the operator is weak Lipschitz continuous, which is much more relaxed and can be checked much more easily than Lipschitz continuity in practice. It is demonstrated that a Chen's approximate zero may not be a Smale's approximate zero. The error estimate obtained indicates the convergence order when we use |f(x)| < ε to stop computation in software. The result can also be applied to solving partial differential and integral equations.
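    The |f(x)| < ε stopping rule discussed above is easy to see in a bare-bones Newton iteration; the sketch below (the test function and tolerance are hypothetical, not from the paper) illustrates it:

```python
import math

def newton(f, df, x0, eps=1e-12, max_iter=50):
    """Newton iteration; stop as soon as |f(x)| < eps, the rule discussed above."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:
            break
        x -= fx / df(x)
    return x

# Example: the positive zero of f(x) = x^2 - 2 is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
print(root)
```

Since |f(x)| < ε only bounds the residual, not the distance to the true zero, the error estimate in the abstract is what connects this practical test to an actual convergence order.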

  3. Optical pulse propagation with minimal approximations

    Science.gov (United States)

    Kinsler, Paul

    2010-01-01

    Propagation equations for optical pulses are needed to assist in describing applications in ever more extreme situations—including those in metamaterials with linear and nonlinear magnetic responses. Here I show how to derive a single first-order propagation equation using a minimum of approximations and a straightforward “factorization” mathematical scheme. The approach generates exact coupled bidirectional equations, after which it is clear that the description can be reduced to a single unidirectional first-order wave equation by means of a simple “slow evolution” approximation, where the optical pulse changes little over the distance of one wavelength. It also allows a direct term-to-term comparison of an exact bidirectional theory with the approximate unidirectional theory.

  4. Rough interfaces beyond the Gaussian approximation

    CERN Document Server

    Caselle, M; Gliozzi, F; Hasenbusch, M; Pinn, K; Vinti, S; Caselle, M; Gliozzi, F; Fiore, R; Hasenbusch, M; Pinn, K; Vinti, S

    1994-01-01

    We compare predictions of the Capillary Wave Model beyond its Gaussian approximation with Monte Carlo results for the energy gap and the interface energy of the 3D Ising model in the scaling region. Our study reveals that the finite size effects of these quantities are well described by the Capillary Wave Model, expanded to two-loop order (one order beyond the Gaussian approximation).

  5. Implementing regularization implicitly via approximate eigenvector computation

    CERN Document Server

    Mahoney, Michael W

    2010-01-01

    Regularization is a powerful technique for extracting useful information from noisy data. Typically, it is implemented by adding some sort of norm constraint to an objective function and then exactly optimizing the modified objective function. This procedure typically leads to optimization problems that are computationally more expensive than the original problem, a fact that is clearly problematic if one is interested in large-scale applications. On the other hand, a large body of empirical work has demonstrated that heuristics, and in some cases approximation algorithms, developed to speed up computations sometimes have the side-effect of performing regularization implicitly. Thus, we consider the question: What is the regularized optimization objective that an approximation algorithm is exactly optimizing? We address this question in the context of computing approximations to the smallest nontrivial eigenvector of a graph Laplacian; and we consider three random-walk-based procedures: one based on the heat ...

  6. On approximation of Markov binomial distributions

    CERN Document Server

    Xia, Aihua; 10.3150/09-BEJ194

    2010-01-01

    For a Markov chain $\\mathbf{X}=\\{X_i,i=1,2,...,n\\}$ with the state space $\\{0,1\\}$, the random variable $S:=\\sum_{i=1}^nX_i$ is said to follow a Markov binomial distribution. The exact distribution of $S$, denoted $\\mathcal{L}S$, is very computationally intensive for large $n$ (see Gabriel [Biometrika 46 (1959) 454--460] and Bhat and Lal [Adv. in Appl. Probab. 20 (1988) 677--680]) and this paper concerns suitable approximate distributions for $\\mathcal{L}S$ when $\\mathbf{X}$ is stationary. We conclude that the negative binomial and binomial distributions are appropriate approximations for $\\mathcal{L}S$ when $\\operatorname {Var}S$ is greater than and less than $\\mathbb{E}S$, respectively. Also, due to the unique structure of the distribution, we are able to derive explicit error estimates for these approximations.
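    As a rough illustration of the regimes described above, one can simulate S for a stationary two-state chain and compare Var S with E S; a positively correlated ("sticky") chain is overdispersed, the case in which the negative binomial approximation is said to be appropriate. A hedged sketch with hypothetical transition probabilities:

```python
import random

def markov_binomial(n, p01, p10, rng):
    """One draw of S = X_1 + ... + X_n for a stationary {0,1} Markov chain.

    p01 = P(X_{i+1}=1 | X_i=0) and p10 = P(X_{i+1}=0 | X_i=1); the stationary
    probability of state 1 is pi1 = p01 / (p01 + p10).
    """
    pi1 = p01 / (p01 + p10)
    x = 1 if rng.random() < pi1 else 0
    s = x
    for _ in range(n - 1):
        if x == 0:
            x = 1 if rng.random() < p01 else 0
        else:
            x = 0 if rng.random() < p10 else 1
        s += x
    return s

rng = random.Random(0)
n, p01, p10 = 50, 0.1, 0.3                 # sticky chain: lag-1 correlation 1 - p01 - p10 = 0.6
pi1 = p01 / (p01 + p10)                     # = 0.25, so E S = n * pi1 = 12.5
draws = [markov_binomial(n, p01, p10, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean)   # close to 12.5
print(var)    # well above both the binomial variance n*pi1*(1-pi1) = 9.375 and E S,
              # the regime where the negative binomial approximation is appropriate
```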

  7. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies of Huckle and Grote and of Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense that the convergence rate is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  8. Numerical approximation of partial differential equations

    CERN Document Server

    Bartels, Sören

    2016-01-01

    Finite element methods for approximating partial differential equations have reached a high degree of maturity, and are an indispensable tool in science and technology. This textbook aims at providing a thorough introduction to the construction, analysis, and implementation of finite element methods for model problems arising in continuum mechanics. The first part of the book discusses elementary properties of linear partial differential equations along with their basic numerical approximation, the functional-analytical framework for rigorously establishing existence of solutions, and the construction and analysis of basic finite element methods. The second part is devoted to the optimal adaptive approximation of singularities and the fast iterative solution of linear systems of equations arising from finite element discretizations. In the third part, the mathematical framework for analyzing and discretizing saddle-point problems is formulated, corresponding finite element methods are analyzed, and particular ...

  9. Time Stamps for Fixed-Point Approximation

    DEFF Research Database (Denmark)

    Damian, Daniela

    2001-01-01

    Time stamps were introduced in Shivers's PhD thesis for approximating the result of a control-flow analysis. We show them to be suitable for computing program analyses where the space of results (e.g., control-flow graphs) is large. We formalize time-stamping as a top-down, fixed-point approximation algorithm which maintains a single copy of intermediate results. We then prove the correctness of this algorithm.

  10. Wasted Food, Wasted Energy: The Embedded Energy in Food Waste in the United States

    OpenAIRE

    Cuéllar, Amanda D.; Michael E. Webber

    2010-01-01

    This work estimates the energy embedded in wasted food annually in the United States. We calculated the energy intensity of food production from agriculture, transportation, processing, food sales, storage, and preparation for 2007 as 8080 ± 760 trillion BTU. In 1995 approximately 27% of edible food was wasted. Synthesizing these food loss figures with our estimate of energy consumption for different food categories and food production steps, while normalizing for different production volumes...

  11. Statistical normalization techniques for magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Russell T. Shinohara

    2014-01-01

    While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.
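    As a toy illustration of statistical intensity normalization (plain z-scoring only; the criteria and multisequence methods proposed in the paper go well beyond this), two scans of the same pattern on different arbitrary scales become comparable after normalization:

```python
import math

def z_score_normalize(intensities):
    """Map arbitrary-unit intensities to zero mean and unit variance so that
    values are comparable across scans (a simple statistical normalization)."""
    n = len(intensities)
    mean = sum(intensities) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in intensities) / n)
    return [(v - mean) / sd for v in intensities]

# Two hypothetical 'scans' of the same tissue pattern on different arbitrary scales:
scan_a = [100.0, 120.0, 140.0, 160.0]
scan_b = [1000.0, 1200.0, 1400.0, 1600.0]
print(z_score_normalize(scan_a))
print(z_score_normalize(scan_b))   # same values after normalization
```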

  12. Exponential Approximations Using Fourier Series Partial Sums

    Science.gov (United States)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piecewise smooth, $2\pi$-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within $O(N^{-M-2})$, and the associated jump of the kth derivative of f is approximated to within $O(N^{-M-1+k})$, as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  13. Dispersion in unit disks

    CERN Document Server

    Dumitrescu, Adrian

    2009-01-01

    We present two new approximation algorithms with (improved) constant ratios for selecting $n$ points in $n$ unit disks such that the minimum pairwise distance among the points is maximized. (I) A very simple $O(n \log{n})$-time algorithm with ratio 0.5110 for disjoint unit disks. In combination with an algorithm of Cabello \cite{Ca07}, it yields an $O(n^2)$-time algorithm with ratio 0.4487 for dispersion in $n$ not necessarily disjoint unit disks. (II) A more sophisticated LP-based algorithm with ratio 0.6495 for disjoint unit disks that uses a linear number of variables and constraints, and runs in polynomial time. The algorithm introduces a novel technique which combines linear programming and projections for approximating distances. The previous best approximation ratio for disjoint unit disks was 1/2. Our results give a partial answer to an open question raised by Cabello \cite{Ca07}, who asked whether 1/2 could be improved.

  14. Extending the Eikonal Approximation to Low Energy

    CERN Document Server

    Capel, Pierre; Ogata, Kazuyuki

    2014-01-01

    E-CDCC and DEA, two eikonal-based reaction models, are compared to CDCC at low energy (e.g., 20 AMeV) to study their behaviour in the regime where the eikonal approximation is supposed to fail. We confirm that these models lack the Coulomb deflection of the projectile by the target. We show that a hybrid model, built on the CDCC framework at low angular momenta and the eikonal approximation at larger angular momenta, gives perfect agreement with CDCC. An empirical shift in impact parameter can also be used reliably to simulate this missing Coulomb deflection.

  15. Approximately-Balanced Drilling in Daqing Oilfield

    Institute of Scientific and Technical Information of China (English)

    Xia Bairu; Zheng Xiuhua; Li Guoqing; Tian Tuo

    2004-01-01

    The Daqing oilfield is a multilayered, heterogeneous oilfield in which pressures differ within the same vertical profile, which causes many problems when drilling adjustment wells. The approximately-balanced drilling technique has been developed and proved to be efficient and successful in the Daqing oilfield. This paper discusses the application of the approximately-balanced drilling technique under the condition of multilayered pressures in the Daqing oilfield, including the prediction of formation pressure, the pressure discharge technique for the drilling well and the control of the density of the drilling fluid.

  16. Faddeev Random Phase Approximation for Molecules

    CERN Document Server

    Degroote, Matthias; Barbieri, Carlo

    2010-01-01

    The Faddeev Random Phase Approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to the two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order Algebraic Diagrammatic Construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are described at the level of the Random Phase Approximation. This paper presents the first results for diatomic molecules at equilibrium geometry. The behavior of the method in the dissociation limit is also investigated.

  17. An Approximate Bayesian Fundamental Frequency Estimator

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2012-01-01

    Joint fundamental frequency and model order estimation is an important problem in several applications such as speech and music processing. In this paper, we develop an approximate estimation algorithm of these quantities using Bayesian inference. The inference about the fundamental frequency and the model order is based on a probability model which corresponds to a minimum of prior information. From this probability model, we give the exact posterior distributions on the fundamental frequency and the model order, and we also present analytical approximations of these distributions which lower...

  18. Approximate Controllability of Fractional Integrodifferential Evolution Equations

    Directory of Open Access Journals (Sweden)

    R. Ganesh

    2013-01-01

    This paper addresses the issue of approximate controllability for a class of control systems represented by nonlinear fractional integrodifferential equations with nonlocal conditions. By using semigroup theory, p-mean continuity and fractional calculus, a set of sufficient conditions is formulated and proved for the nonlinear fractional control systems. More precisely, the results are established under the assumption that the corresponding linear system is approximately controllable and that the functions satisfy non-Lipschitz conditions. The results generalize and improve some known results.

  19. Excluded-Volume Approximation for Supernova Matter

    CERN Document Server

    Yudin, A V

    2014-01-01

    A general scheme of the excluded-volume approximation as applied to multicomponent systems with an arbitrary degree of degeneracy has been developed. This scheme also admits an allowance for additional interactions between the components of a system. A specific form of the excluded-volume approximation for investigating supernova matter at subnuclear densities has been found from comparison with the hard-sphere model. The possibility of describing the phase transition to uniform nuclear matter in terms of the formalism under consideration is discussed.

  20. Generalized companion matrix for approximate GCD

    CERN Document Server

    Boito, Paola

    2011-01-01

    We study a variant of the univariate approximate GCD problem, where the coefficients of one polynomial f(x) are known exactly, whereas the coefficients of the second polynomial g(x) may be perturbed. Our approach relies on the properties of the matrix which describes the operator of multiplication by g in the quotient ring C[x]/(f). In particular, the structure of the null space of the multiplication matrix contains all the essential information about GCD(f, g). Moreover, the multiplication matrix exhibits a displacement structure that allows us to design a fast algorithm for approximate GCD computation with quadratic complexity w.r.t. the polynomial degrees.

  1. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate inference.

  2. Static correlation beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian Sommer

    2014-01-01

    We investigate various approximations to the correlation energy of a H2 molecule in the dissociation limit, where the ground state is poorly described by a single Slater determinant. The correlation energies are derived from the density response function, and it is shown that response functions derived from Hedin's equations (Random Phase Approximation (RPA), Time-Dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly.

  3. Approximate formulas for moderately small eikonal amplitudes

    Science.gov (United States)

    Kisselev, A. V.

    2016-08-01

    We consider the eikonal approximation for moderately small scattering amplitudes. To find numerical estimates of these approximations, we derive formulas that contain no Bessel functions and consequently no rapidly oscillating integrands. To obtain these formulas, we study improper integrals of the first kind containing products of the Bessel functions $J_0(z)$. We generalize the expression with four functions $J_0(z)$ and also find expressions for the integrals with the product of five and six Bessel functions. We generalize a known formula for the improper integral with two functions $J_\upsilon(az)$ to the case of noninteger $\upsilon$ and complex $a$.

  4. The exact renormalization group and approximate solutions

    CERN Document Server

    Morris, T R

    1994-01-01

    We investigate the structure of Polchinski's formulation of the flow equations for the continuum Wilson effective action. Reinterpretations in terms of IR cutoff Green's functions are given. A promising non-perturbative approximation scheme is derived by carefully taking the sharp cutoff limit and expanding in `irrelevancy' of operators. We illustrate with two simple models of four dimensional $\lambda \varphi^4$ theory: the cactus approximation, and a model incorporating the first irrelevant correction to the renormalized coupling. The qualitative and quantitative behaviour give confidence in a fuller use of this method for obtaining accurate results.

  5. Approximating W projection as a separable kernel

    Science.gov (United States)

    Merry, Bruce

    2016-02-01

    W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.

  6. BEST APPROXIMATION BY DOWNWARD SETS WITH APPLICATIONS

    Institute of Scientific and Technical Information of China (English)

    H.Mohebi; A. M. Rubinov

    2006-01-01

    We develop a theory of downward sets for a class of normed ordered spaces. We study best approximation in a normed ordered space X by elements of downward sets, and give necessary and sufficient conditions for any element of best approximation by a closed downward subset of X. We also characterize strictly downward subsets of X, and prove that a downward subset of X is strictly downward if and only if each of its boundary points is Chebyshev. The results obtained are used for the examination of some Chebyshev pairs (W, x), where x ∈ X and W is a closed downward subset of X.

  7. Local density approximations from finite systems

    CERN Document Server

    Entwistle, Mike; Wetherell, Jack; Longstaff, Bradley; Ramsden, James; Godby, Rex

    2016-01-01

    The local density approximation (LDA) constructed through quantum Monte Carlo calculations of the homogeneous electron gas (HEG) is the most common approximation to the exchange-correlation functional in density functional theory. We introduce an alternative set of LDAs constructed from slab-like systems of one, two and three electrons that resemble the HEG within a finite region, and illustrate the concept in one dimension. Comparing with the exact densities and Kohn-Sham potentials for various test systems, we find that the LDAs give a good account of the self-interaction correction, but are less reliable when correlation is stronger or currents flow.

  8. Two simple approximations to the distributions of quadratic forms.

    Science.gov (United States)

    Yuan, Ke-Hai; Bentler, Peter M

    2010-05-01

    Many test statistics are asymptotically equivalent to quadratic forms of normal variables, which are further equivalent to $T = \sum_{i=1}^d \lambda_i z_i^2$ with the $z_i$ being independent and following N(0,1). Two approximations to the distribution of T have been implemented in popular software and are widely used in evaluating various models. It is important to know how accurate these approximations are when compared to each other and to the exact distribution of T. The paper systematically studies the quality of the two approximations and examines the effect of the $\lambda_i$ and the degrees of freedom d by analysis and Monte Carlo. The results imply that the adjusted distribution for T can be as good as knowing its exact distribution. When the coefficient of variation of the $\lambda_i$ is small, the rescaled statistic $T_R = dT/\sum_{i=1}^d \lambda_i$ is also adequate for practical model inference. But comparing $T_R$ against $\chi^2_d$ will inflate type I errors when substantial differences exist among the $\lambda_i$, especially when d is also large.
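    A quick Monte Carlo sanity check of this setup can be written in a few lines; the eigenvalues below are arbitrary illustrative values, not from the paper:

```python
import random

def sample_T(lams, rng):
    """One draw of T = sum_i lam_i * z_i**2, with z_i ~ N(0, 1) i.i.d."""
    return sum(lam * rng.gauss(0.0, 1.0) ** 2 for lam in lams)

rng = random.Random(1)
lams = [0.5, 1.0, 1.5, 2.0]      # hypothetical eigenvalues, for illustration only
d = len(lams)
draws = [sample_T(lams, rng) for _ in range(50000)]
mean_T = sum(draws) / len(draws)
print(mean_T)                     # close to sum(lams) = 5.0, since E z_i^2 = 1

# The rescaled statistic T_R = d * T / sum(lams) matches chi^2_d in its mean,
# but its higher moments differ when the lam_i are unequal.
mean_TR = d * mean_T / sum(lams)
print(mean_TR)                    # close to d = 4
```

The mismatch in higher moments is exactly why, per the abstract, comparing the rescaled statistic against a chi-square reference inflates type I errors when the eigenvalues differ substantially.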

  9. Rational approximations and quantum algorithms with postselection

    NARCIS (Netherlands)

    Mahadev, U.; de Wolf, R.

    2015-01-01

    We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We gi

  10. Kravchuk functions for the finite oscillator approximation

    Science.gov (United States)

    Atakishiyev, Natig M.; Wolf, Kurt Bernardo

    1995-01-01

    Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.

  11. Optical bistability without the rotating wave approximation

    Energy Technology Data Exchange (ETDEWEB)

    Sharaby, Yasser A., E-mail: Yasser_Sharaby@hotmail.co [Physics Department, Faculty of Applied Sciences, Suez Canal University, Suez (Egypt); Joshi, Amitabh, E-mail: ajoshi@eiu.ed [Department of Physics, Eastern Illinois University, Charleston, IL 61920 (United States); Hassan, Shoukry S., E-mail: Shoukryhassan@hotmail.co [Mathematics Department, College of Science, University of Bahrain, P.O. Box 32038 (Bahrain)

    2010-04-26

    Optical bistability for a two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA), using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to the first harmonic. The first harmonic output field component exhibits reversed or closed loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.

  12. Markov operators, positive semigroups and approximation processes

    CERN Document Server

    Altomare, Francesco; Leonessa, Vita; Rasa, Ioan

    2015-01-01

    In recent years several investigations have been devoted to the study of large classes of (mainly degenerate) initial-boundary value evolution problems in connection with the possibility to obtain a constructive approximation of the associated positive C_0-semigroups. In this research monograph we present the main lines of a theory which finds its root in the above-mentioned research field.

  13. Image Compression Via a Fast DCT Approximation

    NARCIS (Netherlands)

    Bayer, F. M.; Cintra, R. J.

    2010-01-01

    Discrete transforms play an important role in digital signal processing. In particular, due to its transform domain energy compaction properties, the discrete cosine transform (DCT) is pivotal in many image processing problems. This paper introduces a numerical approximation method for the DCT based

  14. Approximation algorithms for planning and control

    Science.gov (United States)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  15. Large hierarchies from approximate R symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Kappl, Rolf; Ratz, Michael [Technische Univ. Muenchen, Garching (Germany). Physik Dept. T30; Nilles, Hans Peter [Bonn Univ. (Germany). Bethe Zentrum fuer Theoretische Physik und Physikalisches Inst.; Ramos-Sanchez, Saul; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Vaudrevange, Patrick K.S. [Ludwig-Maximilians-Univ. Muenchen (Germany). Arnold Sommerfeld Zentrum fuer Theoretische Physik

    2008-12-15

    We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales. (orig.)

  16. Strong washout approximation to resonant leptogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Garbrecht, Bjoern; Gautier, Florian; Klaric, Juraj [Physik Department T70, James-Franck-Strasse, Technische Universitaet Muenchen, 85748 Garching (Germany)

    2016-07-01

    We study resonant leptogenesis with two sterile neutrinos with masses M_1 and M_2, Yukawa couplings Y_1 and Y_2, and a single active flavor. Specifically, we focus on the strong washout regime, where the decay width dominates the mass splitting of the two sterile neutrinos. We show that one can approximate the effective decay asymmetry by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y_1|² + |Y_2|²), Δ = 4(M_1 − M_2)/(M_1 + M_2), and φ = arg(Y_2/Y_1), and establish criteria for the validity of this approximation. We compare the approximate results with numerical ones, obtained by solving the mixing and oscillations of the sterile neutrinos. We generalize the formula to the case of several active flavors, and demonstrate how it can be used to calculate the lepton asymmetry in phenomenological scenarios which are in agreement with the neutrino oscillation data. We find that the late-time limit is an applicable approximation throughout the phenomenologically viable parameter space.
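The late-time asymmetry quoted in the abstract is simple enough to evaluate directly; the following sketch just transcribes the formula (the function name and calling convention are ours):

```python
import cmath
import math

def late_time_asymmetry(Y1, Y2, M1, M2):
    """Late-time effective decay asymmetry from the abstract:
    eps = X sin(2 phi) / (X^2 + sin^2 phi), with
    X     = 8 pi Delta / (|Y1|^2 + |Y2|^2),
    Delta = 4 (M1 - M2) / (M1 + M2),
    phi   = arg(Y2 / Y1)."""
    delta = 4.0 * (M1 - M2) / (M1 + M2)
    X = 8.0 * math.pi * delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
    phi = cmath.phase(complex(Y2) / complex(Y1))
    return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)
```

The asymmetry vanishes when the Yukawas are relatively real (φ = 0) or the masses are exactly degenerate (X = 0), and is maximal at the resonance X = |sin φ|.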

  17. Lower Bound Approximation for Elastic Buckling Loads

    NARCIS (Netherlands)

    Vrouwenvelder, A.; Witteveen, J.

    1975-01-01

    An approximate method for the elastic buckling analysis of two-dimensional frames is introduced. The method can conveniently be explained with reference to a physical interpretation: In the frame every member is replaced by two new members: - a flexural member without extensional rigidity to transmi

  18. Approximate Equilibrium Problems and Fixed Points

    Directory of Open Access Journals (Sweden)

    H. Mazaheri

    2013-01-01

    Full Text Available We find a common element of the set of fixed points of a map and the set of solutions of an approximate equilibrium problem in a Hilbert space. Then, we show that one of the sequences weakly converges. Also we obtain some theorems about equilibrium problems and fixed points.

  19. Approximations in diagnosis: motivations and techniques

    NARCIS (Netherlands)

    Harmelen, van F.A.H.; Teije, A. ten

    1995-01-01

    We argue that diagnosis should not be seen as solving a problem with a unique definition, but rather that there exists a whole space of reasonable notions of diagnosis. These notions can be seen as mutual approximations. We present a number of reasons for choosing among different notions of diagnos

  20. Eignets for function approximation on manifolds

    CERN Document Server

    Mhaskar, H N

    2009-01-01

    Let $\XX$ be a compact, smooth, connected, Riemannian manifold without boundary, and $G:\XX\times\XX\to\RR$ a kernel. Analogous to a radial basis function network, an eignet is an expression of the form $\sum_{j=1}^M a_jG(\circ,y_j)$, where $a_j\in\RR$, $y_j\in\XX$, $1\le j\le M$. We describe a deterministic, universal algorithm for constructing an eignet for approximating functions in $L^p(\mu;\XX)$ for a general class of measures $\mu$ and kernels $G$. Our algorithm yields linear operators. Using the minimal separation amongst the centers $y_j$ as the cost of approximation, we give modulus of smoothness estimates for the degree of approximation by our eignets, and show by means of a converse theorem that these are the best possible for every \emph{individual function}. We also give estimates on the coefficients $a_j$ in terms of the norm of the eignet. Finally, we demonstrate that if any sequence of eignets satisfies the optimal estimates for the degree of approximation of a smooth function, measured in ter...
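A minimal numerical caricature of an eignet-type expansion, taking the circle as the manifold, a smooth periodic kernel, and least-squares coefficients (all of these are our illustrative choices, not the paper's deterministic construction):

```python
import numpy as np

# Toy "eignet" on the circle: approximate a target by sum_j a_j G(., y_j)
# with 16 equally spaced centers y_j and a smooth periodic kernel.
M = 16
y = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)    # centers on the circle
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)  # evaluation grid

def G(s, u):
    return np.exp(np.cos(s - u))   # smooth, periodic kernel of the distance s - u

target = np.sin(3.0 * t) + 0.5 * np.cos(t)
A = G(t[:, None], y[None, :])                     # A[i, j] = G(t_i, y_j)
a, *_ = np.linalg.lstsq(A, target, rcond=None)    # coefficients a_j
rel_err = np.linalg.norm(A @ a - target) / np.linalg.norm(target)
```

Because the target contains only low Fourier modes and the kernel's Fourier coefficients decay rapidly, a small number of centers already reproduces it to high accuracy.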

  1. Approximations in diagnosis: motivations and techniques

    NARCIS (Netherlands)

    Harmelen, van F.A.H.; Teije, A. ten

    1995-01-01

    We argue that diagnosis should not be seen as solving a problem with a unique definition, but rather that there exists a whole space of reasonable notions of diagnosis. These notions can be seen as mutual approximations. We present a number of reasons for choosing among different notions of

  2. Empirical progress and nomic truth approximation revisited

    NARCIS (Netherlands)

    Kuipers, Theodorus

    2014-01-01

    In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of

  3. Faddeev Random Phase Approximation applied to molecules

    CERN Document Server

    Degroote, Matthias

    2012-01-01

    This Ph.D. thesis derives the equations of the Faddeev Random Phase Approximation (FRPA) and applies the method to a set of small atoms and molecules. The occurrence of RPA instabilities in the dissociation limit is addressed in molecules and through the study of the Hubbard molecule as a test system with reduced dimensionality.

  4. Fostering Formal Commutativity Knowledge with Approximate Arithmetic.

    Directory of Open Access Journals (Sweden)

    Sonja Maria Hansen

    Full Text Available How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the use of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school.

  5. Approximate fixed point of Reich operator

    Directory of Open Access Journals (Sweden)

    M. Saha

    2013-01-01

    Full Text Available In the present paper, we study the existence of approximate fixed points for the Reich operator, together with the property that the ε-fixed points are concentrated in a set whose diameter tends to zero as ε → 0.

  6. Approximation of Aggregate Losses Using Simulation

    Directory of Open Access Journals (Sweden)

    Mohamed A. Mohamed

    2010-01-01

    Full Text Available Problem statement: The modeling of aggregate losses is one of the main objectives in actuarial theory and practice, especially in the process of making important business decisions regarding various aspects of insurance contracts. The aggregate losses over a fixed time period are often modeled by mixing the distributions of loss frequency and severity, and the distribution resulting from this approach is called a compound distribution. However, in many cases, realistic probability distributions for loss frequency and severity cannot be combined mathematically to derive the compound distribution of aggregate losses. Approach: This study aimed to approximate the aggregate loss distribution using a simulation approach. In particular, the approximation of aggregate losses was based on a compound Poisson-Pareto distribution. The effects of deductible and policy limit on the individual losses as well as the aggregate losses were also investigated. Results: Based on the results, the approximation of the compound Poisson-Pareto distribution via the simulation approach agreed with the theoretical mean and variance of each of the loss frequency, loss severity and aggregate losses. Conclusion: This study approximated the compound distribution of aggregate losses using a simulation approach. The investigation of retained losses and insurance claims allows an insured or a company to select an insurance contract that fulfills its requirements. In particular, if a company wants an additional risk reduction, it can compare alternative policies by considering the worthiness of the additional expected total cost, which can be estimated via the simulation approach.
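A minimal sketch of the simulation approach for a compound Poisson-Pareto model, assuming a Lomax (shifted-Pareto) severity with shape `alpha` and scale `theta`; the parameter names and the inversion samplers are our choices:

```python
import math
import random

def simulate_aggregate_losses(lam, alpha, theta, n_sims=20000, seed=1):
    """Monte Carlo sketch of compound Poisson-Pareto aggregate losses:
    N ~ Poisson(lam) claims per period, severities X ~ Lomax(alpha, theta)
    with mean theta / (alpha - 1) for alpha > 1. Returns the simulated totals."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        # Poisson draw by CDF inversion (adequate for moderate lam)
        n, p = 0, math.exp(-lam)
        u, c = rng.random(), p
        while u > c:
            n += 1
            p *= lam / n
            c += p
        # Lomax severities via inverse CDF: X = theta * (U**(-1/alpha) - 1)
        totals.append(sum(theta * (rng.random() ** (-1.0 / alpha) - 1.0)
                          for _ in range(n)))
    return totals
```

With `lam = 2`, `alpha = 3`, `theta = 10`, the theoretical aggregate mean is `lam * theta / (alpha - 1) = 10`, which the simulated mean should approach; deductibles and policy limits can be modeled by clipping each severity before summing.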

  7. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-11-30

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
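The compressibility that the H-matrix format exploits can be illustrated with a plain truncated SVD of a small exponential (Matérn ν = 1/2) covariance. This is only a global low-rank caricature of the idea; the H-matrix construction instead applies low-rank factors blockwise and hierarchically to reach the O(kn log n) storage quoted above:

```python
import numpy as np

# Exponential (Matern nu = 1/2) covariance on a 1-D grid, then a rank-k
# truncation of its SVD as a stand-in for the hierarchical low-rank format.
n, ell, k = 200, 0.5, 20
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # C_ij = exp(-|x_i - x_j| / ell)

U, s, Vt = np.linalg.svd(C)
C_k = (U[:, :k] * s[:k]) @ Vt[:k]                    # best rank-k approximation

rel_err = np.linalg.norm(C - C_k) / np.linalg.norm(C)
```

Even with k = 20 ≪ n = 200 the relative Frobenius error is on the order of a percent, which is the rapid singular-value decay that hierarchical formats exploit per block.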

  8. Approximations in the PE-method

    DEFF Research Database (Denmark)

    Arranz, Marta Galindo

    1996-01-01

    Two different sources of error may occur in the implementation of the PE methods: a phase error introduced in the approximation of a pseudo-differential operator, and an amplitude error generated by the starting field. First, the inherent phase errors introduced in the solution are analyzed...

  9. Approximating the DGP of China's Quarterly GDP

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); H. Mees (Heleen)

    2010-01-01

    We demonstrate that the data generating process (DGP) of China’s cumulated quarterly Gross Domestic Product (GDP, current prices), as it is reported by the National Bureau of Statistics of China, can be (very closely) approximated by a simple rule. This rule is that annual growth in any

  10. OPTICAL QUANTIFICATION OF APPROXIMAL CARIES IN VITRO

    NARCIS (Netherlands)

    VANDERIJKE, JW; HERKSTROTER, FM; TENBOSCH, JJ

    1991-01-01

    A fluorescent dye was applied to extracted premolars with either early artificial lesions or natural white-spot lesions. The teeth were placed in an approximal geometry, and with a specially designed fibre-optic probe the fluorescence of the dye was measured in the lesions. The same fibre-optic

  11. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation a

  12. An Approximation of Ultra-Parabolic Equations

    Directory of Open Access Journals (Sweden)

    Allaberen Ashyralyev

    2012-01-01

    Full Text Available The first and second order of accuracy difference schemes for the approximate solution of the initial boundary value problem for ultra-parabolic equations are presented. Stability of these difference schemes is established. Theoretical results are supported by the result of numerical examples.

  13. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation a

  14. Approximating the DGP of China's Quarterly GDP

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); H. Mees (Heleen)

    2010-01-01

    We demonstrate that the data generating process (DGP) of China’s cumulated quarterly Gross Domestic Product (GDP, current prices), as it is reported by the National Bureau of Statistics of China, can be (very closely) approximated by a simple rule. This rule is that annual growth in any

  15. $\\Phi$-derivable approximations in gauge theories

    CERN Document Server

    Arrizabalaga, A

    2003-01-01

    We discuss the method of $\\Phi$-derivable approximations in gauge theories. There, two complications arise, namely the violation of Bose symmetry in correlation functions and the gauge dependence. For the latter we argue that the error introduced by the gauge dependent terms is controlled, therefore not invalidating the method.

  16. Approximations of Two-Attribute Utility Functions

    Science.gov (United States)

    1976-09-01

    Introduction to Approximation Theory, McGraw-Hill, New York, 1966. Faber, G., Über die interpolatorische Darstellung stetiger Funktionen, Deutsche...Management Review 14 (1972b) 37-50. Keeney, R. L., A decision analysis with multiple objectives: the Mexico City airport, Bell Journal of Economics

  17. Approximate Furthest Neighbor in High Dimensions

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Silvestri, Francesco; Sivertsen, Johan von Tangen

    2015-01-01

    -dimensional Euclidean space. We build on the technique of Indyk (SODA 2003), storing random projections to provide sublinear query time for AFN. However, we introduce a different query algorithm, improving on Indyk’s approximation factor and reducing the running time by a logarithmic factor. We also present a variation...

  18. Virial expansion coefficients in the harmonic approximation

    DEFF Research Database (Denmark)

    R. Armstrong, J.; Zinner, Nikolaj Thomas; V. Fedorov, D.

    2012-01-01

    The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated...

  19. Nonlinear approximation with dictionaries,.. II: Inverse estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    In this paper we study inverse estimates of the Bernstein type for nonlinear approximation with structured redundant dictionaries in a Banach space. The main results are for separated decomposable dictionaries in Hilbert spaces, which generalize the notion of joint block-diagonal mutually...

  20. Intrinsic Diophantine approximation on general polynomial surfaces

    DEFF Research Database (Denmark)

    Tiljeset, Morten Hein

    2017-01-01

    We study the Hausdorff measure and dimension of the set of intrinsically simultaneously -approximable points on a curve, surface, etc, given as a graph of integer polynomials. We obtain complete answers to these questions for algebraically “nice” manifolds. This generalizes earlier work done...

  1. Turbo Equalization Using Partial Gaussian Approximation

    DEFF Research Database (Denmark)

    Zhang, Chuanzong; Wang, Zhongyong; Manchón, Carles Navarro

    2016-01-01

    returned by the equalizer by using a partial Gaussian approximation (PGA). We exploit the specific structure of the ISI channel model to compute the latter messages from the beliefs obtained using a Kalman smoother/equalizer. Doing so leads to a significant complexity reduction compared to the initial PGA...

  2. Subset Selection by Local Convex Approximation

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman; Madsen, Henrik

    1999-01-01

    least squares criterion. We propose an optimization technique for the posed problem based on a modified version of the Newton-Raphson iterations, combined with a backward elimination type algorithm. The Newton-Raphson modification concerns iterative approximations to the non-convex cost function...

  3. Nonlinear approximation with dictionaries. II. Inverse Estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2006-01-01

    In this paper, which is the sequel to [16], we study inverse estimates of the Bernstein type for nonlinear approximation with structured redundant dictionaries in a Banach space. The main results are for blockwise incoherent dictionaries in Hilbert spaces, which generalize the notion of joint block-diagonal...

  4. Normal origamis of Mumford curves

    CERN Document Server

    Kremer, Karsten

    2010-01-01

    An origami (also known as a square-tiled surface) is a Riemann surface covering a torus with at most one branch point. Lifting two generators of the fundamental group of the punctured torus decomposes the surface into finitely many unit squares. By varying the complex structure of the torus one obtains easily accessible examples of Teichmüller curves in the moduli space of Riemann surfaces. The p-adic analogues of Riemann surfaces are Mumford curves. A p-adic origami is defined as a covering of Mumford curves with at most one branch point, where the bottom curve has genus one. A classification of all normal non-trivial p-adic origamis is presented and used to calculate some invariants. These can be used to describe p-adic origamis in terms of gluing squares.

  5. Normalized Information Distance is Not Semicomputable

    CERN Document Server

    Terwijn, Sebastiaan A; Vitanyi, Paul M B

    2010-01-01

    Normalized information distance (NID) uses the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program. This practical application is called 'normalized compression distance' and it is trivially computable. It is a parameter-free similarity measure based on compression, and is used in pattern recognition, data mining, phylogeny, clustering, and classification. The complexity properties of its theoretical precursor, the NID, have been open. We show that the NID is neither upper semicomputable nor lower semicomputable.

  6. Nonapproximablity of the Normalized Information Distance

    CERN Document Server

    Terwijn, Sebastiaan A; Vitanyi, Paul M B

    2009-01-01

    Normalized information distance (NID) uses the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program. This practical application is called `normalized compression distance' and it is trivially computable. It is a parameter-free similarity measure based on compression, and is used in pattern recognition, data mining, phylogeny, clustering, and classification. The complexity properties of its theoretical precursor, the NID, have been open. We show that the NID is neither upper semicomputable nor lower semicomputable up to any reasonable precision.

  7. Versatility of Approximating Single-Particle Electron Microscopy Density Maps Using Pseudoatoms and Approximation-Accuracy Control

    Directory of Open Access Journals (Sweden)

    Slavica Jonić

    2016-01-01

    Full Text Available Three-dimensional Gaussian functions have been shown to be useful in representing electron microscopy (EM) density maps for studying macromolecular structure and dynamics. Methods that require setting a desired number of Gaussian functions or a maximum number of iterations may result in suboptimal representations of the structure. An alternative is to set a desired error of approximation of the given EM map and then optimize the number of Gaussian functions to achieve this approximation error. In this article, we review different applications of such an approach that uses spherical Gaussian functions of fixed standard deviation, referred to as pseudoatoms. Some of these applications use EM-map normal mode analysis (NMA) with an elastic network model (ENM) (applications such as predicting conformational changes of macromolecular complexes or exploring actual conformational changes by normal-mode-based analysis of experimental data), while others do not use NMA (denoising of EM density maps). In applications based on NMA and ENM, the advantage of using pseudoatoms in EM-map coarse-grain models is that the ENM springs are easily assigned among neighboring grains thanks to their spherical shape and uniform size. EM-map denoising based on map coarse-graining has so far only been shown using pseudoatoms as grains.
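The approximation-accuracy control described above can be sketched in one dimension: keep adding fixed-width Gaussians until a target relative error is met. The stopping rule mirrors the abstract; the greedy peak placement is our simplification for illustration:

```python
import numpy as np

def fit_pseudoatoms(x, target, sigma, tol, max_atoms=50):
    """Greedy 1-D sketch of approximation-accuracy control: keep adding
    fixed-width Gaussians ("pseudoatoms") at the residual's peak until the
    relative approximation error drops below tol."""
    approx = np.zeros_like(target)
    atoms = []
    for _ in range(max_atoms):
        resid = target - approx
        if np.linalg.norm(resid) / np.linalg.norm(target) < tol:
            break
        i = int(np.argmax(np.abs(resid)))     # place the next atom at the worst point
        amp, center = resid[i], x[i]
        atoms.append((amp, center))
        approx += amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return atoms, approx
```

The number of atoms is not fixed in advance: it is whatever the error tolerance requires, which is the point of accuracy-driven (rather than count-driven) fitting.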

  8. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  9. Counting independent sets using the Bethe approximation

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory]; Chandrasekaran, V. [MIT]; Gamarnik, D. [MIT]; Shah, D. [MIT]; Shin, J. [MIT]

    2009-01-01

    The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n²ε⁻⁴ log³(nε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
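On a tree, belief propagation computes the partition function of the hard-core model (the number of independent sets, at activity 1) exactly, which makes a compact illustration of the message-passing machinery; the paper's contribution concerns general graphs and convergence, which this sketch does not address:

```python
def count_independent_sets(adj, root=0):
    """Sum-product (BP) on a tree for the hard-core model with activity 1:
    the partition function equals the number of independent sets, and on a
    tree the message recursion computes it exactly. adj is an adjacency list."""
    def message(i, parent):
        # m[x] = summed weight of the subtree rooted at i, given that the
        # neighbor toward the root is in state x (1 = occupied)
        child_msgs = [message(c, i) for c in adj[i] if c != parent]
        m = [0, 0]
        for x_out in (0, 1):
            for x_i in (0, 1):
                if x_out == 1 and x_i == 1:   # hard-core: no two adjacent occupied
                    continue
                prod = 1
                for cm in child_msgs:
                    prod *= cm[x_i]
                m[x_out] += prod
        return m
    msgs = [message(c, root) for c in adj[root]]
    total = 0
    for x_r in (0, 1):
        prod = 1
        for cm in msgs:
            prod *= cm[x_r]
        total += prod
    return total
```

On graphs with cycles the same messages only yield the Bethe approximation of the partition function, whose error the abstract quantifies via loop calculus.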

  10. Hybrid diffusion approximation in highly absorbing media and its effects of source approximation

    Institute of Scientific and Technical Information of China (English)

    Huijuan Tian; Ying Liu; Lijun Wang; Yuhui Zhang; Lifeng Xiao

    2009-01-01

    A modified diffusion approximation model, called the hybrid diffusion approximation, that can be used for highly absorbing media is investigated. The analytic solution of the hybrid diffusion approximation for reflectance in the two-source approximation and the steady-state case with an extrapolated boundary is obtained. The effects of the source approximation on the analytic solution are investigated, and it is validated that the two-source approximation is necessary in highly absorbing media to describe the optical properties of biological tissue. Monte Carlo simulation of recovering optical parameters from reflectance data is performed with this model. The errors in recovering μa and μ's are smaller than 15% for reduced albedos between 0.77 and 0.5 with source-detector separations of 0.4-3 mm.

  11. Normalization of satellite imagery

    Science.gov (United States)

    Kim, Hongsuk H.; Elman, Gregory C.

    1990-01-01

    Sets of Thematic Mapper (TM) imagery taken over the Washington, DC metropolitan area during the months of November, March and May were converted into a form of ground reflectance imagery. This conversion was accomplished by adjusting the incident sunlight and view angles and by applying a pixel-by-pixel correction for atmospheric effects. Seasonal color changes of the area can be better observed when such normalization is applied to space imagery taken in time series. In normalized imagery, the grey scale depicts variations in surface reflectance and tonal signature of multi-band color imagery can be directly interpreted for quantitative information of the target.

  12. Sickle Cell Unit.

    Science.gov (United States)

    Canipe, Stephen L.

    Included in this high school biology unit on sickle cell anemia are the following materials: a synopsis of the history of the discovery and the genetic qualities of the disease; electrophoresis diagrams comparing normal, homozygous and heterozygous conditions of the disease; and biochemical characteristics and population genetics of the disease. A…

  13. Traveltime approximations for transversely isotropic media with an inhomogeneous background

    KAUST Repository

    Alkhalifah, Tariq

    2011-05-01

    A transversely isotropic (TI) model with a tilted symmetry axis is regarded as one of the most effective approximations to the Earth subsurface, especially for imaging purposes. However, we commonly utilize this model by setting the axis of symmetry normal to the reflector. This assumption may be accurate in many places, but deviations from it will cause errors in the wavefield description. Using perturbation theory and Taylor's series, I expand the solutions of the eikonal equation for 2D TI media with respect to the independent parameter θ, the angle the tilt of the axis of symmetry makes with the vertical, in a generally inhomogeneous TI background with a vertical axis of symmetry. I do an additional expansion in terms of the independent (anellipticity) parameter η in a generally inhomogeneous elliptically anisotropic background medium. These new TI traveltime solutions are given by expansions in η and θ with coefficients extracted from solving linear first-order partial differential equations. Padé approximations are used to enhance the accuracy of the representation by predicting the behavior of the higher-order terms of the expansion. A simplification of the expansion for homogeneous media provides nonhyperbolic moveout descriptions of the traveltime for TI models that are more accurate than other recently derived approximations. In addition, for 3D media, I develop traveltime approximations using Taylor's-series-type expansions in the azimuth of the axis of symmetry. The coefficients of all these expansions can also provide us with the medium sensitivity gradients (Jacobian) for nonlinear tomographic inversion for the tilt of the symmetry axis. © 2011 Society of Exploration Geophysicists.
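The role of Padé approximants in extending the validity of a truncated Taylor series can be seen already in one variable; this generic [1/1] example (log(1+x), our choice) is unrelated to the specific traveltime coefficients above:

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant built from the Taylor coefficients of
    c0 + c1*x + c2*x**2 + ...: matching terms through x^2 gives
    b1 = -c2/c1, a0 = c0, a1 = c1 + b1*c0, and the rational approximant
    (a0 + a1*x) / (1 + b1*x)."""
    b1 = -c2 / c1
    a0, a1 = c0, c1 + b1 * c0
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# log(1 + x) has Taylor coefficients 0, 1, -1/2. At x = 1 the quadratic
# Taylor polynomial gives 0.5, while the [1/1] Pade gives 1/1.5 = 2/3,
# much closer to log 2 = 0.693...
f = pade_1_1(0.0, 1.0, -0.5)
```

The rational form anticipates the behavior of the dropped higher-order terms, which is exactly the role Padé approximants play for the traveltime expansions in η and θ.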

  14. Normals to a Parabola

    Science.gov (United States)

    Srinivasan, V. K.

    2013-01-01

    Given a parabola in the standard form y[superscript 2] = 4ax, corresponding to three points on the parabola, such that the normals at these three points P, Q, R concur at a point M = (h, k), the equation of the circumscribing circle through the three points P, Q, and R provides a tremendous opportunity to illustrate "The Art of Algebraic…

  15. Back to Normal

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Xinjiang officials speed up the investigation of July 5 riot suspects and restore social order Life in Urumqi has gone back to normal one month after the July 5 riot that killed nearly 200 people in the capital city of China’s northwestern

  16. Normality in Analytical Psychology

    Directory of Open Access Journals (Sweden)

    Steve Myers

    2013-11-01

    Full Text Available Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity.

  17. Normal modal preferential consequence

    CSIR Research Space (South Africa)

    Britz, K

    2012-12-01

    Full Text Available of necessitation holds for the corresponding consequence relations, as one would expect it to. We present a representation result for this tightened framework, and investigate appropriate notions of entailment in this context: normal entailment, and a rational...

  18. Subdifferentials of Distance Functions, Approximations and Enlargements

    Institute of Scientific and Technical Information of China (English)

    Jean-Paul PENOT; Robert RATSIMAHALO

    2007-01-01

    In this work, we study some subdifferentials of the distance function to a nonempty non-convex closed subset of a general Banach space. We relate them to the normal cone of the enlargements of the set which can be considered as regularizations of the set.

  19. Perturbative calculation of quasi-normal modes

    CERN Document Server

    Siopsis, G

    2005-01-01

    I discuss a systematic method of analytically calculating the asymptotic form of quasi-normal frequencies. In the case of a four-dimensional Schwarzschild black hole, I expand around the zeroth-order approximation to the wave equation proposed by Motl and Neitzke. In the case of a five-dimensional AdS black hole, I discuss a perturbative solution of the Heun equation. The analytical results are in agreement with the results from numerical analysis.

  20. Statokinesigram normalization method.

    Science.gov (United States)

    de Oliveira, José Magalhães

    2017-02-01

    Stabilometry is a technique that aims to study the body sway of human subjects using a force platform. The signal obtained from this technique refers to the position of the foot-base ground-reaction vector, known as the center of pressure (CoP). The parameters calculated from the signal are used to quantify the displacement of the CoP over time; there is large variability, both between and within subjects, which prevents the definition of normative values. The intersubject variability is related to differences between subjects in anthropometry, in conjunction with their muscle activation patterns (biomechanics); the intrasubject variability can be caused by a learning effect or fatigue. Age and foot placement on the platform are also known to influence variability. Normalization is the main method used to decrease this variability and to bring distributions of adjusted values into alignment. In 1996, O'Malley proposed three normalization techniques to eliminate the effect of age and anthropometric factors from temporal-distance parameters of gait, and some authors have adopted these techniques to normalize the stabilometric signal. This paper proposes a new method of normalization of stabilometric signals to be applied in balance studies. The method was applied to a data set collected in a previous study, and the results for normalized and non-normalized signals were compared. The results showed that the new method, if used in a well-designed experiment, can eliminate undesirable correlations between the analyzed parameters and the subjects' characteristics, leaving only the effects of the experimental conditions.
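One common regression-based normalization, in the spirit of O'Malley's techniques, can be sketched as removing the linear dependence of a parameter on a subject covariate; the function name and the mean-restoring shift are our assumptions, not the paper's specific method:

```python
import numpy as np

def normalize_parameter(values, covariate):
    """Regression-based normalization sketch: remove the linear dependence of
    a stabilometric parameter on a subject covariate (e.g. height), then shift
    back to the group mean. Least-squares residuals are uncorrelated with the
    covariate by construction."""
    v = np.asarray(values, dtype=float)
    c = np.asarray(covariate, dtype=float)
    slope, intercept = np.polyfit(c, v, 1)      # fit v = slope * c + intercept
    return v - (slope * c + intercept) + v.mean()
```

After this adjustment the normalized values carry the same group mean but no linear correlation with the covariate, which is the kind of undesirable correlation the abstract aims to eliminate.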

  1. Traveltime approximations for inhomogeneous HTI media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Traveltime information is convenient for parameter estimation, especially if the medium is described by an anisotropic set of parameters. This is especially true if we can relate traveltimes analytically to these medium parameters, which is generally hard to do in inhomogeneous media. As a result, I develop traveltime approximations for horizontally transversely isotropic (HTI) media as simplified, and even linear, functions of the anisotropic parameters. This is accomplished by perturbing the solution of the HTI eikonal equation with respect to η and the azimuthal symmetry direction (usually used to describe the fracture direction) from a generally inhomogeneous, elliptically anisotropic background medium. The resulting approximations can provide an accurate analytical description of the traveltime in a homogeneous background compared with other published moveout equations. These equations allow us to readily extend the inhomogeneous elliptically anisotropic background model to an HTI medium with variable, but smoothly varying, η and horizontal symmetry direction values. © 2011 Society of Exploration Geophysicists.

  2. Approximate inverse preconditioners for general sparse matrices

    Energy Technology Data Exchange (ETDEWEB)

    Chow, E.; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
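    A minimal dense sketch of the column-iteration idea (a practical implementation would keep every vector sparse and use sparse-matrix by sparse-vector products; the matrix and iteration count below are illustrative, not from the paper):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def approx_inverse_column(A, j, iters=50):
    """Minimal-residual iteration for column j of an approximate inverse,
    driving A @ m_j toward the unit vector e_j."""
    n = len(A)
    e = [1.0 if i == j else 0.0 for i in range(n)]
    m = [0.0] * n
    for _ in range(iters):
        r = [ei - vi for ei, vi in zip(e, matvec(A, m))]   # residual e_j - A m_j
        Ar = matvec(A, r)
        denom = sum(v * v for v in Ar)
        if denom == 0.0:
            break
        alpha = sum(ri * vi for ri, vi in zip(r, Ar)) / denom  # minimal-residual step
        m = [mi + alpha * ri for mi, ri in zip(m, r)]
    return m

A = [[4.0, 1.0], [1.0, 3.0]]
M = [approx_inverse_column(A, j) for j in range(2)]   # columns approximate A^-1
```

In sparse mode the same update is applied only to a restricted sparsity pattern of m_j, which keeps both the work and the storage proportional to the number of nonzeros.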

  3. Approximate Lesion Localization in Dermoscopy Images

    CERN Document Server

    Celebi, M Emre; Schaefer, Gerald; Stoecker, William V; 10.1111/j.1600-0846.2009.00357.x

    2010-01-01

    Background: Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, automated analysis of dermoscopy images has become an important research area. Border detection is often the first step in this analysis. Methods: In this article, we present an approximate lesion localization method that serves as a preprocessing step for detecting borders in dermoscopy images. In this method, first the black frame around the image is removed using an iterative algorithm. The approximate location of the lesion is then determined using an ensemble of thresholding algorithms. Results: The method is tested on a set of 428 dermoscopy images. The localization error is quantified by a metric that uses dermatologist-determined borders as the ground truth. Conclusion: The results demonstrate that the method presented here achieves both fast and accurate localization of lesions in dermoscopy images.
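    The paper's thresholding ensemble is not reproduced here; a toy sketch of the voting idea on a tiny grayscale array (the two thresholds and the image are made up, and a real ensemble would use several established thresholding algorithms) might look like:

```python
def approx_lesion_bbox(img):
    """Vote with a small ensemble of global thresholds (here just the mean
    and the min/max midpoint) and return the bounding box of pixels that a
    majority of thresholds call 'lesion' (darker than the threshold)."""
    flat = [p for row in img for p in row]
    thresholds = [sum(flat) / len(flat), (min(flat) + max(flat)) / 2.0]
    rows, cols = [], []
    for i, row in enumerate(img):
        for j, p in enumerate(row):
            if sum(p < t for t in thresholds) > len(thresholds) / 2.0:
                rows.append(i)
                cols.append(j)
    return (min(rows), min(cols), max(rows), max(cols))

img = [
    [200, 200, 200, 200],   # made-up 4x4 grayscale patch
    [200,  40,  60, 200],
    [200,  50,  45, 200],
    [200, 200, 200, 200],
]
bbox = approx_lesion_bbox(img)   # bounding box of the dark central block
```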

  4. Performance of a Distributed Stochastic Approximation Algorithm

    CERN Document Server

    Bianchi, Pascal; Hachem, Walid

    2012-01-01

    In this paper, a distributed stochastic approximation algorithm is studied. Applications of such algorithms include decentralized estimation, optimization, control, and computing. The algorithm consists of two steps: a local step, where each node in a network updates a local estimate using a stochastic approximation algorithm with decreasing step size, and a gossip step, where a node computes a local weighted average between its estimate and those of its neighbors. Convergence of the estimates toward a consensus is established under weak assumptions. The approach relies on two main ingredients: the existence of a Lyapunov function for the mean field in the agreement subspace, and a contraction property of the random matrices of weights in the subspace orthogonal to the agreement subspace. A second-order analysis of the algorithm is also performed in the form of a Central Limit Theorem. The Polyak-averaged version of the algorithm is also considered.
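    A toy sketch of the two-step scheme, assuming each node estimates a noisy scalar target and gossips on a ring (the weights, noise level, and step-size schedule below are illustrative, not those analyzed in the paper):

```python
import random

def distributed_sa(targets, steps=2000, seed=0):
    """Each node runs a Robbins-Monro step toward its own noisy target,
    then gossips by averaging with its ring neighbours; all estimates
    approach a consensus near the mean of the targets."""
    rng = random.Random(seed)
    n = len(targets)
    x = [0.0] * n
    for t in range(1, steps + 1):
        gamma = 1.0 / t                                      # decreasing step size
        x = [xi + gamma * (ti + rng.gauss(0.0, 0.1) - xi)    # local SA step
             for xi, ti in zip(x, targets)]
        x = [0.5 * x[i] + 0.25 * x[(i - 1) % n] + 0.25 * x[(i + 1) % n]  # gossip step
             for i in range(n)]
    return x

estimates = distributed_sa([1.0, 2.0, 3.0, 4.0])   # consensus near 2.5
```

Because the gossip matrix is doubly stochastic, the network-wide average follows the mean field while the disagreement component is contracted at every gossip step, mirroring the two ingredients of the convergence proof.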

  5. Approximate gauge symmetry of composite vector bosons

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Mahiko

    2010-06-01

    It can be shown in a solvable field theory model that the couplings of composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to vector bosons made of a fermion pair, we extend it to the case where the constituents are bosons and find that the same phenomenon occurs in an even more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  6. Quasi-chemical approximation for polyatomic mixtures

    CERN Document Server

    Dávila, M V; Matoz-Fernandez, D A; Ramirez-Pastor, A J

    2016-01-01

    The statistical thermodynamics of binary mixtures of polyatomic species was developed on a generalization in the spirit of the lattice-gas model and the quasi-chemical approximation (QCA). The new theoretical framework is obtained by combining: (i) the exact analytical expression for the partition function of non-interacting mixtures of linear $k$-mers and $l$-mers (species occupying $k$ sites and $l$ sites, respectively) adsorbed in one dimension, and its extension to higher dimensions; and (ii) a generalization of the classical QCA for multicomponent adsorbates and multisite-occupancy adsorption. The process is analyzed through the partial adsorption isotherms corresponding to both species of the mixture. Comparisons with analytical data from Bragg-Williams approximation (BWA) and Monte Carlo simulations are performed in order to test the validity of the theoretical model. Even though a good fitting is obtained from BWA, it is found that QCA provides a more accurate description of the phenomenon of adsorpti...

  7. Improved Approximations for Some Polymer Extension Models

    CERN Document Server

    Petrosyan, Rafayel

    2016-01-01

    We propose approximations for the force-extension dependencies of the freely jointed chain (FJC) and worm-like chain (WLC) models, as well as for the extension-force dependence of the WLC model. The proposed expressions show less than 1% relative error in the useful range of the corresponding variables. These results can be applied to fitting force-extension curves obtained in molecular force spectroscopy experiments. In particular, they can be useful for cases with geometries of springs in series and/or in parallel, where a particular combination of expressions should be used for fitting the data. All approximations have been obtained following the same procedure of determining the asymptotes and then reducing the relative error of that expression by adding an appropriate term obtained from fitting its absolute error.
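    The paper's improved expressions are not reproduced here; for orientation, the classical Marko-Siggia interpolation for the WLC force-extension relation, the kind of formula such approximations refine, can be evaluated as:

```python
def wlc_force(x, L, p, kT=4.114):
    """Classical Marko-Siggia interpolation for the worm-like chain (not
    the paper's improved expression): force in pN for extension x of a
    chain with contour length L and persistence length p (both in nm),
    with kT = 4.114 pN*nm at room temperature."""
    z = x / L
    return (kT / p) * (0.25 / (1.0 - z) ** 2 - 0.25 + z)

f_half = wlc_force(x=16.5, L=33.0, p=50.0)   # DNA-like numbers, half extension
```

The interpolation has the right low-force and high-force asymptotes but several percent relative error in between, which is exactly the gap the paper's fitted correction terms aim to close.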

  8. Regularized Laplacian Estimation and Fast Eigenvector Approximation

    CERN Document Server

    Perry, Patrick O

    2011-01-01

    Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which $\ell_2$-regularized or $\ell_1$-regularized $\ell_2$-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-base...

  9. On approximative solutions of multistopping problems

    CERN Document Server

    Faller, Andreas; 10.1214/10-AAP747

    2012-01-01

    In this paper, we consider multistopping problems for finite discrete time sequences $X_1,...,X_n$. $m$-stops are allowed and the aim is to maximize the expected value of the best of these $m$ stops. The random variables are neither assumed to be independent nor to be identically distributed. The basic assumption is convergence of a related imbedded point process to a continuous time Poisson process in the plane, which serves as a limiting model for the stopping problem. The optimal $m$-stopping curves for this limiting model are determined by differential equations of first order. A general approximation result is established which ensures convergence of the finite discrete time $m$-stopping problem to that in the limit model. This allows the construction of approximative solutions of the discrete time $m$-stopping problem. In detail, the case of i.i.d. sequences with discount and observation costs is discussed and explicit results are obtained.

  10. Numerical and approximate solutions for plume rise

    Science.gov (United States)

    Krishnamurthy, Ramesh; Gordon Hall, J.

    Numerical and approximate analytical solutions are compared for turbulent plume rise in a crosswind. The numerical solutions were calculated using the plume rise model of Hoult, Fay and Forney (1969, J. Air Pollut. Control Ass. 19, 585-590), over a wide range of pertinent parameters. Some wind shear and elevated inversion effects are included. The numerical solutions are seen to agree with the approximate solutions over a fairly wide range of the parameters. For the conditions considered in the study, wind shear effects are seen to be quite small. A limited study was made of the penetration of elevated inversions by plumes. The results indicate the adequacy of a simple criterion proposed by Briggs (1969, AEC Critical Review Series, USAEC Division of Technical Information Extension, Oak Ridge, Tennessee).
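    As a hedged illustration, the textbook Briggs "two-thirds law" for transitional buoyant plume rise, a standard formula from the same body of work though not necessarily the exact criterion the study evaluates, is straightforward to evaluate (the numbers below are arbitrary):

```python
def briggs_rise(F, u, x):
    """Briggs 'two-thirds law' for transitional buoyant plume rise in a
    crosswind: dh = 1.6 * F**(1/3) * x**(2/3) / u, with buoyancy flux F
    (m^4/s^3), wind speed u (m/s) and downwind distance x (m)."""
    return 1.6 * F ** (1.0 / 3.0) * x ** (2.0 / 3.0) / u

dh = briggs_rise(F=50.0, u=5.0, x=500.0)   # plume rise in metres
```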

  11. Approximated solutions to Born-Infeld dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  12. SOME CONVERSE RESULTS ON ONESIDED APPROXIMATION: JUSTIFICATIONS

    Institute of Scientific and Technical Information of China (English)

    Wang Jianli; Zhou Songping

    2003-01-01

    The present paper deals with the best onesided approximation rate in L^p spaces, Ẽ_n(f)_{L^p}, of f ∈ C_{2π}. Although it is clear that the estimate Ẽ_n(f)_{L^p} ≤ C‖f‖_{L^p} cannot hold for all f ∈ L^p_{2π} when p < ∞, the question whether Ẽ_n(f)_{L^p} ≤ Cω(f, n^{-1})_{L^p} or Ẽ_n(f)_{L^p} ≤ C E_n(f)_{L^p} holds for f ∈ C_{2π} had remained untouched; justifying onesided approximation is therefore a basic problem. The present paper provides an answer that settles this question.

  13. An Origami Approximation to the Cosmic Web

    CERN Document Server

    Neyrinck, Mark C

    2014-01-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in 'polygonal' or 'polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls ...

  14. Improved Approximation for Orienting Mixed Graphs

    CERN Document Server

    Gamzu, Iftah

    2012-01-01

    An instance of the maximum mixed graph orientation problem consists of a mixed graph and a collection of source-target vertex pairs. The objective is to orient the undirected edges of the graph so as to maximize the number of pairs that admit a directed source-target path. This problem has recently arisen in the study of biological networks, and it also has applications in communication networks. In this paper, we identify an interesting local-to-global orientation property. This property enables us to modify the best known algorithms for maximum mixed graph orientation and some of its special structured instances, due to Elberfeld et al. (CPM '11), and obtain improved approximation ratios. We further proceed by developing an algorithm that achieves an even better approximation guarantee for the general setting of the problem. Finally, we study several well-motivated variants of this orientation problem.

  15. Nonlinear analysis approximation theory, optimization and applications

    CERN Document Server

    2014-01-01

    Many of our daily-life problems can be written in the form of an optimization problem. Therefore, solution methods are needed to solve such problems. Due to the complexity of the problems, it is not always easy to find the exact solution. However, approximate solutions can be found. The theory of the best approximation is applicable in a variety of problems arising in nonlinear functional analysis and optimization. This book highlights interesting aspects of nonlinear analysis and optimization together with many applications in the areas of physical and social sciences including engineering. It is immensely helpful for young graduates and researchers who are pursuing research in this field, as it provides abundant research resources for researchers and post-doctoral fellows. This will be a valuable addition to the library of anyone who works in the field of applied mathematics, economics and engineering.

  16. Rough Sets in Approximate Solution Space

    Institute of Scientific and Technical Information of China (English)

    Hui Sun; Wei Tian; Qing Liu

    2006-01-01

    As a new mathematical theory, rough sets have been applied to processing imprecise, uncertain and incomplete data. The theory has been fruitful for finite, non-empty sets; so far, however, rough sets have served only as a theoretical tool for discretizing real functions, and research defining rough sets on real functions is infrequent. In this paper, we develop a new method to extend rough sets to normed linear spaces: we establish a rough set, put forward definitions of the upper and lower approximations, and make a preliminary study of the rough set's properties. This provides a new tool for studying approximate solutions of differential equations and functional variation in normed linear spaces. The research is significant in that it extends the application of rough sets to a new field.
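    For reference, the classical finite-set notion the paper generalizes, lower and upper approximations with respect to an indiscernibility partition, is straightforward to compute (the partition and target set below are a made-up example):

```python
def rough_approximations(equiv_classes, target):
    """Lower and upper approximations of a target set with respect to an
    indiscernibility partition of the universe."""
    lower, upper = set(), set()
    for cls in equiv_classes:
        if cls <= target:        # class certainly inside the target
            lower |= cls
        if cls & target:         # class possibly intersecting the target
            upper |= cls
    return lower, upper

partition = [{1, 2}, {3, 4}, {5, 6}]   # equivalence classes of U = {1,...,6}
X = {1, 2, 3}
lo, up = rough_approximations(partition, X)
```

The set X is rough precisely because the lower and upper approximations differ; the paper's contribution is to make sense of these two operators when the universe is a normed linear space rather than a finite set.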

  17. Second derivatives for approximate spin projection methods.

    Science.gov (United States)

    Thompson, Lee M; Hratchian, Hrant P

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  18. Analytical Ballistic Trajectories with Approximately Linear Drag

    Directory of Open Access Journals (Sweden)

    Giliam J. P. de Carpentier

    2014-01-01

    This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
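    The standard closed-form trajectory under linear drag, the solution of v' = g - k v that such approximations are roughly based on, can be written as follows (the drag coefficient and initial conditions are arbitrary illustrations, and wind is omitted):

```python
import math

def position(p0, v0, g=(0.0, -9.81), k=0.5, t=1.0):
    """Closed-form 2D position under linear drag v' = g - k*v: with
    terminal velocity v_inf = g/k,
    p(t) = p0 + v_inf*t + (v0 - v_inf)*(1 - e^{-k t})/k."""
    decay = math.exp(-k * t)
    out = []
    for i in range(2):
        v_inf = g[i] / k                     # terminal velocity component
        out.append(p0[i] + v_inf * t + (v0[i] - v_inf) * (1.0 - decay) / k)
    return tuple(out)

p = position(p0=(0.0, 0.0), v0=(10.0, 10.0), t=1.0)
```

A constant wind can be folded into the same formula by shifting the terminal velocity, which is one reason linear drag is so convenient for planning.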

  19. Pade approximants of random Stieltjes series

    CERN Document Server

    Marklof, Jens; Wolowski, Lech

    2007-01-01

    We consider the random continued fraction S(t) := 1/(s_1 + t/(s_2 + t/(s_3 + ...))) where the s_n are independent random variables with the same gamma distribution. For every realisation of the sequence, S(t) defines a Stieltjes function. We study the convergence of the finite truncations of the continued fraction or, equivalently, of the diagonal Pade approximants of the function S(t). By using the Dyson-Schmidt method for an equivalent one-dimensional disordered system, and the results of Marklof et al. (2005), we obtain explicit formulae (in terms of modified Bessel functions) for the almost-sure rate of convergence of these approximants, and for the almost-sure distribution of their poles.
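    A quick numerical illustration of the objects studied, finite truncations of the random continued fraction for one realisation of gamma-distributed s_n (the gamma parameters and truncation depths are chosen arbitrarily):

```python
import random

def stieltjes_truncation(s, t):
    """Evaluate the finite truncation 1/(s_1 + t/(s_2 + ... + t/s_n))
    for one realisation s = [s_1, ..., s_n], working from the inside out."""
    value = s[-1]
    for sk in reversed(s[:-1]):
        value = sk + t / value
    return 1.0 / value

rng = random.Random(42)
s = [rng.gammavariate(2.0, 1.0) for _ in range(200)]   # i.i.d. gamma weights
approx = [stieltjes_truncation(s[:n], t=1.0) for n in (50, 100, 200)]
```

Successive truncations agree to many digits, reflecting the exponential almost-sure convergence rate that the paper computes explicitly.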

  20. Finite State Transducers Approximating Hidden Markov Models

    CERN Document Server

    Kempe, A

    1999-01-01

    This paper describes the conversion of a Hidden Markov Model into a sequential transducer that closely approximates the behavior of the stochastic model. This transformation is especially advantageous for part-of-speech tagging because the resulting transducer can be composed with other transducers that encode correction rules for the most frequent tagging errors. The speed of tagging is also improved. The described methods have been implemented and successfully tested on six languages.

  1. On approximation of functions by product operators

    Directory of Open Access Journals (Sweden)

    Hare Krishna Nigam

    2013-12-01

    In the present paper, two quite new results on the degree of approximation of a function f belonging to the class Lip(α, r), 1 ≤ r < ∞, and the weighted class W(L_r, ξ(t)), 1 ≤ r < ∞, by (C,2)(E,1) product operators have been obtained. The results obtained in the present paper generalize various known results on single operators.

  2. Variational Bayesian Approximation methods for inverse problems

    Science.gov (United States)

    Mohammad-Djafari, Ali

    2012-09-01

    Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters must also be addressed. In particular, two specific prior models (Student-t and mixture of Gaussians) are considered and details of the algorithms are given.

  3. Neutrino Mass Matrix with Approximate Flavor Symmetry

    CERN Document Server

    Riazuddin, M

    2003-01-01

    Phenomenological implications of neutrino oscillations implied by recent experimental data for the pattern of the neutrino mass matrix are discussed. It is shown that it is possible to have a neutrino mass matrix which shows approximate flavor symmetry, where the neutrino mass differences arise from flavor violation in off-diagonal Yukawa couplings. Two modest extensions of the standard model, which can embed the resulting neutrino mass matrix, are also discussed.

  4. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika

    2013-02-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  5. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute the inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are kriging and optimal design.
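    The H-matrix machinery itself is beyond a short sketch, but the simplest member of the Matern family (nu = 1/2, the exponential kernel) and the dense Cholesky factorization that the H-format accelerates can be illustrated directly (points and correlation length are arbitrary):

```python
import math

def matern_half_cov(points, ell=1.0):
    """Matern covariance with nu = 1/2, i.e. the exponential kernel
    exp(-|x - y| / ell), evaluated on a set of 1D points."""
    return [[math.exp(-abs(x - y) / ell) for y in points] for x in points]

def cholesky(C):
    """Dense lower-triangular Cholesky factor, O(n^3) work and O(n^2)
    storage; an H-matrix code performs this blockwise in log-linear cost."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(C[i][i] - s) if i == j else (C[i][j] - s) / L[j][j]
    return L

pts = [0.0, 0.5, 1.0, 2.0]
C = matern_half_cov(pts)
L = cholesky(C)   # C == L @ L^T
```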

  6. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  7. Approximation of pressure perturbations by FEM

    CERN Document Server

    Bichir, Cătălin - Liviu

    2011-01-01

    In the mathematical problem of linear hydrodynamic stability for shear flows against Tollmien-Schlichting perturbations, the continuity equation for the perturbation of the velocity is replaced by a Poisson equation for the pressure perturbation. The resulting eigenvalue problem, an alternative form for the two - point eigenvalue problem for the Orr - Sommerfeld equation, is formulated in a variational form and this one is approximated by finite element method (FEM). Possible applications to concrete cases are revealed.

  8. Onsager principle as a tool for approximation

    Institute of Scientific and Technical Information of China (English)

    Masao Doi

    2015-01-01

    The Onsager principle is the variational principle proposed by Onsager in his celebrated paper on the reciprocal relations. The principle has been shown to be useful in deriving many evolution equations in soft matter physics. Here the principle is shown to be useful also in solving such equations approximately. Two examples are discussed: diffusion dynamics and gel dynamics. Both examples show that the present method is novel and gives new results which capture the essential dynamics of the system.

  9. APPROXIMATE OUTPUT REGULATION FOR AFFINE NONLINEAR SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Yali DONG; Daizhan CHENG; Huashu QIN

    2003-01-01

    Output regulation for affine nonlinear systems driven by an exogenous signal is investigated in this paper. In the absence of the standard exosystem hypothesis, we assume availability of the instantaneous values of the exogenous signal and its first time-derivative for use in the control law.For affine nonlinear systems, the necessary and sufficient conditions of the solvability of approximate output regulation problem are obtained. The precise form of the control law is presented under some suitable assumptions.

  10. Space-Time Approximation with Sparse Grids

    Energy Technology Data Exchange (ETDEWEB)

    Griebel, M; Oeltz, D; Vassilevski, P S

    2005-04-14

    In this article we introduce approximation spaces for parabolic problems which are based on the tensor product construction of a multiscale basis in space and a multiscale basis in time. Proper truncation then leads to so-called space-time sparse grid spaces. For a uniform discretization of the spatial domain of dimension d with O(N^d) degrees of freedom, these spaces involve for d > 1 also only O(N^d) degrees of freedom for the discretization of the whole space-time problem. But they provide the same approximation rate as classical space-time finite element spaces which need O(N^(d+1)) degrees of freedom. This makes these approximation spaces well suited for conventional parabolic problems and for time-dependent optimization problems. We analyze the approximation properties and the dimension of these sparse grid space-time spaces for general stable multiscale bases. We then restrict ourselves to an interpolatory multiscale basis, i.e. a hierarchical basis. Here, to be able to handle also complicated spatial domains Ω, we construct the hierarchical basis from a given spatial finite element basis as follows: first we determine coarse grid points recursively over the levels by the coarsening step of the algebraic multigrid method. Then, we derive interpolatory prolongation operators between the respective coarse and fine grid points by a least squares approach. This way we obtain an algebraic hierarchical basis for the spatial domain which we then use in our space-time sparse grid approach. We give numerical results on the convergence rate of the interpolation error of these spaces for various space-time problems with two spatial dimensions. Implementational issues, data structures and questions of adaptivity are also addressed to some extent.

  11. Local characterisation of approximately finite operator algebras

    OpenAIRE

    Haworth, P. A.

    2000-01-01

    We show that the family of nest algebras with $r$ non-zero nest projections is stable, in the sense that an approximate containment of one such algebra within another is close to an exact containment. We use this result to give a local characterisation of limits formed from this family. We then consider quite general regular limit algebras and characterise these algebras using a local condition which reflects the assumed regularity of the system.

  12. APPROXIMATION MULTIDIMENSION FUCTION WITH FUNCTIONAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    Li Weibin; Liu Fang; Jiao Licheng; Zhang Shuling; Li Zongling

    2006-01-01

    The functional network was introduced by E. Castillo as an extension of the neural network: not only can it solve the problems a neural network can, but it can also formulate ones that cannot be solved by a traditional network. This paper applies the functional network to approximating multidimensional functions under ridgelet theory. The method performs more stably and faster than the traditional neural network, as the numerical examples demonstrate.

  13. Development of New Density Functional Approximations

    Science.gov (United States)

    Su, Neil Qiang; Xu, Xin

    2017-05-01

    Kohn-Sham density functional theory has become the leading electronic structure method for atoms, molecules, and extended systems. It is in principle exact, but any practical application must rely on density functional approximations (DFAs) for the exchange-correlation energy. Here we emphasize four aspects of the subject: (a) philosophies and strategies for developing DFAs; (b) classification of DFAs; (c) major sources of error in existing DFAs; and (d) some recent developments and future directions.

  14. Solving Math Problems Approximately: A Developmental Perspective.

    Directory of Open Access Journals (Sweden)

    Dana Ganor-Stern

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate, as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders', 6th graders' and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense-of-magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far from (vs. close to) it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow children to use their estimation skills in an effective manner.
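    The adults' "approximated calculation" strategy, rounding and then multiplying before comparing to the reference, can be sketched as follows (the one-significant-digit rounding rule and the numbers are assumptions for illustration):

```python
def rounded_estimate(a, b):
    """Round each positive integer operand to one significant digit,
    then multiply -- a hypothetical form of the rounding strategy."""
    def round1(x):
        d = 10 ** (len(str(x)) - 1)   # power of ten of the leading digit
        return round(x / d) * d
    return round1(a) * round1(b)

def exceeds_reference(a, b, reference):
    """Judge whether a * b exceeds a reference number via the estimate."""
    return rounded_estimate(a, b) > reference

est = rounded_estimate(43, 78)   # 40 * 80 = 3200; the exact answer is 3354
```

The comparison is easy when the reference is far from the exact answer and hard when it is close, which matches the accuracy pattern reported in the study.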

  15. Additive Approximation Algorithms for Modularity Maximization

    OpenAIRE

    Kawase, Yasushi; Matsui, Tomomi; Miyauchi, Atsushi

    2016-01-01

    The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph $G=(V,E)$, we are asked to find a partition $\\mathcal{C}$ of $V$ that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity max...

  16. Approximate Revenue Maximization in Interdependent Value Settings

    OpenAIRE

    Chawla, Shuchi; Fu, Hu; Karlin, Anna

    2014-01-01

    We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, ...

  17. Approximate Graph Edit Distance in Quadratic Time.

    Science.gov (United States)

    Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst

    2015-09-14

    Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity that restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n³) time. For large-scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph based pattern classification.
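The quadratic-time idea (greedily matching local structures instead of solving the assignment problem optimally) can be illustrated on a plain cost matrix. This is a hedged sketch of one greedy variant, not any of the paper's five algorithms; the cost matrix is hypothetical.

```python
# Greedy row-wise assignment on a node-substitution cost matrix: each row
# takes its cheapest still-unassigned column, giving O(n^2) work overall
# instead of the O(n^3) of an optimal assignment solver.
def greedy_assignment(cost):
    n = len(cost)
    free_cols = set(range(n))
    assignment, total = [], 0.0
    for i in range(n):
        j = min(free_cols, key=lambda c: cost[i][c])
        free_cols.remove(j)
        assignment.append(j)
        total += cost[i][j]
    return assignment, total

cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
print(greedy_assignment(cost))  # ([1, 0, 2], 5.0)
```

The greedy choice for row 1 (column 1, cost 0.0) is already taken by row 0, which is exactly the suboptimality the approximated edit distance tolerates.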

  18. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
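Random Fourier features, one of the two kernel approximations named above, replace the kernel matrix with an explicit low-dimensional feature map whose inner products concentrate around the kernel value. A minimal stdlib sketch for the RBF kernel k(x, y) = exp(-γ‖x - y‖²); the dimensions and γ are illustrative.

```python
import math
import random

# Random Fourier features: draw frequencies W ~ N(0, 2*gamma*I) and phases
# b ~ U[0, 2*pi); then z(x).z(y) approximates exp(-gamma * ||x - y||^2),
# with the error shrinking as the feature count D grows.
rng = random.Random(0)
d, D, gamma = 5, 2000, 0.5
W = [[rng.gauss(0.0, math.sqrt(2 * gamma)) for _ in range(d)] for _ in range(D)]
b = [rng.uniform(0.0, 2 * math.pi) for _ in range(D)]

def z(x):
    return [math.sqrt(2.0 / D) * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + bj)
            for w, bj in zip(W, b)]

x = [rng.gauss(0.0, 1.0) for _ in range(d)]
y = [rng.gauss(0.0, 1.0) for _ in range(d)]
exact = math.exp(-gamma * sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
approx = sum(zx * zy for zx, zy in zip(z(x), z(y)))
print(exact, approx)  # the two values should be close
```

A linear ranker trained on z(x) then behaves like a kernelized one without ever forming the kernel matrix, which is the speedup the abstract describes.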

  19. Approximation Algorithms for Directed Width Parameters

    CERN Document Server

    Kintali, Shiva; Kumar, Akash

    2011-01-01

    Treewidth of an undirected graph measures how close the graph is to being a tree. Several problems that are NP-hard on general graphs are solvable in polynomial time on graphs with bounded treewidth. Motivated by the success of treewidth, several directed analogues of treewidth have been introduced to measure the similarity of a directed graph to a directed acyclic graph (DAG). Directed treewidth, D-width, DAG-width, Kelly-width and directed pathwidth are some such parameters. In this paper, we present the first approximation algorithms for all these five directed width parameters. For directed treewidth and D-width we achieve an approximation factor of $O(\sqrt{\log n})$. For DAG-width, Kelly-width and directed pathwidth we achieve an $O(\log^{3/2} n)$ approximation factor. Our algorithms are constructive, i.e., they construct the decompositions associated with these parameters. The widths of these decompositions are within the above-mentioned factors of the corresponding optimal widths.

  20. Conference on Abstract Spaces and Approximation

    CERN Document Server

    Szökefalvi-Nagy, B; Abstrakte Räume und Approximation; Abstract spaces and approximation

    1969-01-01

    The present conference took place at Oberwolfach, July 18-27, 1968, as a direct follow-up on a meeting on Approximation Theory [1] held there from August 4-10, 1963. The emphasis was on theoretical aspects of approximation, rather than the numerical side. Particular importance was placed on the related fields of functional analysis and operator theory. Thirty-nine papers were presented at the conference and one more was subsequently submitted in writing. All of these are included in these proceedings. In addition there is a report on new and unsolved problems based upon a special problem session and later communications from the participants. A special role is played by the survey papers also presented in full. They cover a broad range of topics, including invariant subspaces, scattering theory, Wiener-Hopf equations, interpolation theorems, contraction operators, approximation in Banach spaces, etc. The papers have been classified according to subject matter into five chapters, but it needs littl...

  1. Symmetry and approximability of submodular maximization problems

    CERN Document Server

    Vondrak, Jan

    2011-01-01

    A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning f(S) for a given S. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, and implies several new ones. In particular, we prove that there is no constant-factor approximation for the problem of maximizing a non-negative submodular function over the bases of a matroid. We also provide a closely matching approximation algorithm for...

  2. CMB-lensing beyond the Born approximation

    Science.gov (United States)

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2016-09-01

    We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation, to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback to be reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.

  3. Green-Ampt approximations: A comprehensive analysis

    Science.gov (United States)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    Green-Ampt (GA) model and its modifications are widely used for simulating infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using the published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing the model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate followed by the PA and VA models for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
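The implicit GA relation that all nine explicit models approximate is K·t = F - ψΔθ·ln(1 + F/(ψΔθ)), which has no closed form for the cumulative infiltration F. A hedged sketch of solving it by fixed-point iteration, the benchmark against which explicit formulas are judged; the parameter values are illustrative, not soil-specific.

```python
import math

# Implicit Green-Ampt: K*t = F - psi_dtheta * ln(1 + F / psi_dtheta).
# The fixed-point map F <- K*t + psi_dtheta * ln(1 + F / psi_dtheta) is a
# contraction for F > 0, so simple iteration converges.
def green_ampt_F(t, K, psi_dtheta, iters=200):
    F = K * t + psi_dtheta  # rough initial guess
    for _ in range(iters):
        F = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
    return F

K, psi_dtheta, t = 1.0, 5.0, 2.0   # illustrative units (e.g. cm/h, cm, h)
F = green_ampt_F(t, K, psi_dtheta)
residual = K * t - (F - psi_dtheta * math.log(1.0 + F / psi_dtheta))
print(F, residual)  # residual of the implicit equation is ~0
```

Explicit approximations trade this iteration for a single formula evaluation, which is why computation time appears among the ranking criteria above.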

  4. The Complexity of Approximately Counting Stable Matchings

    CERN Document Server

    Chebolu, Prasad; Martin, Russell

    2010-01-01

    We investigate the complexity of approximately counting stable matchings in the $k$-attribute model, where the preference lists are determined by dot products of "preference vectors" with "attribute vectors", or by Euclidean distances between "preference points" and "attribute points". Irving and Leather proved that counting the number of stable matchings in the general case is #P-complete. Counting the number of stable matchings is reducible to counting the number of downsets in a (related) partial order and is interreducible, in an approximation-preserving sense, to a class of problems that includes counting the number of independent sets in a bipartite graph (#BIS). It is conjectured that no FPRAS exists for this class of problems. We show this approximation-preserving interreducibility remains even in the restricted $k$-attribute setting when $k \geq 3$ (dot products) or $k \geq 2$ (Euclidean distances). Finally, we show it is easy to count the number of stable matchings in the 1-attribute dot-product ...

  5. An Origami Approximation to the Cosmic Web

    Science.gov (United States)

    Neyrinck, Mark C.

    2016-10-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.

  6. Generalized Quasilinear Approximation: Application to Zonal Jets

    Science.gov (United States)

    Marston, J. B.; Chini, G. P.; Tobias, S. M.

    2016-05-01

    Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems.

  7. Approximating Low-Dimensional Coverage Problems

    CERN Document Server

    Badanidiyuru, Ashwinkumar; Lee, Hooyeon

    2011-01-01

    We study the complexity of the maximum coverage problem, restricted to set systems of bounded VC-dimension. Our main result is a fixed-parameter tractable approximation scheme: an algorithm that outputs a $(1-\\eps)$-approximation to the maximum-cardinality union of $k$ sets, in running time $O(f(\\eps,k,d)\\cdot poly(n))$ where $n$ is the problem size, $d$ is the VC-dimension of the set system, and $f(\\eps,k,d)$ is exponential in $(kd/\\eps)^c$ for some constant $c$. We complement this positive result by showing that the function $f(\\eps,k,d)$ in the running-time bound cannot be replaced by a function depending only on $(\\eps,d)$ or on $(k,d)$, under standard complexity assumptions. We also present an improved upper bound on the approximation ratio of the greedy algorithm in special cases of the problem, including when the sets have bounded cardinality and when they are two-dimensional halfspaces. Complementing these positive results, we show that when the sets are four-dimensional halfspaces neither the greedy ...
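The greedy algorithm whose approximation ratio the abstract analyzes in special cases is simple to state: repeatedly add the set that covers the most still-uncovered elements. A minimal sketch on a hypothetical set system:

```python
# Greedy maximum coverage: pick k sets, each time taking the one with the
# largest marginal gain over the elements covered so far.
def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage(sets, 2))  # ([0, 2], {1, 2, 3, 4, 5, 6})
```

In general this greedy rule guarantees a (1 - 1/e) fraction of the optimal coverage; the paper's contribution concerns how much better one can do when the set system has bounded VC-dimension.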

  8. Simultaneous perturbation stochastic approximation for tidal models

    KAUST Repository

    Altaf, M.U.

    2011-05-12

    The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast the storm surges in the North Sea. The forecasts are necessary to support the decision of the timely closure of the moveable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. Unlike finite-difference schemes, the simultaneous-perturbation gradient approximation uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational costs required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) The SPSA method gives results comparable to the steepest descent method at little computational cost. (2) The SPSA method can be used at little computational cost to estimate a large number of parameters.
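The two-evaluation gradient approximation at the heart of SPSA can be sketched on a toy objective. This is an illustration of the standard SPSA recursion (Spall's gain exponents 0.602 and 0.101), not the DCSM calibration itself; the quadratic objective and gain constants are hypothetical.

```python
import random

# SPSA: perturb ALL parameters simultaneously with a random +/-1 vector and
# use just two objective evaluations per iteration to estimate the gradient,
# regardless of the parameter count.
def spsa_minimize(f, theta, a=0.1, c=0.1, n_iter=1000, seed=1):
    rng = random.Random(seed)
    theta = list(theta)
    p = len(theta)
    for k in range(1, n_iter + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101   # decaying gain sequences
        delta = [rng.choice((-1.0, 1.0)) for _ in range(p)]
        f_plus = f([t + ck * d for t, d in zip(theta, delta)])
        f_minus = f([t - ck * d for t, d in zip(theta, delta)])
        g = f_plus - f_minus                      # two evaluations total
        theta = [t - ak * g / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

f = lambda th: sum((x - 3.0) ** 2 for x in th)    # toy objective, minimum at 3
print(spsa_minimize(f, [0.0] * 4))                # each coordinate ends near 3
```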

  9. CMB-lensing beyond the Born approximation

    CERN Document Server

    Marozzi, Giovanni; Di Dio, Enea; Durrer, Ruth

    2016-01-01

    We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation, to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback to be reliable only for multipoles $\\ell\\lesssim 2500$, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum...

  10. An approximation algorithm for square packing

    NARCIS (Netherlands)

    R. van Stee (Rob)

    2004-01-01

    textabstractWe consider the problem of packing squares into bins which are unit squares, where the goal is to minimize the number of bins used. We present an algorithm for this problem with an absolute worst-case ratio of 2, which is optimal provided P != NP.

  11. Approximate Equalities on Rough Intuitionistic Fuzzy Sets and an Analysis of Approximate Equalities

    Directory of Open Access Journals (Sweden)

    B. K. Tripathy

    2012-03-01

    Full Text Available In order to involve user knowledge in determining equality of sets, which may not be equal in the mathematical sense, three types of approximate (rough) equalities were introduced by Novotny and Pawlak ([8, 9, 10]). These notions were generalized by Tripathy, Mitra and Ojha ([13]), who introduced the concepts of approximate (rough) equivalences of sets. Rough equivalences capture equality of sets at a higher level than rough equalities. More properties of these concepts were established in [14]. Combining the conditions for the two types of approximate equalities, two more approximate equalities were introduced by Tripathy [12] and a comparative analysis of their relative efficiency was provided. In [15], the four types of approximate equalities were extended by considering rough fuzzy sets instead of only rough sets. In fact the concepts of leveled approximate equalities were introduced and their properties were studied. In this paper we proceed further by introducing and studying the approximate equalities based on rough intuitionistic fuzzy sets instead of rough fuzzy sets. That is, we introduce the concepts of approximate (rough) equalities of intuitionistic fuzzy sets and study their properties. We provide some real life examples to show the applications of rough equalities of fuzzy sets and rough equalities of intuitionistic fuzzy sets.
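The rough-set machinery underlying these approximate equalities rests on lower and upper approximations with respect to a partition into equivalence classes: two sets are "bottom-equal" when their lower approximations coincide and "top-equal" when their upper approximations do. A basic crisp-set sketch (the paper's fuzzy and intuitionistic-fuzzy versions are not reproduced here); the partition and sets are hypothetical.

```python
# Lower approximation: union of equivalence classes fully inside X.
# Upper approximation: union of equivalence classes that intersect X.
def lower_upper(classes, X):
    lower = set().union(*(c for c in classes if c <= X))
    upper = set().union(*(c for c in classes if c & X))
    return lower, upper

classes = [{1, 2}, {3, 4}, {5}]       # a partition of {1,...,5}
X, Y = {1, 2, 3}, {1, 2, 4}

print(lower_upper(classes, X))        # ({1, 2}, {1, 2, 3, 4})
print(lower_upper(classes, Y))        # ({1, 2}, {1, 2, 3, 4})
# X != Y, yet they are both bottom-equal and top-equal: "approximately equal"
# relative to the user's granularity of knowledge.
```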

  12. Approximate Marginalization of Absorption and Scattering in Fluorescence Diffuse Optical Tomography

    CERN Document Server

    Mozumder, Meghdoot; Arridge, Simon; Kaipio, Jari P; d'Andrea, Cosimo; Kolehmainen, Ville

    2015-01-01

    In fluorescence diffuse optical tomography (fDOT), the reconstruction of the fluorophore concentration inside the target body is usually carried out using a normalized Born approximation model where the measured fluorescent emission data is scaled by measured excitation data. One of the benefits of the model is that it can tolerate inaccuracy in the absorption and scattering distributions that are used in the construction of the forward model to some extent. In this paper, we employ the recently proposed Bayesian approximation error approach to fDOT for compensating for the modeling errors caused by the inaccurately known optical properties of the target in combination with the normalized Born approximation model. The approach is evaluated using a simulated test case with different amount of error in the optical properties. The results show that the Bayesian approximation error approach improves the tolerance of fDOT imaging against modeling errors caused by inaccurately known absorption and scattering of the...

  13. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to the idea that approximate number and time share common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math than time does, and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.

  14. On the Use of Approximations in Statistical Physics

    CERN Document Server

    Hoffmann, C

    2003-01-01

    Two approximations are frequently used in statistical physics: the first, which we shall call the mean values approximation, is generally (and improperly) referred to as the "maximum term approximation"; the second is the Stirling approximation. In this paper we demonstrate that, in the calculation of mean values of multinomial distributions, the error introduced by the first approximation is exactly compensated by the second.
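The Stirling approximation invoked in this compensation argument is ln n! ≈ n·ln n - n; its absolute error grows only like ln n, so the relative error vanishes for large n. A quick numerical illustration (the choice of n values is arbitrary):

```python
import math

# Compare exact ln(n!) (via the log-gamma function) with the Stirling
# approximation n*ln(n) - n; the relative error shrinks as n grows.
rel_errors = []
for n in (10, 100, 1000):
    exact = math.lgamma(n + 1)            # ln n!
    stirling = n * math.log(n) - n
    rel_errors.append((exact - stirling) / exact)
    print(n, rel_errors[-1])
```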

  15. Idiopathic Normal Pressure Hydrocephalus

    Directory of Open Access Journals (Sweden)

    Basant R. Nassar BS

    2016-04-01

    Full Text Available Idiopathic normal pressure hydrocephalus (iNPH) is a potentially reversible neurodegenerative disease commonly characterized by a triad of dementia, gait disturbance, and urinary disturbance. Advancements in diagnosis and treatment have aided in properly identifying and improving symptoms in patients. However, a large proportion of iNPH patients remain either undiagnosed or misdiagnosed. Using the PubMed search engine with the keywords “normal pressure hydrocephalus,” “diagnosis,” “shunt treatment,” “biomarkers,” “gait disturbances,” “cognitive function,” “neuropsychology,” “imaging,” and “pathogenesis,” articles were obtained for this review. The majority of the articles were retrieved from the past 10 years. The purpose of this review article is to aid general practitioners in further understanding current findings on the pathogenesis, diagnosis, and treatment of iNPH.

  17. Monitoring the normal body

    DEFF Research Database (Denmark)

    Nissen, Nina Konstantin; Holm, Lotte; Baarts, Charlotte

    2015-01-01

    Introduction: An extensive body of literature is concerned with obese people, risk, and weight management. However, little is known about weight management among people not belonging to the extreme BMI categories. Management of weight among normal-weight and moderately overweight individuals...... provides us with knowledge about how to prevent future overweight or obesity. This paper investigates body size ideals and monitoring practices among normal-weight and moderately overweight people. Methods: The study is based on in-depth interviews combined with observations. 24 participants were...... of practices for monitoring their bodies based on different kinds of calculations of weight and body size, observations of body shape, and measurements of bodily firmness. Biometric measurements are familiar to them as are health authorities' recommendations. Despite not belonging to an extreme BMI category...

  18. Normal Order: Combinatorial Graphs

    CERN Document Server

    Solomon, A I; Blasiak, P; Horzela, A; Penson, K A; Solomon, Allan I.; Duchamp, Gerard; Blasiak, Pawel; Horzela, Andrzej; Penson, Karol A.

    2004-01-01

    A conventional context for supersymmetric problems arises when we consider systems containing both boson and fermion operators. In this note we consider the normal ordering problem for a string of such operators. In the general case, upon which we touch briefly, this problem leads to combinatorial numbers, the so-called Rook numbers. Since we assume that the two species, bosons and fermions, commute, we subsequently restrict ourselves to consideration of a single species, single-mode boson monomials. This problem leads to elegant generalisations of well-known combinatorial numbers, specifically Bell and Stirling numbers. We explicitly give the generating functions for some classes of these numbers. In this note we concentrate on the combinatorial graph approach, showing how some important classical results of graph theory lead to transparent representations of the combinatorial numbers associated with the boson normal ordering problem.
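The Bell numbers mentioned in this abstract arise as row sums of the Stirling numbers appearing in the boson normal ordering of (a†a)ⁿ. They are easy to compute with the standard Bell-triangle recurrence; this sketch is an illustration of those combinatorial numbers, not of the paper's graph-theoretic or generating-function machinery.

```python
# Bell numbers B_0..B_{n-1} via the Bell triangle: each row starts with the
# last entry of the previous row, and each entry adds its left neighbour to
# the entry above it; the first entry of row k is B_k.
def bell_numbers(n):
    row, bells = [1], [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
        bells.append(row[0])
    return bells

print(bell_numbers(7))  # [1, 1, 2, 5, 15, 52, 203]
```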

  19. Neuroethics beyond Normal.

    Science.gov (United States)

    Shook, John R; Giordano, James

    2016-01-01

    An integrated and principled neuroethics offers ethical guidelines able to transcend conventional and medical reliance on normality standards. Elsewhere we have proposed four principles for wise guidance on human transformations. Principles like these are already urgently needed, as bio- and cyberenhancements are rapidly emerging. Context matters. Neither "treatments" nor "enhancements" are objectively identifiable apart from performance expectations, social contexts, and civic orders. Lessons learned from disability studies about enablement and inclusion suggest a fresh way to categorize modifications to the body and its performance. The term "enhancement" should be broken apart to permit recognition of enablements and augmentations, and kinds of radical augmentation for specialized performance. Augmentations affecting the self, self-worth, and self-identity of persons require heightened ethical scrutiny. Reversibility becomes the core problem, not the easy answer, as augmented persons may not cooperate with either decommissioning or displacement into unaccommodating societies. We conclude by indicating how our four principles of self-creativity, nonobsolescence, empowerment, and citizenship establish a neuroethics beyond normal that is better prepared for a future in which humans and their societies are going so far beyond normal.

  20. Belief-Propagation-Approximated Decoding of Low-Density Parity-Check Codes

    Institute of Scientific and Technical Information of China (English)

    SONG Hui-shi; ZHANG Ping

    2004-01-01

    In this paper, we propose a new reduced-complexity decoding algorithm for Low-Density Parity-Check (LDPC) codes, called the Belief-Propagation-Approximated (BPA) algorithm, which uses the idea of normalization to replace the intricate nonlinear check-node operation of the original BP algorithm with a single table-lookup operation. The normalization factors can be obtained by simulation or theoretically. Simulation results demonstrate that the BPA algorithm exhibits satisfactory bit error performance on the Additive White Gaussian Noise (AWGN) channel.
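The check-node operation being simplified is the nonlinear tanh rule of BP. A common way to see the role of the normalization factor is to compare it with the normalized min-sum surrogate, which scales the cheap min-magnitude estimate down toward the exact value; this is a hedged sketch of the normalization idea, not the paper's table-lookup scheme, and the LLRs and α = 0.8 are illustrative.

```python
import math

# Exact BP check-node update: L_out = 2 * atanh( prod_i tanh(L_i / 2) ).
def check_node_exact(llrs):
    prod = 1.0
    for L in llrs:
        prod *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(prod)

# Normalized min-sum surrogate: product of signs times the minimum
# magnitude, scaled by a normalization factor alpha < 1 (min-sum alone
# overestimates the magnitude).
def check_node_normalized_min_sum(llrs, alpha=0.8):
    sign = 1.0
    for L in llrs:
        sign *= 1.0 if L >= 0 else -1.0
    return alpha * sign * min(abs(L) for L in llrs)

llrs = [1.2, -0.7, 2.5]
print(check_node_exact(llrs), check_node_normalized_min_sum(llrs))
```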