WorldWideScience

Sample records for unit normal approximation

  1. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    Full Text Available In this paper we introduce a Jackson-type theorem for functions in Lp spaces on the sphere, and study the best approximation of functions in such spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces by the modulus of smoothness.

  2. On the approximative normal values of multivalued operators in topological vector space

    International Nuclear Information System (INIS)

    Nguyen Minh Chuong; Khuat van Ninh

    1989-09-01

    In this paper the problem of approximating the normal values of multivalued linear closed operators from a topological vector Mackey space into an E-space is considered. The existence of the normal value and the convergence of the approximative values to the normal value are proved. (author). 4 refs

  3. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

    The approximation of the normal distribution by a chaotic expression is achieved by means of the Weierstrass function, where, for a certain set of parameters, the density of the derived recurrence renders a good approximation of the bell curve.

  4. The triangular density to approximate the normal density: decision rules-of-thumb

    International Nuclear Information System (INIS)

    Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.

    2003-01-01

    In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that produces surprisingly accurate performance in many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extension of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that the use of the approximation generates. Such rules-of-thumb, which are useful in risk and reliability analysis and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will provoke other practitioners to explore its possible use in various domains and applications.
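
The tractability argument can be illustrated with a minimal sketch: a symmetric triangular density, moment-matched to the standard normal by equating variances. The matching rule used here (c = sqrt(6)) is an assumption for illustration, not necessarily the parameterization the paper derives.

```python
import numpy as np

# Symmetric triangular density on [-c, c], peak at 0, matched to the
# standard normal by equating variances (c**2 / 6 = 1, so c = sqrt(6)).
# This moment-matching choice is an assumption for illustration.
c = np.sqrt(6.0)

def tri_pdf(x):
    """Piecewise-linear triangular density; zero outside [-c, c]."""
    return np.where(np.abs(x) < c, (c - np.abs(x)) / c**2, 0.0)

def norm_pdf(x):
    """Standard normal density."""
    return np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

x = np.linspace(-4.0, 4.0, 2001)
max_err = np.max(np.abs(tri_pdf(x) - norm_pdf(x)))
# The worst-case pointwise density error stays small (a few percent),
# which is why the piecewise-linear stand-in is attractive for
# closed-form work.
```

The piecewise-linear form means the CDF is piecewise quadratic and all moments are elementary, which is the source of the closed-form solutions discussed above.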

  5. A simple approximation to the bivariate normal distribution with large correlation coefficient

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the

  6. Padé approximant for normal stress differences in large-amplitude oscillatory shear flow

    Science.gov (United States)

    Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.

    2018-04-01

    Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
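
The general idea of replacing a truncated series with a Padé approximant can be sketched on a toy function; exp(x) below is a stand-in for the paper's LAOS normal-stress expansions, and the [2/2] order (rather than the paper's [3,4]) is chosen only because the toy series is short.

```python
import numpy as np
from math import exp, factorial

from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to order 4 -- a stand-in for a
# truncated shear-rate-amplitude expansion.
an = [1.0 / factorial(k) for k in range(5)]

# [2/2] Padé approximant: numerator and denominator both of degree 2,
# built from the same five series coefficients.
p, q = pade(an, 2)

x = 1.0
pade_val = p(x) / q(x)
taylor_val = sum(a * x**k for k, a in enumerate(an))

pade_err = abs(pade_val - exp(x))
taylor_err = abs(taylor_val - exp(x))
# For the same number of series coefficients, the rational (Padé)
# form lands closer to exp(1) than the truncated Taylor sum.
```

This mirrors the paper's strategy: the approximant is constructed purely from the known expansion coefficients, then validated against an exact solution.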

  7. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
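
The computable statistical part of the Monte Carlo error can be sketched on a toy observable; the log-normal coefficient and the observable 1/a below are assumed stand-ins (the paper's FEM discretization and its high-frequency error component are not reproduced here).

```python
import numpy as np

# Minimal Monte Carlo sketch: sample a log-normal coefficient a = exp(Z),
# Z ~ N(0, sigma^2), and estimate the observable E[1/a] together with
# its computable statistical error. Toy model, not the groundwater FEM.
rng = np.random.default_rng(0)
N = 100_000
sigma = 0.5
a = np.exp(rng.normal(0.0, sigma, size=N))    # log-normal samples

g = 1.0 / a                                   # per-sample observable
mean = g.mean()
stderr = g.std(ddof=1) / np.sqrt(N)           # computable MC error

# Closed form for the toy case: E[exp(-Z)] = exp(sigma^2 / 2),
# so the estimate can be checked against the exact value.
exact = np.exp(sigma**2 / 2)
```

The point of the talk is that this statistical error is only part of the story: the unresolved high-frequency discretization error must be estimated separately and added to the computable part.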

  8. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-01

    log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible

  9. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    Full Text Available This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine the component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform the selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of the experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve a higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.
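
The combination step can be sketched as a small constrained quadratic program. The component "RBFN" outputs below are synthetic stand-ins (no networks are trained), and the sum-to-one, non-negative weight constraints are an assumed form of the ALNC constraint set.

```python
import numpy as np
from scipy.optimize import minimize

# Fuse several component predictors by minimizing the combined MSE
# subject to weights that are non-negative and sum to one -- a
# constrained quadratic program, as in the combination step above.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)

# Three imperfect "component" predictions (assumed, for illustration).
preds = np.stack([
    target + rng.normal(0, 0.10, x.size),
    target + rng.normal(0, 0.20, x.size),
    0.8 * target + rng.normal(0, 0.05, x.size),
])

def mse(w):
    """Mean-squared error of the weighted combination."""
    return np.mean((w @ preds - target) ** 2)

w0 = np.full(3, 1.0 / 3.0)
res = minimize(mse, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
w = res.x
# The fused predictor is at least as accurate as any single component.
```

Because the objective is a convex quadratic in the weights, the constrained optimum never does worse than the best single component on the fitting data, which is the basic rationale for the selective ensemble.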

  10. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    Science.gov (United States)

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
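
The question the article poses graphically ("accurate to what N?") can be checked numerically for the 1-sample signed-rank statistic: enumerate its exact null distribution for small n and compare a tail probability against the continuity-corrected normal approximation. The observed value w_obs below is arbitrary, chosen only for illustration.

```python
import numpy as np
from itertools import product
from math import sqrt, erf

# Exact null distribution of the Wilcoxon signed-rank statistic W for
# small n, versus its normal approximation.
n = 10
ranks = np.arange(1, n + 1)

# Enumerate all 2^n sign patterns; W = sum of positively signed ranks.
ws = np.array([int(np.dot(signs, ranks))
               for signs in product((0, 1), repeat=n)])

w_obs = 8                                     # arbitrary small value
exact_p = np.mean(ws <= w_obs)                # exact lower-tail probability

mu = n * (n + 1) / 4                          # null mean of W
sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)  # null SD of W
z = (w_obs + 0.5 - mu) / sigma                # continuity-corrected z
approx_p = 0.5 * (1 + erf(z / sqrt(2)))       # standard normal CDF
# Even at n = 10 the two tail probabilities agree to within a few
# thousandths.
```

Repeating this over a grid of n and w_obs is essentially what the article's graphical comparison does.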

  11. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1.

    Science.gov (United States)

    Gibbons, Robert D.; And Others

    The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n x n" correlation matrix of the "chi(sub i)" and the standardized multivariate…

  12. Design of reciprocal unit based on the Newton-Raphson approximation

    DEFF Research Database (Denmark)

    Gundersen, Anders Torp; Winther-Almstrup, Rasmus; Boesen, Michael

    A design of a reciprocal unit based on Newton-Raphson approximation is described and implemented. We present two different designs for single precision, where one of them is extremely fast but the trade-off is an increase in area. The solution behind the fast design is that the design is fully...
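
The iteration such a unit implements is x_{k+1} = x_k(2 − d·x_k), which roughly doubles the number of correct bits per step. The linear seed below (the classic 48/17 − 32/17·d minimax guess for d in [0.5, 1]) is a common software stand-in for the hardware lookup table, not the paper's circuit.

```python
# Newton-Raphson reciprocal: approximate 1/d without a divide.
def reciprocal(d, iterations=4):
    """Approximate 1/d for d in [0.5, 1] via Newton-Raphson."""
    x = 48.0 / 17.0 - (32.0 / 17.0) * d   # minimax linear initial guess
    for _ in range(iterations):
        x = x * (2.0 - d * x)             # Newton step for f(x) = 1/x - d
    return x
```

With this seed the relative error starts at 1/17 and squares on every step, so four iterations exceed double precision; a hardware design trades iterations against seed-table size.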

  13. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  14. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Full Text Available Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases. However, the test statistics of Cook are oversized. Researchers have found that using conventional tests is dangerous, though the best performer among these is the HCCME. The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper, the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices has been derived, and results are reported for various sample sizes in which size distortion is reduced. The properties of estimates of ESTAR models have been investigated when errors are assumed non-normal. We compare the results obtained by nonlinear least squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.

  15. Compact quantum group C*-algebras as Hopf algebras with approximate unit

    International Nuclear Information System (INIS)

    Do Ngoc Diep; Phung Ho Hai; Kuku, A.O.

    1999-04-01

    In this paper, we construct and study the representation theory of a Hopf C*-algebra with approximate unit, which constitutes a quantum analogue of a compact group C*-algebra. The construction is done by first introducing a convolution-product on an arbitrary Hopf algebra H with integral, and then constructing the L² and C*-envelopes of H (with the new convolution-product) when H is a compact Hopf *-algebra. (author)

  16. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  17. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  18. Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2012-08-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr−1, and the dust mass load is 19.3 Tg for year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes. Biases in long-range transport are also contributing. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m−2, both of which show good consistency with model estimates from other studies.

  19. The United States and Iran: Prospects for Normalization

    National Research Council Canada - National Science Library

    Harris, William

    1999-01-01

    .... Even then, normalization will not come quickly or easily. It will require steady, long-term US effort and will be complicated by two decades of hostility and by domestic political dynamics in both countries that hinder rational policy debate.

  20. 77 FR 38857 - Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal...

    Science.gov (United States)

    2012-06-29

    ... Filtration and Adsorption Units of Normal Atmosphere Cleanup Systems in Light-Water- Cooled Nuclear Power... Criteria for Air Filtration and Adsorption Units of Normal Atmosphere Cleanup Systems in Light-Water-Cooled... draft regulatory guide (DG), DG-1280, ``Design, Inspection, and Testing Criteria for Air Filtration and...

  1. Persistence and failure of mean-field approximations adapted to a class of systems of delay-coupled excitable units

    Science.gov (United States)

    Franović, Igor; Todorović, Kristina; Vasović, Nebojša; Burić, Nikola

    2014-02-01

    We consider the approximations behind the typical mean-field model derived for a class of systems made up of type II excitable units influenced by noise and coupling delays. The formulation of the two approximations, referred to as the Gaussian and the quasi-independence approximation, as well as the fashion in which their validity is verified, are adapted to reflect the essential properties of the underlying system. It is demonstrated that the failure of the mean-field model associated with the breakdown of the quasi-independence approximation can be predicted by the noise-induced bistability in the dynamics of the mean-field system. As for the Gaussian approximation, its violation is related to the increase of noise intensity, but the actual condition for failure can be cast in qualitative, rather than quantitative terms. We also discuss how the fulfillment of the mean-field approximations affects the statistics of the first return times for the local and global variables, further exploring the link between the fulfillment of the quasi-independence approximation and certain forms of synchronization between the individual units.

  2. Borders and border representations: Comparative approximations among the United States and Latin America

    Directory of Open Access Journals (Sweden)

    Marcos Cueva Perus

    2005-01-01

    Full Text Available This article uses a comparative approach regarding frontier symbols and myths among the United States, Latin America and the Caribbean. Although wars fought over frontiers have greatly diminished throughout the world, the conception of the frontier still held by the United States is that of a nationalist myth which embodies a semi-religious faith in the free market and democracy. On the other hand, Latin American and Caribbean countries, whose frontiers are far more complex, have shown extraordinary stability for several decades. This paper points out the risks involved in the spread of the United States' notion of the frontier which, in addition, goes hand-in-hand with the problem of multicultural segmentation. Although Latin American and Caribbean frontiers may be stable, they are vulnerable to the infiltration of foreign frontier representations.

  3. Environmental assessment: Transfer of normal and low-enriched uranium billets to the United Kingdom, Hanford Site, Richland, Washington

    International Nuclear Information System (INIS)

    1995-11-01

    Under the auspices of an agreement between the U.S. and the United Kingdom, the U.S. Department of Energy (DOE) has an opportunity to transfer approximately 710,000 kilograms (1,562,000 pounds) of unneeded normal and low-enriched uranium (LEU) to the United Kingdom, thus reducing long-term surveillance and maintenance burdens at the Hanford Site. The material, in the form of billets, is controlled by DOE's Defense Programs, and is presently stored as surplus material in the 300 Area of the Hanford Site. The United Kingdom has expressed a need for the billets. The surplus uranium billets are currently stored in wooden shipping containers in secured facilities in the 300 Area at the Hanford Site (the 303-B and 303-G storage facilities). There are 482 billets at an enrichment level (based on uranium-235 content) of 0.71 weight-percent. This enrichment level is normal uranium; that is, uranium having 0.711 as the percentage by weight of uranium-235 as occurring in nature. There are 3,242 billets at an enrichment level of 0.95 weight-percent (i.e., low-enriched uranium). This inventory represents a total of approximately 532 curies. The facilities are routinely monitored. The dose rate on contact of a uranium billet is approximately 8 millirem per hour. The dose rate on contact of a wooden shipping container containing 4 billets is approximately 4 millirem per hour. The dose rate at the exterior of the storage facilities is indistinguishable from background levels.

  4. A parallel approximate string matching under Levenshtein distance on graphics processing units using warp-shuffle operations.

    Directory of Open Access Journals (Sweden)

    ThienLuan Ho

    Full Text Available Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using warp-shuffle operations instead of accessing the shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results on real DNA packages revealed that the proposed algorithm and its implementation achieved speed-ups of up to 122.64 and 1.53 times relative to a sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively.
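
For reference, the sequential baseline that such GPU algorithms accelerate is the classic dynamic program for k-differences matching (Sellers' algorithm); the sketch below is the CPU version, not the warp-shuffle implementation described in the record.

```python
# Approximate string matching with k differences: report every end
# position in text where some substring matches pattern with edit
# distance (insertions, deletions, substitutions) at most k.
def k_difference_matches(pattern, text, k):
    """Return end positions in text where pattern matches with <= k edits."""
    m = len(pattern)
    # col[i] = edit distance of pattern[:i] vs the best substring
    # ending at the current text position.
    col = list(range(m + 1))          # empty text: i deletions needed
    hits = []
    for j, tc in enumerate(text):
        prev_diag = col[0]
        col[0] = 0                    # a match may start anywhere
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == tc else 1
            cur = min(col[i] + 1,         # edit against text char
                      col[i - 1] + 1,     # edit against pattern char
                      prev_diag + cost)   # match or substitution
            prev_diag, col[i] = col[i], cur
        if col[m] <= k:
            hits.append(j)
    return hits
```

The GPU version parallelizes the anti-diagonals of this same recurrence, with warp-shuffle operations replacing shared-memory exchange of the column values between threads.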

  5. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects].

    Science.gov (United States)

    Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J

    Among the different techniques for motor unit number estimation (MUNE) is the statistical (Poisson) one, in which the activation of motor units is carried out by electrical stimulation and the estimation is performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to provide an approachable account of the Poisson MUNE technique, giving a comprehensible view of its methodology, and to obtain normal results in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for all of them was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group, and MUNE correlated with age more strongly than CMAP amplitude did (0.5002 and 0.4142, respectively), consistent with the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.
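
The statistical principle behind Poisson MUNE can be sketched numerically: if the number of units recruited by a submaximal stimulus is Poisson-distributed, the single-unit amplitude can be estimated as variance/mean of the response amplitudes, and MUNE as maximal CMAP over unit amplitude. All parameters below are invented for illustration and the model is idealized (noise-free, identical units), unlike the clinical procedure.

```python
import numpy as np

# Toy Poisson MUNE sketch with assumed parameters.
rng = np.random.default_rng(2)
true_unit_amp = 0.05        # mV, assumed mean single-unit amplitude
true_count = 180            # assumed true number of motor units
lam = 6.0                   # mean units activated per submaximal stimulus

n_units = rng.poisson(lam, size=5000)
responses = n_units * true_unit_amp       # idealized scaled-Poisson responses

# For a scaled Poisson variable, variance/mean equals the scale factor,
# i.e. the single motor unit amplitude.
unit_amp_est = responses.var() / responses.mean()
cmap_max = true_count * true_unit_amp     # maximal CMAP amplitude
mune = cmap_max / unit_amp_est            # motor unit number estimate
```

This recovers an estimate near the assumed 180 units; the clinical method applies the same variance/mean identity to measured response increments across stimulus intensities.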

  6. The application of the piecewise linear approximation to the spectral neighborhood of soil line for the analysis of the quality of normalization of remote sensing materials

    Science.gov (United States)

    Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.

    2017-04-01

    The concept of the soil line can be used to describe the temporal distribution of spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply the temporal soil line (TSL). In order to create the TSL for 8000 regular lattice points over the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014, after a certain transformation. As Landsat images are matrices of spectral brightness values, this transformation is a normalization of the matrices. There are several methods of normalization that move, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of the soil line in order to assess the quality of normalization mathematically. This approach allowed us to rank normalization methods by quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create maps of the distribution of the a and b coefficients of the TSL. The map of the b coefficient is characterized by a high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).
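
The per-point TSL fit can be sketched as a simple linear regression across the multi-temporal stack; the red/NIR bands, the synthetic brightness values, and the identification of a with the slope and b with the intercept are all assumptions for illustration (the record does not specify which coefficient is which).

```python
import numpy as np

# One lattice point, 34 scenes (as in the record): regress near-infrared
# brightness on red brightness to obtain the soil line coefficients.
# Synthetic data; the true slope/intercept below are invented.
rng = np.random.default_rng(3)
red = rng.uniform(0.05, 0.35, size=34)
nir = 1.2 * red + 0.04 + rng.normal(0, 0.01, size=34)

a, b = np.polyfit(red, nir, 1)   # assumed: a = slope, b = intercept
```

Repeating this fit over all 8000 lattice points yields the coefficient maps described above, and the scatter of points about the fitted line is what the piecewise-linear quality measure assesses.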

  7. Midwives' experiences of facilitating normal birth in an obstetric-led unit: a feminist perspective.

    LENUS (Irish Health Repository)

    Keating, Annette

    2012-01-31

    OBJECTIVE: to explore midwives' experiences of facilitating normal birth in an obstetric-led unit. DESIGN: a feminist approach using semi-structured interviews focusing on midwives' perceptions of normal birth and their ability to facilitate this birth option in an obstetric-led unit. SETTING: Ireland. PARTICIPATION: a purposeful sample of 10 midwives with 6-30 years of midwifery experience. All participants had worked for a minimum of 6 years in a labour ward setting, and had been in their current setting for the previous 2 years. FINDINGS: the midwives' narratives related to the following four concepts of patriarchy: 'hierarchical thinking'

  8. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    Science.gov (United States)

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

    Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard non-linear regression, but information can often be gained from between-subjects variability. This makes the use of mixed-effects models to estimate population parameters natural. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations) devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model quality of fit. In pharmacokinetics models, first, we evaluate the properties of this algorithm and compare it with FOCE and MCMC algorithms in simulations. Then, we illustrate NIMROD use on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
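
The generic normal (Laplace) approximation that such MAP-based inference relies on can be sketched in one dimension: maximize the log-posterior, then take the inverse curvature at the mode as the approximate variance. The Beta(8, 4) "posterior" below is an assumed toy stand-in for an ODE mixed-effects model, used because its mode is known analytically.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Normal (Laplace) approximation of a posterior: find the MAP, then
# approximate the posterior by N(MAP, 1/curvature) at the mode.
a, b = 8.0, 4.0                                # toy Beta(8, 4) posterior

def neg_log_post(t):
    """Negative log-density of Beta(a, b), up to a constant."""
    return -((a - 1) * np.log(t) + (b - 1) * np.log(1 - t))

res = minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6),
                      method="bounded")
map_est = res.x                                # analytic mode: (a-1)/(a+b-2) = 0.7

# Curvature (negative Hessian of the log-posterior) by central differences.
h = 1e-5
curv = (neg_log_post(map_est + h) - 2 * neg_log_post(map_est)
        + neg_log_post(map_est - h)) / h**2
approx_sd = 1.0 / np.sqrt(curv)
# Approximate posterior: Normal(map_est, approx_sd**2).
```

In NIMROD the same two ingredients appear in many dimensions: a numerical optimizer for the MAP of the joint posterior and the Hessian at that point for the Gaussian covariance.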

  9. The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss.

    Science.gov (United States)

    Bull, Rebecca; Marschark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A

    2018-06-01

    Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM of CHL were significantly poorer compared to CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution: What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study add? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children. The study addresses recent issues regarding the contradictory findings of the relationship of the approximate number system to math ability.

  10. Pore size determination using normalized J-function for different hydraulic flow units

    Directory of Open Access Journals (Sweden)

    Ali Abedini

    2015-06-01

    Full Text Available Pore size determination of hydrocarbon reservoirs is one of the main challenging areas in reservoir studies. Precise estimation of this parameter leads to improved reservoir simulation, process evaluation, and further forecasting of reservoir behavior. Hence, it is of great importance to estimate the pore size of reservoir rocks with an appropriate accuracy. In the present study, a modified J-function was developed and applied to determine the pore radius in one of the hydrocarbon reservoir rocks located in the Middle East. The capillary pressure data vs. water saturation (Pc–Sw) as well as routine reservoir core analyses, including porosity (φ) and permeability (k), were used to develop the J-function. First, the normalized porosity (φz), the rock quality index (RQI), and the flow zone indicator (FZI) concepts were used to categorize all data into discrete hydraulic flow units (HFU) containing unique pore geometry and bedding characteristics. Thereafter, the modified J-function was used to normalize all capillary pressure curves corresponding to each predetermined HFU. The results showed that the reservoir rock was classified into five separate rock types with definite HFU and reservoir pore geometry. Eventually, the pore radius for each of these HFUs was determined using a developed equation obtained from the normalized J-function corresponding to each HFU. The proposed equation is a function of reservoir rock characteristics including φz, FZI, lithology index (J*), and pore size distribution index (ɛ). Using this methodology, the reservoir under study was classified into five discrete HFU with unique equations for permeability, normalized J-function, and pore size. The proposed technique can be applied to any reservoir to determine the pore size of the reservoir rock, especially one with a high degree of heterogeneity in its rock properties.
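The HFU classification described above rests on the standard RQI/FZI definitions (RQI = 0.0314·√(k/φ) with k in mD, φz = φ/(1−φ), FZI = RQI/φz) and on a Leverett-type J-function. The paper's modified J-function is not reproduced here; the sketch below uses the classic Leverett form and invented core-plug values and FZI cut-offs purely for illustration.

```python
import numpy as np

def leverett_j(pc_psi, k_md, phi, sigma=72.0, theta_deg=0.0):
    """Classic Leverett J-function in oilfield units:
    J = 0.2166 * Pc * sqrt(k/phi) / (sigma * cos(theta)),
    Pc in psi, k in mD, sigma in dyn/cm."""
    return 0.2166 * pc_psi * np.sqrt(k_md / phi) / (sigma * np.cos(np.radians(theta_deg)))

def flow_zone_indicator(k_md, phi):
    """RQI = 0.0314*sqrt(k/phi) (micrometres), phi_z = phi/(1-phi), FZI = RQI/phi_z."""
    rqi = 0.0314 * np.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

# Illustrative core plugs, grouped into discrete HFUs by binning log10(FZI)
k = np.array([1.5, 12.0, 85.0, 420.0])    # permeability, mD
phi = np.array([0.08, 0.12, 0.18, 0.22])  # porosity, fraction
fzi = flow_zone_indicator(k, phi)
hfu = np.digitize(np.log10(fzi), bins=[-0.5, 0.0, 0.5])  # hypothetical cut-offs
jv = leverett_j(10.0, 100.0, 0.2)  # J at Pc = 10 psi for a 100 mD, 20% porosity plug
```

Plugs sharing an FZI bin are treated as one rock type, and one normalized J-curve is then fitted per bin.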

  11. Review of the geochemistry and metallogeny of approximately 1.4 Ga granitoid intrusions of the conterminous United States

    Science.gov (United States)

    du Bray, Edward A.; Holm-Denoma, Christopher S.; Lund, Karen; Premo, Wayne R.

    2018-03-27

    The conterminous United States hosts numerous volumetrically significant and geographically dispersed granitoid intrusions that range in age from 1.50 to 1.32 billion years before present (Ga). Although previously referred to as A-type granites, most are better described as ferroan granites. These granitoid intrusions are distributed in the northern and central Rocky Mountains, the Southwest, the northern midcontinent, and a swath largely buried beneath Phanerozoic cover across the Great Plains and into the southern midcontinent. These intrusions, with ages that are bimodally distributed between about 1.455–1.405 Ga and 1.405–1.320 Ga, are dispersed nonsystematically with respect to age across their spatial extents. Globally, although A-type or ferroan granites are genetically associated with rare-metal deposits, most U.S. 1.4 Ga granitoid intrusions do not contain significant deposits. Exceptions are the light rare-earth element deposit at Mountain Pass, California, and the iron oxide-apatite and iron oxide-copper-gold deposits in southeast Missouri. Most of the U.S. 1.4 Ga granitoid intrusions are composed of hornblende ± biotite or biotite ± muscovite monzogranite, commonly with prominent alkali feldspar megacrysts; however, modal compositions vary widely. These intrusions include six of the eight commonly identified subtypes of ferroan granite: alkali-calcic and calc-alkalic peraluminous subtypes; alkalic, alkali-calcic, and calc-alkalic metaluminous subtypes; and the alkalic peralkaline subtype. The U.S. 1.4 Ga granitoid intrusions also include variants of these subtypes that have weakly magnesian compositions. Extreme large-ion lithophile element enrichments typical of ferroan granites elsewhere are absent among these intrusions. Chondrite-normalized rare-earth element patterns for these intrusions have modest negative slopes and moderately developed negative europium anomalies. Their radiogenic isotopic compositions are consistent with mixing involving

  12. Fructose intake at current levels in the United States may cause gastrointestinal distress in normal adults.

    Science.gov (United States)

    Beyer, Peter L; Caviar, Elena M; McCallum, Richard W

    2005-10-01

    Fructose intake has increased considerably in the United States, primarily as a result of increased consumption of high-fructose corn syrup, fruits and juices, and crystalline fructose. The purpose was to determine how often fructose, in amounts commonly consumed, would result in malabsorption and/or symptoms in healthy persons. Fructose absorption was measured using 3-hour breath hydrogen tests and symptom scores were used to rate subjective responses for gas, borborygmus, abdominal pain, and loose stools. The study included 15 normal, free-living volunteers from a medical center community and was performed in a gastrointestinal specialty clinic. Subjects consumed 25- and 50-g doses of crystalline fructose with water after an overnight fast on separate test days. Mean peak breath hydrogen, time of peak, area under the curve (AUC) for breath hydrogen and gastrointestinal symptoms were measured during a 3-hour period after subjects consumed both 25- and 50-g doses of fructose. Differences in mean breath hydrogen, AUC, and symptom scores between doses were analyzed using paired t tests. Correlations among peak breath hydrogen, AUC, and symptoms were also evaluated. More than half of the 15 adults tested showed evidence of fructose malabsorption after 25 g fructose and greater than two thirds showed malabsorption after 50 g fructose. AUC, representing overall breath hydrogen response, was significantly greater after the 50-g dose. Overall symptom scores were significantly greater than baseline after each dose, but scores were only marginally greater after 50 g than 25 g. Peak hydrogen levels and AUC were highly correlated, but neither was significantly related to symptoms. Fructose, in amounts commonly consumed, may result in mild gastrointestinal distress in normal people. Additional study is warranted to evaluate the response to fructose-glucose mixtures (as in high-fructose corn syrup) and fructose taken with food in both normal people and those with
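The study's breath-hydrogen AUC and the 25-g vs. 50-g dose comparison can be sketched with a trapezoid rule and a paired t statistic. The numbers below are invented for illustration and are not the study's data.

```python
import numpy as np

def auc_trapezoid(t_min, h2_ppm):
    """Area under a breath-hydrogen curve by the trapezoid rule (ppm*min)."""
    t_min, h2_ppm = np.asarray(t_min, float), np.asarray(h2_ppm, float)
    return float(np.sum((h2_ppm[1:] + h2_ppm[:-1]) / 2.0 * np.diff(t_min)))

def paired_t(a, b):
    """Paired t statistic for two repeated measures (e.g., 50-g vs. 25-g dose)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

t = np.array([0, 30, 60, 90, 120, 150, 180])  # minutes after dose
h2_25g = np.array([2, 4, 10, 18, 14, 8, 5])   # illustrative ppm values
h2_50g = np.array([2, 6, 20, 35, 30, 18, 10])
auc25 = auc_trapezoid(t, h2_25g)
auc50 = auc_trapezoid(t, h2_50g)
tstat = paired_t(h2_50g, h2_25g)              # positive: higher response at 50 g
```

A rise of 20 ppm above baseline within the 3-hour window is a common malabsorption criterion, though the study's exact threshold is not stated in this abstract.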

  13. Circulating sex hormones and terminal duct lobular unit involution of the normal breast.

    Science.gov (United States)

    Khodr, Zeina G; Sherman, Mark E; Pfeiffer, Ruth M; Gierach, Gretchen L; Brinton, Louise A; Falk, Roni T; Patel, Deesha A; Linville, Laura M; Papathomas, Daphne; Clare, Susan E; Visscher, Daniel W; Mies, Carolyn; Hewitt, Stephen M; Storniolo, Anna Maria V; Rosebrock, Adrian; Caban, Jesus J; Figueroa, Jonine D

    2014-12-01

    Terminal duct lobular units (TDLU) are the predominant source of breast cancers. Lesser degrees of age-related TDLU involution have been associated with increased breast cancer risk, but factors that influence involution are largely unknown. We assessed whether circulating hormones, implicated in breast cancer risk, are associated with levels of TDLU involution using data from the Susan G. Komen Tissue Bank (KTB) at the Indiana University Simon Cancer Center (2009-2011). We evaluated three highly reproducible measures of TDLU involution, using normal breast tissue samples from the KTB (n = 390): TDLU counts, median TDLU span, and median acini counts per TDLU. RRs (for continuous measures), ORs (for categorical measures), 95% confidence intervals (95% CI), and P-trends were calculated to assess the association between tertiles of estradiol, testosterone, sex hormone-binding globulin (SHBG), progesterone, and prolactin with TDLU measures. All models were stratified by menopausal status and adjusted for confounders. Among premenopausal women, higher prolactin levels were associated with higher TDLU counts (RR for T3 vs. T1: 1.18; 95% CI: 1.07-1.31; P-trend = 0.0005), but higher progesterone was associated with lower TDLU counts (RR for T3 vs. T1: 0.80; 95% CI: 0.72-0.89; P-trend < 0.0001). Among postmenopausal women, higher levels of estradiol (RR for T3 vs. T1: 1.61; 95% CI: 1.32-1.97; P-trend < 0.0001) and testosterone (RR for T3 vs. T1: 1.32; 95% CI: 1.09-1.59; P-trend = 0.0043) were associated with higher TDLU counts. These data suggest that select hormones may influence breast cancer risk potentially through delaying TDLU involution. Increased understanding of the relationship between circulating markers and TDLU involution may offer new insights into breast carcinogenesis. Cancer Epidemiol Biomarkers Prev; 23(12); 2765-73. ©2014 AACR. ©2014 American Association for Cancer Research.

  14. Mixed Integer Second Order Cone Programming Taking Appropriate Approximation for the Unit Commitment in Hybrid AC-DC Grid

    DEFF Research Database (Denmark)

    Zhou, Bo; Ai, Xiaomeng; Fang, Jiakun

    2017-01-01

    With the rapid development and deployment of voltage source converter (VSC) based HVDC, the traditional power system is evolving to the hybrid AC-DC grid. New optimization methods are urgently needed for these hybrid AC-DC power systems. In this paper, mixed-integer second order cone programming...... (MISOCP) for the hybrid AC-DC power systems is proposed. The second order cone (SOC) relaxation is adopted to transform the AC and DC power flow constraints to MISOCP. Several IEEE test systems are used to validate the proposed MISOCP formulation of the optimal power flow (OPF) and unit commitment (UC...
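The specific AC-DC relaxation is not reproduced in this abstract, but SOC relaxations of power flow typically replace a nonconvex product constraint w² = u·v with the convex inequality w² ≤ u·v, which is a rotated second-order cone: for u, v ≥ 0, w² ≤ u·v ⇔ ‖(2w, u−v)‖₂ ≤ u+v. A minimal numerical check of that identity (a sketch, not the paper's formulation):

```python
import numpy as np

def in_rotated_soc(u, v, w):
    """Membership test for the rotated second-order cone:
    w^2 <= u*v with u, v >= 0, written as the standard SOC
    constraint ||(2w, u - v)||_2 <= u + v."""
    return u >= 0 and v >= 0 and np.hypot(2 * w, u - v) <= u + v

# w^2 = u*v is relaxed to w^2 <= u*v; points on or inside the cone are feasible.
on_boundary = in_rotated_soc(4.0, 1.0, 2.0)   # 2^2 = 4*1: boundary point, feasible
outside = in_rotated_soc(1.0, 1.0, 1.5)       # 2.25 > 1: infeasible
```

In the MISOCP, binary unit-commitment decisions enter as integer variables on top of these cone constraints.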

  15. Currently used dosage regimens of vancomycin fail to achieve therapeutic levels in approximately 40% of intensive care unit patients.

    Science.gov (United States)

    Obara, Vitor Yuzo; Zacas, Carolina Petrus; Carrilho, Claudia Maria Dantas de Maio; Delfino, Vinicius Daher Alvares

    2016-01-01

    This study aimed to assess whether currently used dosages of vancomycin for treatment of serious gram-positive bacterial infections in intensive care unit patients provided initial therapeutic vancomycin trough levels and to examine possible factors associated with the presence of adequate initial vancomycin trough levels in these patients. A prospective descriptive study with convenience sampling was performed. Nursing note and medical record data were collected from September 2013 to July 2014 for patients who met inclusion criteria. Eighty-three patients were included. Initial vancomycin trough levels were obtained immediately before the fourth vancomycin dose. Acute kidney injury was defined as an increase of at least 0.3 mg/dL in serum creatinine within 48 hours. Considering the vancomycin trough levels recommended for serious gram-positive infection treatment (15-20 µg/mL), patients were categorized as presenting with low, adequate, and high vancomycin trough levels (35 [42.2%], 18 [21.7%], and 30 [36.1%] patients, respectively). Acute kidney injury patients had significantly greater vancomycin trough levels (p = 0.0055, with significance for a trend, p = 0.0023). Surprisingly, more than 40% of the patients did not reach an effective initial vancomycin trough level. Studies on pharmacokinetics and dosage regimens of vancomycin in intensive care unit patients are necessary to circumvent this high proportion of failures to obtain adequate initial vancomycin trough levels. Vancomycin use without trough serum level monitoring in critically ill patients should be discouraged.

  16. Development of Normalization Factors for Canada and the United States and Comparison with European Factors

    Science.gov (United States)

    In Life Cycle Assessment (LCA), normalization calculates the magnitude of an impact (midpoint or endpoint) relative to the total effect of a given reference. Using a country or a continent as a reference system is a first step towards global normalization. The goal of this wor...

  17. Estrogen Receptor and Progesterone Receptor Expression in Normal Terminal Duct Lobular Units Surrounding Invasive Breast Cancer

    Science.gov (United States)

    Yang, Xiaohong R.; Figueroa, Jonine D.; Hewitt, Stephen M.; Falk, Roni T.; Pfeiffer, Ruth M.; Lissowska, Jolanta; Peplonska, Beata; Brinton, Louise A.; Garcia-Closas, Montserrat; Sherman, Mark E.

    2014-01-01

    Introduction Molecular and morphological alterations related to carcinogenesis have been found in terminal duct lobular units (TDLUs), the microscopic structures from which most breast cancer precursors and cancers develop, and therefore, analysis of these structures may reveal early changes in breast carcinogenesis and etiologic heterogeneity. Accordingly, we evaluated relationships of breast cancer risk factors and tumor pathology to estrogen receptor (ER) and progesterone receptor (PR) expression in TDLUs surrounding breast cancers. Methods We analyzed 270 breast cancer cases included in a population-based breast cancer case-control study conducted in Poland. TDLUs were mapped in relation to breast cancer: within the same block as the tumor (TDLU-T), proximal to the tumor (TDLU-PT), or distant from the tumor (TDLU-DT). ER/PR was quantitated using image analysis of immunohistochemically stained TDLUs prepared as tissue microarrays. Results In surgical specimens containing ER-positive breast cancers, ER and PR levels were significantly higher in breast cancer cells than in normal TDLUs, and higher in TDLU-T than in TDLU-DT or TDLU-PT, which showed similar results. Analyses combining DT-/PT TDLUs within subjects demonstrated that ER levels were significantly lower in premenopausal women vs. postmenopausal women (odds ratio [OR]=0.38, 95% confidence interval [CI]=0.19, 0.76, P=0.0064) and among recent or current menopausal hormone therapy users compared with never users (OR=0.14, 95% CI=0.046–0.43, P-trend=0.0006). Compared with premenopausal women, TDLUs of postmenopausal women showed lower levels of PR (OR=0.90, 95% CI=0.83–0.97, P-trend=0.007). ER and PR expression in TDLUs was associated with epidermal growth factor receptor (EGFR) expression in invasive tumors (P=0.019 for ER and P=0.03 for PR), but not with other tumor features. Conclusions Our data suggest that TDLUs near breast cancers reflect field effects, whereas those at a distance demonstrate influences of breast

  18. Complementary-relationship-based 30 year normals (1981-2010) of monthly latent heat fluxes across the contiguous United States

    Science.gov (United States)

    Szilagyi, Jozsef

    2015-11-01

    Thirty year normal (1981-2010) monthly latent heat fluxes (ET) over the conterminous United States were estimated by a modified Advection-Aridity model from North American Regional Reanalysis (NARR) radiation and wind as well as Parameter-Elevation Regressions on Independent Slopes Model (PRISM) air and dew-point temperature data. Mean annual ET values were calibrated with PRISM precipitation (P) and validated against United States Geological Survey runoff (Q) data. At the six-digit Hydrologic Unit Code level (sample size of 334) the estimated 30 year normal runoff (P - ET) had a bias of 18 mm yr-1, a root-mean-square error of 96 mm yr-1, and a linear correlation coefficient value of 0.95, making the estimates on par with the latest Land Surface Model results but without the need for soil and vegetation information or any soil moisture budgeting.
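The validation statistics quoted above (bias, root-mean-square error, and linear correlation of estimated vs. gauged runoff) can be computed in a few lines. The values below are invented placeholders, not the study's data.

```python
import numpy as np

def validation_stats(est, obs):
    """Bias, RMSE, and Pearson r of estimated vs. observed values,
    as used to compare estimated runoff (P - ET) against gauged runoff."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    err = est - obs
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    r = np.corrcoef(est, obs)[0, 1]
    return bias, rmse, r

est = np.array([310.0, 150.0, 520.0, 95.0])   # illustrative runoff, mm/yr
obs = np.array([300.0, 160.0, 500.0, 100.0])
bias, rmse, r = validation_stats(est, obs)
```

A small positive bias with a high r, as reported here (18 mm/yr, r = 0.95), indicates mild systematic overestimation alongside strong spatial agreement.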

  19. Development of Normalization Factors for Canada and the United States and Comparison with European Factors

    DEFF Research Database (Denmark)

    Lautier, Anne; Rosenbaum, Ralph K.; Margni, Manuele

    2010-01-01

    In Life Cycle Assessment (LCA), normalization calculates the magnitude of an impact (midpoint or endpoint) relative to the total effect of a given reference. The goal of this work is to calculate normalization factors for Canada and the US and to compare them with existing European normalization...... factors. The differences between geographical areas were highlighted by identifying and comparing the main contributors to a given impact category in Canada, the US and Europe. This comparison verified that the main contributors in Europe and in the US are also present in the Canadian inventory. It also...
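Normalization as defined in these records divides a characterized impact score by the total impact of a reference region, often expressed per capita (person-equivalents). A minimal sketch with hypothetical figures (the reference totals and population below are invented, not this study's factors):

```python
def normalization_factor(reference_total, population=None):
    """LCA normalization factor: the reference-region total impact,
    per person if a population is given."""
    return reference_total / population if population else reference_total

def normalized_score(impact, reference_total, population=None):
    """Magnitude of a characterized impact relative to the reference."""
    return impact / normalization_factor(reference_total, population)

# Hypothetical: a 500 kg CO2-eq product impact against a reference
# inventory of 7e12 kg CO2-eq emitted by 3.5e8 people per year.
score = normalized_score(500.0, 7e12, 3.5e8)  # in person-equivalent-years
```

Comparing such factors across Canada, the US, and Europe amounts to comparing the reference inventories that enter the denominator.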

  20. Motor unit firing intervals and other parameters of electrical activity in normal and pathological muscle

    DEFF Research Database (Denmark)

    Fuglsang-Frederiksen, Anders; Smith, T; Høgenhaven, H

    1987-01-01

    The analysis of the firing intervals of motor units has been suggested as a diagnostic tool in patients with neuromuscular disorders. Part of the increase in number of turns seen in patients with myopathy could be secondary to the decrease in motor unit firing intervals at threshold force...

  1. Performance of HESCO Bastion Units Under Combined Normal and Cyclic Lateral Loading

    Science.gov (United States)

    2017-02-01

    HESCO bastion units are welded wire mesh (WWM) panels used with a geotextile liner; the units are set up onsite and filled with soil or sand, as available at the construction site (ERDC/CERL TR-17-4). The Commander of ERDC during the testing program was COL Bryan S. Green and the Director was Dr. Jeffery P. Holland.

  2. Prevalence of overweight misperception and weight control behaviors among normal weight adolescents in the United States

    Directory of Open Access Journals (Sweden)

    Kathleen S. Talamayan

    2006-01-01

    Full Text Available Weight perceptions and weight control behaviors have been documented with underweight and overweight adolescents, yet limited information is available on normal weight adolescents. This study investigates the prevalence of overweight misperceptions and weight control behaviors among normal weight adolescents in the U.S. by sociodemographic and geographic characteristics. We examined data from the 2003 Youth Risk Behavior Survey (YRBS). A total of 9,714 normal weight U.S. high school students were included in this study. Outcome measures included self-reported height and weight measurements, overweight misperceptions, and weight control behaviors. Weighted prevalence estimates and odds ratios were computed. There were 16.2% of normal weight students who perceived themselves as overweight. Females (25.3%) were more likely to perceive themselves as overweight than males (6.7%) (p < 0.05). Misperceptions of overweight were highest among white (18.3%) and Hispanic students (15.2%) and lowest among black students (5.8%). Females (16.8%) outnumbered males (6.8%) in practicing at least one unhealthy weight control behavior (use of diet pills, laxatives, and fasting) in the past 30 days. The percentage of students who practiced at least one weight control behavior was similar by ethnicity. There were no significant differences in overweight misperception and weight control behaviors by grade level, geographic region, or metropolitan status. A significant portion of normal weight adolescents misperceive themselves as overweight and are engaging in unhealthy weight control behaviors. These data suggest that obesity prevention programs should address weight misperceptions and the harmful effects of unhealthy weight control methods even among normal weight adolescents.

  3. Dosimetry of normal and wedge fields for a cobalt-60 teletherapy unit

    International Nuclear Information System (INIS)

    Tripathi, U.B.; Kelkar, N.Y.

    1980-01-01

    A simple analytical method for computation of dose distributions for normal and wedge fields is described, and the use of the method in planning radiation treatment is outlined. Formulas have been given to compute: (1) depth dose along the central axis of a cobalt-60 beam, (2) dose to off-axis points, and (3) dose distribution for a wedge field. Good agreement has been found between theoretical and experimental values. With the help of these formulae, the dose at any point can be easily and accurately calculated and radiotherapy can be planned for tumours of very odd shapes and sizes. The limitation of the method is that the formulae have been derived for 50% field definition. For a cobalt-60 machine having any other field definition, appropriate correction factors have to be applied. (M.G.B.)

  4. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
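The defining quantity here, a unitary whose commutator with the Hamiltonian has small norm, is easy to illustrate numerically. The toy 4×4 example below is not the paper's construction; it simply shows an exact symmetry (zero commutator) turning into an approximate one under a small perturbation.

```python
import numpy as np

def comm_norm(U, H):
    """Spectral norm of the commutator [U, H]; zero for an exact symmetry."""
    return np.linalg.norm(U @ H - H @ U, ord=2)

# Exact symmetry: a diagonal unitary commutes with a diagonal Hamiltonian.
H = np.diag([0.0, 0.0, 1.0, 2.0])   # gapped, with a two-fold degenerate ground space
U = np.diag(np.exp(1j * np.array([0.3, -0.3, 0.7, 1.1])))
exact = comm_norm(U, H)             # diagonal matrices commute

# A small off-diagonal perturbation makes the symmetry only approximate.
eps = 1e-3
V = np.zeros((4, 4))
V[0, 2] = V[2, 0] = eps
approx = comm_norm(U, H + V)        # small but nonzero commutator norm
```

The paper's point is that such small-commutator unitaries, restricted to the ground space, still certify its degeneracy.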

  5. Clinical outcomes of the first midwife-led normal birth unit in China: a retrospective cohort study.

    Science.gov (United States)

    Cheung, Ngai Fen; Mander, Rosemary; Wang, Xiaoli; Fu, Wei; Zhou, Hong; Zhang, Liping

    2011-10-01

    to report the clinical outcomes of the first six months of operation of an innovative midwife-led normal birth unit (MNBU) in China in 2008, aiming to facilitate normal birth and enhance midwifery practice. an urban hospital with 2000-3000 deliveries per year. this study was part of a major action research project that led to implementation of the MNBU. A retrospective cohort and a questionnaire survey were used. The data were analysed thematically. the outcomes of the first 226 women accessing the MNBU were compared with a matched retrospective cohort of 226 women accessing standard care. In total, 128 participants completed a satisfaction questionnaire before discharge. mode of birth and model of care. the vaginal birth rate was 87.6% in the MNBU compared with 58.8% in the standard care unit. All women who accessed the MNBU were supported by both a midwife and a birth companion, referred to as 'two-to-one' care. None of the women labouring in the standard care unit were identified as having a birth companion. the concept of 'two-to-one' care emerged as fundamental to women's experiences and utilisation of midwives' skills to promote normal birth and decrease the likelihood of a caesarean section. the MNBU provides an environment where midwives can practice to the full extent of their role. The high vaginal birth rate in the MNBU indicates the potential of this model of care to reduce obstetric intervention and increase women's satisfaction with care within a context of extraordinarily high caesarean section rates. midwife-led care implies a separation of obstetric care from maternity care, which has been advocated in many European countries. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Assessment of four shadow band correction models using beam normal irradiance data from the United Kingdom and Israel

    International Nuclear Information System (INIS)

    Lopez, G.; Muneer, T.; Claywell, R.

    2004-01-01

    Diffuse irradiance is a fundamental factor for all solar resource considerations. Diffuse irradiance is accurately determined by calculation from global and beam normal (direct) measurements. However, beam solar measurements and related support can be very expensive, and therefore, shadow bands are often used, along with pyranometers, to block the solar disk. The errors that result from the use of shadow bands are well known and have been studied by many authors. The thrust of this article is to examine four recognized techniques for correcting shadow band based, diffuse irradiance and statistically evaluate their individual performance using data culled from two contrasting sites within the United Kingdom and Israel

  7. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
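The classifier-based approximation of the likelihood ratio rests on a simple identity: the optimal (Bayes) classifier score for equal priors is s(x) = p1(x)/(p0(x)+p1(x)), and s/(1−s) recovers the density ratio p1/p0 exactly. A trained classifier therefore approximates the likelihood ratio through its score. A numerical check of the identity with two Gaussians (a sketch of the principle, not the talk's actual machinery):

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Normal density, used as a stand-in for signal/background models."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-3.0, 3.0, 201)
p0 = gauss_pdf(x, 0.0, 1.0)    # "background" density
p1 = gauss_pdf(x, 1.0, 1.0)    # "signal" density

s = p1 / (p0 + p1)             # ideal classifier output at each x
ratio_from_s = s / (1.0 - s)   # likelihood ratio recovered from the score
exact_ratio = p1 / p0
```

In practice s is a trained classifier rather than the exact posterior, so the ratio is approximate, and parameterizing the classifier in masses/couplings extends the trick to families of hypotheses.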

  8. Diophantine approximation

    CERN Document Server

    Schmidt, Wolfgang M

    1980-01-01

    "In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approxi- mations to algebraic numbers. The present volume is an ex- panded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)

  9. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian system by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally, we discuss several algorithms for

  10. EnviroAtlas - Average Direct Normal Solar resources kWh/m2/Day by 12-Digit HUC for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — The annual average direct normal solar resources by 12-Digit Hydrologic Unit (HUC) was estimated from maps produced by the National Renewable Energy Laboratory for...

  11. Examination of muscle composition and motor unit behavior of the first dorsal interosseous of normal and overweight children.

    Science.gov (United States)

    Miller, Jonathan D; Sterczala, Adam J; Trevino, Michael A; Herda, Trent J

    2018-05-01

    We examined differences between normal weight (NW) and overweight (OW) children aged 8-10 yr in strength, muscle composition, and motor unit (MU) behavior of the first dorsal interosseous. Ultrasonography was used to determine muscle cross-sectional area (CSA), subcutaneous fat (sFAT), and echo intensity (EI). MU behavior was assessed during isometric muscle actions at 20% and 50% of maximal voluntary contraction (MVC) by analyzing electromyography amplitude (EMGRMS) and relationships between mean firing rates (MFR), recruitment thresholds (RT), and MU action potential amplitudes (MUAPsize) and durations (MUAPtime). The OW group had significantly greater EI than the NW group (P = 0.002; NW, 47.99 ± 6.01 AU; OW, 58.90 ± 10.63 AU, where AU is arbitrary units), with no differences between groups for CSA (P = 0.688) or MVC force (P = 0.790). MUAPsize was larger for NW than OW in relation to RT (P = 0.002) and for MUs expressing similar MFRs (P = 0.011). There were no significant differences (P = 0.279-0.969) between groups for slopes or y-intercepts from the MFR vs. RT relationships. MUAPtime was larger in OW (P = 0.015) and EMGRMS was attenuated in OW compared with NW (P = 0.034); however, there were no significant correlations (P = 0.133-0.164, r = 0.270-0.291) between sFAT and EMGRMS. In a muscle that does not support body mass, the OW children had smaller MUAPsize as well as greater EI, although anatomical CSA was similar. This contradicts previous studies examining larger limb muscles. Despite evidence of smaller MUs, the OW children had similar isometric strength compared with NW children. NEW & NOTEWORTHY Ultrasound data and motor unit action potential sizes suggest that overweight children have poorer muscle composition and smaller motor units in the first dorsal interosseous than normal weight children. Evidence is presented that suggests differences in action potential size cannot be explained
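The group comparisons of "slopes and y-intercepts from the MFR vs. RT relationships" come from straight-line fits of mean firing rate against recruitment threshold for each subject. A sketch of one such fit, with made-up values (not the study's recordings):

```python
import numpy as np

def mfr_vs_rt_fit(rt, mfr):
    """Least-squares line mfr = slope*rt + intercept, the per-subject
    relationship whose slope and y-intercept are compared between groups."""
    slope, intercept = np.polyfit(rt, mfr, 1)
    return slope, intercept

rt = np.array([5.0, 10.0, 20.0, 35.0, 50.0])    # recruitment threshold, % MVC
mfr = np.array([22.0, 20.0, 17.0, 13.0, 10.0])  # mean firing rate, pulses/s
slope, intercept = mfr_vs_rt_fit(rt, mfr)
```

The negative slope reflects the usual onion-skin pattern: higher-threshold motor units fire at lower mean rates at a given force.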

  12. Public exposure from environmental release of radioactive material under normal operation of unit-1 Bushehr nuclear power plant

    International Nuclear Information System (INIS)

    Sohrabi, M.; Parsouzi, Z.; Amrollahi, R.; Khamooshy, C.; Ghasemi, M.

    2013-01-01

    Highlights: ► The unit-1 Bushehr nuclear power plant is a VVER type reactor with 1000 MWe power. ► Doses of public critical groups living around the plant were assessed under normal reactor operation conditions. ► The PC-CREAM 98 computer code developed by the HPA was applied to assess the public doses. ► Doses are comparable with those in the FSAR and the ER, and with monitored doses. ► The doses assessed are lower than the dose constraint of 0.1 mSv/y associated with the plant. - Abstract: The Unit-1 Bushehr Nuclear Power Plant (BNPP-1), constructed at the Hallileh site near Bushehr located at the coast of the Persian Gulf, Iran, is a VVER type reactor with 1000 MWe power. According to standard practices, under normal operation conditions of the plant, radiological assessment of atmospheric and aquatic releases to the environment and assessment of public exposures are considered essential. This study was conducted in order to assess the individual and collective doses of the critical groups of the population who receive the highest dose from radioactive discharges into the environment (atmospheric and aquatic) under normal operation conditions. To assess the doses, the PC-CREAM 98 computer code developed by the Radiation Protection Division of the Health Protection Agency (HPA; formerly called NRPB) was applied. It uses a standard Gaussian plume dispersion model and comprises a suite of models and data for estimation of the radiological impact assessments of routine and continuous discharges from an NPP. The input data include a stack height of 100 m, annual radionuclide releases of gaseous effluents from the stack and of liquid effluents from the heat removal system, meteorological data from the Bushehr local meteorological station, and data for agricultural products. To assess doses from marine discharges, consumption of sea fish, crustaceans, and molluscs was considered. According to calculations by the PC-CREAM 98 computer code, the highest individual

  13. THE FEATURES OF CONNEXINS EXPRESSION IN THE CELLS OF NEUROVASCULAR UNIT IN NORMAL CONDITIONS AND HYPOXIA IN VITRO

    Directory of Open Access Journals (Sweden)

    A. V. Morgun

    2014-01-01

    Full Text Available The aim of this research was to assess the role of connexin 43 (Cx43) and the associated molecule CD38 in the regulation of cell-cell interactions in the neurovascular unit (NVU) in vitro under physiological conditions and in hypoxia. Materials and methods. The study was done using an original neurovascular unit model in vitro. The NVU consisted of three cell types: neurons, astrocytes, and cerebral endothelial cells derived from rats. Hypoxia was induced by incubating cells with sodium iodoacetate for 30 min at 37 °C under standard culture conditions. Results. We investigated the role of connexin 43 in the regulation of cell interactions within the NVU under normal conditions and in hypoxic injury in vitro. We found that astrocytes were characterized by high levels of Cx43 expression and low levels of CD38 expression, while neurons demonstrated high levels of CD38 and low levels of Cx43. In hypoxic conditions, the expression of Cx43 and CD38 in astrocytes markedly increased while CD38 expression in neurons decreased; no changes were found in endothelial cells. Suppression of Cx43 activity resulted in down-regulation of CD38 in NVU cells, both under physiological conditions and in chemical hypoxia. Conclusion. Thus, the Cx-regulated intercellular NAD+-dependent communication and secretory phenotype of astroglial cells that are part of the blood-brain barrier are markedly changed in hypoxia.

  14. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  15. The impact of normal saline on the incidence of exposure keratopathy in patients hospitalized in intensive care units

    Directory of Open Access Journals (Sweden)

    Zohreh Davoodabady

    2018-01-01

    Full Text Available Background: Patients in the intensive care unit (ICU) have impaired ocular protective mechanisms that lead to an increased risk of ocular surface diseases, including exposure keratopathy (EK). This study was designed to evaluate the effect of normal saline (NS) on the incidence and severity of EK in critically ill patients. Materials and Methods: This single-blind randomized controlled trial was conducted on 50 patients admitted to ICUs. The participants were selected through purposive sampling. In each patient, one eye was randomly allocated to the intervention group, which received NS every 6 h in addition to standard care, and the other eye to the control group, which received standard care alone. The presence and severity of keratopathy were assessed daily until day 7 of hospitalization using fluorescein and an ophthalmoscope with a cobalt blue filter. The chi-square test was used for statistical analysis in SPSS software. Results: Before the study (first day), there were no statistically significant differences in the incidence and severity of EK between the groups. Although the incidence and severity of EK after the study (7th day) were higher in the intervention group than in the control group, the differences were not statistically significant. The incidence and severity of EK increased within both groups from the 1st day until the 7th, but this increase was statistically significant only in the intervention (NS) group. Conclusions: The use of NS as eye care in patients hospitalized in ICUs can increase the incidence and severity of EK and is not recommended.

  16. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and yet highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
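The Padé construction described above (a denominator one order lower than the numerator) can be illustrated in its generic [L/M] form; this is a sketch of the standard construction from Taylor coefficients, not the paper's DSR-specific variant, and the function name and example series are illustrative only:

```python
import numpy as np

def pade(coeffs, m):
    """[L/M] Padé approximant from Taylor coefficients c_0 .. c_{L+M}.

    Returns (p, q): numerator coefficients p_0..p_L and denominator
    coefficients q_0..q_M with q_0 = 1.  Assumes m <= L + 1.
    """
    c = np.asarray(coeffs, dtype=float)
    L = len(c) - 1 - m
    # Denominator: the conditions sum_{j=0}^{m} q_j c_{L+i-j} = 0, i = 1..m,
    # form an m x m linear system for q_1..q_m (with q_0 fixed at 1).
    A = np.array([[c[L + i - j] for j in range(1, m + 1)] for i in range(1, m + 1)])
    b = -c[L + 1:L + m + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # Numerator from the Cauchy product: p_k = sum_j q_j c_{k-j}.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, m) + 1))
                  for k in range(L + 1)])
    return p, q
```

For example, the series 1 + x + x²/2 of exp(x) yields the classic [1/1] approximant (1 + x/2) / (1 - x/2).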

  18. Summary of Time Period-Based and Other Approximation Methods for Determining the Capacity Value of Wind and Solar in the United States: September 2010 - February 2012

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, J.; Porter, K.

    2012-03-01

    This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak, sometimes over a period of months or averaged over multiple years.
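A rough sketch of the time period-based approach: rank the hours by system load and take the plant's mean normalized output over the top-N hours. The exact windows and weighting differ between the methods the paper surveys; the function name, `top_hours`, and the sample data below are hypothetical:

```python
def peak_period_capacity_value(load, generation, nameplate, top_hours):
    """Capacity value approximated as mean output, per unit of nameplate,
    during the top-N system-load hours.

    load, generation: hourly series (same length); nameplate: plant rating.
    """
    # Indices of the hours with the highest system load.
    ranked = sorted(range(len(load)), key=lambda h: load[h], reverse=True)
    peak = ranked[:top_hours]
    return sum(generation[h] for h in peak) / (top_hours * nameplate)
```

A plant producing 80 and 40 MW during the two highest-load hours of a 100 MW nameplate would get a capacity value of 0.6 under this convention.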

  19. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    The classical set Bad of 'badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  20. Two-year outcome of normal-birth-weight infants admitted to a Singapore neonatal intensive care unit.

    Science.gov (United States)

    Lian, W B; Yeo, C L; Ho, L Y

    2002-03-01

    To describe the characteristics, the immediate and short-term outcome and predictors of mortality in normal-birth-weight (NBW) infants admitted to a tertiary neonatal intensive care unit (NICU) in Singapore. We retrospectively reviewed the medical records of 137 consecutive NBW infants admitted to the NICU of the Singapore General Hospital from January 1991 to December 1992. Data on the diagnoses, clinical presentation of illness, intervention received, complications and outcome, as well as follow-up patterns for the first 2 years of life, were collected and analysed. NBW NICU infants comprised 1.8% of births in our hospital and 40.8% of all NICU admissions. The main reasons for NICU admission were respiratory disorders (61.3%), congenital anomalies (15.3%) and asphyxia neonatorum (11.7%). Respiratory support was necessary in 81.8%. Among those ventilated, the only factor predictive of mortality was the mean inspired oxygen concentration. The mortality rate was 11.7%. Causes of death included congenital anomalies (43.75%), asphyxia neonatorum (31.25%) and pulmonary failure secondary to meconium aspiration syndrome (12.5%). The median hospital stay among survivors (88.3%) was 11.0 (range, 4 to 70) days. Of the 42 patients (out of 117 survivors) who received follow-up for at least 6 months, 39 infants did not have evidence of any major neurodevelopmental abnormality at their last follow-up visit, prior to or at 2 years of age. Despite these infants' short hospital stay (compared to very-low-birth-weight infants), the high volume of NBW admissions makes the care of this population an important area for review, to enhance advances in, and hence reduce the cost of, NICU care. With improved antenatal diagnostic techniques (allowing earlier and more accurate diagnosis of congenital malformations) and better antenatal and perinatal care (allowing better management of at-risk pregnancies), it is anticipated that there should be a reduction in such admissions with better

  1. Approximation of Resting Energy Expenditure in Intensive Care Unit Patients Using the SenseWear Bracelet: A Comparison With Indirect Calorimetry.

    Science.gov (United States)

    Sundström, Martin; Mehrabi, Mahboubeh; Tjäder, Inga; Rooyackers, Olav; Hammarqvist, Folke

    2017-08-01

    Indirect calorimetry (IC) is the gold standard for determining energy expenditure in patients requiring mechanical ventilation. Metabolic armbands using data derived from dermal measurements have been proposed as an alternative to IC in healthy subjects, but their utility during critical illness is unclear. The aim of this study was to determine the level of agreement between the SenseWear armband and the Deltatrac Metabolic Monitor in mechanically ventilated intensive care unit (ICU) patients. Adult ICU patients requiring invasive ventilator therapy were eligible for inclusion. Simultaneous measurements were performed with the SenseWear armband and the Deltatrac under stable conditions. Resting energy expenditure (REE) values were registered for both instruments and compared with Bland-Altman plots. Forty-two measurements were performed in 30 patients. The SenseWear armband measured significantly higher REE values as compared with IC (mean bias, 85 kcal/24 h; P = .027). Less variability was noted between individual SenseWear measurements and REE as predicted by the Harris-Benedict equation (2 SD, ±327 kcal/24 h) than when IC was compared with SenseWear and Harris-Benedict (2 SD, ±473 and ±543 kcal/24 h, respectively). Given its systematic bias and large variability compared with gas exchange measurements, the SenseWear armband confers limited benefit over the Harris-Benedict equation in determining the caloric requirements of ICU patients.
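The Bland-Altman comparison used in the study reduces to computing the mean bias and agreement limits of the paired differences. A minimal sketch (the ±2 SD limits follow the convention reported in the abstract; the function name and sample values are invented):

```python
import statistics

def bland_altman(a, b):
    """Mean bias and agreement limits (bias +/- 2 SD of the paired
    differences) for comparing two measurement methods on the same subjects."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, (bias - 2 * sd, bias + 2 * sd)
```

A full Bland-Altman plot additionally scatters each difference against the pair's mean, with the bias and limits drawn as horizontal lines.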

  2. Differing effects of attention in single-units and populations are well predicted by heterogeneous tuning and the normalization model of attention.

    Science.gov (United States)

    Hara, Yuko; Pestilli, Franco; Gardner, Justin L

    2014-01-01

    Single-unit measurements have reported many different effects of attention on contrast-response (e.g., contrast-gain, response-gain, additive-offset dependent on visibility), while functional imaging measurements have more uniformly reported increases in response across all contrasts (additive-offset). The normalization model of attention elegantly predicts the diversity of effects of attention reported in single-units well-tuned to the stimulus, but what predictions does it make for more realistic populations of neurons with heterogeneous tuning? Are predictions in accordance with population-scale measurements? We used functional imaging data from humans to determine a realistic ratio of attention-field to stimulus-drive size (a key parameter for the model) and predicted effects of attention in a population of model neurons with heterogeneous tuning. We found that within the population, neurons well-tuned to the stimulus showed a response-gain effect, while less-well-tuned neurons showed a contrast-gain effect. Averaged across the population, these disparate effects of attention gave rise to additive-offsets in contrast-response, similar to reports in human functional imaging as well as population averages of single-units. Differences in predictions for single-units and populations were observed across a wide range of model parameters (ratios of attention-field to stimulus-drive size and the amount of baseline response modifiable by attention), offering an explanation for disparity in physiological reports. Thus, by accounting for heterogeneity in tuning of realistic neuronal populations, the normalization model of attention can not only predict responses of well-tuned neurons, but also the activity of large populations of neurons. More generally, computational models can unify physiological findings across different scales of measurement, and make links to behavior, but only if factors such as heterogeneous tuning within a population are properly accounted for.
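A minimal one-dimensional sketch of the normalization model of attention, loosely after the Reynolds-Heeger formulation: attention multiplicatively scales the stimulus drive before divisive normalization by a broadly pooled suppressive term. All parameter values and the function itself are illustrative, not the study's actual model code:

```python
import numpy as np

def normalization_model(contrast, attn_gain, attn_width=30.0, stim=0.0,
                        n_units=181, tune_w=15.0, pool_w=60.0, sigma=0.01):
    """Population response across feature preferences for one stimulus."""
    prefs = np.linspace(-90.0, 90.0, n_units)
    # Stimulus drive: Gaussian tuning around the stimulus, scaled by contrast.
    drive = contrast * np.exp(-(prefs - stim) ** 2 / (2 * tune_w ** 2))
    # Attention field multiplies the drive before normalization.
    attn = 1.0 + (attn_gain - 1.0) * np.exp(-(prefs - stim) ** 2 / (2 * attn_width ** 2))
    excitatory = attn * drive
    # Suppressive drive: broad pooling of the (attended) excitatory drive.
    pool = np.exp(-(prefs[:, None] - prefs[None, :]) ** 2 / (2 * pool_w ** 2))
    suppressive = pool @ excitatory / pool.sum(axis=1)
    return excitatory / (suppressive + sigma)
```

Because the suppressive pool is broader than the attention field, the attended enhancement is only partially cancelled by normalization, which is the mechanism behind the gain-regime differences the study explores.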

  3. The proximal hamstring muscle–tendon–bone unit: A review of the normal anatomy, biomechanics, and pathophysiology

    International Nuclear Information System (INIS)

    Beltran, Luis; Ghazikhanian, Varand; Padron, Mario; Beltran, Javier

    2012-01-01

    Proximal hamstring injuries occur during eccentric contraction with the hip and the knee in extension; hence they are relatively frequent lesions in specific sports such as water skiing and hurdle jumping. Additionally, the trend toward increasing activity and fitness training in the general population has resulted in similar injuries. Myotendinous strains are more frequent than avulsion injuries. Discrimination between the two types of lesions is relevant for patient management, since the former is treated conservatively and the latter surgically. MRI and ultrasonography are both well-suited techniques for the diagnosis and evaluation of hamstring tendon injuries. Each one has its advantages and disadvantages. The purpose of this article is to provide a comprehensive review of the anatomy and biomechanics of the proximal hamstring muscle–tendon–bone unit and the varied imaging appearances of hamstring injury, which is vital for optimizing patient care. This will enable the musculoskeletal radiologist to contribute accurate and useful information in the treatment of athletes at all levels of participation.

  4. The proximal hamstring muscle–tendon–bone unit: A review of the normal anatomy, biomechanics, and pathophysiology

    Energy Technology Data Exchange (ETDEWEB)

    Beltran, Luis, E-mail: luisbeltran@mac.com [Department of Radiology, Hospital for Joint Diseases, NYU, New York, NY (United States); Ghazikhanian, Varand, E-mail: varandg@aol.com [Department of Radiology, Maimonides Medical Center, Brooklyn, NY (United States); Padron, Mario, E-mail: mario.padron@cemtro.es [Clinica CEMTRO, Avenida del Ventisquero de la Condesa 42, 28035 Madrid (Spain); Beltran, Javier, E-mail: Jbeltran46@msn.com [Department of Radiology, Maimonides Medical Center, Brooklyn, NY (United States)

    2012-12-15

    Proximal hamstring injuries occur during eccentric contraction with the hip and the knee in extension; hence they are relatively frequent lesions in specific sports such as water skiing and hurdle jumping. Additionally, the trend toward increasing activity and fitness training in the general population has resulted in similar injuries. Myotendinous strains are more frequent than avulsion injuries. Discrimination between the two types of lesions is relevant for patient management, since the former is treated conservatively and the latter surgically. MRI and ultrasonography are both well-suited techniques for the diagnosis and evaluation of hamstring tendon injuries. Each one has its advantages and disadvantages. The purpose of this article is to provide a comprehensive review of the anatomy and biomechanics of the proximal hamstring muscle–tendon–bone unit and the varied imaging appearances of hamstring injury, which is vital for optimizing patient care. This will enable the musculoskeletal radiologist to contribute accurate and useful information in the treatment of athletes at all levels of participation.

  5. Modulated Padé approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour of f(x) ≈ g(x, A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Padé approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)

  6. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  7. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Normalized Atmospheric Deposition for 2002, Ammonium (NH4)

    Science.gov (United States)

    Wieczorek, Michael; LaMotte, Andrew E.

    2010-01-01

    This data set represents the average normalized atmospheric (wet) deposition, in kilograms, of Ammonium (NH4) for the year 2002 compiled for every catchment of NHDPlus for the conterminous United States. Estimates of NH4 deposition are based on National Atmospheric Deposition Program (NADP) measurements (B. Larsen, U.S. Geological Survey, written commun., 2007). De-trending methods applied to the year 2002 are described in Alexander and others, 2001. NADP site selection met the following criteria: stations must have records from 1995 to 2002 and have a minimum of 30 observations. The NHDPlus Version 1.1 is an integrated suite of application-ready geospatial datasets that incorporates many of the best features of the National Hydrography Dataset (NHD) and the National Elevation Dataset (NED). The NHDPlus includes a stream network (based on the 1:100,000-scale NHD), improved networking, naming, and value-added attributes (VAAs). NHDPlus also includes elevation-derived catchments (drainage areas) produced using a drainage enforcement technique first widely used in New England, and thus referred to as "the New England Method." This technique involves "burning in" the 1:100,000-scale NHD and when available building "walls" using the National Watershed Boundary Dataset (WBD). The resulting modified digital elevation model (HydroDEM) is used to produce hydrologic derivatives that agree with the NHD and WBD. Over the past two years, an interdisciplinary team from the U.S. Geological Survey (USGS), the U.S. Environmental Protection Agency (USEPA), and contractors found that this method produces the best quality NHD catchments using an automated process (USEPA, 2007). The NHDPlus dataset is organized by 18 Production Units that cover the conterminous United States. The NHDPlus version 1.1 data are grouped by the U.S. Geological Survey's Major River Basins (MRBs, Crawford and others, 2006). MRB1, covering the New England and Mid-Atlantic River basins, contains NHDPlus

  8. The Wallner Normal Fault: A new major tectonic structure within the Austroalpine Units south of the Tauern Window (Kreuzeck, Eastern Alps, Austria)

    Science.gov (United States)

    Griesmeier, Gerit E. U.; Schuster, Ralf; Grasemann, Bernhard

    2017-04-01

    The polymetamorphic Austroalpine Units of the Eastern Alps were derived from the northern Adriatic continental margin and have been significantly reworked during the Eoalpine intracontinental subduction. Several major basement/cover nappe systems, which experienced a markedly different tectono-metamorphic history, characterize the complex internal structure of the Austroalpine Units. This work describes a new major tectonic structure in the Kreuzeck Mountains, south of the famous Tauern Window: the Wallner Normal Fault. It separates the so-called Koralpe-Wölz Nappe System in the footwall from the Drauzug-Gurktal Nappe System in the hanging wall. The Koralpe-Wölz Nappe System below the Wallner Normal Fault is dominated by monotonous paragneisses and minor mica schists, which are locally garnet bearing. Subordinated amphibolite bodies can be observed. The schistosity is homogeneously dipping steeply to the S and the partly mylonitic stretching lineation is typically moderately dipping to the ESE. The Alpine metamorphic peak reached eclogite facies further north and amphibolite facies in the study area. The metamorphic peak occurred in the Late Cretaceous, followed by rapid cooling. The Drauzug-Gurktal Nappe System above the Wallner Normal Fault consists of various subunits. (i) Paragneisses and micaschists subunit (Gaugen Complex) with numerous quartz mobilisates are locally intercalated with amphibolites. Garnets several millimeters in size, together with staurolite and kyanite, have been identified in thin sections. Even though the main striking direction is E-W, polyphase refolding resulted in strong local variations of the orientation of the main foliation. (ii) Garnet micaschists subunit (Strieden Complex) with garnets up to 15 mm are intercalated with up to tens of meters thick amphibolites. The lithologies are intensely folded with folding axes dipping moderately to the SSW and axial planes dipping steeply to the NW. (iii) A phyllites-marble subunit

  9. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
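A small worked instance of Pearson-type moment matching in the spirit of the abstract (which notes the approximation is exact for the beta density): for a symmetric density with kurtosis below the Gaussian value of 3, a centered, scaled symmetric beta can be fitted from the second and fourth moments. The function name is ours and the derivation assumes the standard Beta(a, a) moment formulas:

```python
import math

def symmetric_beta_from_moments(variance, kurtosis):
    """Fit a centered, scaled symmetric Beta(a, a) density to a given
    variance and kurtosis (requires 1.8 <= kurtosis < 3).

    Uses the excess kurtosis of Beta(a, a), which equals -6 / (2a + 3),
    and its variance w^2 / (2a + 1) when scaled to the support [-w, w].
    Returns the shape parameter a and the support half-width w.
    """
    a = 3.0 * (kurtosis - 1.0) / (2.0 * (3.0 - kurtosis))
    half_width = math.sqrt(variance * (2.0 * a + 1.0))
    return a, half_width
```

As a sanity check, variance 1/3 and kurtosis 1.8 recover a = 1 and half-width 1, i.e. the uniform density on [-1, 1] (Beta(1, 1)).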

  10. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  11. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  12. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  13. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valuedfunctions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are notonly of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to applythese methods in other fields. It is largely self- contained, but the readershould have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  14. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  15. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  16. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

    Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem using exponential membership functions.
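A Wang-style fuzzy approximator of the kind discussed above can be sketched in a few lines: Gaussian membership functions, product inference, and centroid defuzzification over a grid of sampled rule centers. The grid spacing and membership width below are arbitrary illustrative choices:

```python
import math

def fuzzy_approximator(centers, values, sigma):
    """Build a fuzzy system from sampled (x, y) rule pairs.

    Gaussian memberships centered at `centers`, product inference (trivial
    here with one input), centroid defuzzification over the rule outputs.
    """
    def f(x):
        w = [math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2)) for c in centers]
        # Centroid defuzzification: membership-weighted average of outputs.
        return sum(wi * yi for wi, yi in zip(w, values)) / sum(w)
    return f
```

With rule centers on a grid of spacing 0.1 and membership width 0.1, such a system tracks a smooth target like sin(x) to within a few hundredths, consistent with the uniform-approximation results the abstract cites.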

  17. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.
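For orientation, the standard first-order Rytov expansion that the abstract generalizes can be summarized as follows (a textbook sketch, not the paper's derivation):

```latex
% Write the field as a complex exponential and expand the complex phase:
%   U(\mathbf{r}) = e^{\psi(\mathbf{r})}, \qquad \psi = \psi_0 + \psi_1 .
% Substituting U = e^{\psi} into the Helmholtz equation
% (\nabla^2 + k^2 n^2(\mathbf{r}))\,U = 0 gives the exact Riccati equation
\nabla^{2}\psi + \nabla\psi\cdot\nabla\psi + k^{2} n^{2}(\mathbf{r}) = 0 .
% Linearizing about the unperturbed solution \psi_0 yields the first-order
% Rytov perturbation, whose real and imaginary parts are the log-amplitude
% and phase fluctuations:
\psi_{1} = \chi + i S .
```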

  18. Normal foot and ankle

    International Nuclear Information System (INIS)

    Weissman, S.D.

    1989-01-01

    The foot may be thought of as a bag of bones tied tightly together and functioning as a unit. The bones are expected to maintain their alignment without causing symptomatology to the patient. The author discusses a normal radiograph. The bones must have normal shape and normal alignment. The density of the soft tissues should be normal and there should be no fractures, tumors, or foreign bodies.

  19. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  20. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  1. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to the renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  2. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
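
    The subspace-via-sampling idea can be illustrated, in a hedged way, by the closely related Nyström approximation of a kernel matrix. This is not the authors' AKCL algorithm, only the generic sampling principle such methods build on; the data, kernel width gamma, and landmark count m below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))

def rbf(A, B, gamma=0.25):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystroem: sample m landmark points instead of forming/storing the full
# n x n kernel matrix; the approximation is K ~ C W^+ C^T.
m = 100
idx = rng.choice(len(X), size=m, replace=False)
C = rbf(X, X[idx])                      # n x m slice of the kernel
W = C[idx]                              # m x m landmark-landmark block
K_approx = C @ np.linalg.pinv(W) @ C.T

K_exact = rbf(X, X)                     # formed here only to measure error
rel_err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(f"relative Frobenius error: {rel_err:.3f}")
```

The point is the storage pattern: only the n x m and m x m blocks are ever needed, which is what makes sampling-based kernel methods scale.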

  3. Approximating Multivariate Normal Orthant Probabilities Using the Clark Algorithm.

    Science.gov (United States)

    1987-07-15


  4. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Normalized Atmospheric Deposition for 2002, Nitrate (NO3)

    Science.gov (United States)

    Wieczorek, Michael; LaMotte, Andrew E.

    2010-01-01

    This tabular data set represents the average normalized (wet) deposition, in kilograms per square kilometer multiplied by 100, of Nitrate (NO3) for the year 2002 compiled for every MRB_E2RF1 catchment of the Major River Basins (MRBs, Crawford and others, 2006). Estimates of NO3 deposition are based on National Atmospheric Deposition Program (NADP) measurements (B. Larsen, U.S. Geological Survey, written commun., 2007). De-trending methods applied to the year 2002 are described in Alexander and others, 2001. NADP site selection met the following criteria: stations must have records from 1995 to 2002 and have a minimum of 30 observations. The MRB_E2RF1 catchments are based on a modified version of the U.S. Environmental Protection Agency's (USEPA) ERF1_2 and include enhancements to support national and regional-scale surface-water quality modeling (Nolan and others, 2002; Brakebill and others, 2011). Data were compiled for every MRB_E2RF1 catchment for the conterminous United States covering New England and Mid-Atlantic (MRB1), South Atlantic-Gulf and Tennessee (MRB2), the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy (MRB3), the Missouri (MRB4), the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf (MRB5), the Rio Grande, Colorado, and the Great Basin (MRB6), the Pacific Northwest (MRB7) river basins, and California (MRB8).

  5. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Normalized Atmospheric Deposition for 2002, Total Inorganic Nitrogen

    Science.gov (United States)

    Wieczorek, Michael; LaMotte, Andrew E.

    2010-01-01

    This tabular data set represents the average normalized atmospheric (wet) deposition, in kilograms per square kilometer multiplied by 100, of Total Inorganic Nitrogen for the year 2002 compiled for every MRB_E2RF1 catchment of selected Major River Basins (MRBs, Crawford and others, 2006). Estimates of Total Inorganic Nitrogen deposition are based on National Atmospheric Deposition Program (NADP) measurements (B. Larsen, U.S. Geological Survey, written commun., 2007). De-trending methods applied to the year 2002 are described in Alexander and others, 2001. NADP site selection met the following criteria: stations must have records from 1995 to 2002 and have a minimum of 30 observations. The MRB_E2RF1 catchments are based on a modified version of the U.S. Environmental Protection Agency's (USEPA) ERF1_2 and include enhancements to support national and regional-scale surface-water quality modeling (Nolan and others, 2002; Brakebill and others, 2011). Data were compiled for every MRB_E2RF1 catchment for the conterminous United States covering New England and Mid-Atlantic (MRB1), South Atlantic-Gulf and Tennessee (MRB2), the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy (MRB3), the Missouri (MRB4), the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf (MRB5), the Rio Grande, Colorado, and the Great Basin (MRB6), the Pacific Northwest (MRB7) river basins, and California (MRB8).

  6. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Normalized Atmospheric Deposition for 2002, Ammonium (NH4)

    Science.gov (United States)

    Wieczorek, Michael; LaMotte, Andrew E.

    2010-01-01

    This tabular data set represents the average normalized (wet) deposition, in kilograms per square kilometer multiplied by 100, of ammonium (NH4) for the year 2002 compiled for every MRB_E2RF1 catchment of the Major River Basins (MRBs, Crawford and others, 2006). Estimates of NH4 deposition are based on National Atmospheric Deposition Program (NADP) measurements (B. Larsen, U.S. Geological Survey, written commun., 2007). De-trending methods applied to the year 2002 are described in Alexander and others, 2001. NADP site selection met the following criteria: stations must have records from 1995 to 2002 and have a minimum of 30 observations. The MRB_E2RF1 catchments are based on a modified version of the U.S. Environmental Protection Agency's (USEPA) ERF1_2 and include enhancements to support national and regional-scale surface-water quality modeling (Nolan and others, 2002; Brakebill and others, 2011). Data were compiled for every MRB_E2RF1 catchment for the conterminous United States covering New England and Mid-Atlantic (MRB1), South Atlantic-Gulf and Tennessee (MRB2), the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy (MRB3), the Missouri (MRB4), the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf (MRB5), the Rio Grande, Colorado, and the Great Basin (MRB6), the Pacific Northwest (MRB7) river basins, and California (MRB8).

  7. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Full Text Available Let (U';C') be a subspace of a covering approximation space (U;C) and X⊂U'. In this paper, we show that [formula] and B'(X)⊂B(X)∩U'. Also, [formula] holds iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U';C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful to obtain further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.
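
    For readers unfamiliar with the underlying Pawlak machinery, the lower approximation, upper approximation, and boundary B(X) can be sketched as follows. This is a generic rough-set illustration on a plain partition, not the covering-subspace construction of the paper, and the universe and classes are made up for the example:

```python
# Pawlak rough-set approximations of X with respect to a partition of the
# universe into equivalence classes: the lower approximation collects the
# classes fully inside X, the upper approximation the classes meeting X.
def rough_approximations(classes, X):
    lower, upper = set(), set()
    for c in classes:
        if c <= X:      # class entirely contained in X
            lower |= c
        if c & X:       # class intersects X
            upper |= c
    return lower, upper

classes = [{1, 2}, {3, 4}, {5}]   # partition of U = {1, ..., 5}
X = {1, 2, 3}
lower, upper = rough_approximations(classes, X)
boundary = upper - lower          # B(X): elements that cannot be classified
print(lower, upper, boundary)
```

The boundary operator B(X) computed here is the object whose behavior under subspaces the abstract's inclusion B'(X)⊂B(X)∩U' describes.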

  8. On Convex Quadratic Approximation

    NARCIS (Netherlands)

    den Hertog, D.; de Klerk, E.; Roos, J.

    2000-01-01

    In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of

  9. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling for word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results, are provided. An exponential family approximation of the DCM...

  10. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  11. An improved saddlepoint approximation.

    Science.gov (United States)

    Gillespie, Colin S; Renshaw, Eric

    2007-08-01

    Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of Wang [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
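
    As a concrete illustration of the basic (first-order) saddlepoint recipe that the abstract builds on, consider a Gamma distribution, whose cumulant generating function gives the saddlepoint in closed form. The parameters are arbitrary, and none of the paper's higher-order corrections are included:

```python
import math

# First-order saddlepoint density approximation for Gamma(k, theta), whose
# cumulant generating function is K(t) = -k*log(1 - theta*t), t < 1/theta.
k, theta = 3.0, 2.0

def saddlepoint_density(x):
    t_hat = 1.0 / theta - k / x       # solves K'(t) = k*theta/(1-theta*t) = x
    K = -k * math.log(1.0 - theta * t_hat)
    K2 = x * x / k                    # K''(t_hat)
    return math.exp(K - t_hat * x) / math.sqrt(2.0 * math.pi * K2)

def exact_density(x):
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

for x in (2.0, 6.0, 12.0):
    print(f"x={x:5.1f}  exact={exact_density(x):.5f}  "
          f"saddle={saddlepoint_density(x):.5f}")
```

For the Gamma family the approximation is exact up to a constant (it amounts to replacing the Gamma function by its Stirling approximation), so the relative error is uniform in x, a few percent here.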

  12. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  13. Topology, calculus and approximation

    CERN Document Server

    Komornik, Vilmos

    2017-01-01

    Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...

  14. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  15. Approximating Preemptive Stochastic Scheduling

    OpenAIRE

    Megow Nicole; Vredeveld Tjark

    2009-01-01

    We present constant approximative policies for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...

  16. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  17. Cyclic approximation to stasis

    Directory of Open Access Journals (Sweden)

    Stewart D. Johnson

    2009-06-01

    Full Text Available Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.

  18. On the WKBJ approximation

    International Nuclear Information System (INIS)

    El Sawi, M.

    1983-07-01

    A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to be leading in the order of iteration, and thus possibly has the ability of accelerating the convergence of the solution. The method is also extended for the solution of inhomogeneous equations. (author)

  19. The relaxation time approximation

    International Nuclear Information System (INIS)

    Gairola, R.P.; Indu, B.D.

    1991-01-01

    A plausible approximation has been made to estimate the relaxation time from a knowledge of the transition probability of phonons from one state (r vector, q vector) to another state (r' vector, q' vector) as a result of collision. The relaxation time thus obtained shows a strong dependence on temperature and a weak dependence on the wave vector. In view of this dependence, the relaxation time has been expressed in terms of a temperature Taylor series in the first Brillouin zone. Consequently, a simple model for estimating the thermal conductivity is suggested. The calculations become much easier than in the Callaway model. (author). 14 refs

  20. Polynomial approximation on polytopes

    CERN Document Server

    Totik, Vilmos

    2014-01-01

    Polynomial approximation on convex polytopes in \mathbf{R}^d is considered in uniform and L^p-norms. For an appropriate modulus of smoothness matching direct and converse estimates are proven. In the L^p-case so-called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s when some of the present findings were established for special, so-called simple polytopes.

  1. Finite elements and approximation

    CERN Document Server

    Zienkiewicz, O C

    2006-01-01

    A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o

  2. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  3. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
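
    The rejection flavour of ABC described above fits in a few lines. This is a minimal sketch: the model (normal data with unknown mean), the flat prior, the summary statistic (sample mean), and the tolerance eps are all illustrative assumptions, not choices made in the article:

```python
import random

random.seed(1)

# Observed data: Normal(mu_true, 1), with mu_true unknown to the inference.
mu_true = 2.0
data = [random.gauss(mu_true, 1.0) for _ in range(100)]
obs_mean = sum(data) / len(data)

# ABC rejection: draw mu from the prior, simulate data under mu, and keep
# mu whenever the summary statistic is close to the observed one.
# No likelihood evaluation is ever needed.
eps, accepted = 0.1, []
while len(accepted) < 200:
    mu = random.uniform(0.0, 4.0)     # flat prior (assumed)
    sim_mean = sum(random.gauss(mu, 1.0) for _ in range(100)) / 100
    if abs(sim_mean - obs_mean) < eps:
        accepted.append(mu)

posterior_mean = sum(accepted) / len(accepted)
print(f"ABC posterior mean: {posterior_mean:.2f}")
```

The accepted values of mu form an approximate posterior sample; shrinking eps trades acceptance rate for accuracy, which is the approximation whose impact the abstract says must be assessed.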

  4. The random phase approximation

    International Nuclear Information System (INIS)

    Schuck, P.

    1985-01-01

    RPA is the adequate theory to describe vibrations of the nucleus of very small amplitude. These vibrations can either be forced by an external electromagnetic field or can be eigenmodes of the nucleus. In a one-dimensional analogue, the potential corresponding to such eigenmodes of very small amplitude should be rather stiff; otherwise the motion risks becoming a large-amplitude one and entering a region where the approximation is not valid. This means that nuclei which are supposedly well described by RPA must have a very stable ground-state configuration (must, e.g., be very stiff against deformation). This is usually the case for doubly magic or near-magic nuclei, and for nuclei in the middle of proton and neutron shells which develop a very stable ground-state deformation; we take the deformation as an example, but there are many other possible degrees of freedom such as, for example, compression modes, isovector degrees of freedom, spin degrees of freedom, and many more

  5. The quasilocalized charge approximation

    International Nuclear Information System (INIS)

    Kalman, G J; Golden, K I; Donko, Z; Hartmann, P

    2005-01-01

    The quasilocalized charge approximation (QLCA) has been used for some time as a formalism for the calculation of the dielectric response and for determining the collective mode dispersion in strongly coupled Coulomb and Yukawa liquids. The approach is based on a microscopic model in which the charges are quasilocalized on a short time scale in local potential fluctuations. We review the conceptual basis and theoretical structure of the QLC approach together with recent results from molecular dynamics simulations that corroborate and quantify the theoretical concepts. We also summarize the major applications of the QLCA to various physical systems, combined with the corresponding results of the molecular dynamics simulations, and point out the general agreement and instances of disagreement between the two

  6. Multidimensional stochastic approximation using locally contractive functions

    Science.gov (United States)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
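
    A minimal sketch of the Robbins-Monro scheme itself, in one dimension and with a hypothetical linear regression function rather than the mixture-likelihood application of the paper:

```python
import random

random.seed(0)

# Robbins-Monro stochastic approximation: find the root theta* of a
# regression function g(theta) observed only through noisy samples.
# Here g(theta) = theta - 5 (a hypothetical example), so theta* = 5.
def noisy_g(theta):
    return (theta - 5.0) + random.gauss(0.0, 1.0)

theta = 0.0
for n in range(1, 20001):
    a_n = 1.0 / n              # step sizes: sum a_n = inf, sum a_n^2 < inf
    theta -= a_n * noisy_g(theta)

print(f"estimate: {theta:.2f}")
```

The step-size conditions in the comment are exactly what delivers the mean-square and almost-sure convergence claimed in the abstract.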

  7. Approximate quantum Markov chains

    CERN Document Server

    Sutter, David

    2018-01-01

    This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...

  8. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2012-05-01

    Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle, and thus derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.

  9. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the obtained earlier self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties

  10. PWL approximation of nonlinear dynamical systems, part I: structural stability

    International Nuclear Information System (INIS)

    Storace, M; De Feo, O

    2005-01-01

    This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes the approximation method and applies it to some particularly significant dynamical systems (topological normal forms). The structural stability of the PWL approximations of such systems is investigated through a bifurcation analysis (via continuation methods)
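
    The core PWL idea, replacing a nonlinear map by linear pieces on a grid, can be sketched in one dimension. The target function and grid size are illustrative; the paper's method handles parameterized dynamical systems and circuit implementation, which this sketch does not attempt:

```python
import math

# Piecewise-linear (PWL) approximation of a function on a uniform grid:
# store f at the breakpoints and interpolate linearly in between.
def pwl(f, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    def approx(x):
        i = min(int((x - a) / (b - a) * n), n - 1)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * ys[i] + t * ys[i + 1]
    return approx

f_hat = pwl(math.sin, 0.0, math.pi, 16)
err = max(abs(f_hat(x * math.pi / 200) - math.sin(x * math.pi / 200))
          for x in range(201))
print(f"max error with 16 segments: {err:.5f}")
```

For a twice-differentiable function the interpolation error shrinks like the square of the segment width, which is why modest grids already give circuit-friendly accuracy.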

  11. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
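
    A hedged, drastically simplified two-dimensional analogue of rank-structured approximation: a smooth bivariate function sampled on a grid yields a matrix with rapidly decaying singular values, so a truncated SVD (the best rank-r approximation in the Frobenius norm, by the Eckart-Young theorem) compresses it well. The function and grid are illustrative, and hierarchical tensor formats generalize this idea to many dimensions:

```python
import numpy as np

# Sample the smooth function f(s, t) = exp(-(s - t)^2) on a grid; its
# singular values decay very fast, so a low-rank factorization suffices.
x = np.linspace(0.0, 1.0, 200)
A = np.exp(-(x[:, None] - x[None, :]) ** 2)

U, s, Vt = np.linalg.svd(A)
r = 10
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]   # best rank-r approximation

rel_err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
print(f"rank-{r} relative Frobenius error: {rel_err:.1e}")
```

Storing the two factors costs 2 * 200 * r numbers instead of 200 * 200, the same compression principle that makes high-dimensional tensor approximation tractable.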

  13. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    Science.gov (United States)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  14. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  15. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  16. The Normal Distribution

    Indian Academy of Sciences (India)

    An optimal way of choosing sample size in an opinion poll is indicated using the normal distribution. Introduction. In this article, the ubiquitous normal distribution is introduced as a convenient approximation for computing binomial probabilities for large values of n. Stirling's formula and the DeMoivre-Laplace theorem ...
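    The DeMoivre-Laplace idea mentioned above is easy to check numerically: for large n, the binomial probability P(X = k) is close to the density of N(np, np(1-p)) evaluated at k. A minimal sketch (our own illustration, not from the article):

```python
import math

def binom_pmf(n, k, p):
    """Exact binomial probability P(X = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx_pmf(n, k, p):
    """DeMoivre-Laplace approximation: density of N(np, np(1-p)) at k."""
    mu, var = n * p, n * p * (1 - p)
    return math.exp(-(k - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 1000, 0.5
exact = binom_pmf(n, 500, p)
approx = normal_approx_pmf(n, 500, p)
print(exact, approx)  # agree to about three significant figures
```

    The relative error at the mode shrinks like 1/n, which is why the approximation is already excellent for an opinion-poll-sized sample.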

  17. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are based ...

  18. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  19. On Born approximation in black hole scattering

    Science.gov (United States)

    Batic, D.; Kelkar, N. G.; Nowakowski, M.

    2011-12-01

    A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordström and Reissner-Nordström-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.

  20. Malware Normalization

    OpenAIRE

    Christodorescu, Mihai; Kinder, Johannes; Jha, Somesh; Katzenbeisser, Stefan; Veith, Helmut

    2005-01-01

    Malware is code designed for a malicious purpose, such as obtaining root privilege on a host. A malware detector identifies malware and thus prevents it from adversely affecting a host. In order to evade detection by malware detectors, malware writers use various obfuscation techniques to transform their malware. There is strong evidence that commercial malware detectors are susceptible to these evasion tactics. In this paper, we describe the design and implementation of a malware normalizer ...

  1. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

    This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq and a summary of each of the three papers. The introduction introduces the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion on Mahler's problem when considered ...

  2. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets can be realized for all functions in balls of reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Low-Dimensional Models of "Neuro-Glio-Vascular Unit" for Describing Neural Dynamics under Normal and Energy-Starved Conditions.

    Science.gov (United States)

    Chhabria, Karishma; Chakravarthy, V Srinivasa

    2016-01-01

    The motivation of developing simple minimal models for the neuro-glio-vascular (NGV) system arises from a recent modeling study elucidating the bidirectional information flow within the NGV system having 89 dynamic equations (1). While this was one of the first attempts at formulating a comprehensive model for the neuro-glio-vascular system, it poses severe restrictions in scaling up to network levels. On the contrary, low-dimensional models are convenient devices in simulating large networks that also provide an intuitive understanding of the complex interactions occurring within the NGV system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system, which takes neural firing rate as input and returns an "energy" variable (analogous to ATP) as output. To this end, we present two models: biophysical neuro-energy (Model 1 with five variables), comprising KATP channel activity governed by neuronal ATP dynamics, and the dynamic threshold (Model 2 with three variables), depicting the dependence of neural firing threshold on the ATP dynamics. Both the models show different firing regimes, such as continuous spiking, phasic, and tonic bursting depending on the ATP production coefficient, ɛp, and external current. We then demonstrate that in a network comprising such energy-dependent neuron units, ɛp could modulate the local field potential (LFP) frequency and amplitude. Interestingly, low-frequency LFP dominates under low ɛp conditions, which is thought to be reminiscent of seizure-like activity observed in epilepsy. The proposed "neuron-energy" unit may be implemented in building models of NGV networks to simulate data obtained from multimodal neuroimaging systems, such as functional near infrared spectroscopy coupled to electroencephalogram and functional magnetic resonance imaging coupled to electroencephalogram. Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies, such as ...

  4. Normal accidents

    International Nuclear Information System (INIS)

    Perrow, C.

    1989-01-01

    The author has chosen numerous concrete examples to illustrate the hazardousness inherent in high-risk technologies. Starting with the TMI reactor accident in 1979, he shows that it is not only the nuclear energy sector that bears the risk of 'normal accidents', but also quite a number of other technologies and industrial sectors, or research fields. The author refers to the petrochemical industry, shipping, air traffic, large dams, mining activities, and genetic engineering, showing that due to the complexity of the systems and their manifold, rapidly interacting processes, accidents happen that cannot be thoroughly calculated, and hence are unavoidable. (orig./HP) [de

  5. The normal holonomy group

    International Nuclear Information System (INIS)

    Olmos, C.

    1990-05-01

    The restricted holonomy group of a Riemannian manifold is a compact Lie group and its representation on the tangent space is a product of irreducible representations and a trivial one. Each one of the non-trivial factors is either an orthogonal representation of a connected compact Lie group which acts transitively on the unit sphere or it is the isotropy representation of a single Riemannian symmetric space of rank ≥ 2. We prove that all these properties are also true for the representation on the normal space of the restricted normal holonomy group of any submanifold of a space of constant curvature. 4 refs

  6. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
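    As a toy illustration of the voting idea (our own sketch, not the patented circuit): take a 3-input parity function as the reference circuit and three approximate versions, each of which disagrees with the reference on at most one input pattern. Since a majority of the approximate circuits is correct on every input, the voter reproduces the reference output exactly:

```python
from itertools import product

def reference(a, b, c):
    """Reference circuit: 3-input parity."""
    return a ^ b ^ c

# Approximate circuits: each may deviate from the reference on one input.
def approx1(a, b, c):
    return 0 if (a, b, c) == (1, 1, 1) else a ^ b ^ c  # wrong at (1, 1, 1)

def approx2(a, b, c):
    return 1 if (a, b, c) == (0, 0, 0) else a ^ b ^ c  # wrong at (0, 0, 0)

def approx3(a, b, c):
    return a ^ b ^ c  # agrees with the reference everywhere

def voter(bits):
    """Output the majority value of the received output signals."""
    return 1 if sum(bits) > len(bits) // 2 else 0

for a, b, c in product((0, 1), repeat=3):
    outs = [f(a, b, c) for f in (approx1, approx2, approx3)]
    assert voter(outs) == reference(a, b, c)
print("majority vote matches the reference on all 8 inputs")
```

    The point of the patent is that the individual replicas may be cheaper than exact copies, as long as their error patterns never overlap on a majority of replicas.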

  7. The StreamCat Dataset: Accumulated Attributes for NHDPlusV2 (Version 2.1) Catchments for the Conterminous United States: PRISM Normals Data

    Science.gov (United States)

    This dataset represents climate observations within individual, local NHDPlusV2 catchments and upstream, contributing watersheds. Attributes of the landscape layer were calculated for every local NHDPlusV2 catchment and accumulated to provide watershed-level metrics. (See Supplementary Info for Glossary of Terms) PRISM is a set of monthly, yearly, and single-event gridded data products of mean temperature and precipitation, max/min temperatures, and dewpoints, primarily for the United States. In-situ point measurements are ingested into the PRISM (Parameter elevation Regression on Independent Slopes Model) statistical mapping system. The PRISM products use a weighted regression scheme to account for complex climate regimes associated with orography, rain shadows, temperature inversions, slope aspect, coastal proximity, and other factors. (see Data Sources for links to NHDPlusV2 data and USGS Data) These data are summarized by local catchment and by watershed to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description).

  8. Dating of major normal fault systems using thermochronology: An example from the Raft River detachment, Basin and Range, western United States

    Science.gov (United States)

    Wells, M.L.; Snee, L.W.; Blythe, A.E.

    2000-01-01

    Application of thermochronological techniques to major normal fault systems can resolve the timing of initiation and duration of extension, rates of motion on detachment faults, timing of ductile mylonite formation and passage of rocks through the crystal-plastic to brittle transition, and multiple events of extensional unroofing. Here we determine the above for the top-to-the-east Raft River detachment fault and shear zone by study of spatial gradients in 40Ar/39Ar and fission track cooling ages of footwall rocks and cooling histories and by comparison of cooling histories with deformation temperatures. Mica 40Ar/39Ar cooling ages indicate that extension-related cooling began at ~25-20 Ma, and apatite fission track ages show that motion on the Raft River detachment proceeded until ~7.4 Ma. Collective cooling curves show acceleration of cooling rates during extension, from 5-10°C/m.y. to rates in excess of 70-100°C/m.y. The apparent slip rate along the Raft River detachment, recorded in spatial gradients of apatite fission track ages, is 7 mm/yr between 13.5 and 7.4 Ma and is interpreted to record the rate of migration of a rolling hinge. Microstructural study of footwall mylonite indicates that deformation conditions were no higher than middle greenschist facies and that deformation occurred during cooling to cataclastic conditions. These data show that the shear zone and detachment fault represent a continuum produced by progressive exhumation and shearing during Miocene extension and preclude the possibility of a Mesozoic age for the ductile shear zone. Moderately rapid cooling in middle Eocene time likely records exhumation resulting from an older, oppositely rooted, extensional shear zone along the west side of the Grouse Creek, Raft River, and Albion Mountains. Copyright 2000 by the American Geophysical Union.

  9. Coefficients Calculation in Pascal Approximation for Passive Filter Design

    Directory of Open Access Journals (Sweden)

    George B. Kasapoglu

    2018-02-01

    Full Text Available The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has non-equiripple magnitude, in contrast to the most well-known approximations, such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method that calculates the coefficients of the Pascal function. Two examples are presented for the passive design to illustrate the advantages and the disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables, which are created to define the normalized values of these elements for the Pascal approximation, as Zverev had done for the Chebyshev, Elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.

  10. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

    SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.
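    The equivalence can be seen numerically in a textbook exponential-family example (our own illustration, not taken from the paper): testing the rate of n exponential observations, the normal approximation Φ(r*) with Barndorff-Nielsen's r* = r + log(q/r)/r essentially reproduces the exact one-sided p-value, which in this model is an exact Poisson tail sum:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative setting (our choice): n i.i.d. exponential observations with
# canonical (rate) parameter lam, log-likelihood l(lam) = n*log(lam) - lam*n*xbar.
# Test lam = lam0 against smaller rates.
n, xbar, lam0 = 10, 1.4, 1.0
lam_hat = 1.0 / xbar  # maximum likelihood estimate

# Signed likelihood ratio statistic r and Wald-type statistic q.
dl = n * (math.log(lam_hat / lam0) - (lam_hat - lam0) * xbar)
r = math.copysign(math.sqrt(2.0 * dl), lam_hat - lam0)
q = (lam_hat - lam0) * math.sqrt(n) / lam_hat  # (lam_hat - lam0) * j(lam_hat)^(1/2)

# Modified signed likelihood ratio statistic r*; Phi(r*) is the p-value.
r_star = r + math.log(q / r) / r
p_rstar = Phi(r_star)

# Exact p-value: under lam0 = 1, sum(x) ~ Gamma(n, 1), so
# P(lam_hat <= observed) = P(sum(x) >= n*xbar) is a Poisson tail sum.
s = n * xbar
p_exact = sum(math.exp(-s) * s**k / math.factorial(k) for k in range(n))

print(round(p_rstar, 4), round(p_exact, 4))  # both ~0.109 even at n = 10
```

    With only ten observations the plain normal approximation Φ(r) would be off in the third decimal, while Φ(r*) matches the exact tail to saddlepoint accuracy.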

  11. Reconstructing Normality

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Fristed, Peter Billeskov

    2012-01-01

    Forensic psychiatry is an area of priority for the Danish Government. As the field expands, this calls for increased knowledge about mental health nursing practice, as this is part of the forensic psychiatry treatment offered. However, only sparse research exists in this area. The aim of this study...... was to investigate the characteristics of forensic mental health nursing staff interaction with forensic mental health inpatients and to explore how staff give meaning to these interactions. The project included 32 forensic mental health staff members, with over 307 hours of participant observations, 48 informal....... The intention is to establish a trusting relationship to form behaviour and perceptual-corrective care, which is characterized by staff's endeavours to change, halt, or support the patient's behaviour or perception in relation to staff's perception of normality. The intention is to support and teach the patient...

  12. Pursuing Normality

    DEFF Research Database (Denmark)

    Madsen, Louise Sofia; Handberg, Charlotte

    2018-01-01

    implying an influence on whether to participate in cancer survivorship care programs. Because of "pursuing normality," 8 of 9 participants opted out of cancer survivorship care programming due to prospects of "being cured" and perceptions of cancer survivorship care as "a continuation of the disease......BACKGROUND: The present study explored the reflections on cancer survivorship care of lymphoma survivors in active treatment. Lymphoma survivors have survivorship care needs, yet their participation in cancer survivorship care programs is still reported as low. OBJECTIVE: The aim of this study...... was to understand the reflections on cancer survivorship care of lymphoma survivors to aid the future planning of cancer survivorship care and overcome barriers to participation. METHODS: Data were generated in a hematological ward during 4 months of ethnographic fieldwork, including participant observation and 46...

  13. PWL approximation of nonlinear dynamical systems, part II: identification issues

    International Nuclear Information System (INIS)

    De Feo, O; Storace, M

    2005-01-01

    This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes a black-box identification method based on state space reconstruction and PWL approximation, and applies it to some particularly significant dynamical systems (two topological normal forms and the Colpitts oscillator)
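    The PWL building block itself is simple; a minimal one-dimensional sketch (illustrative only — the paper deals with multivariate vector fields, state-space reconstruction, and circuit implementation) interpolates a nonlinear function linearly on a uniform grid, with error O(h²) in the mesh width h:

```python
import math

def pwl_approx(f, a, b, m):
    """Return a piecewise-linear interpolant of f on [a, b] with m segments."""
    xs = [a + (b - a) * i / m for i in range(m + 1)]
    ys = [f(x) for x in xs]

    def g(x):
        # Locate the segment containing x and interpolate linearly.
        i = min(int((x - a) / (b - a) * m), m - 1)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * ys[i] + t * ys[i + 1]

    return g

f = math.sin
g = pwl_approx(f, 0.0, math.pi, 16)
err = max(abs(f(x) - g(x)) for x in [math.pi * k / 1000 for k in range(1001)])
print(err)  # bounded by h^2 * max|f''| / 8 with h = pi/16
```

    Halving the segment width cuts the worst-case error by roughly a factor of four, which is the trade-off a circuit designer tunes when choosing the number of PWL regions.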

  14. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  15. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
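    The quoted few-percent accuracy is easy to verify for the Flory exponent ν = 3/(d + 2) of the self-avoiding chain, comparing against commonly cited reference values (exact for d = 1, 2, 4; the d = 3 value ≈ 0.588 comes from numerical studies in the literature):

```python
# Flory estimate nu = 3/(d+2) for the self-avoiding-walk exponent nu,
# compared with reference values (exact for d = 1, 2, 4; numerical for d = 3).
reference = {1: 1.0, 2: 0.75, 3: 0.5876, 4: 0.5}

for d, nu_ref in sorted(reference.items()):
    nu_flory = 3.0 / (d + 2)
    rel_err = abs(nu_flory - nu_ref) / nu_ref
    print(d, nu_flory, f"{100 * rel_err:.2f}%")
```

    Only d = 3 shows a visible discrepancy (about 2%), consistent with the 2-5% accuracy estimate in the abstract.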

  16. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We present several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  17. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  18. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L^p extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  19. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python ...

  20. An Integrable Approximation for the Fermi Pasta Ulam Lattice

    Science.gov (United States)

    Rink, Bob

    This contribution presents a review of results obtained from computations of approximate equations of motion for the Fermi-Pasta-Ulam lattice. These approximate equations are obtained as a finite-dimensional Birkhoff normal form. It turns out that in many cases, the Birkhoff normal form is suitable for application of the KAM theorem. In particular, this proves Nishida's 1971 conjecture stating that almost all low-energetic motions of the anharmonic Fermi-Pasta-Ulam lattice with fixed endpoints are quasi-periodic. The proof is based on the formal Birkhoff normal form computations of Nishida, the KAM theorem and discrete symmetry considerations.

  1. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

    Hartree-Fock and Tamm-Dancoff approximations are tested for the angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to ²⁰Ne.

  2. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  3. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  4. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given to these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
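    The distinction between the two models is easy to simulate (a sketch under idealized assumptions: pure photon shot noise at a single pixel, no readout, Fano, or quantization terms). Poisson counts have variance equal to their mean, which the SD-AWGN surrogate imitates by letting the Gaussian variance track the signal level:

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Poisson sampler via Knuth's method — fine for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

signal = 50.0   # mean photon count at one pixel
N = 20000
poi = [poisson(signal) for _ in range(N)]
# SD-AWGN surrogate: Gaussian noise whose variance equals the signal.
gau = [random.gauss(signal, math.sqrt(signal)) for _ in range(N)]

mean_p = sum(poi) / N
var_p = sum((x - mean_p) ** 2 for x in poi) / N
print(round(mean_p, 1), round(var_p, 1))  # both near 50: variance tracks the signal
```

    Both models reproduce the signal-dependent variance; they differ in higher moments (the Poisson law is discrete and skewed), which is where the paper's comparison becomes meaningful.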

  5. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  6. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  7. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
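    The library-matching step described above can be sketched with the basic fuzzy-set operations (a hypothetical example: the membership values and compound names are invented, and the matching score is a simple overlap measure, not necessarily the one used in the article):

```python
# Toy fuzzy matching of an unknown spectrum against library spectra.
# Membership values encode how strongly each spectral band is present.
def fuzzy_intersection(a, b):
    """Pointwise min: the standard fuzzy-set intersection."""
    return [min(x, y) for x, y in zip(a, b)]

def cardinality(a):
    """Sigma-count cardinality of a fuzzy set."""
    return sum(a)

def match_degree(sample, ref):
    """Degree of matching: |sample ∩ ref| / |ref|."""
    return cardinality(fuzzy_intersection(sample, ref)) / cardinality(ref)

library = {
    "compound A": [0.9, 0.1, 0.8, 0.0, 0.3],
    "compound B": [0.2, 0.7, 0.1, 0.9, 0.5],
}
sample = [0.8, 0.2, 0.7, 0.1, 0.3]

scores = {name: match_degree(sample, ref) for name, ref in library.items()}
best = max(scores, key=scores.get)
print(best)  # -> compound A
```

    The candidate whose fuzzy intersection with the sample retains the largest fraction of its membership mass is reported as the best match; thresholding the score instead yields a shortlist of plausible candidates.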

  8. Face Recognition using Approximate Arithmetic

    DEFF Research Database (Denmark)

    Marso, Karol

    Face recognition is an image processing technique which aims to identify human faces, and it has found use in various fields, for example in security. Throughout the years this field has evolved, and there are many approaches and many different algorithms which aim to make face recognition as effective ... In many image processing applications the results do not need to be completely precise, and use of approximate arithmetic can lead to reductions in terms of delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.

  9. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  10. Approximate Matching of Hierarchial Data

    DEFF Research Database (Denmark)

    Augsten, Nikolaus

    -grams of a tree are all its subtrees of a particular shape. Intuitively, two trees are similar if they have many pq-grams in common. The pq-gram distance is an efficient and effective approximation of the tree edit distance. We analyze the properties of the pq-gram distance and compare it with the tree edit...

  11. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  12. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
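The greedy strategy behind such ln-type bounds can be sketched as set cover over haplotype pairs: each SNP "covers" the pairs it distinguishes, and one repeatedly picks the SNP covering the most still-ambiguous pairs. This is a minimal illustration, not the authors' exact implementation:

```python
# Illustrative sketch (not the paper's exact algorithm): greedy
# set-cover-style haplotype tagging. Each SNP distinguishes some pairs of
# haplotypes; repeatedly choose the SNP that distinguishes the most pairs
# that remain ambiguous. Greedy set cover is what yields bounds of the
# form 1 + ln(number of pairs).
from itertools import combinations

def greedy_tag(haplotypes):
    n, m = len(haplotypes), len(haplotypes[0])
    pending = set(combinations(range(n), 2))   # pairs not yet distinguished
    chosen = []
    while pending:
        best = max(range(m), key=lambda s: sum(
            haplotypes[i][s] != haplotypes[j][s] for i, j in pending))
        covered = {(i, j) for i, j in pending
                   if haplotypes[i][best] != haplotypes[j][best]}
        if not covered:            # remaining pairs are identical haplotypes
            break
        chosen.append(best)
        pending -= covered
    return chosen

haplotypes = ["0011", "0101", "1001"]
tags = greedy_tag(haplotypes)      # two SNPs suffice to separate all three
```

Restricting each haplotype to the chosen SNP positions yields pairwise-distinct patterns, which is exactly the tagging requirement.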

  13. All-Norm Approximation Algorithms

    NARCIS (Netherlands)

    Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik

    2002-01-01

    A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ p norms. We address this problem by introducing the concept of an All-norm ρ-approximation

  14. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsikas, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...

  15. On badly approximable complex numbers

    DEFF Research Database (Denmark)

    Esdahl-Schou, Rune; Kristensen, S.

    We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...

  16. Approximate reasoning in decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M M; Sanchez, E

    1982-01-01

    The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.

  17. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.

  18. Pythagorean Approximations and Continued Fractions

    Science.gov (United States)

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
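The convergents referred to above follow the classical side-and-diagonal recurrence, easily checked numerically:

```python
# Sketch of the side-and-diagonal recurrence generating the
# continued-fraction convergents of sqrt(2): starting from 1/1, each step
# maps (p, q) to (p + 2q, p + q), giving 3/2, 7/5, 17/12, 41/29, ...

def sqrt2_convergents(steps):
    p, q = 1, 1
    out = []
    for _ in range(steps):
        p, q = p + 2 * q, p + q
        out.append((p, q))
    return out

convs = sqrt2_convergents(4)       # [(3, 2), (7, 5), (17, 12), (41, 29)]
# Each convergent satisfies the Pell-like identity p^2 - 2*q^2 = +/-1,
# which is why these fractions approximate sqrt(2) so well.
```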

  19. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh, [No Value; Nguyen, Thi; von Haeseler, Arndt

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  20. Serum insulin-like growth factor (IGF)-I and IGF binding protein-3 in relation to terminal duct lobular unit involution of the normal breast in Caucasian and African American women: The Susan G. Komen Tissue Bank.

    Science.gov (United States)

    Oh, Hannah; Pfeiffer, Ruth M; Falk, Roni T; Horne, Hisani N; Xiang, Jackie; Pollak, Michael; Brinton, Louise A; Storniolo, Anna Maria V; Sherman, Mark E; Gierach, Gretchen L; Figueroa, Jonine D

    2018-02-22

    Lesser degrees of terminal duct lobular unit (TDLU) involution, as reflected by higher numbers of TDLUs and acini/TDLU, are associated with elevated breast cancer risk. In rodent models, the insulin-like growth factor (IGF) system regulates involution of the mammary gland. We examined associations of circulating IGF measures with TDLU involution in normal breast tissues among women without precancerous lesions. Among 715 Caucasian and 283 African American (AA) women who donated normal breast tissue samples to the Komen Tissue Bank between 2009 and 2012 (75% premenopausal), serum concentrations of IGF-I and binding protein (IGFBP)-3 were quantified using enzyme-linked immunosorbent assay. Hematoxylin and eosin-stained tissue sections were assessed for numbers of TDLUs ("TDLU count"). Zero-inflated Poisson regression models with a robust variance estimator were used to estimate relative risks (RRs) for the association of IGF measures (tertiles) with TDLU count by race and menopausal status, adjusting for potential confounders. AA (vs. Caucasian) women had higher age-adjusted mean levels of serum IGF-I (137 vs. 131 ng/mL, p = 0.07) and lower levels of IGFBP-3 (4165 vs. 4684 ng/mL). Higher IGF-I:IGFBP-3 ratios were associated with higher TDLU count in Caucasian (RR T3 vs. T1 = 1.33, 95% CI = 1.02-1.75, p-trend = 0.04), but not in AA (RR T3 vs. T1 = 0.65, 95% CI = 0.42-1.00, p-trend = 0.05), women. Our data suggest a role of the IGF system, particularly IGFBP-3, in TDLU involution of the normal breast, a breast cancer risk factor, among Caucasian and AA women. © 2018 UICC.

  1. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
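As a much simpler stand-in for the paper's Stirling-based approximation, one can sketch a normal-approximation interval built on the negative binomial variance μ + μ²/k; the dispersion parameter k here is an assumed input, not a value from the paper:

```python
# Simplified sketch (NOT the paper's inverse-trapezoidal approximation):
# an approximate confidence interval for the mean fiber count given a
# single observed count x, using the negative binomial variance
# mu + mu^2/k with an assumed dispersion parameter k. z = 1.96 gives
# roughly 95% coverage under the normal approximation.
import math

def approx_ci(x, k, z=1.96):
    var = x + x * x / k        # negative binomial variance with mean x
    half = z * math.sqrt(var)
    return max(0.0, x - half), x + half

lo, hi = approx_ci(25, k=50)
# As k -> infinity the negative binomial tends to the Poisson and the
# interval shrinks to the familiar x +/- z*sqrt(x).
```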

  2. Normal vibrations in gallium arsenide

    International Nuclear Information System (INIS)

    Dolling, G.; Waugh, J.L.T.

    1964-01-01

    The triple axis crystal spectrometer at Chalk River has been used to observe coherent slow neutron scattering from a single crystal of pure gallium arsenide at 296 °K. The frequencies of normal modes of vibration propagating in the [ζ00], [ζζζ], and [0ζζ] crystal directions have been determined with a precision of between 1 and 2.5 per cent. A limited number of normal modes have also been studied at 95 and 184 °K. Considerable difficulty was experienced in obtaining well resolved neutron peaks corresponding to the two non-degenerate optic modes for very small wave-vector, particularly at 296 °K. However, from a comparison of results obtained under various experimental conditions at several different points in reciprocal space, frequencies (units 10¹² c/s) for these modes (at 296 °K) have been assigned: T 8.02 ± 0.08 and L 8.55 ± 0.2. Other specific normal modes, with their measured frequencies, are (a) (1,0,0): TO 7.56 ± 0.08, TA 2.36 ± 0.015, LO 7.22 ± 0.15, LA 6.80 ± 0.06; (b) (0.5, 0.5, 0.5): TO 7.84 ± 0.12, TA 1.86 ± 0.02, LO 7.15 ± 0.07, LA 6.26 ± 0.10; (c) (0, 0.65, 0.65): optic 8.08 ± 0.13, 7.54 ± 0.12 and 6.57 ± 0.11, acoustic 5.58 ± 0.08, 3.42 ± 0.06 and 2.36 ± 0.04. These results are generally slightly lower than the corresponding frequencies for germanium. An analysis in terms of various modifications of the dipole approximation model has been carried out. A feature of this analysis is that the charge on the gallium atom appears to be very small, about +0.04 e. The frequency distribution function has been derived from one of the force models. (author)

  3. Normal vibrations in gallium arsenide

    Energy Technology Data Exchange (ETDEWEB)

    Dolling, G; Waugh, J L T

    1964-07-01

    The triple axis crystal spectrometer at Chalk River has been used to observe coherent slow neutron scattering from a single crystal of pure gallium arsenide at 296 °K. The frequencies of normal modes of vibration propagating in the [ζ00], [ζζζ], and [0ζζ] crystal directions have been determined with a precision of between 1 and 2.5 per cent. A limited number of normal modes have also been studied at 95 and 184 °K. Considerable difficulty was experienced in obtaining well resolved neutron peaks corresponding to the two non-degenerate optic modes for very small wave-vector, particularly at 296 °K. However, from a comparison of results obtained under various experimental conditions at several different points in reciprocal space, frequencies (units 10¹² c/s) for these modes (at 296 °K) have been assigned: T 8.02 ± 0.08 and L 8.55 ± 0.2. Other specific normal modes, with their measured frequencies, are (a) (1,0,0): TO 7.56 ± 0.08, TA 2.36 ± 0.015, LO 7.22 ± 0.15, LA 6.80 ± 0.06; (b) (0.5, 0.5, 0.5): TO 7.84 ± 0.12, TA 1.86 ± 0.02, LO 7.15 ± 0.07, LA 6.26 ± 0.10; (c) (0, 0.65, 0.65): optic 8.08 ± 0.13, 7.54 ± 0.12 and 6.57 ± 0.11, acoustic 5.58 ± 0.08, 3.42 ± 0.06 and 2.36 ± 0.04. These results are generally slightly lower than the corresponding frequencies for germanium. An analysis in terms of various modifications of the dipole approximation model has been carried out. A feature of this analysis is that the charge on the gallium atom appears to be very small, about +0.04 e. The frequency distribution function has been derived from one of the force models.

  4. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...... functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators......, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals...

  5. Hydrogen: Beyond the Classic Approximation

    International Nuclear Information System (INIS)

    Scivetti, Ivan

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.

  6. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
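A one-gate example makes the nature of such errors concrete: for a product of independent inputs (an AND gate), first-order propagation drops exactly the cross term v1·v2. This is an illustrative sketch, not one of the paper's sample trees:

```python
# Illustrative sketch (not the paper's sample trees): for an AND gate with
# independent inputs, top = p1 * p2. First-order (delta-method) propagation
# gives var ~ p2^2*v1 + p1^2*v2, while the exact variance of a product of
# independent random variables includes the extra cross term v1*v2. Their
# difference is the approximation error being studied.

def first_order_var(p1, v1, p2, v2):
    return p2 * p2 * v1 + p1 * p1 * v2

def exact_product_var(p1, v1, p2, v2):
    return p2 * p2 * v1 + p1 * p1 * v2 + v1 * v2

p1, v1 = 1e-3, 1e-7
p2, v2 = 2e-3, 4e-7
err = exact_product_var(p1, v1, p2, v2) - first_order_var(p1, v1, p2, v2)
# err equals v1*v2: negligible when the input variances are small relative
# to the means, but increasingly important as they grow.
```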

  7. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

    Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin- (WKB or quasi-classical) approximation and of the method of 1/N -expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter, the material is presented as a succession of problems, followed by a detailed way of solving them. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.
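A standard worked example of the method is the Bohr-Sommerfeld (WKB) quantization condition applied to the harmonic oscillator, where it happens to reproduce the exact levels; a numeric sketch in units ħ = m = ω = 1:

```python
# Sketch: WKB (Bohr-Sommerfeld) quantization for the harmonic oscillator
# V(x) = x^2/2 with hbar = m = omega = 1. The closed-orbit action
#   S(E) = 2 * integral of sqrt(2*(E - V(x))) between the turning points
# must equal 2*pi*(n + 1/2); analytically S(E) = 2*pi*E, so the WKB levels
# E_n = n + 1/2 coincide with the exact ones for this potential.
import math

def action(E, steps=100_000):
    a = math.sqrt(2 * E)                 # classical turning point
    h = 2 * a / steps
    # midpoint rule copes with the integrable square-root endpoint behaviour
    s = sum(math.sqrt(max(0.0, 2 * E - (-a + (i + 0.5) * h) ** 2))
            for i in range(steps))
    return 2 * s * h                     # full closed-orbit action

# action(0.5) should be close to 2*pi*(0 + 1/2) = pi.
```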

  8. Approximate solutions to Mathieu's equation

    Science.gov (United States)

    Wilkinson, Samuel A.; Vogt, Nicolas; Golubev, Dmitry S.; Cole, Jared H.

    2018-06-01

    Mathieu's equation has many applications throughout theoretical physics. It is especially important to the theory of Josephson junctions, where it is equivalent to Schrödinger's equation. Mathieu's equation can be easily solved numerically; however, there exists no closed-form analytic solution. Here we collect various approximations which appear throughout the physics and mathematics literature and examine their accuracy and regimes of applicability. Particular attention is paid to quantities relevant to the physics of Josephson junctions, but the arguments and notation are kept general so as to be of use to the broader physics community.
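A fixed-step RK4 integrator is enough to generate the numerical reference solutions against which such approximations are checked; a minimal sketch:

```python
# Sketch: direct numerical integration of Mathieu's equation
#   y'' + (a - 2*q*cos(2t)) * y = 0
# with a fixed-step RK4, usable as a reference when assessing the accuracy
# of closed-form approximations.
import math

def mathieu_solve(a, q, t_end, y0=1.0, yp0=0.0, n=20000):
    def f(t, y, yp):
        # first-order system: (y, y')' = (y', -(a - 2q cos 2t) y)
        return yp, -(a - 2 * q * math.cos(2 * t)) * y
    h = t_end / n
    t, y, yp = 0.0, y0, yp0
    for _ in range(n):
        k1 = f(t, y, yp)
        k2 = f(t + h / 2, y + h / 2 * k1[0], yp + h / 2 * k1[1])
        k3 = f(t + h / 2, y + h / 2 * k2[0], yp + h / 2 * k2[1])
        k4 = f(t + h, y + h * k3[0], yp + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return y

# Sanity check: with q = 0 and a = 1 the equation reduces to y'' + y = 0,
# whose solution with y(0) = 1, y'(0) = 0 is cos(t).
```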

  9. Approximate Inference for Wireless Communications

    DEFF Research Database (Denmark)

    Hansen, Morten

    This thesis investigates signal processing techniques for wireless communication receivers. The aim is to improve the performance or reduce the computationally complexity of these, where the primary focus area is cellular systems such as Global System for Mobile communications (GSM) (and extensions...... to the optimal one, which usually requires an unacceptable high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum...

  10. Quantum tunneling beyond semiclassical approximation

    International Nuclear Information System (INIS)

    Banerjee, Rabin; Majhi, Bibhas Ranjan

    2008-01-01

    Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.

  11. Generalized Gradient Approximation Made Simple

    International Nuclear Information System (INIS)

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-01-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. copyright 1996 The American Physical Society
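The exchange part of the resulting PBE functional is fixed by the enhancement factor F_x(s) = 1 + κ − κ/(1 + μs²/κ), with κ = 0.804 and μ ≈ 0.21951; a quick numerical check of its limits:

```python
# The PBE exchange enhancement factor, built from fundamental constants:
#   F_x(s) = 1 + kappa - kappa / (1 + mu*s^2/kappa)
# with kappa = 0.804 (saturating the Lieb-Oxford bound) and mu ~= 0.21951
# (recovering the LSD linear response). s is the reduced density gradient.

KAPPA = 0.804
MU = 0.21951

def pbe_fx(s):
    return 1.0 + KAPPA - KAPPA / (1.0 + MU * s * s / KAPPA)

# F_x(0) = 1 recovers LSD for uniform densities, and F_x(s) saturates at
# 1 + kappa = 1.804 as s -> infinity, enforcing the Lieb-Oxford bound.
```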

  12. Local facet approximation for image stitching

    Science.gov (United States)

    Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun

    2018-01-01

    Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.

  13. Impulse approximation in solid helium

    International Nuclear Information System (INIS)

    Glyde, H.R.

    1985-01-01

    The incoherent dynamic form factor S_i(Q, ω) is evaluated in solid helium for comparison with the impulse approximation (IA). The purpose is to determine the Q values for which the IA is valid for systems such as helium, where the atoms interact via a potential having a steeply repulsive but not infinite hard core. For ³He, S_i(Q, ω) is evaluated from first principles, beginning with the pair potential. The density of states g(ω) is evaluated using the self-consistent phonon theory and S_i(Q, ω) is expressed in terms of g(ω). For solid ⁴He, reasonable models of g(ω) using observed input parameters are used to evaluate S_i(Q, ω). In both cases S_i(Q, ω) is found to approach the impulse approximation S_IA(Q, ω) closely for wave vector transfers Q ≳ 20 Å⁻¹. The difference between S_i and S_IA, which is due to final state interactions of the scattering atom with the remainder of the atoms in the solid, is also predominantly antisymmetric in (ω − ω_R), where ω_R is the recoil frequency. This suggests that the symmetrization procedure proposed by Sears to eliminate final state contributions should work well in solid helium.

  14. Finite approximations in fluid mechanics

    International Nuclear Information System (INIS)

    Hirschel, E.H.

    1986-01-01

    This book contains twenty papers on work which was conducted between 1983 and 1985 in the Priority Research Program ''Finite Approximations in Fluid Mechanics'' of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptical and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, zonal solutions for viscous flow problems, etc. The contents include: multigrid methods for problems from fluid dynamics, development of a 2D-Transonic Potential Flow Solver; a boundary element spectral method for nonstationary viscous flows in 3 dimensions; navier-stokes computations of two-dimensional laminar flows in a channel with a backward facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow-field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; zonal solutions for viscous flow problems

  15. Plasma Physics Approximations in Ares

    International Nuclear Information System (INIS)

    Managan, R. A.

    2015-01-01

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential μ or ζ = ln(1 + e^(μ/θ)), and the temperature θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(−μ/θ)) F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_c^α, and F_c^β. In each case the relative error of the fit is minimized since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
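For reference, the Fermi-Dirac integral being fitted can be evaluated by brute-force quadrature and checked against its limiting forms; this is a sketch of the target quantity, not of the rational fits themselves:

```python
# Sketch: brute-force evaluation of the Fermi-Dirac integral
#   F_{1/2}(eta) = integral_0^inf sqrt(x) / (1 + exp(x - eta)) dx,
# the quantity the rational fits approximate. A midpoint rule on a
# truncated range suffices to check the limiting behaviour.
import math

def fd_half(eta, x_max=200.0, steps=200_000):
    h = x_max / steps
    return h * sum(
        math.sqrt((i + 0.5) * h)
        / (1.0 + math.exp(min(700.0, (i + 0.5) * h - eta)))  # clamp avoids overflow
        for i in range(steps))

# Non-degenerate limit (eta << 0): F_{1/2}(eta) -> e^eta * sqrt(pi)/2.
# Degenerate limit (eta >> 0):     F_{1/2}(eta) -> (2/3) * eta^(3/2).
```

Preserving these two limits exactly is precisely the design constraint stated in the abstract for the new fits.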

  16. Nonlinear Filtering and Approximation Techniques

    Science.gov (United States)

    1991-09-01

    filtering. Rapport de Recherche No 1223, Programme 5: Automatique, Productique, Traitement du Signal et des Données. CONSISTENT PARAMETER ESTIMATION FOR … Here u^ε[T] ∈ C^{2,1}(R^m × [0,T]; R) is the unique solution of the Hamilton-Jacobi-Bellman equation ∂_t u^ε[T](x,t) − ε Δu^ε[T](x,t) + H^ε[T](x,t, Du^ε[T](x,t)) …

  17. Approximating the minimum cycle mean

    Directory of Open Access Journals (Sweden)

    Krishnendu Chatterjee

    2013-07-01

    Full Text Available We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First we show that the algorithmic question is reducible in O(n²) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem, and the running time of our algorithm is Õ(n^ω log³(nW/ε)/ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
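For small graphs, the exact value that such algorithms approximate can be computed with Karp's classic O(nm) algorithm, shown here as a baseline; this is not the paper's min-plus-product method:

```python
# Exact baseline for the problem the paper approximates: Karp's classic
# O(n*m) minimum cycle mean algorithm. d[k][v] is the minimum weight of an
# edge progression of exactly k edges ending at v, starting anywhere; the
# answer is min over v of max over k of (d[n][v] - d[k][v]) / (n - k).

def min_cycle_mean(n, edges):
    INF = float("inf")
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                 for k in range(n) if d[k][v] < INF))
    return best

# Two cycles through vertex 0: 0->1->0 with mean (1+3)/2 = 2 and
# 0->2->0 with mean (1+1)/2 = 1, so the minimum cycle mean is 1.
mcm = min_cycle_mean(3, [(0, 1, 1), (1, 0, 3), (0, 2, 1), (2, 0, 1)])
```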

  18. Polarized constituent quarks in NLO approximation

    International Nuclear Information System (INIS)

    Khorramian, Ali N.; Tehrani, S. Atashbar; Mirjalili, A.

    2006-01-01

    The valon representation provides a bridge between hadrons and quarks, in terms of which the bound-state and scattering properties of hadrons can be united and described. We studied polarized valon distributions, which have an important role in describing the spin dependence of parton distributions in leading and next-to-leading order approximation. The convolution integral in the framework of the valon model was used as a useful tool in the polarized case. To obtain the polarized parton distributions in a proton we need the polarized valon distribution in the proton and the polarized parton distributions inside the valon. We employed Bernstein polynomial averages to get the unknown parameters of the polarized valon distributions by fitting to available experimental data.

  19. Normal Pressure Hydrocephalus (NPH)

    Science.gov (United States)

    Normal pressure hydrocephalus is a brain disorder that occurs when excess cerebrospinal fluid … The page covers symptoms, diagnosis, causes and risks, and treatments.

  20. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...
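In the special case of an orthonormal basis, best m-term approximation reduces to coefficient thresholding, which makes the idea easy to sketch; redundant dictionaries, the setting of the paper, require genuinely greedy or relaxation-based selection instead:

```python
# Minimal illustration of best m-term approximation in an orthonormal
# basis (the simplest "dictionary"): keep the m largest coefficients in
# magnitude; by Parseval, the squared error is the sum of squares of the
# discarded coefficients.
import math

def best_m_term(coeffs, m):
    order = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    keep = set(order[:m])
    approx = [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
    err = math.sqrt(sum(c * c for i, c in enumerate(coeffs) if i not in keep))
    return approx, err

approx, err = best_m_term([4.0, -0.1, 3.0, 0.2], m=2)
# keeps 4.0 and 3.0; the error is sqrt(0.1^2 + 0.2^2)
```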

  1. Approximate cohomology in Banach algebras | Pourabbas ...

    African Journals Online (AJOL)

    We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...

  2. SU-C-BRC-01: A Monte Carlo Study of Out-Of-Field Doses From Cobalt-60 Teletherapy Units Intended for Historical Correlations of Dose to Normal Tissue

    Energy Technology Data Exchange (ETDEWEB)

    Petroccia, H [University of Florida, Gainesville, FL (United States); Olguin, E [Gainesville, FL (United States); Culberson, W [University of Wisconsin Madison, Madison, WI (United States); Bednarz, B [University of Wisconsin, Madison, WI (United States); Mendenhall, N [UF Health Proton Therapy Institute, Jacksonville, FL (United States); Bolch, W [University Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: Innovations in radiotherapy treatments, such as dynamic IMRT, VMAT, and SBRT/SRS, result in larger proportions of low-dose regions where normal tissues are exposed to low dose levels. Low doses of radiation have been linked to secondary cancers and cardiac toxicities. The AAPM TG Committee No. 158, entitled ‘Measurements and Calculations of Doses outside the Treatment Volume from External-Beam Radiation Therapy’, has been formed to review the dosimetry of non-target and out-of-field exposures using experimental and computational approaches. Studies on historical patients can provide comprehensive information about secondary effects from out-of-field doses when combined with long-term patient follow-up, thus providing significant insight into projecting future outcomes of patients undergoing modern-day treatments. Methods: We present a Monte Carlo model of a Theratron-1000 cobalt-60 teletherapy unit, which historically treated patients at the University of Florida, as a means of determining doses outside the primary beam. Experimental data for a similar Theratron-1000 were obtained at the University of Wisconsin’s ADCL to benchmark the model for out-of-field dosimetry. An Exradin A12 ion chamber and TLD100 chips were used to measure doses in an extended water phantom to 60 cm outside the primary field at 5 and 10 cm depths. Results: Comparison between simulated and experimental measurements of PDDs and lateral profiles shows good agreement for in-field and out-of-field doses. At 10 cm away from the edge of a 6×6, 10×10, and 20×20 cm2 field, relative out-of-field doses were measured in the range of 0.5% to 3% of the dose measured at 5 cm depth along the CAX. Conclusion: Out-of-field doses can be as high as 90 to 180 cGy assuming historical prescription doses of 30 to 60 Gy and should be considered when correlating late effects with normal tissue dose.

  3. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    Normalization is a pre-processing stage for virtually any type of problem statement. It plays an especially important role in fields such as soft computing and cloud computing, where data must be scaled down or scaled up into a suitable range before being used in further stages. Many normalization techniques exist, namely Min-Max normalization, Z-score normalization, and Decimal scaling normalization. So by referring to these normalization techniques we are ...
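
    The three techniques this record names can be sketched directly (a minimal illustration, not the authors' code; the helper names below are hypothetical and NumPy is assumed):

```python
import numpy as np

def min_max_normalize(x, new_min=0.0, new_max=1.0):
    """Linearly rescale values into [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

def z_score_normalize(x):
    """Center to zero mean and scale to unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling_normalize(x):
    """Divide by the smallest power of 10 that brings every |value| below 1."""
    x = np.asarray(x, dtype=float)
    j = int(np.floor(np.log10(np.abs(x).max()))) + 1
    return x / (10.0 ** j)

data = [200, 300, 400, 600, 1000]
print(min_max_normalize(data))          # values 0, 0.125, 0.25, 0.5, 1
print(z_score_normalize(data).mean())   # approximately 0
print(decimal_scaling_normalize(data))  # values 0.02, 0.03, 0.04, 0.06, 0.1
```

    Min-Max rescaling is sensitive to outliers (a single extreme value compresses the rest of the range), which is one reason Z-score normalization is often preferred for heavy-tailed data.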

  4. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre Polynomials and ALFs for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.

  5. Slab-diffusion approximation from time-constant-like calculations

    International Nuclear Information System (INIS)

    Johnson, R.W.

    1976-12-01

    Two equations were derived which describe the quantity of fluid diffused from a slab as a function of time. One equation is applicable to the initial stage of the process; the other to the final stage. Accuracy is 0.2 percent at the one point where both approximations apply and where the accuracy of either approximation is poorest. Characterizing other rate processes might be facilitated by use of the concept of NOLOR (normal of the logarithm of the rate) and its time dependence

  6. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  7. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.

  8. Nuclear data processing, analysis, transformation and storage with Pade-approximants

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.

    1992-01-01

    A method is described for generating rational approximants of high order, with applications to neutron data handling. The problems considered are: approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing the error to be computed at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that experimental errors are independent and normally distributed; a method of simultaneous generation of several rational approximants with an identical set of poles; functionals other than LSM; two-dimensional approximation. (orig.)

  9. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with cases where the approximation numbers decay rapidly. A sharp estimate relating entropy and approximation numbers for noncompact maps is also given.

  10. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are then characterized by these axioms.

  11. An approximation for kanban controlled assembly systems

    NARCIS (Netherlands)

    Topan, E.; Avsar, Z.M.

    2011-01-01

    An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated

  12. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  13. Symmetries of nth-Order Approximate Stochastic Ordinary Differential Equations

    OpenAIRE

    Fredericks, E.; Mahomed, F. M.

    2012-01-01

    Symmetries of $n$th-order approximate stochastic ordinary differential equations (SODEs) are studied. The determining equations of these SODEs are derived in an Itô calculus context. These determining equations are not stochastic in nature. SODEs are normally used to model nature (e.g., earthquakes) or for testing the safety and reliability of models in construction engineering when looking at the impact of random perturbations.

  14. Group C∗-algebras without the completely bounded approximation property

    DEFF Research Database (Denmark)

    Haagerup, U.

    2016-01-01

    It is proved that: (1) The Fourier algebra A(G) of a simple Lie group G of real rank at least 2 with finite center does not have a multiplier bounded approximate unit. (2) The reduced C∗-algebra C∗_r of any lattice in a non-compact simple Lie group of real rank at least 2 with finite center does not have the completely bounded approximation property. Hence, the results obtained by de Cannière and the author for SOe(n, 1), n ≥ 2, and by Cowling for SU(n, 1) do not generalize to simple Lie groups of real rank at least 2. © 2016 Heldermann Verlag.

  15. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well-known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕ^T Mϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but that they are actually intrinsic properties of the pair of matrices K, M; that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but it has in turn interesting theoretical implications.
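
    The conventional normalization step described above (divide each mode by the square root of its modal mass) can be sketched for a small generalized eigenproblem; the 3-DOF K and M below are hypothetical, and SciPy is assumed:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 3-DOF stiffness and mass matrices (real, symmetric, positive definite).
K = np.array([[ 4., -2.,  0.],
              [-2.,  4., -2.],
              [ 0., -2.,  4.]])
M = np.diag([2., 1., 2.])

# Solve K*phi = lambda*M*phi for all modes.
lam, Phi = eigh(K, M)

# General recipe: divide each mode by sqrt of its modal mass mu = phi^T M phi.
# (eigh already returns M-orthonormal modes, so the division is a no-op here,
#  but it is the step needed for modes obtained by any other means.)
for i in range(Phi.shape[1]):
    mu = Phi[:, i] @ M @ Phi[:, i]
    Phi[:, i] /= np.sqrt(mu)

print(np.allclose(Phi.T @ M @ Phi, np.eye(3)))  # True: unit modal masses
```

    The paper's contribution is that individual components of these normalized modes can be extracted without computing any complete mode, via residues of the resolvent of the pair (K, M).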

  16. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions, with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach, computed with the projectile-target optical potential, efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  17. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  18. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

    A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Padé approximation, is suggested. It is shown that the specific properties of the Padé approximation in polar zones are extremely favourable analytically, essentially extending the convergence range and increasing its rate as compared with polynomial approximation. The Padé approximation is a particularly natural instrument for resonance curve processing, as the resonances correspond to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions leads to an approximately twentyfold reduction of the stored numerical information as compared with point-by-point tabulation at the same accuracy

  19. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting

  20. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  1. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  2. Bounded-Degree Approximations of Stochastic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.

  3. Cosmological applications of Padé approximant

    International Nuclear Information System (INIS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function could be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two contexts. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation
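
    The advantage over a truncated Taylor series that this abstract mentions is easy to see numerically; a sketch using SciPy's `pade` helper on exp(x) (an illustrative function, not the paper's luminosity-distance application):

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) through order 4.
taylor = [1.0, 1.0, 1/2, 1/6, 1/24]

# Build the [2/2] Pade approximant from those same five coefficients.
p, q = pade(taylor, 2)

x = 1.0
pade_err = abs(p(x) / q(x) - np.e)
taylor_err = abs(np.polyval(taylor[::-1], x) - np.e)
print(pade_err, taylor_err)  # the Pade error is the smaller of the two
```

    With the same information (five series coefficients), the rational form tracks the function further from the expansion point, which is the behavior exploited for the luminosity distance and EoS parameterizations.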

  4. Cosmological applications of Padé approximant

    Science.gov (United States)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function could be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant could be an useful tool in cosmology, and it deserves further investigation.

  5. Errors due to the cylindrical cell approximation in lattice calculations

    Energy Technology Data Exchange (ETDEWEB)

    Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1960-06-15

    It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)

  6. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  7. Uniform analytic approximation of Wigner rotation matrices

    Science.gov (United States)

    Hoffmann, Scott E.

    2018-02-01

    We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements d^j_{m1 m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.

  8. Exact and approximate multiple diffraction calculations

    International Nuclear Information System (INIS)

    Alexander, Y.; Wallace, S.J.; Sparrow, D.A.

    1976-08-01

    A three-body potential scattering problem is solved in the fixed scatterer model exactly and approximately to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to Glauber approximation, eikonal-expansion results and a noneikonal approximation

  9. Approximate dynamic programming solving the curses of dimensionality

    CERN Document Server

    Powell, Warren B

    2007-01-01

    Warren B. Powell, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored over 100 refereed publications on stochastic optimization, approximate dynamic programming, and dynamic resource management.

  10. Bent approximations to synchrotron radiation optics

    International Nuclear Information System (INIS)

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors

  11. Local density approximations for relativistic exchange energies

    International Nuclear Information System (INIS)

    MacDonald, A.H.

    1986-01-01

    The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented

  12. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  13. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.

  14. Diagonal Pade approximations for initial value problems

    International Nuclear Information System (INIS)

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
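
    As an illustration of the idea (not the authors' factored implementation), the lowest diagonal Padé approximant of the matrix exponential is the familiar Cayley/Crank-Nicolson step; SciPy is assumed for the exact propagator:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical skew-symmetric generator: the exact evolution is a rotation.
A = np.array([[0., 1.],
              [-1., 0.]])
dt = 0.1
I = np.eye(2)

# (1,1) diagonal Pade approximant of exp(dt*A):
#   exp(dt*A) ~ (I - dt*A/2)^(-1) (I + dt*A/2)
step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)

err = np.max(np.abs(step - expm(dt * A)))
print(err)  # local error of order dt**3
```

    Higher diagonal Padé approximants sharpen the accuracy further; the paper's novelty is factoring their numerator and denominator polynomials explicitly.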

  15. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... In this paper, we propose a definition of an approximation property, called the metric invariant translation approximation property, for a countable discrete metric space. Moreover, we use ...

  16. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2010 Mathematics Subject Classification. 46L07. 1. Introduction. Given a countable discrete group G, some nice approximation properties for the reduced C∗-algebras C∗_r(G) can give us the approximation properties of G. For example, Lance [7] proved that the nuclearity of C∗_r(G) is equivalent to the amenability of G; ...

  17. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-01-01

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We can show that the famous Kalman update formula is a particular case of this update.

  18. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods

  19. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding edges of polygons is a version of the art gallery problem. The goal is finding the minimum number of guards to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms only for simple polygons. In this paper we present two approximation algorithms for guarding ...

  20. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.

    2008-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  1. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.; Holub, J.; Zdárek, J.

    2006-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  2. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
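
    A minimal sketch of spline approximation (as opposed to interpolation) of noisy curve samples, using SciPy's smoothing parameter s; the data here are synthetic, not from laser scanning:

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)  # noisy samples of a smooth curve

# Cubic smoothing spline: s > 0 asks for approximation, not interpolation,
# by bounding the sum of squared residuals to the data.
tck = splrep(x, y, k=3, s=0.2)
y_fit = splev(x, tck)

rms = np.sqrt(np.mean((y_fit - np.sin(x)) ** 2))
print(rms)  # the fit stays close to the underlying curve, not to every noisy point
```

    With s = 0 the spline would interpolate every noisy sample; increasing s trades fidelity to the data for smoothness, which is exactly the approximation-versus-interpolation choice discussed above.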

  3. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...

  4. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
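
    The quirk is easy to reproduce: the form n ln n − n usually quoted in derivations carries a sizeable absolute error, while adding the ½ ln(2πn) term makes the approximation excellent even for modest n. A quick check:

```python
import math

def ln_factorial(n):
    return math.lgamma(n + 1)  # exact ln(n!)

for n in (10, 100, 1000):
    naive = n * math.log(n) - n                       # ln n! ~ n ln n - n
    better = naive + 0.5 * math.log(2 * math.pi * n)  # with the sqrt(2*pi*n) term
    print(n, ln_factorial(n) - naive, ln_factorial(n) - better)
```

    Note that the naive form's absolute error grows with n (roughly as ½ ln(2πn)) even though its relative error shrinks, which is precisely the kind of distinction that can derail a derivation if applied without care.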

  5. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way we avoid Monte Carlo sampling and the associated sampling error. We also show that the famous Kalman update formula is a particular case of this update.
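
    The claim that the Kalman update is a special case of the Bayesian update can be checked directly in the scalar linear-Gaussian setting (a generic sketch, not code from this repository): the Kalman-gain form and the precision-weighted conjugate posterior coincide.

```python
def kalman_update(mu, var, y, r):
    """Kalman form: gain K = var / (var + r) for a direct noisy observation y."""
    k = var / (var + r)
    return mu + k * (y - mu), (1 - k) * var

def bayes_update(mu, var, y, r):
    """Conjugate Gaussian posterior: precision-weighted combination of
    prior N(mu, var) and likelihood N(y, r)."""
    post_prec = 1 / var + 1 / r
    post_var = 1 / post_prec
    post_mu = post_var * (mu / var + y / r)
    return post_mu, post_var
```

    With prior N(0, 4) and an observation y = 2 of noise variance 1, both forms give the posterior N(1.6, 0.8).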

  6. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in the literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as with respect to a quantitative criterion. For the aggregate claims two models are

  7. Improved Dutch Roll Approximation for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Liang-Liang Yin

    2014-06-01

    Full Text Available An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which causes large errors in the conventional approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show that the approximations work well and the error is below 10%.

  8. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  9. Approximate reflection coefficients for a thin VTI layer

    KAUST Repository

    Hao, Qi

    2017-09-18

    We present an approximate method to derive simple expressions for the reflection coefficients of P- and SV-waves for a thin transversely isotropic layer with a vertical symmetry axis (VTI) embedded in a homogeneous VTI background. The layer thickness is assumed to be much smaller than the wavelengths of the P- and SV-waves inside it. The exact reflection and transmission coefficients are derived by the propagator matrix method. In the case of normal incidence, the exact reflection and transmission coefficients are expressed in terms of the impedances of vertically propagating P- and S-waves. For subcritical incidence, the approximate reflection coefficients are expressed in terms of the contrast in the VTI parameters between the layer and the background. Numerical examples are designed to analyze the reflection coefficients at normal and oblique incidence, and to investigate the influence of transverse isotropy on the reflection coefficients. Despite introducing numerical errors, the approximate formulae are sufficiently simple to qualitatively analyze the variation of the reflection coefficients with the angle of incidence.

  10. Major Accidents (Gray Swans) Likelihood Modeling Using Accident Precursors and Approximate Reasoning.

    Science.gov (United States)

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2015-07-01

    Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents is not as well established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of relevant data, since major accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures, based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.

  11. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \\(k\\)-nearest neighbors regression (\\(k\\)-NNR), and more generally, local polynomial kernel regression. Unlike \\(k\\)-NNR, however, SPARROW can adapt the number of regressors to use based...

  12. Conditional Density Approximations with Mixtures of Polynomials

    DEFF Research Database (Denmark)

    Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre

    2015-01-01

    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...

  13. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...

  14. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    Full Text Available The approximation of a function f∈C[a,b] by Bernstein polynomials is well-known. It is based on the binomial distribution. O. Szász has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szász' result to the case in which the approximating operator is \(\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right)\). The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
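
    The classical Szász operator, the α = β = 1 case of the generalization discussed in this record, is S_u f(x) = e^{-ux} Σ_{k≥0} ((ux)^k / k!) f(k/u), and is easy to evaluate numerically. In the sketch below the truncation limit K is an implementation convenience, not part of the theory:

```python
import math

def szasz(f, x, u, K=300):
    """Truncated Szász operator: e^{-ux} * sum_{k=0}^{K} (ux)^k / k! * f(k/u).
    Terms are built incrementally to avoid overflow in (ux)^k and k!."""
    total = 0.0
    term = math.exp(-u * x)  # weight of the k = 0 term
    for k in range(K + 1):
        total += term * f(k / u)
        term *= u * x / (k + 1)  # Poisson recurrence for the next weight
    return total
```

    Since the weights are Poisson(ux) probabilities, the operator reproduces linear functions exactly and gives S_u(t^2)(x) = x^2 + x/u, so the error for smooth f decays like 1/u.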

  15. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  16. Rational Approximations of the Inverse Gaussian Function.

    Science.gov (United States)

    Byars, Jackson A.; Roscoe, John T.

    There are at least two situations in which the behavioral scientist wishes to transform uniformly distributed data into normally distributed data: (1) In studies of sampling distributions where uniformly distributed pseudo-random numbers are generated by a computer but normally distributed numbers are desired; and (2) In measurement applications…
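
    A concrete example of such a rational approximation is the classic Hastings formula (Abramowitz and Stegun, eq. 26.2.23; a standard textbook formula, not necessarily the one evaluated in the cited paper). It maps a uniform p ∈ (0, 1) to an approximate standard normal deviate with absolute error below 4.5e-4:

```python
import math

def inv_norm_cdf(p):
    """Rational approximation to the inverse standard normal CDF
    (Abramowitz & Stegun 26.2.23; |error| < 4.5e-4)."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie in (0, 1)")
    q = min(p, 1.0 - p)                 # work in the tail
    t = math.sqrt(-2.0 * math.log(q))
    num = 2.515517 + t * (0.802853 + t * 0.010328)
    den = 1.0 + t * (1.432788 + t * (0.189269 + t * 0.001308))
    z = t - num / den
    return -z if p < 0.5 else z
```

    For example inv_norm_cdf(0.975) ≈ 1.9604, against the true value 1.95996, which is exactly the uniform-to-normal transformation the two situations above require.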

  17. Baby Poop: What's Normal?

    Science.gov (United States)

    ... I'm breast-feeding my newborn and her bowel movements are yellow and mushy. Is this normal for baby poop? Answers from Jay L. Hoecker, M.D. Yellow, mushy bowel movements are perfectly normal for breast-fed babies. Still, ...

  18. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren

    2017-01-01

    , obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose

  19. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  20. Nonlinear Ritz approximation for Fredholm functionals

    Directory of Open Access Journals (Sweden)

    Mudhir A. Abdul Hussain

    2015-11-01

    Full Text Available In this article we use the modified Lyapunov-Schmidt reduction to find a nonlinear Ritz approximation for a Fredholm functional. This functional corresponds to a nonlinear Fredholm operator defined by a nonlinear fourth-order differential equation.

  1. Euclidean shortest paths exact or approximate algorithms

    CERN Document Server

    Li, Fajie

    2014-01-01

    This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.

  2. Square well approximation to the optical potential

    International Nuclear Information System (INIS)

    Jain, A.K.; Gupta, M.C.; Marwadi, P.R.

    1976-01-01

    Approximations for obtaining T-matrix elements for a sum of several potentials in terms of the T-matrices for the individual potentials are studied. Based on model calculations for the S-wave for a sum of two separable non-local potentials with Yukawa-type form factors and a sum of two delta-function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all energy regions by the sum of the T-matrices for the individual potentials. Based on this, an approximate method is presented for finding the T-matrix of any local potential by approximating it by a sum of a suitable number of square wells. This provides an interesting way to calculate the T-matrix for an arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to Woods-Saxon potentials and good agreement with exact results is found. (author)

  3. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The purpose of this work is to determine an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator. These approximations, for the case of the narrow resonances, were substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared to those generated with the reference method, and demonstrate that the analytical approximation gives precise results for the adjoint neutron flux in the narrow resonances. (author)

  4. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...
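
    As a toy instance of the machinery the book covers (the choice of distribution here is mine, not the book's), consider the density of the sample mean of n i.i.d. Exp(1) variables, with cumulant generating function K(t) = -ln(1 - t). The saddlepoint equation K'(s) = x̄ gives s = 1 - 1/x̄, and for the Gamma family the approximation is exact up to Stirling's correction of the normalizing constant:

```python
import math

def saddlepoint_density_mean_exp(xbar, n):
    """Saddlepoint approximation to the density of the mean of n iid Exp(1):
    f_hat(x) = sqrt(n / (2*pi*K''(s))) * exp(n*(K(s) - s*x)), K(t) = -ln(1-t)."""
    s = 1.0 - 1.0 / xbar           # solves K'(s) = 1/(1-s) = xbar
    K = -math.log(1.0 - s)         # = log(xbar)
    K2 = 1.0 / (1.0 - s) ** 2      # = xbar**2
    return math.sqrt(n / (2.0 * math.pi * K2)) * math.exp(n * (K - s * xbar))

def exact_density_mean_exp(xbar, n):
    """Exact density: the mean of n Exp(1) variables is Gamma(n, rate=n)."""
    return math.exp(n * math.log(n) + (n - 1) * math.log(xbar)
                    - n * xbar - math.lgamma(n))
```

    The ratio of the approximate to the exact density is constant in x̄ and equals about 1 + 1/(12n), roughly 0.8% for n = 10, which is the sense in which the saddlepoint approximation captures tail behavior far better than a normal approximation.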

  5. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in the approximation theory. The articles of this collection were originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  6. Pion-nucleus cross sections approximation

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Polanski, A.; Sosnin, A.N.

    1990-01-01

    An analytical approximation of pion-nucleus elastic and inelastic interaction cross-sections is suggested, which could be applied in the energy range exceeding several dozens of MeV for nuclei heavier than beryllium. 3 refs.; 4 tabs.

  7. APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION

    Directory of Open Access Journals (Sweden)

    Mădălina Roxana Buneci

    2016-12-01

    Full Text Available The purpose of this paper is to provide a set of Maple procedures to construct approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere.

  8. Steepest descent approximations for accretive operator equations

    International Nuclear Information System (INIS)

    Chidume, C.E.

    1993-03-01

    A necessary and sufficient condition is established for the strong convergence of the steepest descent approximation to a solution of equations involving quasi-accretive operators defined on a uniformly smooth Banach space. (author). 49 refs

  9. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with the help of a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
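
    The core idea, replacing a large operator by thin factors, can be sketched with generic rank-1 power iteration (this is not the row/column selection algorithm of the paper, just the simplest low-rank mechanism):

```python
def rank1_approx(A, iters=200):
    """Best rank-1 approximation of a matrix (list of lists) by alternating
    power iteration: returns (sigma, u, v) with A ≈ sigma * u v^T."""
    m, n = len(A), len(A[0])
    v = [1.0 / n ** 0.5] * n
    for _ in range(iters):
        # u ∝ A v, then v ∝ A^T u, each normalized to unit length
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        nv = sum(x * x for x in v) ** 0.5
        v = [x / nv for x in v]
    sigma = sum(u[i] * A[i][j] * v[j] for i in range(m) for j in range(n))
    return sigma, u, v
```

    For a matrix of exact rank 1 the factorization reconstructs it exactly; for a general propagator matrix, keeping a few such triplets instead of the full matrix is what makes the extrapolation cheap.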

  10. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since the approach's introduction in population genetics about ten years ago.

  11. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and encourage their adoption in scaling similar graph algorithms.
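
    To make the loop-perforation idea concrete (an illustrative sketch of the general technique, not the authors' implementation), here is a PageRank power iteration in which a perforated variant updates only alternating halves of the nodes on each sweep. Because the iteration is a contraction, the stale-value variant still converges to the same fixed point as long as every node is updated infinitely often:

```python
def pagerank(adj, sweeps=100, d=0.85, perforate=False):
    """Power iteration for PageRank on an adjacency list {node: [out-neighbors]}.
    Assumes every node has at least one out-link. With perforate=True, each
    sweep writes only half of the nodes (loop perforation), trading per-sweep
    work for slower convergence."""
    nodes = sorted(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for s in range(sweeps):
        incoming = {v: 0.0 for v in nodes}
        for u in nodes:
            share = rank[u] / len(adj[u])
            for v in adj[u]:
                incoming[v] += share
        for i, v in enumerate(nodes):
            if perforate and i % 2 != s % 2:
                continue  # perforated: skip this node on this sweep
            rank[v] = (1 - d) / n + d * incoming[v]
    return rank
```

    On a small cycle-like graph the perforated variant, given roughly twice the sweeps, lands within numerical tolerance of the exact ranks, which is the quality-for-performance trade the abstract describes.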

  12. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

  13. Approximative solutions of stochastic optimization problem

    Czech Academy of Sciences Publication Activity Database

    Lachout, Petr

    2010-01-01

    Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf

  14. Lattice quantum chromodynamics with approximately chiral fermions

    Energy Technology Data Exchange (ETDEWEB)

    Hierl, Dieter

    2008-05-15

    In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  15. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  16. Stochastic quantization and mean field approximation

    International Nuclear Information System (INIS)

    Jengo, R.; Parga, N.

    1983-09-01

    In the context of the stochastic quantization we propose factorized approximate solutions for the Fokker-Planck equation for the XY and Z_N spin systems in D dimensions. The resulting differential equation for a factor can be solved and it is found to give in the limit of t→∞ the mean field or, in the more general case, the Bethe-Peierls approximation. (author)

  17. Polynomial approximation of functions in Sobolev spaces

    International Nuclear Information System (INIS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces

  18. Magnus approximation in the adiabatic picture

    International Nuclear Information System (INIS)

    Klarsfeld, S.; Oteo, J.A.

    1991-01-01

    A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs

  19. Lattice quantum chromodynamics with approximately chiral fermions

    International Nuclear Information System (INIS)

    Hierl, Dieter

    2008-05-01

    In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  20. Approximating centrality in evolving graphs: toward sublinearity

    Science.gov (United States)

    Priest, Benjamin W.; Cybenko, George

    2017-05-01

    The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
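
    The degree-centrality part of that pipeline can be illustrated with a plain CountSketch over an edge stream (a simplified, self-contained rendition of the technique the paper applies; the depth, width, and hashing choices here are arbitrary):

```python
import random

class CountSketch:
    """CountSketch frequency estimator: depth independent rows, each with a
    bucket hash and a +/-1 sign hash; estimates are medians over rows."""
    def __init__(self, depth=7, width=997, seed=0):
        rng = random.Random(seed)
        self.depth, self.width = depth, width
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _hashes(self, key, row):
        h = hash((self.salts[row], key))
        return h % self.width, 1 if (h >> 32) & 1 else -1

    def add(self, key, count=1):
        for r in range(self.depth):
            b, s = self._hashes(key, r)
            self.table[r][b] += s * count

    def estimate(self, key):
        ests = sorted(s * self.table[r][b]
                      for r in range(self.depth)
                      for b, s in [self._hashes(key, r)])
        return ests[len(ests) // 2]
```

    Streaming the edges and calling add() once per endpoint yields degree estimates in O(depth × width) memory, independent of the number of edges, which is the sublinearity the paper is after.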

  1. Approximating perfection a mathematician's journey into the world of mechanics

    CERN Document Server

    Lebedev, Leonid P

    2004-01-01

    This is a book for those who enjoy thinking about how and why Nature can be described using mathematical tools. Approximating Perfection considers the background behind mechanics as well as the mathematical ideas that play key roles in mechanical applications. Concentrating on the models of applied mechanics, the book engages the reader in the types of nuts-and-bolts considerations that are normally avoided in formal engineering courses: how and why models remain imperfect, and the factors that motivated their development. The opening chapter reviews and reconsiders the basics of c

  2. Making nuclear 'normal'

    International Nuclear Information System (INIS)

    Haehlen, Peter; Elmiger, Bruno

    2000-01-01

    The mechanics of the Swiss NPPs' 'come and see' programme 1995-1999 were illustrated in our contributions to all PIME workshops since 1996. Now, after four annual 'waves', all the country has been covered by the NPPs' invitation to dialogue. This makes PIME 2000 the right time to shed some light on one particular objective of this initiative: making nuclear 'normal'. The principal aim of the 'come and see' programme, namely to give the Swiss NPPs 'a voice of their own' by the end of the nuclear moratorium 1990-2000, has clearly been attained and was commented on during earlier PIMEs. It is, however, equally important that Swiss nuclear energy not only made progress in terms of public 'presence', but also in terms of being perceived as a normal part of industry, as a normal branch of the economy. The message that Swiss nuclear energy is nothing but a normal business involving normal people, was stressed by several components of the multi-prong campaign: - The speakers in the TV ads were real - 'normal' - visitors' guides and not actors; - The testimonials in the print ads were all real NPP visitors - 'normal' people - and not models; - The mailings inviting a very large number of associations to 'come and see' activated a typical channel of 'normal' Swiss social life; - Spending money on ads (a new activity for Swiss NPPs) appears to have resulted in being perceived by the media as a normal branch of the economy. Today we feel that the 'normality' message has well been received by the media. In the controversy dealing with antinuclear arguments brought forward by environmental organisations journalists nowadays as a rule give nuclear energy a voice - a normal right to be heard. As in a 'normal' controversy, the media again actively ask themselves questions about specific antinuclear claims, much more than before 1990 when the moratorium started. The result is that in many cases such arguments are discarded by journalists, because they are, e.g., found to be

  3. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
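
    The benefit of trajectory averaging is visible even in the scalar Robbins-Monro setting (a generic illustration, not the SAMC algorithm itself): with slowly decaying steps a_n = n^{-0.7}, the raw iterate keeps fluctuating while the running average of the trajectory settles near the root.

```python
import random

def robbins_monro(noisy_field, theta0=0.0, n_steps=5000, seed=1):
    """Robbins-Monro iteration theta += a_n * Y_n with trajectory (Polyak-
    Ruppert) averaging; noisy_field(theta, rng) is a noisy observation of a
    mean field h(theta) whose root is sought."""
    rng = random.Random(seed)
    theta = theta0
    avg = 0.0
    for n in range(1, n_steps + 1):
        y = noisy_field(theta, rng)
        a = n ** -0.7          # step sizes decaying slower than 1/n
        theta += a * y
        avg += (theta - avg) / n   # running mean of the trajectory
    return theta, avg
```

    For instance, estimating the mean of Uniform(0, 1) with noisy observations rng.random() - theta drives both values toward 0.5, with the averaged estimate markedly smoother than the raw iterate.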

  4. 'LTE-diffusion approximation' for arc calculations

    International Nuclear Information System (INIS)

    Lowke, J J; Tanaka, M

    2006-01-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  5. Semiclassical initial value approximation for Green's function.

    Science.gov (United States)

    Kay, Kenneth G

    2010-06-28

    A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.

  6. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.

  7. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
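
    The single-level building block the multilevel method accelerates is an ordinary least-squares projection onto a polynomial space from random samples. The sketch below is a plain unweighted version for orientation; the sample size, degree, and target function are arbitrary, and the paper's optimal sampling distribution and weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Least-squares projection of f onto polynomials of degree <= p,
# using uniform random sample locations (unweighted, single level).
f = lambda t: np.exp(t)
p = 6
x = rng.uniform(-1.0, 1.0, 200)
V = np.vander(x, p + 1)                      # Vandermonde design matrix
coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)

# Maximum error of the polynomial approximant on a test grid.
t = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(np.polyval(coef, t) - f(t)))
```

    In the multilevel setting, each sample f(x) would additionally carry a discretization error, and approximations computed at several accuracy levels are combined to match the single-level accuracy at lower total cost.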

  8. Normal Pressure Hydrocephalus

    Science.gov (United States)

    ... improves the chance of a good recovery. Without treatment, symptoms may worsen and cause death. What research is being done? The NINDS conducts and supports research on neurological disorders, including normal pressure hydrocephalus. Research on disorders such ...

  9. Normality in Analytical Psychology

    Science.gov (United States)

    Myers, Steve

    2013-01-01

    Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity. PMID:25379262

  10. Normal pressure hydrocephalus

    Science.gov (United States)

    Hydrocephalus - occult; Hydrocephalus - idiopathic; Hydrocephalus - adult; Hydrocephalus - communicating; Dementia - hydrocephalus; NPH ... Ferri FF. Normal pressure hydrocephalus. In: Ferri FF, ed. ... Elsevier; 2016:chap 648. Rosenberg GA. Brain edema and disorders ...

  11. Normal Functioning Family

    Science.gov (United States)

    ... Spread the Word Shop AAP Find a Pediatrician Family Life Medical Home Family Dynamics Adoption & Foster Care ... Español Text Size Email Print Share Normal Functioning Family Page Content Article Body Is there any way ...

  12. Normal growth and development

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/article/002456.htm Normal growth and development To use the sharing features on this page, please enable JavaScript. A child's growth and development can be divided into four periods: ...

  13. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
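
    A much-simplified illustration of the linear-algebraic viewpoint (not the paper's algorithms): if the hidden-layer weights of a single-hidden-layer tanh network are held fixed, the output weights that match a batch of smooth input-output data satisfy a linear system and can be found exactly by least squares. All sizes and the target function below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Batch of smooth training data: y = sin(x) on [0, pi].
x = np.linspace(0.0, np.pi, 40)[:, None]
y = np.sin(x).ravel()

# One hidden layer of tanh units with fixed (random) input weights; the
# output weights then satisfy a *linear* system, solvable by least squares.
W = rng.normal(size=(1, 25))
b = rng.normal(size=25)
H = np.tanh(x @ W + b)                       # hidden-layer activations
v, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights via linear algebra

max_err = np.max(np.abs(H @ v - y))          # exact-matching quality on the batch
```

    The paper goes further, treating the input weights themselves through cascaded linear weight equations, but the reduction of training to linear systems is the common thread.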

  14. Modified semiclassical approximation for trapped Bose gases

    International Nuclear Information System (INIS)

    Yukalov, V.I.

    2005-01-01

    A generalization of the semiclassical approximation is suggested allowing for an essential extension of its region of applicability. In particular, it becomes possible to describe Bose-Einstein condensation of a trapped gas in low-dimensional traps and in traps of low confining dimensions, for which the standard semiclassical approximation is not applicable. The result of the modified approach is shown to coincide with purely quantum-mechanical calculations for harmonic traps, including the one-dimensional harmonic trap. The advantage of the semiclassical approximation is in its simplicity and generality. Power-law potentials of arbitrary powers are considered. The effective thermodynamic limit is defined for any confining dimension. The behavior of the specific heat, isothermal compressibility, and density fluctuations is analyzed, with an emphasis on low confining dimensions, where the usual semiclassical method fails. The peculiarities of the thermodynamic characteristics in the effective thermodynamic limit are discussed

  15. The binary collision approximation: Background and introduction

    International Nuclear Information System (INIS)

    Robinson, M.T.

    1992-08-01

    The binary collision approximation (BCA) has long been used in computer simulations of the interactions of energetic atoms with solid targets, as well as being the basis of most analytical theory in this area. While mainly a high-energy approximation, the BCA retains qualitative significance at low energies and, with proper formulation, gives useful quantitative information as well. Moreover, computer simulations based on the BCA can achieve good statistics in many situations where those based on full classical dynamical models require the most advanced computer hardware or are even impracticable. The foundations of the BCA in classical scattering are reviewed, including methods of evaluating the scattering integrals, interaction potentials, and electron excitation effects. The explicit evaluation of time at significant points on particle trajectories is discussed, as are scheduling algorithms for ordering the collisions in a developing cascade. An approximate treatment of nearly simultaneous collisions is outlined and the searching algorithms used in MARLOWE are presented

  16. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  17. Ancilla-approximable quantum state transformations

    Energy Technology Data Exchange (ETDEWEB)

    Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.

  18. Ancilla-approximable quantum state transformations

    International Nuclear Information System (INIS)

    Blass, Andreas; Gurevich, Yuri

    2015-01-01

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation

  19. On transparent potentials: a Born approximation study

    International Nuclear Information System (INIS)

    Coudray, C.

    1980-01-01

    In the frame of the scattering inverse problem at fixed energy, a class of potentials transparent in Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Amongst them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in Born approximation are then stated. And bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy, and for large values of the angular momentum, are found to be independent on the energy

  20. The adiabatic approximation in multichannel scattering

    International Nuclear Information System (INIS)

    Schulte, A.M.

    1978-01-01

    Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus the Coriolis coupling is neglected and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)

  1. Minimal entropy approximation for cellular automata

    International Nuclear Information System (INIS)

    Fukś, Henryk

    2014-01-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)

  2. Resummation of perturbative QCD by Padé approximants

    International Nuclear Information System (INIS)

    Gardi, E.

    1997-01-01

    In this lecture I present some of the new developments concerning the use of Padé approximants (PAs) for resumming perturbative series in QCD. It is shown that PAs tend to reduce the renormalization scale and scheme dependence as compared to truncated series. In particular it is proven that in the limit where the β function is dominated by the 1-loop contribution, there is an exact symmetry that guarantees invariance of diagonal PAs under changing the renormalization scale. In addition it is shown that in the large-β₀ approximation diagonal PAs can be interpreted as a systematic method for approximating the flow of momentum in Feynman diagrams. This corresponds to a new multiple scale generalization of the Brodsky-Lepage-Mackenzie (BLM) method to higher orders. I illustrate the method with the Bjorken sum rule and the vacuum polarization function. (author)
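
    For readers unfamiliar with the construction, a diagonal [m/m] Padé approximant is fixed by the first 2m+1 Taylor coefficients: the denominator coefficients solve a small linear system, and the numerator follows by matching the low-order terms. A generic sketch (exp(x) is just a convenient test function, not an example from the lecture):

```python
import math
import numpy as np

def pade(coeffs, m):
    """Diagonal [m/m] Pade approximant from the first 2m+1 Taylor coefficients.
    Returns (a, b): numerator and denominator coefficients, with b[0] = 1."""
    c = np.asarray(coeffs, dtype=float)
    # Linear system for the denominator coefficients b[1..m]:
    # sum_j c[m+k-j] * b[j] = -c[m+k] for k = 1..m.
    A = np.array([[c[m + k - j] for j in range(1, m + 1)] for k in range(1, m + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:2 * m + 1])))
    # Numerator by matching low-order terms: a[i] = sum_j c[i-j] * b[j].
    a = np.array([sum(c[i - j] * b[j] for j in range(min(i, m) + 1))
                  for i in range(m + 1)])
    return a, b

# [2/2] approximant of exp(x) built from its Taylor series.
c = [1.0 / math.factorial(k) for k in range(5)]
a, b = pade(c, 2)
approx = np.polyval(a[::-1], 0.5) / np.polyval(b[::-1], 0.5)
```

    For exp this reproduces the classical (1 + x/2 + x²/12)/(1 − x/2 + x²/12), already accurate to a few parts in 10⁵ at x = 0.5.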

  3. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
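
    A toy NumPy illustration of the observation driving this work (not the wavelet construction itself): for the 1D Laplacian, the dense inverse has smoothly varying entries, so even crudely thresholding its small entries yields a sparse matrix that still acts as a good approximate inverse. The size and threshold below are arbitrary.

```python
import numpy as np

n = 64
# 1D Laplacian: a minimal elliptic PDE model problem.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ainv = np.linalg.inv(A)          # dense, but its entries vary smoothly

# Crude sparse approximate inverse: drop the smallest entries of A^{-1}.
M = np.where(np.abs(Ainv) > 0.01 * np.abs(Ainv).max(), Ainv, 0.0)

cond_A = np.linalg.cond(A)       # grows like O(n^2) for the Laplacian
cond_MA = np.linalg.cond(M @ A)  # preconditioned system is far better conditioned
```

    Wavelet compression replaces the naive thresholding above with a change of basis in which the smooth inverse compresses much more effectively.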

  4. Approximating the r-process on earth with thermonuclear explosions

    International Nuclear Information System (INIS)

    Becker, S.A.

    1992-01-01

    The astrophysical r-process can be approximately simulated in certain types of thermonuclear explosions. Between 1952 and 1969, twenty-three nuclear tests were fielded by the United States that had as one of their objectives the production of heavy transuranic elements. Of these tests, fifteen were at least partially successful. Some of these shots were conducted under the Plowshare Peaceful Nuclear Explosion Program as scientific research experiments. A review of the program, the target nuclei used and the heavy-element yields achieved will be presented, as well as a discussion of plans for a new experiment in a future nuclear test.

  5. Perturbation expansions generated by an approximate propagator

    International Nuclear Information System (INIS)

    Znojil, M.

    1987-01-01

    Starting from a knowledge of an approximate propagator R at some trial energy guess E_0, a new perturbative prescription for a p-plet of bound states and their energies is proposed. It generalizes the Rayleigh-Schroedinger (RS) degenerate perturbation theory to nondiagonal operators R (eliminating the RS need for their diagonalisation) and defines an approximate Hamiltonian T by mere inversion. The deviation V of T from the exact Hamiltonian H is assumed small only after subtraction of a further auxiliary Hartree-Fock-like separable ''selfconsistent'' potential U of rank p. The convergence is illustrated numerically on the anharmonic oscillator example.

  6. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  7. Unambiguous results from variational matrix Padé approximants

    International Nuclear Information System (INIS)

    Pindor, Maciej.

    1979-10-01

    Variational Matrix Padé Approximants are studied as a nonlinear variational problem. It is shown that although a stationary value of the Schwinger functional is a stationary value of the VMPA, the latter also has another stationary value. It is therefore proposed that instead of looking for a stationary point of the VMPA, one minimizes some non-negative functional and then calculates the VMPA at the point where the former attains its absolute minimum. This approach, which we call the Method of the Variational Gradient (MVG), gives unambiguous results and is also shown to minimize a distance between the approximate and the exact stationary values of the Schwinger functional.

  8. Faster and Simpler Approximation of Stable Matchings

    Directory of Open Access Journals (Sweden)

    Katarzyna Paluch

    2014-04-01

    Full Text Available We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previous most well-known algorithm, by McDermid, has the same approximation ratio but runs in O(n^(3/2) m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give the extension of the algorithm for computing stable many-to-many matchings.
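
    The 3/2-approximation for instances with ties is beyond a short sketch, but the baseline it refines, Gale-Shapley deferred acceptance, which computes an exactly stable matching for strict complete preference lists in O(m) time, fits in a few lines. The four-person instance at the end is made up purely for illustration.

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: returns a stable matching (woman -> man) for
    strict complete preference lists, in time linear in the total list length."""
    # rank[w][m] = position of man m in woman w's list, for O(1) comparisons.
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged_to = {}                           # woman -> current partner
    free_men = deque(men_prefs)
    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                 # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])    # w trades up; old partner is free
            engaged_to[w] = m
        else:
            free_men.append(m)                # w rejects m; he proposes again later
    return engaged_to

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
match = gale_shapley(men, women)
```

    With ties and incomplete lists, finding a maximum-cardinality weakly stable matching becomes NP-hard, which is where approximation algorithms such as the one in this paper enter.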

  9. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating the distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
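
    The property the method rests on is elementary: a hypoexponential random variable (a sum of independent exponential phases with rates λᵢ) always has coefficient of variation below unity, since its variance Σ 1/λᵢ² is smaller than the square of its mean Σ 1/λᵢ. A small sketch of the two-moment bookkeeping (the rates are arbitrary examples, not from the paper):

```python
def hypoexp_moments(rates):
    """Mean and coefficient of variation of a hypoexponential distribution,
    i.e. a sum of independent exponential phases with the given rates."""
    mean = sum(1.0 / r for r in rates)
    var = sum(1.0 / r ** 2 for r in rates)
    cv = var ** 0.5 / mean
    return mean, cv

# Two phases with rates 1 and 3: mean = 4/3, cv = sqrt(10)/4 < 1.
mean, cv = hypoexp_moments([1.0, 3.0])
```

    Fitting a distribution to given first two moments then amounts to choosing phase rates that reproduce the target mean and coefficient of variation.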

  10. On the dipole approximation with error estimates

    Science.gov (United States)

    Boßmann, Lea; Grummt, Robert; Kolb, Martin

    2018-01-01

    The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.
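
    The heuristic described in the abstract can be stated in one line: for a field of wavelength λ acting on an atom of size a, the spatial phase factor of the external field is expanded and truncated,

```latex
e^{i\mathbf{k}\cdot\mathbf{x}}
  = 1 + i\,\mathbf{k}\cdot\mathbf{x} + O\!\big((\mathbf{k}\cdot\mathbf{x})^{2}\big)
  \approx 1,
\qquad
|\mathbf{k}\cdot\mathbf{x}| \lesssim \frac{2\pi a}{\lambda} \ll 1 .
```

    Keeping only the leading term removes the spatial variation of the field over the atom; the paper makes this heuristic rigorous by estimating the error of exactly this truncation in the limit λ/a → ∞.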

  11. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    Science.gov (United States)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  12. Hardness of approximation for strip packing

    DEFF Research Database (Denmark)

    Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin

    2017-01-01

    Strip packing is a classical packing problem, where the goal is to pack a set of rectangular objects into a strip of a given width, while minimizing the total height of the packing. The problem has multiple applications, for example, in scheduling and stock-cutting, and has been studied extensively ... approximation by two independent research groups [FSTTCS 2016, WALCOM 2017]. This raises a question whether strip packing with polynomially bounded input data admits a quasi-polynomial time approximation scheme, as is the case for related two-dimensional packing problems like maximum independent set of rectangles or two...

  13. Investigation of the vibration spectrum of SbSI crystals in harmonic and in anharmonic approximations

    International Nuclear Information System (INIS)

    Audzijonis, A.; Zigas, L.; Vinokurova, I.V.; Farberovic, O.V.; Zaltauskas, R.; Cijauskas, E.; Pauliukas, A.; Kvedaravicius, A.

    2006-01-01

    The force constants of the SbSI crystal have been calculated by the pseudopotential method. The frequencies and normal coordinates of the SbSI vibration modes along the c (z) direction have been determined in the harmonic approximation. The dependence V(z) of the potential energy of the SbSI normal modes on the normal coordinates along the c (z) direction has been determined in the anharmonic approximation, taking into account the interaction between the phonons. It has been found that in the range of 30-120 cm⁻¹ the vibrational spectrum is determined by a double-well normal mode of V(z), whereas in the range of 120-350 cm⁻¹ it is determined by a single-well normal mode of V(z).

  14. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
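
    For orientation, the classical quantile normalization that qsmooth generalizes takes only a few lines of NumPy: replace each column's values by the across-sample mean of the order statistics, preserving each column's ranks. This simple sketch does not average tied ranks (which production implementations handle), and the small matrix is made-up data.

```python
import numpy as np

def quantile_normalize(X):
    """Classical quantile normalization: force every column (sample) of X to
    share the same empirical distribution, namely the mean of the sorted
    columns. qsmooth relaxes this by normalizing within biological groups."""
    order = np.argsort(X, axis=0)
    ranks = np.argsort(order, axis=0)         # rank of each entry in its column
    ref = np.sort(X, axis=0).mean(axis=1)     # shared reference distribution
    return ref[ranks]

X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
Xn = quantile_normalize(X)                    # every column now has distribution ref
```

    After normalization every sample has an identical empirical distribution, which is exactly the assumption qsmooth relaxes to hold only within groups.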

  15. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying

    2015-01-01

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
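
    The phenomenon behind the log-linear cost can be seen in a toy NumPy experiment (illustrative only, not the paper's H-matrix code): the covariance block coupling two well-separated clusters of points is numerically low-rank, so a truncated factorization with small rank k reproduces it almost exactly. For the 1D exponential kernel used here the separated block even factors exactly, an extreme case of the compressibility H-matrices exploit.

```python
import numpy as np

# Exponential covariance (Matern with nu = 1/2) between two separated clusters.
x = np.linspace(0.0, 1.0, 200)                  # cluster 1
y = np.linspace(3.0, 4.0, 200)                  # cluster 2, well separated
B = np.exp(-np.abs(x[:, None] - y[None, :]))    # off-diagonal covariance block

# Rank-k approximation of the block, as an H-matrix would store it.
U, s, Vt = np.linalg.svd(B)
k = 3
Bk = (U[:, :k] * s[:k]) @ Vt[:k]
rel_err = np.linalg.norm(B - Bk) / np.linalg.norm(B)
```

    An H-matrix applies this compression recursively to all admissible off-diagonal blocks, which is what brings the storage down to O(kn log n).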

  16. Large hierarchies from approximate R symmetries

    International Nuclear Information System (INIS)

    Kappl, Rolf; Ratz, Michael; Vaudrevange, Patrick K.S.

    2008-12-01

    We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales. (orig.)

  17. Approximate Networking for Universal Internet Access

    Directory of Open Access Journals (Sweden)

    Junaid Qadir

    2017-12-01

    Full Text Available Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance, global Internet. In this paper, we argue that instead of an exclusive focus on a utopian goal of universally accessible “ideal networking” (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing “approximate networking” through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage the advances in the emerging trend of “approximate computing”, which relies on relaxing the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards various knobs available at network layers. Approximate networking can be used to provision “Global Access to the Internet for All” (GAIA) in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.

  18. Uncertainty relations for approximation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-05-27

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  19. Uncertainty relations for approximation and estimation

    International Nuclear Information System (INIS)

    Lee, Jaeha; Tsutsui, Izumi

    2016-01-01

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  20. Intrinsic Diophantine approximation on general polynomial surfaces

    DEFF Research Database (Denmark)

    Tiljeset, Morten Hein

    2017-01-01

    We study the Hausdorff measure and dimension of the set of intrinsically simultaneously -approximable points on a curve, surface, etc, given as a graph of integer polynomials. We obtain complete answers to these questions for algebraically “nice” manifolds. This generalizes earlier work done...

  1. Perturbation of operators and approximation of spectrum

    Indian Academy of Sciences (India)

    outside the bounds of essential spectrum of A(x) can be approximated ... some perturbed discrete Schrödinger operators treating them as block ...... particular, one may think of estimating the spectrum and spectral gaps of Schrödinger.

  2. Quasilinear theory without the random phase approximation

    International Nuclear Information System (INIS)

    Weibel, E.S.; Vaclavik, J.

    1980-08-01

    The system of quasilinear equations is derived without making use of the random phase approximation. The fluctuating quantities are described by the autocorrelation function of the electric field using the techniques of Fourier analysis. The resulting equations posses the necessary conservation properties, but comprise new terms which hitherto have been lost in the conventional derivations

  3. Rational approximations and quantum algorithms with postselection

    NARCIS (Netherlands)

    Mahadev, U.; de Wolf, R.

    2015-01-01

    We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We

  4. Padé approximations and diophantine geometry.

    Science.gov (United States)

    Chudnovsky, D V; Chudnovsky, G V

    1985-04-01

    Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves.

  5. Approximate systems with confluent bonding mappings

    OpenAIRE

    Lončar, Ivan

    2001-01-01

    If X = {Xn, pnm, N} is a usual inverse system with confluent (monotone) bonding mappings, then the projections are confluent (monotone). This is not true for approximate inverse systems. The main purpose of this paper is to show that the property of Kelley (smoothness) of the spaces Xn is a sufficient condition for the confluence (monotonicity) of the projections.

  6. Function approximation with polynomial regression splines

    International Nuclear Information System (INIS)

    Urbanski, P.

    1996-01-01

    Principles of the polynomial regression splines as well as algorithms and programs for their computation are presented. The programs prepared using software package MATLAB are generally intended for approximation of the X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)

  7. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  8. On the parametric approximation in quantum optics

    Energy Technology Data Exchange (ETDEWEB)

    D' Ariano, G.M.; Paris, M.G.A.; Sacchi, M.F. [Istituto Nazionale di Fisica Nucleare, Pavia (Italy); Pavia Univ. (Italy). Dipt. di Fisica ' Alessandro Volta'

    1999-03-01

    The authors perform the exact numerical diagonalization of Hamiltonians that describe both degenerate and nondegenerate parametric amplifiers, by exploiting the conservation laws pertaining to each device. The conditions under which the parametric approximation holds are clarified, showing that the most relevant requirement is the coherence of the pump after the interaction, rather than its undepletion.

  9. On the parametric approximation in quantum optics

    International Nuclear Information System (INIS)

    D'Ariano, G.M.; Paris, M.G.A.; Sacchi, M.F.; Pavia Univ.

    1999-01-01

    The authors perform the exact numerical diagonalization of Hamiltonians that describe both degenerate and nondegenerate parametric amplifiers, by exploiting the conservation laws pertaining to each device. The conditions under which the parametric approximation holds are clarified, showing that the most relevant requirement is the coherence of the pump after the interaction, rather than its undepletion.

  10. Uniform semiclassical approximation for absorptive scattering systems

    International Nuclear Information System (INIS)

    Hussein, M.S.; Pato, M.P.

    1987-07-01

    The uniform semiclassical approximation of the elastic scattering amplitude is generalized to absorptive systems. An integral equation is derived which connects the absorption-modified amplitude to the absorption-free one. Division of the amplitude into diffractive and refractive components is then made possible. (Author) [pt

  11. Tension and Approximation in Poetic Translation

    Science.gov (United States)

    Al-Shabab, Omar A. S.; Baka, Farida H.

    2015-01-01

    Simple observation reveals that each language and each culture enjoys specific linguistic features and rhetorical traditions. In poetry translation difference and the resultant linguistic tension create a gap between Source Language and Target language, a gap that needs to be bridged by creating an approximation processed through the translator's…

  12. Variational Gaussian approximation for Poisson data

    Science.gov (United States)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
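For a concrete feel of the construction, here is a minimal one-dimensional stand-in for the paper's setting: a single count y ~ Poisson(e^x) with prior x ~ N(μ0, τ0²), and a variational Gaussian q = N(m, s²) fitted by gradient ascent on the lower bound. The closed-form ELBO terms used below are standard for this model; the algorithm is plain gradient ascent, not the alternating direction scheme of the paper:

```python
import numpy as np

def elbo(m, s, y, mu0, tau0):
    """Evidence lower bound (up to an additive constant) for
    y ~ Poisson(exp(x)), x ~ N(mu0, tau0^2), q = N(m, s^2)."""
    return (y * m - np.exp(m + 0.5 * s**2)
            - ((m - mu0)**2 + s**2) / (2 * tau0**2) + np.log(s))

def fit(y, mu0=0.0, tau0=1.0, lr=0.01, steps=5000):
    """Maximize the ELBO over (m, s) by gradient ascent."""
    m, s = mu0, tau0
    for _ in range(steps):
        e = np.exp(m + 0.5 * s**2)            # E_q[exp(x)]
        gm = y - e - (m - mu0) / tau0**2      # dELBO/dm
        gs = -s * e - s / tau0**2 + 1.0 / s   # dELBO/ds
        m += lr * gm
        s += lr * gs
        s = max(s, 1e-6)                      # keep the std. deviation positive
    return m, s

m, s = fit(y=5)
```

At the optimum the mean gradient vanishes, i.e. m + exp(m + s²/2) = y, which for y = 5 gives m ≈ 1.2 with s well below the prior standard deviation, as expected for a posterior contraction.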

  13. Quasiclassical approximation for ultralocal scalar fields

    International Nuclear Information System (INIS)

    Francisco, G.

    1984-01-01

    It is shown how to obtain the quasiclassical evolution of a class of field theories called ultralocal fields. Coherent states that follow the 'classical' orbit as defined by Klauder's weak corespondence principle and restricted action principle is explicitly shown to approximate the quantum evolutions as (h/2π) → o. (Author) [pt

  14. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-11-30

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
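The log-linear cost rests on replacing well-separated (admissible) covariance blocks by low-rank factors. A numpy illustration of that single building block, using a squared-exponential kernel (the Matérn ν→∞ limit) on two hypothetical separated 1D point clusters — not the full H-matrix machinery of the record:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)   # cluster A
y = rng.uniform(3.0, 4.0, 200)   # cluster B, well separated from A
C = np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2)  # squared-exponential covariance block

U, sv, Vt = np.linalg.svd(C)
k = int(np.searchsorted(-sv, -1e-8 * sv[0]))       # numerical rank at 1e-8 relative tolerance
Ck = (U[:, :k] * sv[:k]) @ Vt[:k]                  # rank-k factors: O(kn) storage vs O(n^2)
err = np.linalg.norm(C - Ck, 2) / np.linalg.norm(C, 2)
```

Because the clusters are well separated, the numerical rank k comes out far below 200, which is exactly the k ≪ n regime the record's storage estimate O(kn log n) relies on.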

  15. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul

    2017-01-01

    is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  16. Pade approximant calculations for neutron escape probability

    International Nuclear Information System (INIS)

    El Wakil, S.A.; Saad, E.A.; Hendi, A.A.

    1984-07-01

    The neutron escape probability from a non-multiplying slab containing internal source is defined in terms of a functional relation for the scattering function for the diffuse reflection problem. The Pade approximant technique is used to get numerical results which compare with exact results. (author)
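As background on the technique itself (not the article's scattering-function computation), a Padé approximant can be built from Taylor coefficients with scipy; the classic [2/2] approximant of exp(x) already beats the order-4 Taylor polynomial at x = 1:

```python
import numpy as np
from scipy.interpolate import pade
from math import factorial, exp

# Taylor coefficients of exp(x) up to order 4
an = [1.0 / factorial(k) for k in range(5)]

p, q = pade(an, 2)           # [2/2] Pade approximant: two poly1d objects p, q
approx = p(1.0) / q(1.0)     # rational approximation of e
taylor = sum(an)             # order-4 Taylor polynomial at x = 1
```

Here `approx` ≈ 2.7143 versus e ≈ 2.7183, roughly half the error of the truncated Taylor series built from the same five coefficients.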

  17. Optical bistability without the rotating wave approximation

    Energy Technology Data Exchange (ETDEWEB)

    Sharaby, Yasser A., E-mail: Yasser_Sharaby@hotmail.co [Physics Department, Faculty of Applied Sciences, Suez Canal University, Suez (Egypt); Joshi, Amitabh, E-mail: ajoshi@eiu.ed [Department of Physics, Eastern Illinois University, Charleston, IL 61920 (United States); Hassan, Shoukry S., E-mail: Shoukryhassan@hotmail.co [Mathematics Department, College of Science, University of Bahrain, P.O. Box 32038 (Bahrain)

    2010-04-26

    Optical bistability for a two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA), using non-autonomous Maxwell-Bloch equations with a Fourier decomposition up to the first harmonic. The first harmonic output field component exhibits reversed or closed-loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.


  18. Optical bistability without the rotating wave approximation

    International Nuclear Information System (INIS)

    Sharaby, Yasser A.; Joshi, Amitabh; Hassan, Shoukry S.

    2010-01-01

    Optical bistability for a two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA), using non-autonomous Maxwell-Bloch equations with a Fourier decomposition up to the first harmonic. The first harmonic output field component exhibits reversed or closed-loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.

  19. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
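The closed-form approximation itself is not reproduced in the record, but the Monte Carlo baseline it is compared against is easy to sketch. A hypothetical two-gate tree, top event = (E1 AND E2) OR E3, with lognormal basic events parameterized by median and error factor (EF = 95th/50th percentile ratio):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_lognormal(median, ef, size):
    """Lognormal samples specified by median and error factor EF = p95/p50."""
    sigma = np.log(ef) / 1.645          # 1.645 = 95th percentile of the standard normal
    return median * np.exp(sigma * rng.normal(size=size))

n = 100_000
p1 = sample_lognormal(1e-3, 3.0, n)
p2 = sample_lognormal(2e-3, 3.0, n)
p3 = sample_lognormal(5e-4, 10.0, n)

# Top event probability: (E1 AND E2) OR E3, assuming independence
top = p1 * p2 + p3 - p1 * p2 * p3

median = np.median(top)
p95 = np.quantile(top, 0.95)
```

Since p3 dominates, the top-event median lands close to p3's median of 5e-4; the 95th percentile reveals the long upper tail that the lognormal closed-form approximation is meant to capture without this sampling expense.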

  20. RATIONAL APPROXIMATIONS TO GENERALIZED HYPERGEOMETRIC FUNCTIONS.

    Science.gov (United States)

    Under weak restrictions on the various free parameters, general theorems for rational representations of the generalized hypergeometric functions...and certain Meijer G-functions are developed. Upon specialization, these theorems yield a sequence of rational approximations which converge to the

  1. A rational approximation of the effectiveness factor

    DEFF Research Database (Denmark)

    Wedel, Stig; Luss, Dan

    1980-01-01

    A fast, approximate method of calculating the effectiveness factor for arbitrary rate expressions is presented. The method does not require any iterative or interpolative calculations. It utilizes the well known asymptotic behavior for small and large Thiele moduli to derive a rational function...
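The asymptote-matching idea can be illustrated with the first-order slab problem, where the effectiveness factor is known exactly as η = tanh(φ)/φ. A crude rational-type blend that reproduces both the small-φ limit (η → 1) and the large-φ limit (η → 1/φ) — my own toy blend, not the authors' function — already stays within about 10% everywhere:

```python
import numpy as np

def eta_exact(phi):
    """Effectiveness factor, first-order reaction in a slab."""
    return np.tanh(phi) / phi

def eta_approx(phi):
    """Crude blend matching eta -> 1 as phi -> 0 and eta -> 1/phi as phi -> inf."""
    return 1.0 / np.sqrt(1.0 + phi**2)

phi = np.linspace(0.05, 20.0, 400)
rel_err = np.abs(eta_approx(phi) / eta_exact(phi) - 1.0)
```

The worst error sits near φ ≈ 1.5, in the transition region between the two asymptotes — precisely where a carefully constructed rational function like the one in the paper earns its keep.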

  2. Decision-theoretic troubleshooting: Hardness of approximation

    Czech Academy of Sciences Publication Activity Database

    Lín, Václav

    2014-01-01

    Roč. 55, č. 4 (2014), s. 977-988 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Decision-theoretic troubleshooting * Hardness of approximation * NP-completeness Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.451, year: 2014

  3. Approximate solution methods in engineering mechanics

    International Nuclear Information System (INIS)

    Boresi, A.P.; Cong, K.P.

    1991-01-01

    This is a short book of 147 pages including references and sometimes bibliographies at the end of each chapter, and subject and author indices at the end of the book. The text includes an introduction of 3 pages, 29 pages explaining approximate analysis, 41 pages on finite differences, 36 pages on finite elements, and 17 pages on specialized methods

  4. Monitoring the normal body

    DEFF Research Database (Denmark)

    Nissen, Nina Konstantin; Holm, Lotte; Baarts, Charlotte

    2015-01-01

    … provides us with knowledge about how to prevent future overweight or obesity. This paper investigates body size ideals and monitoring practices among normal-weight and moderately overweight people. Methods: The study is based on in-depth interviews combined with observations. 24 participants were recruited by strategic sampling based on self-reported BMI 18.5-29.9 kg/m2 and socio-demographic factors. Inductive analysis was conducted. Results: Normal-weight and moderately overweight people have clear ideals for their body size. Despite being normal weight or close to this, they construct a variety of practices for monitoring their bodies based on different kinds of calculations of weight and body size, observations of body shape, and measurements of bodily firmness. Biometric measurements are familiar to them as are health authorities' recommendations. Despite not belonging to an extreme BMI category…

  5. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
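The exact approach is not given in the record; for contrast, here is one common large-sample approximate interval for the CV of normal data — the kind of approximate method the paper evaluates against the exact one. It uses the delta-method variance Var(ĉ) ≈ ĉ²(½ + ĉ²)/n (one standard variant; details differ across texts):

```python
import numpy as np

def cv_ci_approx(x, z=1.96):
    """Large-sample (delta-method) confidence interval for the coefficient
    of variation of normally distributed data."""
    n = len(x)
    cv = np.std(x, ddof=1) / np.mean(x)
    se = cv * np.sqrt((0.5 + cv**2) / n)   # asymptotic standard error
    return cv - z * se, cv + z * se

rng = np.random.default_rng(7)
x = rng.normal(loc=10.0, scale=2.0, size=500)  # true CV = 0.2
lo, hi = cv_ci_approx(x)
```

Consistent with the paper's finding, this approximation is reasonable for moderate CV and large n, but degrades for large coefficients of variation and small samples.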

  6. Normal modified stable processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process…

  7. Normalization of satellite imagery

    Science.gov (United States)

    Kim, Hongsuk H.; Elman, Gregory C.

    1990-01-01

    Sets of Thematic Mapper (TM) imagery taken over the Washington, DC metropolitan area during the months of November, March and May were converted into a form of ground reflectance imagery. This conversion was accomplished by adjusting the incident sunlight and view angles and by applying a pixel-by-pixel correction for atmospheric effects. Seasonal color changes of the area can be better observed when such normalization is applied to space imagery taken in time series. In normalized imagery, the grey scale depicts variations in surface reflectance and tonal signature of multi-band color imagery can be directly interpreted for quantitative information of the target.
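A minimal sketch of the sun-angle part of such a normalization — converting at-sensor radiance to top-of-atmosphere reflectance with the standard formula ρ = πLd²/(ESUN·cosθs). The per-pixel atmospheric correction described in the record is omitted, and the parameter values below are hypothetical:

```python
import math

def toa_reflectance(radiance, esun, sun_elev_deg, d_au=1.0):
    """Top-of-atmosphere reflectance from at-sensor spectral radiance.
    radiance: W m-2 sr-1 um-1; esun: band solar irradiance, W m-2 um-1;
    d_au: Earth-Sun distance in astronomical units."""
    theta_s = math.radians(90.0 - sun_elev_deg)   # solar zenith angle
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_s))

rho = toa_reflectance(radiance=80.0, esun=1550.0, sun_elev_deg=45.0)
```

The cos(θs) term is what makes November and May scenes comparable: the same radiance observed under a lower winter sun maps to a higher reflectance, removing the seasonal illumination difference.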

  8. Elasto-plastic stress/strain at notches, comparison of test and approximative computations

    International Nuclear Information System (INIS)

    Beste, A.; Seeger, T.

    1979-01-01

    The lifetime of cyclically loaded components is decisively determined by the magnitude of the local load at the notch root. The determination of the elasto-plastic notch stress and strain is therefore an important element of recent methods of lifetime prediction. These local loads are normally calculated with the help of approximation formulas, yet little is known about their accuracy. The basic construction of the approximation formulas is presented, along with some particulars. The use of the approximations within the fully plastic range and for material laws which show a non-linear stress-strain (σ-ε) behaviour from the beginning is explained. The use of the approximations for cyclic loads is discussed in particular. Finally, the approximations are evaluated in terms of their accuracy: the test results are compared with the results of the approximation calculations. (orig.) [de
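The approximation formulas are not named in the record; a widely used formula of this kind is Neuber's rule, σ·ε = (Kt·S)²/E, combined with a Ramberg-Osgood material law. A sketch with hypothetical steel-like parameters (not necessarily the formulas evaluated in the paper):

```python
def neuber_notch(S, Kt, E, K, n):
    """Approximate notch-root stress and strain via Neuber's rule with a
    Ramberg-Osgood law eps = sigma/E + (sigma/K)**(1/n).
    S: nominal stress, Kt: elastic stress concentration factor, E: modulus."""
    target = (Kt * S) ** 2 / E                      # sigma * eps must equal this
    strain = lambda s: s / E + (s / K) ** (1.0 / n)
    lo, hi = 0.0, Kt * S                            # elastic stress is an upper bound
    for _ in range(200):                            # bisection on sigma * eps(sigma)
        mid = 0.5 * (lo + hi)
        if mid * strain(mid) < target:
            lo = mid
        else:
            hi = mid
    s = 0.5 * (lo + hi)
    return s, strain(s)

# Hypothetical example: nominal stress 200 MPa, Kt = 3, steel-like constants
sigma, eps = neuber_notch(S=200.0, Kt=3.0, E=210_000.0, K=1200.0, n=0.2)
```

The notch-root stress comes out well below the elastic value Kt·S = 600 MPa, with the shortfall compensated by plastic strain — the redistribution effect these approximation formulas are built to capture.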

  9. Traveltime approximations for transversely isotropic media with an inhomogeneous background

    KAUST Repository

    Alkhalifah, Tariq

    2011-05-01

    A transversely isotropic (TI) model with a tilted symmetry axis is regarded as one of the most effective approximations to the Earth subsurface, especially for imaging purposes. However, we commonly utilize this model by setting the axis of symmetry normal to the reflector. This assumption may be accurate in many places, but deviations from this assumption will cause errors in the wavefield description. Using perturbation theory and Taylor's series, I expand the solutions of the eikonal equation for 2D TI media with respect to the independent parameter θ, the angle the tilt of the axis of symmetry makes with the vertical, in a generally inhomogeneous TI background with a vertical axis of symmetry. I do an additional expansion in terms of the independent (anellipticity) parameter η in a generally inhomogeneous elliptically anisotropic background medium. These new TI traveltime solutions are given by expansions in η and θ with coefficients extracted from solving linear first-order partial differential equations. Padé approximations are used to enhance the accuracy of the representation by predicting the behavior of the higher-order terms of the expansion. A simplification of the expansion for homogeneous media provides nonhyperbolic moveout descriptions of the traveltime for TI models that are more accurate than other recently derived approximations. In addition, for 3D media, I develop traveltime approximations using Taylor's series type of expansions in the azimuth of the axis of symmetry. The coefficients of all these expansions can also provide us with the medium sensitivity gradients (Jacobian) for nonlinear tomographic-based inversion for the tilt in the symmetry axis. © 2011 Society of Exploration Geophysicists.

  10. Traveltime approximations for transversely isotropic media with an inhomogeneous background

    KAUST Repository

    Alkhalifah, Tariq

    2011-01-01

    A transversely isotropic (TI) model with a tilted symmetry axis is regarded as one of the most effective approximations to the Earth subsurface, especially for imaging purposes. However, we commonly utilize this model by setting the axis of symmetry normal to the reflector. This assumption may be accurate in many places, but deviations from this assumption will cause errors in the wavefield description. Using perturbation theory and Taylor's series, I expand the solutions of the eikonal equation for 2D TI media with respect to the independent parameter θ, the angle the tilt of the axis of symmetry makes with the vertical, in a generally inhomogeneous TI background with a vertical axis of symmetry. I do an additional expansion in terms of the independent (anellipticity) parameter η in a generally inhomogeneous elliptically anisotropic background medium. These new TI traveltime solutions are given by expansions in η and θ with coefficients extracted from solving linear first-order partial differential equations. Padé approximations are used to enhance the accuracy of the representation by predicting the behavior of the higher-order terms of the expansion. A simplification of the expansion for homogeneous media provides nonhyperbolic moveout descriptions of the traveltime for TI models that are more accurate than other recently derived approximations. In addition, for 3D media, I develop traveltime approximations using Taylor's series type of expansions in the azimuth of the axis of symmetry. The coefficients of all these expansions can also provide us with the medium sensitivity gradients (Jacobian) for nonlinear tomographic-based inversion for the tilt in the symmetry axis. © 2011 Society of Exploration Geophysicists.

  11. Approximated solutions to Born-Infeld dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  12. The Hartree-Fock seniority approximation

    International Nuclear Information System (INIS)

    Gomez, J.M.G.; Prieto, C.

    1986-01-01

    A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)

  13. Analytical Ballistic Trajectories with Approximately Linear Drag

    Directory of Open Access Journals (Sweden)

    Giliam J. P. de Carpentier

    2014-01-01

    Full Text Available This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
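The closed-form trajectory for the underlying linear-drag model (acceleration g − k·v, terminal velocity g/k) can be sketched directly; this is the standard solution such approximations start from, not necessarily the paper's exact parameterization:

```python
import math

def trajectory(p0, v0, k, t, g=(0.0, -9.81)):
    """Closed-form 2D position at time t under linear drag: dv/dt = g - k*v.
    The solution is p(t) = p0 + vT*t + (v0 - vT)*(1 - exp(-k*t))/k,
    where vT = g/k is the terminal velocity."""
    vT = (g[0] / k, g[1] / k)
    f = (1.0 - math.exp(-k * t)) / k
    return tuple(p0[i] + vT[i] * t + (v0[i] - vT[i]) * f for i in range(2))

# Hypothetical shot: launch at 30 m/s in both axes, drag coefficient k = 0.5 1/s
x, y = trajectory(p0=(0.0, 0.0), v0=(30.0, 30.0), k=0.5, t=2.0)
```

Being closed-form, the same expression supports the planning algorithms the paper explores — e.g. solving for launch velocity given a target — without numerical integration, which is what makes it cheap enough for real-time use in games.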

  14. Simple Lie groups without the approximation property

    DEFF Research Database (Denmark)

    Haagerup, Uffe; de Laat, Tim

    2013-01-01

    For a locally compact group G, let A(G) denote its Fourier algebra, and let M0A(G) denote the space of completely bounded Fourier multipliers on G. The group G is said to have the Approximation Property (AP) if the constant function 1 can be approximated by a net in A(G) in the weak-∗ topology...... on the space M0A(G). Recently, Lafforgue and de la Salle proved that SL(3,R) does not have the AP, implying the first example of an exact discrete group without it, namely, SL(3,Z). In this paper we prove that Sp(2,R) does not have the AP. It follows that all connected simple Lie groups with finite center...

  15. The optimal XFEM approximation for fracture analysis

    International Nuclear Information System (INIS)

    Jiang Shouyan; Du Chengbin; Ying Zongquan

    2010-01-01

    The extended finite element method (XFEM) provides an effective tool for analyzing fracture mechanics problems. A XFEM approximation consists of standard finite elements which are used in the major part of the domain and enriched elements in the enriched sub-domain for capturing special solution properties such as discontinuities and singularities. However, two issues in the standard XFEM deserve special attention: efficient numerical integration methods and an appropriate construction of the blending elements. In this paper, an optimal XFEM approximation is proposed to overcome the disadvantages mentioned above in the standard XFEM. Modified enrichment functions are presented that can be reproduced exactly everywhere in the domain. The corresponding FORTRAN program is developed for fracture analysis. A classic problem of fracture mechanics is used to benchmark the program. The results indicate that the optimal XFEM can alleviate the errors and improve numerical precision.

  16. Approximated solutions to Born-Infeld dynamics

    International Nuclear Information System (INIS)

    Ferraro, Rafael; Nigro, Mauro

    2016-01-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  17. Traveltime approximations for inhomogeneous HTI media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Traveltime information is convenient for parameter estimation, especially if the medium is described by an anisotropic set of parameters. This is especially true if we could relate traveltimes analytically to these medium parameters, which is generally hard to do in inhomogeneous media. As a result, I develop traveltime approximations for horizontally transversely isotropic (HTI) media as simplified and even linear functions of the anisotropic parameters. This is accomplished by perturbing the solution of the HTI eikonal equation with respect to η and the azimuthal symmetry direction (usually used to describe the fracture direction) from a generally inhomogeneous elliptically anisotropic background medium. The resulting approximations can provide an accurate analytical description of the traveltime in a homogeneous background compared to other published moveout equations. These equations will allow us to readily extend the inhomogeneous background elliptically anisotropic model to an HTI one with variable, but smoothly varying, η and horizontal symmetry direction values. © 2011 Society of Exploration Geophysicists.

  18. Approximate radiative solutions of the Einstein equations

    International Nuclear Information System (INIS)

    Kuusk, P.; Unt, V.

    1976-01-01

    In this paper the external field of a bounded source emitting gravitational radiation is considered. A successive approximation method is used to integrate the Einstein equations in Bondi's coordinates (Bondi et al, Proc. R. Soc.; A269:21 (1962)). A method of separation of angular variables is worked out and the approximate Einstein equations are reduced to key equations. The losses of mass, momentum, and angular momentum due to gravitational multipole radiation are found. It is demonstrated that in the case of proper treatment a real mass occurs instead of a mass aspect in a solution of the Einstein equations. In an appendix Bondi's new function is given in terms of sources. (author)

  19. Nonlinear analysis approximation theory, optimization and applications

    CERN Document Server

    2014-01-01

    Many of our daily-life problems can be written in the form of an optimization problem. Therefore, solution methods are needed to solve such problems. Due to the complexity of the problems, it is not always easy to find the exact solution. However, approximate solutions can be found. The theory of the best approximation is applicable in a variety of problems arising in nonlinear functional analysis and optimization. This book highlights interesting aspects of nonlinear analysis and optimization together with many applications in the areas of physical and social sciences including engineering. It is immensely helpful for young graduates and researchers who are pursuing research in this field, as it provides abundant research resources for researchers and post-doctoral fellows. This will be a valuable addition to the library of anyone who works in the field of applied mathematics, economics and engineering.

  20. Analysing organic transistors based on interface approximation

    International Nuclear Information System (INIS)

    Akiyama, Yuto; Mori, Takehiko

    2014-01-01

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.

  1. Normality in Analytical Psychology

    Directory of Open Access Journals (Sweden)

    Steve Myers

    2013-11-01

    Full Text Available Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post-Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity.

  2. Medically-enhanced normality

    DEFF Research Database (Denmark)

    Møldrup, Claus; Traulsen, Janine Morgall; Almarsdóttir, Anna Birna

    2003-01-01

    Objective: To consider public perspectives on the use of medicines for non-medical purposes, a usage called medically-enhanced normality (MEN). Method: Examples from the literature were combined with empirical data derived from two Danish research projects: a Delphi internet study and a Telebus...

  3. The Normal Fetal Pancreas.

    Science.gov (United States)

    Kivilevitch, Zvi; Achiron, Reuven; Perlman, Sharon; Gilboa, Yinon

    2017-10-01

    The aim of the study was to assess the sonographic feasibility of measuring the fetal pancreas and its normal development throughout pregnancy. We conducted a cross-sectional prospective study between 19 and 36 weeks' gestation. The study included singleton pregnancies with normal pregnancy follow-up. The pancreas circumference was measured. The first 90 cases were tested to assess feasibility. Two hundred ninety-seven fetuses of nondiabetic mothers were recruited during a 3-year period. The overall satisfactory visualization rate was 61.6%. The intraobserver and interobserver variability had high interclass correlation coefficients of 0.964 and 0.967, respectively. A cubic polynomial regression best described the correlation of pancreas circumference with gestational age (r = 0.744). Pancreas circumference percentiles for each week of gestation were calculated. During the study period, we detected 2 cases with overgrowth syndrome and 1 case with an annular pancreas. In this study, we assessed the feasibility of sonography for measuring the fetal pancreas and established a normal reference range for the fetal pancreas circumference throughout pregnancy. This database can be helpful when investigating fetomaternal disorders that can involve its normal development. © 2017 by the American Institute of Ultrasound in Medicine.

  4. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming

    2013-01-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
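    The concavity measures that drive the cuts in ACD-style methods can be illustrated with a minimal 2D sketch. This is not the authors' implementation: here vertex concavity is simply the distance to the convex hull boundary (FACD's relative concavity further normalizes such absolute measures), and the hull code and sample polygon are invented for illustration.

```python
import math

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay, bx, by = *a, *b
    t = ((p[0]-ax)*(bx-ax) + (p[1]-ay)*(by-ay)) / ((bx-ax)**2 + (by-ay)**2)
    t = max(0.0, min(1.0, t))
    return math.hypot(p[0]-ax-t*(bx-ax), p[1]-ay-t*(by-ay))

def concavity(polygon):
    """Absolute concavity of each vertex: distance to the convex hull."""
    hull = convex_hull(polygon)
    edges = list(zip(hull, hull[1:] + hull[:1]))
    return {p: 0.0 if p in hull else
            min(dist_point_segment(p, a, b) for a, b in edges)
            for p in polygon}

# square with one vertex pushed in at (1, 0.5): its concavity is 0.5
poly = [(0, 0), (2, 0), (2, 2), (0, 2), (1.0, 0.5)]
print(concavity(poly))
```

An ACD-style algorithm would cut at the vertex of maximum concavity and recurse until every component's concavity falls below a tolerance.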

  5. Fast Approximate Joint Diagonalization Incorporating Weight Matrices

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Yeredor, A.

    2009-01-01

    Roč. 57, č. 3 (2009), s. 878-891 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : autoregressive processes * blind source separation * nonstationary random processes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.212, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/tichavsky-fast approximate joint diagonalization incorporating weight matrices.pdf

  6. Mean-field approximation minimizes relative entropy

    International Nuclear Information System (INIS)

    Bilbro, G.L.; Snyder, W.E.; Mann, R.C.

    1991-01-01

    The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach

  7. On approximation of functions by product operators

    Directory of Open Access Journals (Sweden)

    Hare Krishna Nigam

    2013-12-01

    Full Text Available In the present paper, two quite new results on the degree of approximation of a function f belonging to the class Lip(α, r), 1 ≤ r < ∞, and the weighted class W(L_r, ξ(t)), 1 ≤ r < ∞, by (C,2)(E,1) product operators have been obtained. The results obtained in the present paper generalize various known results on single operators.

  8. Markdown Optimization via Approximate Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Coşgun

    2013-02-01

    Full Text Available We consider the markdown optimization problem faced by a leading apparel retail chain. Because of substitution among products, the markdown policy of one product affects the sales of other products. Therefore, markdown policies for product groups having a significant cross-price elasticity with each other should be jointly determined. Since the state space of the problem is very large, we use Approximate Dynamic Programming. Finally, we provide insights into how each product's price affects the markdown policy.
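    The dynamic-programming core behind such a markdown policy can be sketched on a toy single-product instance. All numbers below are hypothetical; the point of the record is that the joint multi-product state space is too large to enumerate, which is what forces an *approximate* DP, whereas this tiny instance is solved exactly by backward recursion.

```python
from functools import lru_cache

# Toy markdown DP for one product. State: (week, inventory, current price
# level); action: a price from a non-increasing ladder. Demands are assumed
# deterministic expected values, purely for illustration.
PRICES = [100, 80, 60]           # allowed markdown ladder
DEMAND = {100: 2, 80: 4, 60: 7}  # expected units sold per week at each price
WEEKS, STOCK, SALVAGE = 4, 12, 20

@lru_cache(maxsize=None)
def value(week, inv, price_idx):
    """Max revenue from `week` on, with `inv` units left and prices
    restricted to indices >= price_idx (markdowns are irreversible)."""
    if week == WEEKS or inv == 0:
        return inv * SALVAGE
    best = 0.0
    for i in range(price_idx, len(PRICES)):
        p = PRICES[i]
        sold = min(DEMAND[p], inv)
        best = max(best, p * sold + value(week + 1, inv - sold, i))
    return best

print(value(0, STOCK, 0))
```

With cross-price elasticities, the state would have to track all products' inventories and price levels jointly, and the value table above would be replaced by a learned approximation.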

  9. Solving Math Problems Approximately: A Developmental Perspective.

    Directory of Open Access Journals (Sweden)

    Dana Ganor-Stern

    Full Text Available Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders', 6th graders' and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy, which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children's estimation skills in an effective manner.
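    The adults' "approximated calculation strategy" (round the operands, then multiply) can be sketched in a few lines. The function names and the worked numbers are invented for illustration, not taken from the study's materials.

```python
def round_to_leading_digit(n):
    """Round an integer to its most significant digit, e.g. 47 -> 50, 432 -> 400."""
    magnitude = 10 ** (len(str(abs(n))) - 1)
    return round(n / magnitude) * magnitude

def estimate_vs_reference(a, b, reference):
    """Judge whether a*b is larger or smaller than a reference number,
    using the rounding-then-multiplying strategy."""
    estimate = round_to_leading_digit(a) * round_to_leading_digit(b)
    return "larger" if estimate > reference else "smaller"

# 47 * 23 is estimated as 50 * 20 = 1000 (exact answer: 1081),
# so relative to a reference of 2000 the judgment is "smaller"
print(estimate_vs_reference(47, 23, 2000))
```

The contrasting "sense of magnitude" strategy would skip even the rounding step and compare orders of magnitude intuitively.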

  10. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
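    The low-rank structure that H-matrix compression exploits can be seen in a few lines. This is a sketch of the underlying idea only, not the H-matrix algorithm: for two separated point clusters, the off-diagonal covariance block is numerically low-rank and can be stored as truncated SVD factors. In this 1D exponential-covariance example (a Matérn kernel with ν = 1/2) the block is exactly rank one, the extreme case; a full H-matrix applies truncated factorizations recursively over a block cluster tree.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, 2 * n)
# exponential covariance, length scale 0.3 (illustrative values)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)

block = C[:n, n:]                   # interaction of two separated clusters
U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))    # numerical rank at relative tol 1e-8
low_rank = (U[:, :k] * s[:k]) @ Vt[:k]

rel_err = np.linalg.norm(block - low_rank) / np.linalg.norm(block)
# storage drops from n*n numbers to 2*n*k, at negligible error
print(k, rel_err)
```

The log-linear O(n log n) cost quoted in the abstract comes from applying this compression to all admissible blocks of the hierarchy rather than to a single block.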

  11. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  12. Factorized Approximate Inverses With Adaptive Dropping

    Czech Academy of Sciences Publication Activity Database

    Kopal, Jiří; Rozložník, Miroslav; Tůma, Miroslav

    2016-01-01

    Roč. 38, č. 3 (2016), A1807-A1820 ISSN 1064-8275 R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : approximate inverses * incomplete factorization * Gram–Schmidt orthogonalization * preconditioned iterative methods Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016

  13. Semiclassical approximation in Batalin-Vilkovisky formalism

    International Nuclear Information System (INIS)

    Schwarz, A.

    1993-01-01

    The geometry of supermanifolds provided with a Q-structure (i.e. with an odd vector field Q satisfying {Q, Q}=0), a P-structure (odd symplectic structure) and an S-structure (volume element) or with various combinations of these structures is studied. The results are applied to the analysis of the Batalin-Vilkovisky approach to the quantization of gauge theories. In particular the semiclassical approximation in this approach is expressed in terms of Reidemeister torsion. (orig.)

  14. Approximation for limit cycles and their isochrons.

    Science.gov (United States)

    Demongeot, Jacques; Françoise, Jean-Pierre

    2006-12-01

    Local analysis of trajectories of dynamical systems near an attractive periodic orbit displays the notion of asymptotic phase and isochrons. These notions are quite useful in applications to biosciences. In this note, we give an expression for the first approximation of equations of isochrons in the setting of perturbations of polynomial Hamiltonian systems. This method can be generalized to perturbations of systems that have a polynomial integral factor (like the Lotka-Volterra equation).

  15. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul

    2015-01-01

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design

  16. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul

    2015-01-01

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  17. Approximate Inverse Preconditioners with Adaptive Dropping

    Czech Academy of Sciences Publication Activity Database

    Kopal, J.; Rozložník, Miroslav; Tůma, Miroslav

    2015-01-01

    Roč. 84, June (2015), s. 13-20 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GAP108/11/0853; GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : approximate inverse * Gram-Schmidt orthogonalization * incomplete decomposition * preconditioned conjugate gradient method * algebraic preconditioning * pivoting Subject RIV: BA - General Mathematics Impact factor: 1.673, year: 2015

  18. Approximations and Implementations of Nonlinear Filtering Schemes.

    Science.gov (United States)

    1988-02-01


  19. An analytical approximation for resonance integral

    International Nuclear Information System (INIS)

    Magalhaes, C.G. de; Martinez, A.S.

    1985-01-01

    A method is developed to obtain an analytical solution for the resonance integral. The problem formulation is entirely theoretical and based on physical concepts of a general character. The analytical expression for the integral does not involve any empirical correlation or parameter. Results of the approximation are compared with reference values for each individual resonance and for the sum of all resonances. (M.C.K.) [pt

  20. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika

    2013-02-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  1. Conference on Abstract Spaces and Approximation

    CERN Document Server

    Szökefalvi-Nagy, B; Abstrakte Räume und Approximation; Abstract spaces and approximation

    1969-01-01

    The present conference took place at Oberwolfach, July 18-27, 1968, as a direct follow-up on a meeting on Approximation Theory [1] held there from August 4-10, 1963. The emphasis was on theoretical aspects of approximation, rather than the numerical side. Particular importance was placed on the related fields of functional analysis and operator theory. Thirty-nine papers were presented at the conference and one more was subsequently submitted in writing. All of these are included in these proceedings. In addition there is a report on new and unsolved problems based upon a special problem session and later communications from the participants. A special role is played by the survey papers also presented in full. They cover a broad range of topics, including invariant subspaces, scattering theory, Wiener-Hopf equations, interpolation theorems, contraction operators, approximation in Banach spaces, etc. The papers have been classified according to subject matter into five chapters, but it needs littl...

  2. Development of the relativistic impulse approximation

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1985-01-01

    This talk contains three parts. Part I reviews the developments which led to the relativistic impulse approximation for proton-nucleus scattering. In Part II, problems with the impulse approximation in its original form - principally the low energy problem - are discussed and traced to pionic contributions. Use of pseudovector covariants in place of pseudoscalar ones in the NN amplitude provides more satisfactory low energy results, however, the difference between pseudovector and pseudoscalar results is ambiguous in the sense that it is not controlled by NN data. Only with further theoretical input can the ambiguity be removed. Part III of the talk presents a new development of the relativistic impulse approximation which is the result of work done in the past year and a half in collaboration with J.A. Tjon. A complete NN amplitude representation is developed and a complete set of Lorentz invariant amplitudes are calculated based on a one-meson exchange model and appropriate integral equations. A meson theoretical basis for the important pair contributions to proton-nucleus scattering is established by the new developments. 28 references

  3. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
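    One of the two kernel approximations named in the abstract, random Fourier features (Rahimi and Recht's construction), can be sketched in a few lines: an RBF kernel k(x, y) = exp(-γ‖x − y‖²) is approximated by the inner product of random cosine features, so the kernel matrix never needs to be formed. Sizes and the γ value below are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, D, d = 0.5, 2000, 5        # kernel width, feature count, input dim

# frequencies sampled from the kernel's spectral density, random phases
W = rng.normal(0.0, np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def z(X):
    """Map rows of X to D random Fourier features; z(x).z(y) ~ k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = rng.normal(size=(10, d))
K_exact = np.exp(-gamma * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
K_approx = z(X) @ z(X).T
print(np.max(np.abs(K_exact - K_approx)))   # shrinks as D grows
```

A linear ranking model trained on z(X) then behaves approximately like the kernelized model, which is what makes the primal Newton optimization in the paper feasible.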

  4. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
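    The Gaussian process regression at the core of the GAP approach reduces, in one dimension, to a few lines of linear algebra. The sketch below is illustrative only: GAP regresses atomic energies on SOAP descriptors against DFT data, whereas here a toy RBF kernel interpolates a stand-in function.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between 1D point sets A and B."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)

Xtr = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
ytr = np.sin(Xtr)                 # stand-in for DFT reference energies
noise = 1e-6                      # regularization / assumed noise level

# posterior mean weights: alpha = (K + noise*I)^{-1} y
K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
alpha = np.linalg.solve(K, ytr)

Xte = np.array([0.5])
mean = rbf(Xte, Xtr) @ alpha      # GP posterior mean prediction
print(float(mean[0]), np.sin(0.5))
```

In GAP the total energy is a sum of such GP predictions, one per atom, each conditioned on that atom's local environment descriptor.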

  5. Approximate modal analysis using Fourier decomposition

    International Nuclear Information System (INIS)

    Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana

    2010-01-01

    The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in eigenvalue calculation. After calculation, the eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated in a way that some parts of the matrix that should not be approximated are not transformed but are fully preserved. In this paper we present a formulation that preserves central or edge parts of the matrix and compare it with the formulation that performs the transform on the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three approaches to matrix reduction is recommended.

  6. Green-Ampt approximations: A comprehensive analysis

    Science.gov (United States)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
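    The implicit Green-Ampt relation that all nine explicit formulas approximate is K·t = F − ψΔθ·ln(1 + F/(ψΔθ)), which must be solved iteratively for the cumulative infiltration F(t). A minimal Newton solver is sketched below; the soil parameter values are illustrative (sandy-loam-like), not from the paper's data set.

```python
import math

K, psi, dtheta = 1.09, 11.01, 0.247   # cm/h, cm, (-): illustrative values

def F_implicit(t, tol=1e-10):
    """Cumulative infiltration F(t) [cm] from the implicit GA equation,
    solved by Newton iteration on g(F) = F - c*ln(1 + F/c) - K*t."""
    c = psi * dtheta
    F = K * t + c                      # starting guess
    for _ in range(100):
        g = F - c * math.log(1.0 + F / c) - K * t
        dg = F / (c + F)               # g'(F), positive for F > 0
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F

t = 2.0                                # hours
F = F_implicit(t)
print(F)                               # exceeds K*t: ponded infiltration
```

Explicit approximations such as those compared in the study replace this iteration with a closed-form estimate of F(t), trading a small accuracy loss for speed and simplicity.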

  7. An Origami Approximation to the Cosmic Web

    Science.gov (United States)

    Neyrinck, Mark C.

    2016-10-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.

  8. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

    For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real world processes. In a previous contribution, we used a well known simplified architecture to show that it provides a reasonably efficient, practical and robust, multi-frequency analysis. We investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem
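    A tiny wavelet network with Mexican hat units can illustrate the kind of function approximation described. The architecture below is a hedged sketch, not the contribution's network: dilations and translations are fixed at random and only the linear output weights are fit, by least squares, to an invented 1D target.

```python
import numpy as np

def mexican_hat(u):
    """psi(u) = (1 - u^2) * exp(-u^2 / 2), the Mexican hat wavelet."""
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

rng = np.random.default_rng(1)
centers = rng.uniform(-3, 3, size=30)    # translations (illustrative)
scales = rng.uniform(0.3, 1.5, size=30)  # dilations (illustrative)

def features(x):
    """Responses of all wavelet units at the points x."""
    return mexican_hat((x[:, None] - centers) / scales)

x = np.linspace(-3, 3, 200)
target = np.sin(2 * x) * np.exp(-0.1 * x**2)   # task to approximate

# fit linear output weights by least squares
w, *_ = np.linalg.lstsq(features(x), target, rcond=None)
err = np.max(np.abs(features(x) @ w - target))
print(err)
```

Replacing `mexican_hat` with a sigmoid or Gaussian turns the same sketch into the competing transfer functions mentioned in the abstract, which is one way to compare them empirically.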

  9. Simultaneous perturbation stochastic approximation for tidal models

    KAUST Repository

    Altaf, M.U.

    2011-05-12

    The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast the storm surges in the North Sea. The forecasts are necessary to support the decision of the timely closure of the moveable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The gradient approximation for the central difference method uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational costs required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) The SPSA method gives comparable results to the steepest descent method with little computational cost. (2) The SPSA method, with little computational cost, can be used to estimate a large number of parameters.
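    The defining trick of SPSA, estimating the full gradient from just two objective evaluations per iteration regardless of dimension, fits in a short sketch. The quadratic objective and gain schedules below are illustrative stand-ins for the DCSM calibration misfit and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(theta):
    """Stand-in calibration misfit with minimum at theta = 1."""
    return np.sum((theta - 1.0) ** 2)

theta = np.zeros(10)              # e.g. bathymetry correction parameters
for k in range(1, 501):
    a_k = 0.1 / k ** 0.602        # standard SPSA gain sequences (Spall)
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.size)  # Bernoulli +-1
    # two evaluations -> a full gradient estimate, independent of dimension
    g_hat = (objective(theta + c_k * delta) -
             objective(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g_hat

print(objective(theta))
```

In the tidal-calibration setting each `objective` call is a full model run, so halving the number of runs per gradient estimate relative to one-at-a-time finite differences is what makes the method attractive for many parameters.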

  10. Blind sensor calibration using approximate message passing

    International Nuclear Information System (INIS)

    Schülke, Christophe; Caltagirone, Francesco; Zdeborová, Lenka

    2015-01-01

    The ubiquity of approximately sparse data has led a variety of communities to take great interest in compressed sensing algorithms. Although these are very successful and well understood for linear measurements with additive noise, applying them to real data can be problematic if imperfect sensing devices introduce deviations from this ideal signal acquisition process, caused by sensor decalibration or failure. We propose a message passing algorithm called calibration approximate message passing (Cal-AMP) that can treat a variety of such sensor-induced imperfections. In addition to deriving the general form of the algorithm, we numerically investigate two particular settings. In the first, a fraction of the sensors is faulty, giving readings unrelated to the signal. In the second, sensors are decalibrated and each one introduces a different multiplicative gain to the measurements. Cal-AMP shares the scalability of approximate message passing, allowing us to treat large instances of these problems, and experimentally exhibits a phase transition between domains of success and failure. (paper)

  11. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
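Of the two kernel approximation methods mentioned, random Fourier features are the simpler to sketch. The snippet below (dimensions, kernel width and feature count are illustrative choices, not the paper's settings) shows how the inner product of randomized feature maps approximates an RBF kernel without ever forming the kernel matrix; a linear ranking model can then be trained directly on the features.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 4000        # input dimension, number of random features
gamma = 0.5           # RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)

# Random Fourier features (Rahimi & Recht): z(x) = sqrt(2/D) * cos(W x + b),
# with rows of W drawn from N(0, 2*gamma*I) and b uniform on [0, 2*pi).
W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

# The inner product of the feature maps approximates the kernel value.
x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.linalg.norm(x - y) ** 2)
approx = float(z(x) @ z(y))
```

The approximation error shrinks like O(1/sqrt(D)), so training on z(x) with a linear method trades a controllable accuracy loss for linear-time scaling in the number of samples.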

  12. Simultaneous perturbation stochastic approximation for tidal models

    KAUST Repository

    Altaf, M.U.; Heemink, A.W.; Verlaan, M.; Hoteit, Ibrahim

    2011-01-01

    The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the moveable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The gradient approximation based on the central difference scheme uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational cost required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest descent method at little computational cost; (2) the SPSA method can be used to estimate a large number of parameters at little computational cost.

  13. Local approximation of a metapopulation's equilibrium.

    Science.gov (United States)

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
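For orientation, the equilibrium occupation probability of Levins's model referred to above is the positive fixed point of dp/dt = c p(1 - p) - e p, namely p* = 1 - e/c when c > e. A quick numerical cross-check, with rates chosen arbitrarily for illustration:

```python
c, e = 1.2, 0.3        # illustrative colonization and extinction rates (c > e)
p_star = 1.0 - e / c   # Levins's equilibrium occupation probability

# Cross-check: integrate Levins's ODE dp/dt = c*p*(1 - p) - e*p with forward
# Euler; from a small initial occupancy the trajectory settles at p_star.
p, dt = 0.05, 0.01
for _ in range(10_000):
    p += dt * (c * p * (1.0 - p) - e * p)
```

With these rates the fixed point is p* = 0.75, approached at rate roughly c - e; the spatial model in the paper replaces the global rates by local colonization pressure and extinction rates appropriate to each point z.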

  14. Approximate particle number projection in hot nuclei

    International Nuclear Information System (INIS)

    Kosov, D.S.; Vdovin, A.I.

    1995-01-01

    Heated finite systems, e.g. hot atomic nuclei, have to be described by the canonical partition function. But this is quite a difficult technical problem and, as a rule, the grand canonical partition function is used in such studies. As a result, some shortcomings of the theoretical description appear because of thermal fluctuations of the particle number. Moreover, in nuclei with pairing correlations the quantum number fluctuations are introduced by some approximate methods (e.g., the standard BCS method). The exact particle number projection is very cumbersome, and an approximate number projection method for T ≠ 0, based on the formalism of thermo field dynamics, is proposed. The idea of the Lipkin-Nogami method, to expand any operator as a series in powers of the number operator, is used. The system of equations for the coefficients of this expansion is written down, and the solution of the system in the next approximation after the BCS one is obtained. The method, which is of the 'projection after variation' type, is applied to a degenerate single j-shell model. 14 refs., 1 tab

  15. Nonresonant approximations to the optical potential

    International Nuclear Information System (INIS)

    Kowalski, K.L.

    1982-01-01

    A new class of approximations to the optical potential, which includes those of the multiple-scattering variety, is investigated. These approximations are constructed so that the optical potential maintains the correct unitarity properties along with a proper treatment of nucleon identity. The special case of nucleon-nucleus scattering with complete inclusion of Pauli effects is studied in detail. The treatment is such that the optical potential receives contributions only from subsystems embedded in their own physically correct antisymmetrized subspaces. It is found that a systematic development of even the lowest-order approximations requires the use of the off-shell extension due to Alt, Grassberger, and Sandhas along with a consistent set of dynamical equations for the optical potential. In nucleon-nucleus scattering a lowest-order optical potential is obtained as part of a systematic, exact, inclusive connectivity expansion which is expected to be useful at moderately high energies. This lowest-order potential consists of an energy-shifted (tρ)-type term with three-body kinematics plus a heavy-particle exchange or pickup term. The natural appearance of the exchange term additivity in the optical potential clarifies the role of the elastic distortion in connection with the treatment of these processes. The relationship of the relevant aspects of the present analysis of the optical potential to conventional multiple scattering methods is discussed

  16. On normal modes of gas sheets and discs

    International Nuclear Information System (INIS)

    Drury, L.O'C.

    1980-01-01

    A method is described for calculating the reflection and transmission coefficients characterizing normal modes of the Goldreich-Lynden-Bell gas sheet. Two families of gas discs without self-gravity for which the normal modes can be found analytically are given and used to illustrate the validity of the sheet approximation. (author)

  17. The self-normalized Donsker theorem revisited

    OpenAIRE

    Parczewski, Peter

    2016-01-01

    We extend the Poincaré-Borel lemma to a weak approximation of a Brownian motion via simple functionals of uniform distributions on n-spheres in the Skorokhod space $D([0,1])$. This approach is used to simplify the proof of the self-normalized Donsker theorem in Csörgő et al. (2003). Some notes on spheres with respect to $\ell_p$-norms are given.

  18. The thoracic paraspinal shadow: normal appearances.

    Science.gov (United States)

    Lien, H H; Kolbenstvedt, A

    1982-01-01

    The widths of the right and left thoracic paraspinal shadows were measured at all levels in 200 presumably normal individuals. The paraspinal shadow could be identified in nearly all cases on the left side and in approximately one-third on the right. The range of variation was greater on the left side than on the right. The left paraspinal shadow was wider at the upper levels and in individuals above 40 years of age.

  19. Crate counter for normal operating loss

    International Nuclear Information System (INIS)

    Harlan, R.A.

    A lithium-loaded zinc sulfide scintillation counter was built to closely assay plutonium in waste packaged in 1.3 by 1.3 by 2.13 m crates. In addition to assays for normal operating loss accounting, the counter will allow safeguards verification immediately before shipment of the crates for burial. The counter should detect approximately 10 g of plutonium in 1000 kg of waste

  20. Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...

  1. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
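The "controlled for" analysis described above amounts to a partial correlation. A sketch on synthetic data (the generative coefficients and sample size are invented for illustration and are not the study's data) shows how regressing the control variables out of both measures isolates the unique association:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing `controls` out of both."""
    Z = np.column_stack([np.ones(len(x))] + list(controls))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Synthetic example: math ability driven by number precision plus noise,
# with a confound (working memory) influencing all three predictors.
rng = np.random.default_rng(0)
n = 2000
wm = rng.normal(size=n)                       # working memory
number = 0.5 * wm + rng.normal(size=n)        # number precision
time_prec = 0.5 * wm + rng.normal(size=n)     # time precision
math = 0.6 * number + 0.4 * wm + rng.normal(size=n)

r_raw = float(np.corrcoef(number, math)[0, 1])
r_partial = partial_corr(number, math, [time_prec, wm])
```

Here the raw correlation overstates the unique number-math link because working memory inflates both; the partial correlation removes that shared variance, which is the logic of the study's control analysis.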

  2. Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, understanding the mathematical principle underlying the algorithm remains a critical question. Traditionally, the normalized min-sum algorithm has been viewed as a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understanding the normalized min-sum algorithm. The algorithm is derive...

  3. Idiopathic Normal Pressure Hydrocephalus

    Directory of Open Access Journals (Sweden)

    Basant R. Nassar BS

    2016-04-01

    Idiopathic normal pressure hydrocephalus (iNPH) is a potentially reversible neurodegenerative disease commonly characterized by a triad of dementia, gait disturbance, and urinary incontinence. Advancements in diagnosis and treatment have aided in properly identifying and improving symptoms in patients. However, a large proportion of iNPH patients remain either undiagnosed or misdiagnosed. Articles for this review were obtained using the PubMed search engine with the keywords “normal pressure hydrocephalus,” “diagnosis,” “shunt treatment,” “biomarkers,” “gait disturbances,” “cognitive function,” “neuropsychology,” “imaging,” and “pathogenesis.” The majority of the articles were retrieved from the past 10 years. The purpose of this review article is to aid general practitioners in further understanding current findings on the pathogenesis, diagnosis, and treatment of iNPH.

  4. Normal Weight Dyslipidemia

    DEFF Research Database (Denmark)

    Ipsen, David Hojland; Tveden-Nyborg, Pernille; Lykkesfeldt, Jens

    2016-01-01

    Objective: The liver coordinates lipid metabolism and may play a vital role in the development of dyslipidemia, even in the absence of obesity. Normal weight dyslipidemia (NWD) and patients with nonalcoholic fatty liver disease (NAFLD) who do not have obesity constitute a unique subset of individuals characterized by dyslipidemia and metabolic deterioration. This review examined the available literature on the role of the liver in dyslipidemia and the metabolic characteristics of patients with NAFLD who do not have obesity. Methods: PubMed was searched using the following keywords: nonobese, dyslipidemia, NAFLD, NWD, liver, and metabolically obese/unhealthy normal weight. Additionally, article bibliographies were screened, and relevant citations were retrieved. Studies were excluded if they had not measured relevant biomarkers of dyslipidemia. Results: NWD and NAFLD without obesity share a similar

  5. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    Science.gov (United States)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    Weak atmospheric turbulence condition in an optical wireless communication (OWC) is captured by log-normal distribution. The analytical evaluation of average bit error rate (BER) of an OWC system under weak turbulence is intractable as it involves the statistical averaging of Gaussian Q-function over log-normal distribution. In this paper, a simple closed form approximation for BER of OWC system under weak turbulence is given. Computation of BER for various modulation schemes is carried out using proposed expression. The results obtained using proposed expression compare favorably with those obtained using Gauss-Hermite quadrature approximation and Monte Carlo Simulations.
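The Gauss-Hermite route to averaging the Q-function over log-normal fading can be sketched as follows. The turbulence strength and SNR scale below are illustrative assumptions, and the paper's proposed closed-form expression itself is not reproduced here; the sketch only shows the quadrature baseline it is compared against, cross-checked by Monte Carlo.

```python
import numpy as np
from math import erfc, sqrt

# Gaussian Q-function, vectorized over numpy arrays.
Q = np.vectorize(lambda x: 0.5 * erfc(x / sqrt(2.0)))

sigma = 0.3            # std dev of log-amplitude fluctuations (weak turbulence)
mu = -sigma**2 / 2.0   # normalization so that the mean irradiance E[h] = 1
snr = 2.0              # illustrative SNR scale; instantaneous BER = Q(snr * h)

# Gauss-Hermite quadrature for E[f(X)], X ~ N(mu, sigma^2):
#   E[f(X)] ~= (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*t_i)
t, w = np.polynomial.hermite.hermgauss(20)
ber_gh = np.sum(w * Q(snr * np.exp(mu + np.sqrt(2.0) * sigma * t))) / np.sqrt(np.pi)

# Monte Carlo cross-check: direct averaging over log-normal fading samples.
rng = np.random.default_rng(1)
h = np.exp(rng.normal(mu, sigma, size=200_000))
ber_mc = float(np.mean(Q(snr * h)))
```

Twenty Hermite nodes already reproduce the Monte Carlo average closely here, which is why quadrature is the usual benchmark for closed-form BER approximations.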

  6. Ethics and "normal birth".

    Science.gov (United States)

    Lyerly, Anne Drapkin

    2012-12-01

    The concept of "normal birth" has been promoted as ideal by several international organizations, although debate about its meaning is ongoing. In this article, I examine the concept of normalcy to explore its ethical implications and raise a trio of concerns. First, in its emphasis on nonuse of technology as a goal, the concept of normalcy may marginalize women for whom medical intervention is necessary or beneficial. Second, in its emphasis on birth as a socially meaningful event, the mantra of normalcy may unintentionally avert attention to meaning in medically complicated births. Third, the emphasis on birth as a normal and healthy event may be a contributor to the long-standing tolerance for the dearth of evidence guiding the treatment of illness during pregnancy and the failure to responsibly and productively engage pregnant women in health research. Given these concerns, it is worth debating not just what "normal birth" means, but whether the term as an ideal earns its keep. © 2012, Copyright the Authors Journal compilation © 2012, Wiley Periodicals, Inc.

  7. Photoelectron spectroscopy and the dipole approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  8. Pentaquarks in the Jaffe-Wilczek approximation

    International Nuclear Information System (INIS)

    Narodetskii, I.M.; Simonov, Yu.A.; Trusov, M.A.; Semay, C.; Silvestre-Brac, B.

    2005-01-01

    The masses of uudds-bar, uuddd-bar, and uussd-bar pentaquarks are evaluated in a framework of both the effective Hamiltonian approach to QCD and the spinless Salpeter equation using the Jaffe-Wilczek diquark approximation and the string interaction for the diquark-diquark-antiquark system. The pentaquark masses are found to be in the region above 2 GeV. That indicates that the Goldstone-boson-exchange effects may play an important role in the light pentaquarks. The same calculations yield the mass of the [ud] 2 c-bar pentaquark ∼3250 MeV and the [ud] 2 b-bar pentaquark ∼6509 MeV

  9. Localization and stationary phase approximation on supermanifolds

    Science.gov (United States)

    Zakharevich, Valentin

    2017-08-01

    Given an odd vector field Q on a supermanifold M and a Q-invariant density μ on M, under certain compactness conditions on Q, the value of the integral ∫Mμ is determined by the value of μ on any neighborhood of the vanishing locus N of Q. We present a formula for the integral in the case where N is a subsupermanifold which is appropriately non-degenerate with respect to Q. In the process, we discuss the linear algebra necessary to express our result in a coordinate independent way. We also extend the stationary phase approximation and the Morse-Bott lemma to supermanifolds.

  10. SAM revisited: uniform semiclassical approximation with absorption

    International Nuclear Information System (INIS)

    Hussein, M.S.; Pato, M.P.

    1986-01-01

    The uniform semiclassical approximation is modified to take into account strong absorption. The resulting theory, very similar to the one developed by Frahn and Gross, is used to discuss heavy-ion elastic scattering at intermediate energies. The theory permits a reasonably unambiguous separation of refractive and diffractive effects. The systems 12 C+ 12 C and 12 C+ 16 O, which seem to exhibit a remnant of a nuclear rainbow at E=20 MeV/N, are analysed with the theory, which is built directly on a model for the S-matrix. Simple relations between the fitted S-matrix and the underlying complex potential are derived. (Author)

  11. TMB: Automatic differentiation and laplace approximation

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte

    2016-01-01

    TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects...

  12. Shape theory categorical methods of approximation

    CERN Document Server

    Cordier, J M

    2008-01-01

    This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and

  13. On one approximation in quantum chromodynamics

    International Nuclear Information System (INIS)

    Alekseev, A.I.; Bajkov, V.A.; Boos, Eh.Eh.

    1982-01-01

    The form of the complete fermion propagator near the mass shell is investigated. A model of quantum chromodynamics (MQC) is considered in which the Bloch-Nordsieck approximation has been made in the fermion sector, i.e., u-numbers are substituted for the γ matrices. The model is investigated by means of the Schwinger-Dyson equation for the quark propagator in the infrared region. The Schwinger-Dyson equation can be reduced to a differential equation which is easily solved. The Green function is then conveniently represented as an integral transformation

  14. Static correlation beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian Sommer

    2014-01-01

    derived from Hedin's equations (Random Phase Approximation (RPA), Time-dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly...... and confirms that BSE greatly improves the RPA and TDHF results despite the fact that the BSE excitation spectrum breaks down in the dissociation limit. In contrast, second order screened exchange gives a poor description of the dissociation limit, which can be attributed to the fact that it cannot be derived...

  15. Multi-compartment linear noise approximation

    International Nuclear Information System (INIS)

    Challenger, Joseph D; McKane, Alan J; Pahle, Jürgen

    2012-01-01

    The ability to quantify the stochastic fluctuations present in biochemical and other systems is becoming increasingly important. Analytical descriptions of these fluctuations are attractive, as stochastic simulations are computationally expensive. Building on previous work, a linear noise approximation is developed for biochemical models with many compartments, for example cells. The procedure is then implemented in the software package COPASI. This technique is illustrated with two simple examples and is then applied to a more realistic biochemical model. Expressions for the noise, given in the form of covariance matrices, are presented. (paper)
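At steady state, the linear noise approximation reduces to a Lyapunov equation for the covariance matrix of the fluctuations. A single-compartment sketch for a standard two-stage gene expression model (rates chosen for illustration; this is not one of the paper's COPASI examples):

```python
import numpy as np

# Two-stage gene expression: mRNA (m) and protein (p) in one compartment.
#   0 -> m at rate k_m;        m -> 0 at rate g_m * m
#   m -> m + p at rate k_p*m;  p -> 0 at rate g_p * p
k_m, g_m, k_p, g_p = 10.0, 1.0, 4.0, 0.5

# Macroscopic (deterministic) steady state of the rate equations.
m_ss = k_m / g_m
p_ss = k_p * m_ss / g_p

# Linear noise approximation: the stationary covariance C solves the
# Lyapunov equation  A C + C A^T + B = 0, with A the Jacobian of the
# rate equations and B the diffusion matrix at the steady state.
A = np.array([[-g_m, 0.0],
              [k_p, -g_p]])
B = np.diag([k_m + g_m * m_ss,
             k_p * m_ss + g_p * p_ss])

# Solve by vectorization: (I (x) A + A (x) I) vec(C) = -vec(B).
n = A.shape[0]
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
C = np.linalg.solve(K, -B.reshape(n * n)).reshape(n, n)
```

A quick sanity check on the result: the mRNA marginal is a simple birth-death process, so the LNA should return Poissonian mRNA fluctuations, Var(m) = ⟨m⟩.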

  16. Approximation of Moessbauer spectra of metallic glasses

    International Nuclear Information System (INIS)

    Miglierini, M.; Sitek, J.

    1988-01-01

    Moessbauer spectra of iron-rich metallic glasses are approximated by means of six broadened lines which have line position relations similar to those of α-Fe. It is shown via the results of the DISPA (dispersion mode vs. absorption mode) line shape analysis that each spectral peak is broadened owing to a sum of Lorentzian lines weighted by a Gaussian distribution in the peak position. Moessbauer parameters of amorphous metallic Fe 83 B 17 and Fe 40 Ni 40 B 20 alloys are presented, derived from the fitted spectra. (author). 2 figs., 2 tabs., 21 refs

  17. High energy approximations in quantum field theory

    International Nuclear Information System (INIS)

    Orzalesi, C.A.

    1975-01-01

    New theoretical methods in hadron physics based on a high-energy perturbation theory are discussed. The approximate solutions to quantum field theory obtained by this method appear to be sufficiently simple and rich in structure to encourage hadron dynamics studies. An operator eikonal form for field-theoretic Green's functions is derived, and it is discussed how the eikonal perturbation theory is to be renormalized. The method is extended to massive quantum electrodynamics of scalar charged bosons. Possible developments and applications of this theory are given

  18. Weak field approximation of new general relativity

    International Nuclear Information System (INIS)

    Fukui, Masayasu; Masukawa, Junnichi

    1985-01-01

    In the weak field approximation, gravitational field equations of new general relativity with arbitrary parameters are examined. Assuming a conservation law ∂^μ T_μν = 0 of the energy-momentum tensor T_μν for matter fields in addition to the usual one ∂^ν T_μν = 0, we show that the linearized gravitational field equations are decomposed into equations for a Lorentz scalar field and symmetric and antisymmetric Lorentz tensor fields. (author)

  19. Pentaquarks in the Jaffe-Wilczek Approximation

    International Nuclear Information System (INIS)

    Narodetskii, I.M.; Simonov, Yu.A.; Trusov, M.A.; Semay, C.; Silvestre-Brac, B.

    2005-01-01

    The masses of uudds-bar, uuddd-bar, and uussd-bar pentaquarks are evaluated in a framework of both the effective Hamiltonian approach to QCD and the spinless Salpeter equation using the Jaffe-Wilczek diquark approximation and the string interaction for the diquark-diquark-antiquark system. The pentaquark masses are found to be in the region above 2 GeV. That indicates that the Goldstone boson exchange effects may play an important role in the light pentaquarks. The same calculations yield the mass of [ud] 2 c-bar pentaquark ∼3250 MeV and [ud] 2 b-bar pentaquark ∼6509 MeV

  20. Turbo Equalization Using Partial Gaussian Approximation

    DEFF Research Database (Denmark)

    Zhang, Chuanzong; Wang, Zhongyong; Manchón, Carles Navarro

    2016-01-01

    This letter deals with turbo equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation propagation rule to convert messages passed from the demodulator and decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). We exploit the specific structure of the ISI channel model to compute the latter messages from the beliefs obtained using a Kalman smoother/equalizer. Doing so leads to a significant complexity reduction compared to the initial PGA

  1. Topics in multivariate approximation and interpolation

    CERN Document Server

    Jetter, Kurt

    2005-01-01

    This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information, leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr

  2. A Stokes drift approximation based on the Phillips spectrum

    Science.gov (United States)

    Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2016-04-01

    A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.
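For reference, the monochromatic baseline profile mentioned above is fully determined by the surface Stokes drift v0 and the Stokes transport V: v(z) = v0 exp(2kz) for z ≤ 0, with the effective wavenumber k = v0/(2V) fixed so the profile integrates to V. A sketch with illustrative open-ocean values (the Phillips-based profile proposed in the paper is not reproduced here):

```python
import numpy as np

v0 = 0.12   # surface Stokes drift [m/s] (illustrative)
V = 0.015   # Stokes transport [m^2/s] (illustrative)

# Monochromatic profile: v(z) = v0 * exp(2*k*z), z <= 0 (upward from surface).
# Requiring the depth-integrated transport to equal V gives k = v0 / (2*V).
k = v0 / (2.0 * V)

def v_mono(z):
    return v0 * np.exp(2.0 * k * z)

# Check: trapezoidal integration of the profile recovers the transport.
z = np.linspace(-50.0, 0.0, 200001)
v = v_mono(z)
transport = float(np.sum((v[:-1] + v[1:]) / 2.0) * (z[1] - z[0]))
```

Both the monochromatic profile and the new profile are built from the same two bulk parameters (v0 and V); the paper's point is that the Phillips-based profile concentrates more shear near the surface than this exponential does.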

  3. Layers of Cold Dipolar Molecules in the Harmonic Approximation

    DEFF Research Database (Denmark)

    R. Armstrong, J.; Zinner, Nikolaj Thomas; V. Fedorov, D.

    2012-01-01

    We consider the N-body problem in a layered geometry containing cold polar molecules with dipole moments that are polarized perpendicular to the layers. A harmonic approximation is used to simplify the Hamiltonian, and bound state properties of the two-body inter-layer dipolar potential are used … to adjust this effective interaction. To model the intra-layer repulsion of the polar molecules, we introduce a repulsive inter-molecule potential that can be parametrically varied. Single chains containing one molecule in each layer, as well as multi-chain structures in many layers, are discussed … and their energies and radii determined. We extract the normal modes of the various systems as measures of their volatility and eventually of instability, and compare our findings to the excitations in crystals. We find modes that can be classified as either chains vibrating in phase or as layers vibrating against…

  4. SEE rate estimation based on diffusion approximation of charge collection

    Science.gov (United States)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction for the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  5. APPROXIMATING INNOVATION POTENTIAL WITH NEUROFUZZY ROBUST MODEL

    Directory of Open Access Journals (Sweden)

    Kasa, Richard

    2015-01-01

    In a remarkably short time, economic globalisation has changed the world's economic order, bringing new challenges and opportunities to SMEs. These processes have pushed the need to measure innovation capability, which has become a crucial issue for today's economic and political decision makers. Companies cannot compete in this new environment unless they become more innovative and respond more effectively to consumers' needs and preferences, as noted in the EU's innovation strategy. Decision makers cannot make accurate and efficient decisions without knowing the innovation capability of companies in a sector or a region. This need is forcing economists to develop an integrated, unified and complete method of measuring, approximating and even forecasting innovation performance at the micro as well as the macro level. In this article, a critical analysis of the literature on innovation potential approximation and prediction is given, showing its weaknesses and a possible alternative that eliminates the limitations and disadvantages of classical measuring and predictive methods.

  6. Analytic approximate radiation effects due to Bremsstrahlung

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

  7. TMB: Automatic Differentiation and Laplace Approximation

    Directory of Open Access Journals (Sweden)

    Kasper Kristensen

    2016-04-01

    TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R, e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood, where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (≈ 10^6) and parameters (≈ 10^3). Computation times using ADMB and TMB are compared on a suite of examples ranging from simple models to large spatial models where the random effects are a Gaussian random field. Speedups ranging from 1.5 to about 100 are obtained, with increasing gains for large problems. The package and examples are available at http://tmb-project.org/.
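The Laplace approximation at the heart of the package can be illustrated in one dimension: replace the integrand exp(-f(u)) by the Gaussian obtained from a second-order expansion of f around its minimizer. A toy sketch (illustrative only; TMB itself applies this in C++ over many random effects at once):

```python
import math

def laplace_approx(f, f_second, u_hat):
    """Laplace approximation of the integral of exp(-f(u)) du over the real
    line: with u_hat the minimizer of f, a second-order expansion of f turns
    the integral into exp(-f(u_hat)) * sqrt(2*pi / f''(u_hat))."""
    return math.exp(-f(u_hat)) * math.sqrt(2.0 * math.pi / f_second(u_hat))

# The approximation is exact for a Gaussian integrand:
# f(u) = u^2/2 has minimizer 0 and f'' = 1, giving sqrt(2*pi).
```

In a mixed model, f is the negative joint log-likelihood as a function of the random effects, and the derivatives come from automatic differentiation rather than being supplied by hand.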

  8. On some applications of diophantine approximations.

    Science.gov (United States)

    Chudnovsky, G V

    1984-03-01

    Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximation typical of "almost all" numbers. In particular, any such number Θ has the "2 + ε" exponent of irrationality: |Θ - p/q| > q^(-2-ε) for relatively prime rational integers p, q with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162].

  9. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

    It is well known that the change-point problem is an important part of statistical model analysis. Most existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A quantile test for a single quantile is proposed based on the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability that each location in the sequence is a change-point. The location of the change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work well on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper may be of independent interest to readers in this research area.

  10. Traveling cluster approximation for uncorrelated amorphous systems

    International Nuclear Information System (INIS)

    Kaplan, T.; Sen, A.K.; Gray, L.J.; Mills, R.

    1985-01-01

    In this paper, the authors apply the TCA concepts to spatially disordered, uncorrelated systems (e.g., fluids or amorphous metals without short-range order). This is the first approximation scheme for amorphous systems that takes cluster effects into account while preserving the Herglotz property for any amount of disorder. They have performed some computer calculations for the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results are compared with exact calculations (which, in principle, take into account all cluster effects) and with the CPA, which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA, and yet, apparently, the pair approximation distorts some of the features of the exact results. They conclude that the effects of large clusters are much more important in an uncorrelated liquid metal than in a substitutional alloy. As a result, the pair TCA, which does quite a nice job for alloys, is not adequate for the liquid. Larger clusters must be treated exactly, and therefore an n-TCA with n > 2 must be used.

  11. Approximating Markov Chains: What and why

    International Nuclear Information System (INIS)

    Pincus, S.

    1996-01-01

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical and analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques: the approximation of dynamical systems by suitable finite-state Markov chains. Steady state distributions for these Markov chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly, with system evolution. copyright 1996 American Institute of Physics

  12. Approximation to estimation of critical state

    International Nuclear Information System (INIS)

    Orso, Jose A.; Rosario, Universidad Nacional

    2011-01-01

    The position of the control rod for the critical state of a nuclear reactor depends on several factors, including, but not limited to, the temperature and the configuration of the fuel elements inside the core. Therefore, the position cannot be known in advance. In this paper, theoretical estimations are developed to obtain an equation for calculating the control rod position at the critical state (approach to critical) of the RA-4 nuclear reactor; this equation will be used to create software that performs the estimation from the count rate of the reactor pulse channel and the inserted length of the control rod (in cm). For the final estimation of the approach to critical, an experimentally obtained function giving the control rod reactivity as a function of its position is used; this is reduced mathematically to a linear function, which yields the length of control rod that has to be withdrawn to bring the reactor to the critical position. (author)
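A standard way to turn count-rate measurements into a predicted critical rod position is the inverse-count-rate (1/M) extrapolation. A generic sketch, not the RA-4 procedure itself (the function name and the two-point linear fit are illustrative assumptions):

```python
def extrapolate_critical_position(positions, counts):
    """Inverse-count-rate (1/M) extrapolation: as the reactor approaches
    criticality the count rate C diverges, so 1/C falls toward zero roughly
    linearly in rod position.  Fit a line through the last two measured
    points and return the rod position where that line crosses zero."""
    x1, x2 = positions[-2], positions[-1]
    y1, y2 = 1.0 / counts[-2], 1.0 / counts[-1]
    slope = (y2 - y1) / (x2 - x1)
    return x2 - y2 / slope
```

In practice the extrapolation is repeated after each rod movement, with each new point giving a more conservative estimate as the reactor nears critical.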

  13. Analytic approximate radiation effects due to Bremsstrahlung

    International Nuclear Information System (INIS)

    Ben-Zvi, I.

    2012-01-01

    The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

  14. Approximate analytic theory of the multijunction grill

    International Nuclear Information System (INIS)

    Hurtak, O.; Preinhaelter, J.

    1991-03-01

    An approximate analytic theory of the general multijunction grill is developed. Omitting the evanescent modes in the subsidiary waveguides both at the junction and at the grill mouth, and neglecting multiple wave reflection, simple formulae are derived for the reflection coefficient, the amplitudes of the incident and reflected waves and the spectral power density. These quantities are expressed through the basic grill parameters (the electric length of the structure and the phase shift between adjacent waveguides) and two sets of reflection coefficients describing wave reflections in the subsidiary waveguides at the junction and at the plasma. Approximate expressions for these coefficients are also given. The results are compared with a numerical solution of two specific examples; they were shown to be useful for the optimization and design of multijunction grills. For the JET structure it is shown that, in the case of a dense plasma, many results can be obtained from the simple formulae for a two-waveguide multijunction grill. (author) 12 figs., 12 refs

  15. Normal modes of weak colloidal gels

    Science.gov (United States)

    Varga, Zsigmond; Swan, James W.

    2018-01-01

    The normal modes and relaxation rates of weak colloidal gels are investigated in calculations using different models of the hydrodynamic interactions between suspended particles. The relaxation spectrum is computed for freely draining, Rotne-Prager-Yamakawa, and accelerated Stokesian dynamics approximations of the hydrodynamic mobility in a normal mode analysis of a harmonic network representing several colloidal gels. We find that the density of states and spatial structure of the normal modes are fundamentally altered by long-ranged hydrodynamic coupling among the particles. Short-ranged coupling due to hydrodynamic lubrication affects only the relaxation rates of short-wavelength modes. Hydrodynamic models accounting for long-ranged coupling exhibit a microscopic relaxation rate for each normal mode, λ, that scales as l^(-2), where l is the spatial correlation length of the normal mode. For the freely draining approximation, which neglects long-ranged coupling, the microscopic relaxation rate scales as l^(-γ), where γ varies between three and two with increasing particle volume fraction. A simple phenomenological model of the internal elastic response to normal mode fluctuations is developed, which shows that long-ranged hydrodynamic interactions play a central role in the viscoelasticity of the gel network. Dynamic simulations of hard spheres that gel in response to short-ranged depletion attractions are used to test the applicability of the density of states predictions. For particle concentrations up to 30% by volume, the power law decay of the relaxation modulus in simulations accounting for long-ranged hydrodynamic interactions agrees with predictions generated by the density of states of the corresponding harmonic networks as well as experimental measurements. For higher volume fractions, excluded volume interactions dominate the stress response, and the prediction from the harmonic network density of states fails. Analogous to the Zimm model in polymer

  16. Dictionaries for Sparse Neural Network Approximation

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    submitted 27.12.2017 (2018) ISSN 2162-237X R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords: measures of sparsity * feedforward networks * binary classification * dictionaries of computational units * Chernoff-Hoeffding Bound Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 6.108, year: 2016

  17. Correlation chart of Pennsylvanian rocks in Alabama, Tennessee, Kentucky, Virginia, West Virginia, Ohio, Maryland, and Pennsylvania showing approximate position of coal beds, coal zones, and key stratigraphic units: Chapter D.2 in Coal and petroleum resources in the Appalachian basin: distribution, geologic framework, and geochemical character

    Science.gov (United States)

    Ruppert, Leslie F.; Trippi, Michael H.; Slucher, Ernie R.; Ruppert, Leslie F.; Ryder, Robert T.

    2014-01-01

    The Appalachian basin, one of the largest Pennsylvanian bituminous coal-producing regions in the world, currently contains nearly one-half of the top 15 coal-producing States in the United States (Energy Information Agency, 2006). Anthracite of Pennsylvanian age occurs in synclinal basins in eastern Pennsylvania, but production is minimal. A simplified correlation chart was compiled from published and unpublished sources as a means of visualizing currently accepted stratigraphic relations between the rock formations, coal beds, coal zones, and key stratigraphic units in Alabama, Tennessee, Kentucky, Virginia, West Virginia, Ohio, Maryland, and Pennsylvania. The thickness of each column is based on chronostratigraphic divisions (Lower, Middle, and Upper Pennsylvanian), not the thickness of strata. Researchers of Pennsylvanian strata in the Appalachian basin also use biostratigraphic markers and other relative and absolute geologic age associations between the rocks to better understand the spatial relations of the strata. Thus, the stratigraphic correlation data in this chart should be considered provisional and will be updated as coal-bearing rocks within the Appalachian coal regions continue to be evaluated.

  18. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions … of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. © 1997 Elsevier Science Ltd.
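A minimal sketch of the scheme described: gradient estimates from simultaneous ±1 perturbations, followed by projection onto the feasible set. The gain sequences and the toy constrained problem are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def projected_spsa(loss, project, theta0, a=0.1, c=0.1, n_iter=2000, seed=0):
    """Constrained stochastic approximation: at each step, estimate the
    gradient from two loss evaluations along a random +/-1 perturbation
    (simultaneous perturbation), take a gradient step, then project the
    iterate back onto the feasible set."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602            # commonly used SPSA gain decay rates
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two-sided gradient estimate; 1/delta is elementwise (delta is +/-1).
        ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck) * (1.0 / delta)
        theta = project(theta - ak * ghat)
    return theta

# Example: minimize ||theta - (2, 2)||^2 subject to theta in [0, 1]^2;
# the projected optimum is (1, 1), and project = clip onto the box.
```

Only two loss evaluations are needed per step regardless of dimension, which is the appeal of the simultaneous perturbation estimate when no direct gradient is available.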

  19. Digital Pupillometry in Normal Subjects

    Science.gov (United States)

    Rickmann, Annekatrin; Waizel, Maria; Kazerounian, Sara; Szurman, Peter; Wilhelm, Helmut; Boden, Karl T.

    2017-01-01

    The aim of this study was to evaluate the pupil size of normal subjects at different illumination levels with a novel pupillometer. The pupil size of healthy study participants was measured with an infrared-video PupilX pupillometer (MEye Tech GmbH, Alsdorf, Germany) at five different illumination levels (0, 0.5, 4, 32, and 250 lux). Measurements were performed by the same investigator. Ninety images were captured during a measurement period of 3 seconds. The absolute linear camera resolution was approximately 20 pixels per mm. This cross-sectional study analysed 490 eyes of 245 subjects (mean age: 51.9 ± 18.3 years, range: 6–87 years). On average, pupil diameter decreased with increasing light intensity for both eyes, with a mean pupil diameter of 5.39 ± 1.04 mm at 0 lux, 5.20 ± 1.00 mm at 0.5 lux, 4.70 ± 0.97 mm at 4 lux, 3.74 ± 0.78 mm at 32 lux, and 2.84 ± 0.50 mm at 250 lux. Furthermore, it was found that anisocoria increased by 0.03 mm per decade of life at all illumination levels (R² = 0.43). Anisocoria was higher under scotopic and mesopic conditions. This study provides additional information to the current knowledge concerning age- and light-related pupil size and anisocoria as a baseline for future patient studies. PMID:28228832

  20. Normal modes of Bardeen discs

    International Nuclear Information System (INIS)

    Verdaguer, E.

    1983-01-01

    The short wavelength normal modes of self-gravitating rotating polytropic discs in the Bardeen approximation are studied. The discs' oscillations can be seen in terms of two types of modes: the p-modes whose driving forces are pressure forces and the r-modes driven by Coriolis forces. As a consequence of differential rotation coupling between the two takes place and some mixed modes appear, their properties can be studied under the assumption of weak coupling and it is seen that they avoid the crossing of the p- and r-modes. The short wavelength analysis provides a basis for the classification of the modes, which can be made by using the properties of their phase diagrams. The classification is applied to the large wavelength modes of differentially rotating discs with strong coupling and to a uniformly rotating sequence with no coupling, which have been calculated in previous papers. Many of the physical properties and qualitative features of these modes are revealed by the analysis. (author)

  1. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    Science.gov (United States)

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only. This is proved by showing that, for all short sequences, the core and the folds of the protein will have two identical sides.

  2. New Tests of the Fixed Hotspot Approximation

    Science.gov (United States)

    Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.

    2005-05-01

    We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of

  3. Limitations of the acoustic approximation for seismic crosshole tomography

    Science.gov (United States)

    Marelli, Stefano; Maurer, Hansruedi

    2010-05-01

    Modelling and inversion of seismic crosshole data is a challenging task in terms of computational resources. Even with the significant increase in power of modern supercomputers, full three-dimensional elastic modelling of high-frequency waveforms generated from hundreds of source positions in several boreholes is still an intractable task. However, it has been recognised that full waveform inversion offers substantially more information compared with traditional travel time tomography. A common strategy to reduce the computational burden of tomographic inversion is to approximate the true elastic wave propagation by acoustic modelling. This approximation assumes that the solid rock units can be treated like fluids (with no shear wave propagation) and is generally considered satisfactory so long as only the earliest portions of the recorded seismograms are considered. The main assumption is that most of the energy in the early parts of the recorded seismograms is carried by the faster compressional (P-) waves. Although a limited number of studies exist on the effects of this approximation for surface/marine synthetic reflection seismic data, and show it to be generally acceptable for models with low to moderate impedance contrasts, to our knowledge no comparable studies have been published on the effects for cross-borehole transmission data. An obvious question is whether transmission tomography should be less affected by elastic effects than surface reflection data when only short time windows are applied to primarily capture the first arriving wavetrains. To answer this question we have performed 2D and 3D investigations on the validity of the acoustic approximation for an elastic medium using crosshole source-receiver configurations. In order to generate consistent acoustic and elastic data sets, we ran the synthetic tests using the same finite-difference time-domain elastic modelling code for both types of simulations. The acoustic approximation was

  4. Random phase approximation in relativistic approach

    International Nuclear Information System (INIS)

    Ma Zhongyu; Yang Ding; Tian Yuan; Cao Ligang

    2009-01-01

    Some special issues of the random phase approximation (RPA) in the relativistic approach are reviewed. Full consistency and a proper treatment of the coupling to the continuum are responsible for the successful application of the RPA in the description of dynamical properties of finite nuclei. The fully consistent relativistic RPA (RRPA) requires that the relativistic mean field (RMF) wave function of the nucleus and the RRPA correlations be calculated from the same effective Lagrangian, with a consistent treatment of the Dirac sea of negative energy states. The proper treatment of the single particle continuum with scattering asymptotic conditions in the RMF and RRPA is discussed. The full continuum spectrum can be described by the single particle Green's function, and the relativistic continuum RPA is established. A separable form of the pairing force is introduced in the relativistic quasi-particle RPA. (authors)

  5. Random-phase approximation and broken symmetry

    International Nuclear Information System (INIS)

    Davis, E.D.; Heiss, W.D.

    1986-01-01

    The validity of the random-phase approximation (RPA) in broken-symmetry bases is tested in an appropriate many-body system for which exact solutions are available. Initially the regions of stability of the self-consistent quasiparticle bases in this system are established and depicted in a 'phase' diagram. It is found that only stable bases can be used in an RPA calculation. This is particularly true for those RPA modes which are not associated with the onset of instability of the basis; it is seen that these modes do not describe any excited state when the basis is unstable, although from a formal point of view they remain acceptable. The RPA does well in a stable broken-symmetry basis provided one is not too close to a point where a phase transition occurs. This is true for both energies and matrix elements. (author)

  6. Approximated solutions to the Schroedinger equation

    International Nuclear Information System (INIS)

    Rico, J.F.; Fernandez-Alonso, J.I.

    1977-01-01

    The authors are currently working on a couple of the well-known deficiencies of the variation method and present here some of the results that have been obtained so far. The variation method does not give information a priori on the trial functions best suited for a particular problem nor does it give information a posteriori on the degree of precision attained. In order to clarify the origin of both difficulties, a geometric interpretation of the variation method is presented. This geometric interpretation is the starting point for the exact formal solution to the fundamental state and for the step-by-step approximations to the exact solution which are also given. Some comments on these results are included. (Auth.)

  7. Vortex sheet approximation of boundary layers

    International Nuclear Information System (INIS)

    Chorin, A.J.

    1978-01-01

    A grid-free method for approximating incompressible boundary layers is introduced. The computational elements are segments of vortex sheets. The method is related to the earlier vortex method; simplicity is achieved at the cost of replacing the Navier-Stokes equations by the Prandtl boundary layer equations. A new method for generating vorticity at boundaries is also presented; it can be used with the earlier vortex method. The applications presented include (i) flat plate problems, and (ii) a flow problem in a model cylinder-piston assembly, where the new method is used near walls and an improved version of the random choice method is used in the interior. One of the attractive features of the new method is the ease with which it can be incorporated into hybrid algorithms.

  8. Approximate Stokes Drift Profiles in Deep Water

    Science.gov (United States)

    Breivik, Øyvind; Janssen, Peter A. E. M.; Bidlot, Jean-Raymond

    2014-09-01

    A deep-water approximation to the Stokes drift velocity profile is explored as an alternative to the monochromatic profile. The alternative profile investigated relies on the same two quantities required for the monochromatic profile, viz the Stokes transport and the surface Stokes drift velocity. Comparisons with parametric spectra and profiles under wave spectra from the ERA-Interim reanalysis and buoy observations reveal much better agreement than the monochromatic profile even for complex sea states. That the profile gives a closer match and a more correct shear has implications for ocean circulation models since the Coriolis-Stokes force depends on the magnitude and direction of the Stokes drift profile and Langmuir turbulence parameterizations depend sensitively on the shear of the profile. The alternative profile comes at no added numerical cost compared to the monochromatic profile.
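    The monochromatic profile that the alternative is compared against is fully determined by the two quantities named above. As a hedged illustration (this is the textbook monochromatic form, not the alternative profile proposed in the paper; the function name and the numerical values are invented for this sketch):

```python
import math

def monochromatic_stokes_drift(z, surface_drift, transport):
    """Monochromatic Stokes drift profile u(z) = u0 * exp(2*k*z) for z <= 0.

    The effective wavenumber k is fixed by requiring the depth-integrated
    transport to match the given value: V = u0 / (2*k)  =>  k = u0 / (2*V).
    """
    k = surface_drift / (2.0 * transport)
    return surface_drift * math.exp(2.0 * k * z)

# Hypothetical values: 0.1 m/s surface drift, 0.02 m^2/s Stokes transport.
u0, V = 0.1, 0.02
profile = [monochromatic_stokes_drift(-depth, u0, V) for depth in (0.0, 0.2, 1.0)]
print(profile)
```

    By construction the profile reproduces the surface drift at z = 0 and integrates analytically to the prescribed transport; the paper's point is that its shear decays too quickly compared with profiles under realistic wave spectra.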

  9. Analytical approximations for wide and narrow resonances

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2005-01-01

    This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)

  10. Analytical approximations for wide and narrow resonances

    Energy Technology Data Exchange (ETDEWEB)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br

    2005-07-01

    This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)

  11. The Bloch Approximation in Periodically Perforated Media

    International Nuclear Information System (INIS)

    Conca, C.; Gomez, D.; Lobo, M.; Perez, E.

    2005-01-01

    We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω_ε (Ω_ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω.

  12. Approximate analytical modeling of leptospirosis infection

    Science.gov (United States)

    Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani

    2017-11-01

    Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with feces, urine or through bites of infected rodents, and indirectly via water contaminated with urine and droppings from them. A significant increase in the number of leptospirosis cases in Malaysia, caused by recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight on the spread of leptospirosis infection.

  13. Approximate spacetime symmetries and conservation laws

    Energy Technology Data Exchange (ETDEWEB)

    Harte, Abraham I [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States)], E-mail: harte@uchicago.edu

    2008-10-21

    A notion of geometric symmetry is introduced that generalizes the classical concepts of Killing fields and other affine collineations. There is a sense in which flows under these new vector fields minimize deformations of the connection near a specified observer. Any exact affine collineations that may exist are special cases. The remaining vector fields can all be interpreted as analogs of Poincare and other well-known symmetries near timelike worldlines. Approximate conservation laws generated by these objects are discussed for both geodesics and extended matter distributions. One example is a generalized Komar integral that may be taken to define the linear and angular momenta of a spacetime volume as seen by a particular observer. This is evaluated explicitly for a gravitational plane wave spacetime.

  14. Coated sphere scattering by geometric optics approximation.

    Science.gov (United States)

    Mengran, Zhai; Qieni, Lü; Hongxia, Zhang; Yinxin, Zhang

    2014-10-01

    A new geometric optics model has been developed for the calculation of light scattering by a coated sphere, and the analytic expression for scattering is presented according to whether rays hit the core or not. The ray of various geometric optics approximation (GOA) terms is parameterized by the number of reflections in the coating/core interface, the coating/medium interface, and the number of chords in the core, with the degeneracy path and repeated path terms considered for the rays striking the core, which simplifies the calculation. For the ray missing the core, the various GOA terms are dealt with by a homogeneous sphere. The scattering intensity of coated particles are calculated and then compared with those of Debye series and Aden-Kerker theory. The consistency of the results proves the validity of the method proposed in this work.

  15. Approximation by max-product type operators

    CERN Document Server

    Bede, Barnabás; Gal, Sorin G

    2016-01-01

    This monograph presents a broad treatment of developments in an area of constructive approximation involving the so-called "max-product" type operators. The exposition highlights the max-product operators as those which allow one to obtain, in many cases, more valuable estimates than those obtained by classical approaches. The text considers a wide variety of operators which are studied for a number of interesting problems such as quantitative estimates, convergence, saturation results, localization, to name several. Additionally, the book discusses the perfect analogies between the probabilistic approaches of the classical Bernstein type operators and of the classical convolution operators (non-periodic and periodic cases), and the possibilistic approaches of the max-product variants of these operators. These approaches allow for two natural interpretations of the max-product Bernstein type operators and convolution type operators: firstly, as possibilistic expectations of some fuzzy variables, and secondly,...

  16. Approximate Sensory Data Collection: A Survey.

    Science.gov (United States)

    Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

    2017-03-10

    With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings many troubles and challenges for data collection, a primary operation in IoTs and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.

  17. Approximate Sensory Data Collection: A Survey

    Directory of Open Access Journals (Sweden)

    Siyao Cheng

    2017-03-01

    Full Text Available With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings many troubles and challenges for data collection, a primary operation in IoTs and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.

  18. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed and reconstruction results from both, simulated projections and first clinical data sets are presented. (paper)

  19. Hydromagnetic turbulence in the direct interaction approximation

    International Nuclear Information System (INIS)

    Nagarajan, S.

    1975-01-01

    The dissertation is concerned with the nature of turbulence in a medium with large electrical conductivity. Three distinct though inter-related questions are asked. Firstly, the evolution of a weak, random initial magnetic field in a highly conducting, isotropically turbulent fluid is discussed. This was first discussed in the paper 'Growth of Turbulent Magnetic Fields' by Kraichnan and Nagargian. The Physics of Fluids, volume 10, number 4, 1967. Secondly, the direct interaction approximation for hydromagnetic turbulence maintained by stationary, isotropic, random stirring forces is formulated in the wave-number-frequency domain. Thirdly, the dynamical evolution of a weak, random, magnetic excitation in a turbulent electrically conducting fluid is examined under varying kinematic conditions. (G.T.H.)

  20. Approximation Preserving Reductions among Item Pricing Problems

    Science.gov (United States)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. less) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has the production cost d_i and each customer e_j ∈ E has the valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at the price r_i, the profit for the item i is p_i = r_i - d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan, et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader," and showed that the seller can get more total profit in the case that p_i < 0 is allowed than in the case that p_i < 0 is not allowed. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratio.
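    For concreteness, the profit computation in this model can be sketched as follows. This is a hedged illustration only: the buying rule (each customer purchases its bundle exactly when the bundle's total price does not exceed its valuation) and all numbers are assumptions for this example, not taken from the paper.

```python
def total_profit(prices, costs, customers):
    """Total profit when customer j buys bundle e_j iff its price sum <= v_j.

    Per-item profit is p_i = r_i - d_i; p_i < 0 corresponds to a loss-leader.
    """
    profit = 0.0
    for bundle, valuation in customers:
        if sum(prices[i] for i in bundle) <= valuation:
            profit += sum(prices[i] - costs[i] for i in bundle)
    return profit

costs = {0: 2.0, 1: 1.0}     # production costs d_i
prices = {0: 1.0, 1: 4.0}    # item 0 is priced below cost (loss-leader)
customers = [({0, 1}, 5.0)]  # one customer valuing the bundle {0, 1} at 5
print(total_profit(prices, costs, customers))  # prints 2.0
```

    Here the bundle sells only because item 0 is a loss-leader; forbidding p_i < 0 would force a smaller total profit on this instance, which is the phenomenon the abstract attributes to Balcan et al.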

  1. Approximate direct georeferencing in national coordinates

    Science.gov (United States)

    Legat, Klaus

    Direct georeferencing has gained an increasing importance in photogrammetry and remote sensing. Thereby, the parameters of exterior orientation (EO) of an image sensor are determined by GPS/INS, yielding results in a global geocentric reference frame. Photogrammetric products like digital terrain models or orthoimages, however, are often required in national geodetic datums and mapped by national map projections, i.e., in "national coordinates". As the fundamental mathematics of photogrammetry is based on Cartesian coordinates, the scene restitution is often performed in a Cartesian frame located at some central position of the image block. The subsequent transformation to national coordinates is a standard problem in geodesy and can be done in a rigorous manner, at least if the formulas of the map projection are rigorous. Drawbacks of this procedure include practical deficiencies related to the photogrammetric processing as well as the computational cost of transforming the whole scene. To avoid these problems, the paper pursues an alternative processing strategy where the EO parameters are transformed prior to the restitution. If only this transition were done, however, the scene would be systematically distorted. The reason is that the national coordinates are not Cartesian due to the earth curvature and the unavoidable length distortion of map projections. To settle these distortions, several corrections need to be applied. These are treated in detail for both passive and active imaging. Since all these corrections are approximations only, the resulting technique is termed "approximate direct georeferencing". Still, the residual distortions are usually very low as is demonstrated by simulations, rendering the technique an attractive approach to direct georeferencing.

  2. Nonlinear Schroedinger Approximations for Partial Differential Equations with Quadratic and Quasilinear Terms

    Science.gov (United States)

    Cummings, Patrick

    We consider the approximation of solutions of two complicated, physical systems via the nonlinear Schrodinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have for solutions of the nonlinear Schrodinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the 1/ε² time scale.

  3. Principles of applying Poisson units in radiology

    International Nuclear Information System (INIS)

    Benyumovich, M.S.

    2000-01-01

    The probability that radioactive particles hit particular space patterns (e.g. cells in the squares of a counting chamber net) and time intervals (e.g. radioactive particles hitting a given area per unit time) follows the Poisson distribution. The mean is the only parameter on which this distribution depends. The metrological basis for counting cells and radioactive particles is a property of the Poisson distribution: the standard deviation equals the square root of the mean (property 1). The application of Poisson units to counting blood formed elements and cultured cells was proposed by us (Russian Federation Patent No. 2126230). Poisson units relate to the means for which property 1 holds. In the case of cell counting, the area of these units is equal to 1/10 of the counting chamber net where the cells are counted. Thus one finds the mean from the single-cell count rate divided by 10. Finding the Poisson units when counting radioactive particles requires determining a number of particles sufficient for property 1 to hold. To this end, one subdivides the time interval used in counting a single particle count rate into different numbers of equal portions (count numbers), and then picks out the count number that satisfies property 1. Such a portion is taken as a Poisson unit in the radioactive particle count. If the flux of particles is controllable, one sets a count rate sufficient for property 1 to hold. Operations with means obtained with the use of Poisson units are performed on the basis of the approximation of the Poisson distribution by a normal one. (author)
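    Property 1 above (standard deviation equal to the square root of the mean) is easy to check empirically. A minimal sketch using only the standard library; the sample size and mean are arbitrary choices for the demonstration:

```python
import math
import random
import statistics

def poisson_sample(mean, rng):
    """Draw one Poisson variate via Knuth's product-of-uniforms method."""
    limit = math.exp(-mean)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(42)
mean = 25.0
counts = [poisson_sample(mean, rng) for _ in range(20_000)]

m = statistics.fmean(counts)
s = statistics.pstdev(counts)
# Property 1: the sample standard deviation approximates sqrt(mean); for
# means this large the distribution is also close to a normal one, which
# is the approximation the abstract's final sentence relies on.
print(m, s, math.sqrt(m))
```

    Knuth's method is exact but slows down for large means; it is used here only because it needs nothing beyond uniform random numbers.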

  4. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, as well as tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be subject of future investigations on decision making under risk.

  5. Theory of normal metals

    International Nuclear Information System (INIS)

    Mahan, G.D.

    1992-01-01

    The organizers requested that I give eight lectures on the theory of normal metals, ''with an eye on superconductivity.'' My job was to cover the general properties of metals. The topics were selected according to what the students would need to know for the following lectures on superconductivity. My role was to prepare the ground work for the later lectures. The problem is that there is not yet a widely accepted theory for the mechanism which pairs the electrons. Many mechanisms have been proposed, with those of phonons and spin fluctuations having the most followers. So I tried to discuss both topics. I also introduced the tight-binding model for metals, which forms the basis for most of the work on the cuprate superconductors

  6. Approximate Eigensolutions of the Deformed Woods—Saxon Potential via AIM

    International Nuclear Information System (INIS)

    Ikhdair, Sameer M.; Falaye Babatunde, J.; Hamzavi, Majid

    2013-01-01

    Using the Pekeris approximation, the Schrödinger equation is solved for the nuclear deformed Woods—Saxon potential within the framework of the asymptotic iteration method. The energy levels are worked out and the corresponding normalized eigenfunctions are obtained in terms of hypergeometric function

  7. Rate-distortion functions of non-stationary Markoff chains and their block-independent approximations

    OpenAIRE

    Agarwal, Mukul

    2018-01-01

    It is proved that the limit of the normalized rate-distortion functions of block independent approximations of an irreducible, aperiodic Markoff chain is independent of the initial distribution of the Markoff chain and thus, is also equal to the rate-distortion function of the Markoff chain.

  8. Some properties of dual and approximate dual of fusion frames

    OpenAIRE

    Arefijamaal, Ali Akbar; Neyshaburi, Fahimeh Arabyani

    2016-01-01

    In this paper we extend the notion of approximate dual to fusion frames and present some approaches to obtain dual and approximate alternate dual fusion frames. Also, we study the stability of dual and approximate alternate dual fusion frames.

  9. Approximation algorithms for a genetic diagnostics problem.

    Science.gov (United States)

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
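    The greedy heuristic for weighted SET COVER that underlies the analysis above can be sketched as follows. This is the generic textbook greedy, not the authors' WDC-specific implementation, and the toy instance is invented:

```python
def greedy_weighted_set_cover(universe, sets, costs):
    """Greedy: repeatedly pick the set with minimum cost per newly covered
    element.  The classic analysis gives an H(n) ~ ln(n) approximation ratio.
    Assumes the instance is feasible (the sets jointly cover the universe)."""
    uncovered = set(universe)
    chosen, total = [], 0.0
    while uncovered:
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: costs[i] / len(sets[i] & uncovered),
        )
        chosen.append(best)
        total += costs[best]
        uncovered -= sets[best]
    return chosen, total

# Invented toy instance: 5 targets to cover, 4 candidate "assays" with costs.
sets = [{1, 2, 3}, {2, 4}, {3, 5}, {4, 5}]
costs = [3.0, 1.0, 1.0, 2.0]
chosen, total = greedy_weighted_set_cover({1, 2, 3, 4, 5}, sets, costs)
print(chosen, total)
```

    A local-search pass of the kind the abstract describes would then try swapping chosen sets for unchosen ones while the cover remains valid and the cost does not increase.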

  10. Adaptive approximation of higher order posterior statistics

    KAUST Repository

    Lee, Wonjung

    2014-02-01

    Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables in a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations because the Wiener chaos approach not only leads to an accurate and efficient prediction for short time uncertainty quantification, but it also allows to apply several data assimilation methods that can be used to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected by whether the dynamics is driven by Brownian motion and the near-Gaussianity of the measure to be updated, respectively. © 2013 Elsevier Inc.

  11. Configuring Airspace Sectors with Approximate Dynamic Programming

    Science.gov (United States)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
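    The exact dynamic-programming solution for such a finite-horizon problem can be sketched in simplified form. This is a hedged illustration: it assumes per-step workload costs for each configuration and a flat reconfiguration cost, and it omits the position-count constraint that the real problem includes; all data are invented.

```python
def min_total_cost(workload, switch_cost):
    """Finite-horizon DP: workload[t][c] is the cost of using configuration c
    at time step t; switch_cost is charged whenever the configuration changes."""
    n_configs = len(workload[0])
    # best[c] = minimum cost of steps 0..t when step t uses configuration c
    best = list(workload[0])
    for step in workload[1:]:
        prev_min = min(best)
        best = [
            # either stay in c (no switch fee) or come from the cheapest
            # predecessor and pay the reconfiguration cost
            step[c] + min(best[c], prev_min + switch_cost)
            for c in range(n_configs)
        ]
    return min(best)

# Invented toy instance: 3 time steps, 2 configurations.
workload = [[1, 5], [5, 1], [5, 1]]
print(min_total_cost(workload, switch_cost=3))  # one switch beats staying put
```

    With thousands of configurations this exact sweep becomes expensive, which is the motivation the abstract gives for the rollouts approximation.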

  12. Rainbows: Mie computations and the Airy approximation.

    Science.gov (United States)

    Wang, R T; van de Hulst, H C

    1991-01-01

    Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work.

  13. Dynamical Vertex Approximation for the Hubbard Model

    Science.gov (United States)

    Toschi, Alessandro

    A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for the contemporary condensed matter theory. However, in the last years considerable progress has been achieved by means of increasingly more powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: Here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of the cuprate physics.

  14. Quantum adiabatic approximation and the geometric phase

    International Nuclear Information System (INIS)

    Mostafazadeh, A.

    1997-01-01

    A precise definition of an adiabaticity parameter ν of a time-dependent Hamiltonian is proposed. A variation of time-dependent perturbation theory is presented which yields a series expansion of the evolution operator, U(τ) = Σ_ℓ U^(ℓ)(τ), with U^(ℓ)(τ) being at least of order ν^ℓ. In particular, U^(0)(τ) corresponds to the adiabatic approximation and yields Berry's adiabatic phase. It is shown that this series expansion has nothing to do with the 1/τ expansion of U(τ). It is also shown that the nonadiabatic part of the evolution operator is generated by a transformed Hamiltonian which is off-diagonal in the eigenbasis of the initial Hamiltonian. This suggests the introduction of an adiabatic product expansion for U(τ), which turns out to yield exact expressions for U(τ) for a large number of quantum systems. In particular, a simple application of the adiabatic product expansion is used to show that for the Hamiltonian describing the dynamics of a magnetic dipole in an arbitrarily changing magnetic field, there exists another Hamiltonian with the same eigenvectors for which the Schroedinger equation is exactly solvable. Some related issues concerning geometric phases and their physical significance are also discussed. copyright 1997 The American Physical Society
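
The adiabatic phase that U^(0)(τ) yields can be checked numerically for the standard example of a spin-1/2 in a slowly rotating magnetic field, using the discretized Berry-phase product of eigenstate overlaps. This is a generic textbook check, not the paper's expansion:

```python
import numpy as np

def berry_phase(theta, steps=2000):
    """Discrete Berry phase of the spin-up eigenstate of H = B(t).sigma as
    the field direction is carried once around a cone of opening angle
    theta. Expected result: -pi * (1 - cos(theta))."""
    phis = np.linspace(0.0, 2.0 * np.pi, steps + 1)
    # Eigenstate of n.sigma with eigenvalue +1 for direction (theta, phi).
    states = np.stack([np.cos(theta / 2) * np.ones_like(phis),
                       np.sin(theta / 2) * np.exp(1j * phis)], axis=1)
    # Phase of the product of successive overlaps <psi_k | psi_{k+1}>.
    overlaps = np.sum(states[:-1].conj() * states[1:], axis=1)
    return -np.angle(np.prod(overlaps))

print(berry_phase(np.pi / 3))  # close to -pi/2, i.e. -pi * (1 - cos(pi/3))
```

The discretized product converges to minus half the solid angle swept by the field direction, the hallmark of Berry's geometric phase.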

  15. Magnetic reconnection under anisotropic magnetohydrodynamic approximation

    International Nuclear Information System (INIS)

    Hirabayashi, K.; Hoshino, M.

    2013-01-01

    We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model, accompanied by a pair of slow shocks, and the observational evidence that in-situ slow shocks are only rarely detected. Our results show that once magnetic reconnection takes place, a firehose-sense (p∥ > p⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%-30% higher than in the isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system, and it is consistent with satellite observations in the Earth's magnetosphere

  16. When Density Functional Approximations Meet Iron Oxides.

    Science.gov (United States)

    Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong

    2016-10-11

    Three density functional approximations (DFAs), PBE, PBE+U, and the Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe₂O₃, Fe₃O₄, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to obtain the other properties of the four iron oxides accurately using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.

  17. Relating normalization to neuronal populations across cortical areas.

    Science.gov (United States)

    Ruff, Douglas A; Alberts, Joshua J; Cohen, Marlene R

    2016-09-01

    Normalization, which divisively scales neuronal responses to multiple stimuli, is thought to underlie many sensory, motor, and cognitive processes. In every study where it has been investigated, neurons measured in the same brain area under identical conditions exhibit a range of normalization, from suppression by nonpreferred stimuli (strong normalization) to additive responses to combinations of stimuli (no normalization). Normalization has been hypothesized to arise from interactions between neuronal populations, either in the same or in different brain areas, but current models of normalization are not mechanistic and focus on trial-averaged responses. To gain insight into the mechanisms underlying normalization, we examined interactions between neurons that exhibit different degrees of normalization. We recorded from multiple neurons in three cortical areas while rhesus monkeys viewed superimposed drifting gratings. We found that neurons showing strong normalization shared less trial-to-trial variability with other neurons in the same cortical area, and more variability with neurons in other cortical areas, than did units with weak normalization. Furthermore, the cortical organization of normalization was not random: neurons recorded on nearby electrodes tended to exhibit similar amounts of normalization. Together, our results suggest that normalization reflects a neuron's role in its local network and that modulatory factors like normalization share the topographic organization typical of sensory tuning properties. Copyright © 2016 the American Physiological Society.
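
The range from strong to no normalization described above is often summarized by a simple phenomenological interpolation between averaging and summing the single-stimulus responses. A sketch of that interpolation (not the exact index fitted in the study):

```python
def paired_response(R1, R2, alpha):
    """Response to two superimposed stimuli given the single-stimulus
    responses R1 and R2. alpha = 1 yields their average (strong divisive
    normalization); alpha = 0 yields their sum (no normalization).
    A phenomenological sketch, not the study's fitted model."""
    return (R1 + R2) / (1.0 + alpha)

R_pref, R_null = 30.0, 10.0  # spikes/s to preferred / nonpreferred grating
print(paired_response(R_pref, R_null, 1.0))  # 20.0: preferred response suppressed
print(paired_response(R_pref, R_null, 0.0))  # 40.0: responses add
```

Under strong normalization, adding a nonpreferred grating pulls the response down toward the average, which is the suppression signature the recordings quantify.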

  18. Neutron scattering by normal liquids

    Energy Technology Data Exchange (ETDEWEB)

    Gennes, P.G. de [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1961-07-01

    Neutron data on motions in normal liquids well below the critical point are reviewed and classified according to the order of magnitude of the momentum transfers ℏq and energy transfers ℏω. For large momentum transfers a perfect-gas model is valid. For smaller q and incoherent scattering, the major effects are related to the existence of two characteristic times: the period of oscillation of an atom in its cell, and the average lifetime of the atom in a definite cell. Various interpolation schemes covering both time scales are discussed. For coherent scattering and intermediate q, the energy spread is expected to show a minimum whenever q corresponds to a diffraction peak. For very small q the standard macroscopic description of density fluctuations is applicable. The limits of the various q and ω domains and the validity of the various approximations are discussed by a method of moments. The possibility of observing discrete transitions due to internal degrees of freedom in polyatomic molecules, in spite of the 'Doppler width' caused by translational motions, is also examined. (author)
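
The perfect-gas limit invoked for large momentum transfers corresponds to a Gaussian scattering law centred on the recoil energy. A sketch with hypothetical parameter values; the functional form below is the standard free-gas result, not taken from this paper:

```python
import numpy as np

def free_gas_s(E, E_r, kT):
    """Incoherent scattering law of a perfect (free) gas: a Gaussian in the
    energy transfer E, centred on the recoil energy E_r = (hbar*q)^2 / 2M,
    with variance 2*E_r*kT (all quantities in the same energy units)."""
    return np.exp(-(E - E_r) ** 2 / (4.0 * E_r * kT)) / np.sqrt(4.0 * np.pi * E_r * kT)

E_r, kT = 5.0, 1.0  # hypothetical recoil energy and temperature, in meV
E = np.linspace(E_r - 40.0, E_r + 40.0, 20001)
S = free_gas_s(E, E_r, kT)
dE = E[1] - E[0]
integral = float(np.sum(0.5 * (S[:-1] + S[1:])) * dE)  # trapezoid rule
print(integral)  # ~1: the energy spectrum is normalized at fixed q
```

The Gaussian width grows with q, which is the translational 'Doppler width' the abstract mentions as an obstacle to resolving discrete internal transitions.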

  19. Dynamical cluster approximation plus semiclassical approximation study for a Mott insulator and d-wave pairing

    Science.gov (United States)

    Kim, SungKun; Lee, Hunpyo

    2017-06-01

    Via a dynamical cluster approximation with N_c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor), and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that d-wave superconducting order parameters persist even in the highly doped region. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N_c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is an initial step, because the SCA can manage long-range spatial fluctuations in feasible computational times, beyond the reach of the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.

  20. Hydration thermodynamics beyond the linear response approximation.

    Science.gov (United States)

    Raineri, Fernando O

    2016-10-19

    The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A [Formula: see text] B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regard to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute
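
The linear response limit that the model generalizes can be stated compactly: when the coupling energy fluctuates as a Gaussian, the cumulant expansion of the free energy truncates exactly at second order. A numerical sketch of that baseline (in kT units, with synthetic Gaussian samples, not simulation data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear response baseline: for a Gaussian-distributed coupling energy Y
# (in kT units), -ln<exp(-Y)> = <Y> - Var(Y)/2 holds exactly, i.e. the
# cumulant generating function truncates at the second cumulant.
mu, sigma = 1.0, 0.5
Y = rng.normal(mu, sigma, size=1_000_000)

second_cumulant_estimate = mu - sigma**2 / 2      # 0.875 for these parameters
exponential_average = -np.log(np.mean(np.exp(-Y)))
print(second_cumulant_estimate, exponential_average)  # agree to sampling error
```

When an upper or lower bound restricts the fluctuations of Y, as in the paper's nonlinear model, this identity fails and higher cumulants contribute.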

  1. Bond selective chemistry beyond the adiabatic approximation

    Energy Technology Data Exchange (ETDEWEB)

    Butler, L.J. [Univ. of Chicago, IL (United States)

    1993-12-01

    One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e., the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reactions of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

  2. Cophylogeny reconstruction via an approximate Bayesian computation.

    Science.gov (United States)

    Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F

    2015-05-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
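
The approximate Bayesian computation underlying Coala can be illustrated with the generic rejection scheme on a toy model; here the event frequencies and host-parasite trees of the actual method are replaced by a single rate parameter and a count statistic:

```python
import random

random.seed(1)

def simulate(rate, n=200):
    """Toy generative model: number of events in n Bernoulli trials.
    Stands in for the cophylogeny simulator of the real method."""
    return sum(random.random() < rate for _ in range(n))

observed = simulate(0.3)  # pretend this count is the observed data summary

# ABC rejection: draw candidate rates from the prior and keep those whose
# simulated summary statistic falls within a tolerance of the observed one.
accepted = []
while len(accepted) < 500:
    candidate = random.uniform(0.0, 1.0)           # uniform prior
    if abs(simulate(candidate) - observed) <= 5:   # tolerance on the summary
        accepted.append(candidate)

posterior_mean = sum(accepted) / len(accepted)
print(posterior_mean)  # concentrates near the true rate 0.3
```

Tightening the tolerance sharpens the approximate posterior at the cost of more rejected simulations, the basic trade-off of any ABC scheme.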

  3. Coronal Loops: Evolving Beyond the Isothermal Approximation

    Science.gov (United States)

    Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.

    2002-05-01

    Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.

  4. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained on the basis of the backward transformation of the standard normal ...
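
For the Johnson S_U family, the transformation to the standard normal is an explicit arcsinh map and the backward transformation its sinh inverse. A sketch of the forward/backward pair; the parameter values are illustrative, not estimated from percentiles as in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def johnson_su_to_normal(x, gamma, delta, xi, lam):
    """Forward Johnson S_U transformation: maps an S_U-distributed
    variable x to a standard normal variable z."""
    return gamma + delta * np.arcsinh((x - xi) / lam)

# Backward transformation: generate Johnson S_U data from standard
# normals, then verify that the forward map recovers the normals exactly.
gamma, delta, xi, lam = 0.5, 1.2, 2.0, 3.0  # illustrative shape/location/scale
z = rng.normal(size=10_000)
x = xi + lam * np.sinh((z - gamma) / delta)  # S_U-distributed sample
z_back = johnson_su_to_normal(x, gamma, delta, xi, lam)
print(float(np.max(np.abs(z_back - z))))     # ~0: exact inverse pair
```

Fitting gamma, delta, xi, lam from empirical percentiles, as the article does, makes the forward map an approximate normalizer for a wide class of continuous data.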

  5. Mode-field half-widths of Gaussian approximation for the fundamental mode of two kinds of optical waveguides

    International Nuclear Information System (INIS)

    Lian-Huang, Li; Fu-Yuan, Guo

    2009-01-01

    This paper analyzes the matching efficiency between the fundamental mode of two kinds of optical waveguides and its Gaussian approximate field. It then presents a new method in which the mode-field half-width of the Gaussian approximation to the fundamental mode is defined according to the maximal matching efficiency. The relationship between the mode-field half-width of the Gaussian approximate field obtained from the maximal matching efficiency and the normalized frequency is studied; furthermore, two formulas for the mode-field half-width as a function of normalized frequency are proposed
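
A well-known closed-form example of such a half-width versus normalized frequency relation is Marcuse's fit for step-index fibres, shown here for orientation; it is not one of the two formulas proposed in the paper:

```python
import math

def marcuse_spot_size(V):
    """Marcuse's empirical fit for the Gaussian mode-field half-width w of a
    step-index fibre, normalized to the core radius a, as a function of the
    normalized frequency V (useful roughly for 0.8 < V < 2.5)."""
    return 0.65 + 1.619 * V ** -1.5 + 2.879 * V ** -6

# Near the single-mode cutoff (V = 2.405) the Gaussian half-width is
# slightly larger than the core radius.
print(round(marcuse_spot_size(2.405), 3))
```

Formulas of this kind let one estimate splice and coupling losses from V alone, which is the practical motivation for defining the half-width by maximal matching efficiency.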

  6. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    Science.gov (United States)

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.
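
The endogeneity problem these estimators address is easy to reproduce: with bivariate normal disturbances, the unobservable driving the dummy is correlated with the outcome disturbance, so naive OLS is biased. A Monte Carlo sketch (synthetic data, not the paper's experimental design):

```python
import numpy as np

rng = np.random.default_rng(7)

# True effect of the dummy is beta = 1, but the latent disturbance u that
# determines D is correlated (rho) with the outcome disturbance eps.
n, beta, rho = 100_000, 1.0, 0.6
u = rng.normal(size=n)
eps = rho * u + np.sqrt(1 - rho**2) * rng.normal(size=n)  # corr(u, eps) = rho
z = rng.normal(size=n)                                    # exogenous driver
D = (z + u > 0).astype(float)                             # endogenous dummy
y = beta * D + eps

ols = np.cov(y, D)[0, 1] / np.var(D)  # OLS slope of y on D
print(ols)  # noticeably above 1: selection on u inflates the estimate
```

Discrete factor approximations (or bivariate normal MLE) model the joint distribution of u and eps to remove exactly this bias.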

  7. Deficiency of normal galaxies among Markaryan galaxies

    International Nuclear Information System (INIS)

    Iyeveer, M.M.

    1986-01-01

    Comparison of the morphological types of Markaryan galaxies and other galaxies in the Uppsala catalog indicates a strong deficiency of normal ellipticals among the Markaryan galaxies, for which the fraction of type E galaxies is ≤1%, against 10% among the remaining galaxies. Among the Markaryan galaxies an excess of barred galaxies is observed: among the Markaryan galaxies with types Sa-Scd, approximately half or more have bars, whereas among the remaining galaxies of the same types bars are found in about 1/3

  8. The consequences of non-normality

    International Nuclear Information System (INIS)

    Hip, I.; Lippert, Th.; Neff, H.; Schilling, K.; Schroers, W.

    2002-01-01

    The non-normality of Wilson-type lattice Dirac operators has important consequences: the application of the usual concepts from textbook (hermitian) quantum mechanics should be reconsidered. This includes an appropriate definition of observables and the refinement of computational tools. We show that the truncated singular value expansion is the optimal approximation to the inverse operator D⁻¹, and we prove that due to the γ₅-hermiticity it is equivalent to γ₅ times the truncated eigenmode expansion of the hermitian Wilson-Dirac operator
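
The optimality of the truncated singular value expansion invoked here is the Eckart-Young property: keeping the k largest singular triplets gives the best rank-k approximation, with spectral-norm error equal to the first dropped singular value. A check on a generic matrix (not the Wilson-Dirac operator):

```python
import numpy as np

rng = np.random.default_rng(3)

# Eckart-Young: truncating the singular value expansion after k terms is
# the best rank-k approximation in the spectral norm, and the error equals
# the (k+1)-th singular value.
A = rng.normal(size=(8, 8))
U, s, Vt = np.linalg.svd(A)  # singular values s sorted in decreasing order

k = 4
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # truncated expansion
err = np.linalg.norm(A - A_k, 2)             # spectral-norm error
print(err, s[k])                             # the two coincide
```

For a non-normal operator this optimality holds for the singular value expansion but not for a truncated eigenmode expansion, which is the distinction the abstract draws.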

  9. The evolution of voids in the adhesion approximation

    Science.gov (United States)

    Sahni, Varun; Sathyaprakah, B. S.; Shandarin, Sergei F.

    1994-08-01

    We apply the adhesion approximation to study the formation and evolution of voids in the universe. Our simulations, carried out using 128³ particles in a cubical box with side 128 Mpc, indicate that the void spectrum evolves with time and that the mean void size in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) model with H₅₀ = 1 scales approximately as D̄(z) = D̄₀/(1+z)^(1/2), where D̄₀ ≈ 10.5 Mpc. Interestingly, we find a strong correlation between the sizes of voids and the value of the primordial gravitational potential at void centers. This observation could, in principle, pave the way toward reconstructing the form of the primordial potential from a knowledge of the observed void spectrum. Studying the void spectrum at different cosmological epochs, for spectra with a built-in k-space cutoff, we find that the number of voids in a representative volume evolves with time. The mean number of voids first increases until a maximum value is reached (indicating that the formation of cellular structure is complete), and then begins to decrease as clumps and filaments merge, leading to hierarchical clustering and the subsequent elimination of small voids. The cosmological epoch characterizing the completion of cellular structure occurs when the length scale going nonlinear approaches the mean distance between peaks of the gravitational potential. A central result of this paper is that voids can be populated by substructure such as mini-sheets and filaments which run through them. The number of such mini-pancakes that pass through a given void can be measured by the genus characteristic of the individual void, which is an indicator of its topology in initial (Lagrangian) space. Large voids have on average a larger genus measure than smaller voids, indicating more substructure within larger voids relative to smaller ones. We find that the topology of individual voids is strongly epoch dependent

  10. Hypervascular liver lesions in radiologically normal liver

    Energy Technology Data Exchange (ETDEWEB)

    Amico, Enio Campos; Alves, Jose Roberto; Souza, Dyego Leandro Bezerra de; Salviano, Fellipe Alexandre Macena; Joao, Samir Assi; Liguori, Adriano de Araujo Lima, E-mail: ecamic@uol.com.br [Hospital Universitario Onofre Lopes (HUOL/UFRN), Natal, RN (Brazil). Clinica Gastrocentro e Ambulatorios de Cirurgia do Aparelho Digestivo e de Cirurgia Hepatobiliopancreatica

    2017-09-01

    Background: Hypervascular liver lesions represent a diagnostic challenge. Aim: To identify risk factors for cancer in patients with non-hemangiomatous hypervascular hepatic lesions in radiologically normal liver. Method: This prospective study included patients with hypervascular liver lesions in radiologically normal liver. The diagnosis was made by biopsy or was presumed on the basis of radiologic stability over a follow-up period of one year. Patients with cirrhosis or with typical imaging characteristics of hemangioma were excluded. Results: Eighty-eight patients were included. The average age was 42.4 years. In most cases the lesions were single and between 2-5 cm in size. Liver biopsy was performed in approximately 1/3 of cases. The lesions were benign or most likely benign in 81.8% of cases, while cancer was diagnosed in 12.5%. Univariate analysis showed that age >45 years (p<0.001), personal history of cancer (p=0.020), presence of >3 nodules (p=0.003), and elevated alkaline phosphatase (p=0.013) were significant risk factors for cancer. Conclusion: It is safe to observe hypervascular liver lesions in normal liver in patients up to 45 years of age with normal alanine aminotransferase, up to three nodules, and no personal history of cancer. Lesion biopsy is safe in patients with atypical lesions and defines the treatment to be established for most of these patients. (author)

  11. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

    We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite-cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data

  12. Short proofs of strong normalization

    OpenAIRE

    Wojdyga, Aleksander

    2008-01-01

    This paper presents simple, syntactic strong normalization proofs for the simply-typed lambda-calculus and the polymorphic lambda-calculus (system F) with the full set of logical connectives and all the permutative reductions. The normalization proofs use translations of terms and types into systems for which the strong normalization property is already known.

  13. Interface unit

    NARCIS (Netherlands)

    Keyson, D.V.; Freudenthal, A.; De Hoogh, M.P.A.; Dekoven, E.A.M.

    2001-01-01

    The invention relates to an interface unit comprising at least a display unit for communication with a user, designed to be coupled with a control unit for one or more parameters in a living or working environment, such as the temperature setting in a house, which control unit

  14. The holonomy expansion: Invariants and approximate supersymmetry

    International Nuclear Information System (INIS)

    Jaffe, Arthur

    2000-01-01

    is differentiable in the unit (ε, λ) square, except at the origin. Using the holonomy expansion, we prove for fixed θ ∉ γ_sing that Z(ε, λ; θ) is also jointly continuous in (ε, λ) at the origin. As a consequence, if θ ∉ γ_sing, then we can interchange limits and Z(λ; θ) = lim_(ε→0) Z(ε, 0; θ). We observe that the joint continuity of Z(ε, λ; θ) in (ε, λ) is not uniform in θ, and Z(ε, λ; θ) is not jointly continuous for θ ∈ γ_sing. But the limiting function Z(λ; θ) is continuous in θ, so the ε-limit also determines Z(λ; θ) for all θ, including θ ∈ γ_sing. We use these facts to calculate Z(λ; θ). Our regularization destroys supersymmetry, but the holonomy expansion gives quantitative bounds on the error terms. (c) 2000 Academic Press, Inc

  15. Increase in the accuracy of approximating the profile of the erosion zone in planar magnetrons

    Science.gov (United States)

    Rogov, A. V.; Kapustin, Yu. V.

    2017-09-01

    It has been shown that the use of the survival function of the Weibull distribution shifted along the ordinate axis allows one to increase the accuracy of the approximation of the normalized profile of an erosion zone in the area from the axis to the maximum sputtering region compared with the previously suggested distribution function of the extremum values. The survival function of the Weibull distribution is used in the area from the maximum to the outer boundary of an erosion zone. The major advantage of using the new approximation is observed for magnetrons with a large central nonsputtered spot and for magnetrons with substantial sputtering in the paraxial zone.
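
The shifted Weibull survival function can be written down directly. A sketch with illustrative parameters; the paper fits such a function to measured magnetron erosion profiles, and its exact parametrization may differ from the form assumed here:

```python
import numpy as np

def erosion_profile(r, y0, lam, k):
    """Survival function of a Weibull distribution shifted along the
    ordinate axis, used as an approximation to the normalized erosion-zone
    profile: 1 on the axis (r = 0), decaying toward the floor y0 at large
    radius. Parameters are illustrative, not fitted magnetron data."""
    return y0 + (1.0 - y0) * np.exp(-(r / lam) ** k)

r = np.linspace(0.0, 50.0, 501)  # radial coordinate, mm (hypothetical)
profile = erosion_profile(r, y0=0.1, lam=20.0, k=2.5)
print(profile[0], float(profile[-1]))  # 1.0 on the axis, ~y0 at the edge
```

The ordinate shift y0 is what lets the fit handle magnetrons with a large central non-sputtered spot or residual sputtering in the paraxial zone.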

  16. Approximal morphology as predictor of approximal caries in primary molar teeth

    DEFF Research Database (Denmark)

    Cortes, A; Martignon, S; Qvist, V

    2018-01-01

    consent was given, participated. Upper and lower molar teeth of one randomly selected side received a 2-day temporary separation. Bitewing radiographs and silicone impressions of the interproximal area (IPA) were obtained. The one-year procedures were repeated in 52 children (84%). The morphology of the distal...... surfaces of the first molar teeth and the mesial surfaces of the second molar teeth (n=208) was scored from the occlusal aspect on images from the baseline resin models, resulting in four IPA variants: concave-concave; concave-convex; convex-concave, and convex-convex. Approximal caries on the surface...

  17. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    Science.gov (United States)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.

  18. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    International Nuclear Information System (INIS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-01-01

    In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of the particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of the second RPAs and TDAs are tested with various small molecules and show some positive results. The data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance, with a correlation coefficient similar to TDDFT but a larger negative bias. This negative bias may be induced by unaccounted ground-state correlation energy and remains to be investigated further. Overall, the r2ph-TDA is recommended for studying systems with both single and some low-lying double excitations with moderate accuracy. Expressions for excited-state property evaluations, such as 〈S^2〉, are also developed and tested.

  19. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have long been used in exact inference for contingency tables; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: it can produce much more accurate estimates in much shorter CPU time, especially for tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.
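The importance-sampling half of the method can be illustrated with a generic self-normalized estimator, which works even when the target density is known only up to a constant. The toy Gaussian densities below stand in for the contingency-table setting, which additionally requires the Markov-basis machinery described above.

```python
import math
import random

def importance_estimate(h, target_logpdf, proposal_logpdf, proposal_sample, n=50000):
    """Self-normalized importance sampling: estimate E_target[h(X)] from
    proposal draws, weighted by target/proposal density ratios."""
    xs = [proposal_sample() for _ in range(n)]
    logw = [target_logpdf(x) - proposal_logpdf(x) for x in xs]
    m = max(logw)                          # stabilize the exponentials
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * h(xi) for wi, xi in zip(w, xs)) / sum(w)

# Toy check: mean of N(1, 1) estimated with a wider N(0, 2) proposal.
# Both log-densities are written up to an additive constant on purpose:
# the self-normalization cancels unknown normalizing constants.
random.seed(1)
def target_logpdf(x):   return -0.5 * (x - 1.0) ** 2
def proposal_logpdf(x): return -0.5 * (x / 2.0) ** 2
est = importance_estimate(lambda x: x, target_logpdf, proposal_logpdf,
                          lambda: random.gauss(0.0, 2.0))
print(est)  # close to 1.0
```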

  20. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Degao; Yang, Yang; Zhang, Peng [Department of Chemistry, Duke University, Durham, North Carolina 27708 (United States); Yang, Weitao, E-mail: weitao.yang@duke.edu [Department of Chemistry and Department of Physics, Duke University, Durham, North Carolina 27708 (United States)

    2014-12-07

    In this article, we systematically develop second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of the particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N^4). The restricted versions of the second RPAs and TDAs are tested with various small molecules and show some positive results. The data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance, with a correlation coefficient similar to TDDFT but a larger negative bias. This negative bias may be induced by unaccounted ground-state correlation energy and remains to be investigated further. Overall, the r2ph-TDA is recommended for studying systems with both single and some low-lying double excitations with moderate accuracy. Expressions for excited-state property evaluations, such as 〈S^2〉, are also developed and tested.

  1. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have long been used in exact inference for contingency tables; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: it can produce much more accurate estimates in much shorter CPU time, especially for tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  2. Visual attention and flexible normalization pools

    Science.gov (United States)

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
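The core computation, divisive normalization with an attentional gain, can be sketched in a few lines. The squaring nonlinearity, semi-saturation constant, and gain values below are illustrative choices, not the model's actual parameterization.

```python
def divisive_normalization(drives, sigma=0.5, gains=None):
    """Divisive normalization: each unit's gain-scaled, squared drive is
    divided by a pool built from all units plus a semi-saturation term.
    `gains` models attentional accentuation at each location."""
    if gains is None:
        gains = [1.0] * len(drives)
    scaled = [g * d for g, d in zip(gains, drives)]    # attentional gain
    pool = sigma ** 2 + sum(s ** 2 for s in scaled)    # normalization pool
    return [s ** 2 / pool for s in scaled]

# Center and surround with equal drive: attending to the center (gain 2.0)
# boosts its normalized response relative to the surround.
baseline = divisive_normalization([1.0, 1.0])
attended = divisive_normalization([1.0, 1.0], gains=[2.0, 1.0])
print(baseline)  # equal responses
print(attended)  # center response now exceeds the surround's
```

The model's flexibility enters through the pool itself: when center and surround are judged statistically independent, the surround term simply drops out of the pool.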

  3. 26 CFR 1.985-3 - United States dollar approximate separate transactions method.

    Science.gov (United States)

    2010-04-01

    ...). For all purposes of subtitle A, this method of accounting must be used to compute the gross income... in section 989(a)) that has the dollar as its functional currency pursuant to § 1.985-1(b)(2). (2... currency (as defined in § 1.985-1(b)(2)(ii)(D)); (2) Making the adjustments necessary to conform such...

  4. SFU-driven transparent approximation acceleration on GPUs

    NARCIS (Netherlands)

    Li, A.; Song, S.L.; Wijtvliet, M.; Kumar, A.; Corporaal, H.

    2016-01-01

    Approximate computing, the technique that sacrifices certain amount of accuracy in exchange for substantial performance boost or power reduction, is one of the most promising solutions to enable power control and performance scaling towards exascale. Although most existing approximation designs

  5. Some approximate calculations in SU2 lattice mean field theory

    International Nuclear Information System (INIS)

    Hari Dass, N.D.; Lauwers, P.G.

    1981-12-01

    Approximate calculations are performed for small Wilson loops of SU(2) lattice gauge theory in the mean field approximation. Reasonable agreement is found with Monte Carlo data. Ways of improving these calculations are discussed. (Auth.)

  6. Approximate viability for nonlinear evolution inclusions with application to controllability

    Directory of Open Access Journals (Sweden)

    Omar Benniche

    2016-12-01

    Full Text Available We investigate approximate viability for a graph with respect to fully nonlinear quasi-autonomous evolution inclusions. As an application, an approximate null controllability result is given.

  7. Bicervical normal uterus with normal vagina | Okeke | Annals of ...

    African Journals Online (AJOL)

    To the best of our knowledge, only a few cases of a bicervical normal uterus with a normal vagina exist in the literature; one of the cases had an anterior‑posterior disposition. This form of uterine abnormality is not explicable by the existing classical theory of müllerian anomalies and suggests that a complex interplay of events ...

  8. Longitudinal study of serum placental GH in 455 normal pregnancies

    DEFF Research Database (Denmark)

    Chellakooty, Marla; Skibsted, Lillian; Skouby, Sven O

    2002-01-01

    women with normal singleton pregnancies at approximately 19 and 28 wk gestation. Serum placental GH concentrations were measured by a highly specific immunoradiometric assay, and fetal size was measured by ultrasound. Data on birth weight, gender, prepregnancy body mass index (BMI), parity, and smoking...

  9. Pawlak algebra and approximate structure on fuzzy lattice.

    Science.gov (United States)

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
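The classical Pawlak lower and upper approximations that this general approximation structure builds on can be stated directly in code: the lower approximation collects equivalence classes contained in the target set, the upper approximation collects classes that merely intersect it. The universe and partition below are illustrative.

```python
def rough_approximations(blocks, target):
    """Pawlak rough set approximation: `blocks` is a partition of the
    universe into equivalence classes, `target` the set to approximate."""
    target = set(target)
    lower, upper = set(), set()
    for block in blocks:
        b = set(block)
        if b <= target:      # class certainly inside the target
            lower |= b
        if b & target:       # class possibly inside the target
            upper |= b
    return lower, upper

# Universe {1..6} partitioned into indiscernibility classes; target {1, 2, 3}.
blocks = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = rough_approximations(blocks, {1, 2, 3})
print(sorted(lower))  # [1, 2]       -- elements certainly in the target
print(sorted(upper))  # [1, 2, 3, 4] -- elements possibly in the target
```

The gap between the two sets (here {3, 4}) is the boundary region, the part of the universe the partition cannot classify definitely.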

  10. Group normalization for genomic data.

    Science.gov (United States)

    Ghandi, Mahmoud; Beer, Michael A

    2012-01-01

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
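The reference-probe idea can be sketched as follows. The similarity rule (nearest control readings) and the toy data are illustrative assumptions, not the published algorithm's actual definition of the reference set.

```python
def group_normalize(control, treatment, k=3):
    """Group-Normalization-style sketch: scale each probe's treatment
    signal by the mean control signal of its k most similar reference
    probes, where similarity is judged on the control readings."""
    n = len(control)
    normalized = []
    for i in range(n):
        # Reference set: the k probes whose control response is closest
        # to probe i's, standing in for "probes with similar responses".
        refs = sorted(range(n), key=lambda j: abs(control[j] - control[i]))[:k]
        ref_mean = sum(control[j] for j in refs) / k
        normalized.append(treatment[i] / ref_mean)
    return normalized

# Hypothetical readings: the last two probes respond more strongly in the
# treatment than their locally similar reference probes predict.
control   = [1.0, 1.1, 0.9, 2.0, 2.1]
treatment = [1.0, 1.1, 0.9, 4.0, 4.2]
out = group_normalize(control, treatment)
print([round(x, 2) for x in out])  # the last two probes stand out
```

The point of normalizing against a locally matched reference set, rather than a single global factor, is that probe-specific (local) sensitivity differences cancel in the ratio.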

  11. Group normalization for genomic data.

    Directory of Open Access Journals (Sweden)

    Mahmoud Ghandi

    Full Text Available Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.

  12. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM, and approximators based on it, approximate a relation in data by interpolating between so-called support vectors: a limited number of samples that have been...

  13. Aspects of three field approximations: Darwin, frozen, EMPULSE

    International Nuclear Information System (INIS)

    Boyd, J.K.; Lee, E.P.; Yu, S.S.

    1985-01-01

    The traditional approach used to study high-energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability.

  14. Approximation Properties of Certain Summation Integral Type Operators

    Directory of Open Access Journals (Sweden)

    Patel P.

    2015-03-01

    Full Text Available In the present paper, we study approximation properties of a family of linear positive operators and establish direct results, asymptotic formula, rate of convergence, weighted approximation theorem, inverse theorem and better approximation for this family of linear positive operators.

  15. On Love's approximation for fluid-filled elastic tubes

    International Nuclear Information System (INIS)

    Caroli, E.; Mainardi, F.

    1980-01-01

    A simple procedure is set up to introduce Love's approximation for wave propagation in thin-walled fluid-filled elastic tubes. The dispersion relation for linear waves and the radial profile for fluid pressure are determined in this approximation. It is shown that the Love approximation is valid in the low-frequency regime. (author)

  16. Extending the random-phase approximation for electronic correlation energies: the renormalized adiabatic local density approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2012-01-01

    The adiabatic connection fluctuation-dissipation theorem with the random phase approximation (RPA) has recently been applied with success to obtain correlation energies of a variety of chemical and solid state systems. The main merit of this approach is the improved description of dispersive forces, while chemical bond strengths and absolute correlation energies are systematically underestimated. In this work we extend the RPA by including a parameter-free renormalized version of the adiabatic local-density (ALDA) exchange-correlation kernel. The renormalization consists of a (local) truncation of the ALDA kernel for wave vectors q > 2kF, which is found to yield excellent results for the homogeneous electron gas. In addition, the kernel significantly improves both the absolute correlation energies and atomization energies of small molecules over RPA and ALDA. The renormalization can...

  17. Comparison of spectrum normalization techniques for univariate ...

    Indian Academy of Sciences (India)

    Laser-induced breakdown spectroscopy; univariate study; normalization models; stainless steel; standard error of prediction. Abstract. Analytical performance of six different spectrum normalization techniques, namely internal normalization, normalization with total light, normalization with background along with their ...
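Two of the named normalization models reduce to simple intensity ratios, sketched below. The line intensities are hypothetical, and real LIBS processing involves background subtraction and line fitting that are omitted here.

```python
def internal_normalization(analyte_line, reference_line):
    """Internal normalization: ratio of the analyte line intensity to a
    reference (matrix) line intensity from the same spectrum."""
    return analyte_line / reference_line

def total_light_normalization(analyte_line, spectrum):
    """Normalization with total light: analyte intensity divided by the
    integrated intensity of the whole spectrum."""
    return analyte_line / sum(spectrum)

spectrum = [120.0, 80.0, 300.0, 50.0]   # hypothetical line intensities
print(internal_normalization(spectrum[0], spectrum[2]))        # 0.4
print(total_light_normalization(spectrum[0], spectrum))        # 120 / 550
```

Both ratios aim to cancel shot-to-shot fluctuations in laser energy and plasma conditions that scale the whole spectrum together.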

  18. SPOKEN CUZCO QUECHUA, UNITS 1-6.

    Science.gov (United States)

    SOLA, DONALD F.; AND OTHERS

    THE MATERIALS IN THIS VOLUME COMPRISE SIX UNITS WHICH PRESENT BASIC ASPECTS OF CUZCO QUECHUA PHONOLOGY, MORPHOLOGY, AND SYNTAX FOR THE BEGINNING STUDENT. THE SIX UNITS ARE DESIGNED FOR APPROXIMATELY 120 HOURS OF SUPERVISED CLASS WORK WITH OUTSIDE PREPARATION EXPECTED OF THE STUDENT. EACH UNIT CONSISTS OF A DIALOGUE TO BE MEMORIZED, A DIALOGUE…

  19. Non-linear adjustment to purchasing power parity: an analysis using Fourier approximations

    OpenAIRE

    Juan-Ángel Jiménez-Martín; M. Dolores Robles Fernández

    2005-01-01

    This paper estimates the dynamics of adjustment to long-run purchasing power parity (PPP) using data for 18 major bilateral US dollar exchange rates over the post-Bretton Woods period, in a non-linear framework. We use new unit root and cointegration tests that do not assume a specific non-linear adjustment process. Using a first-order Fourier approximation, we find evidence of non-linear mean reversion in deviations from both absolute and relative PPP. This first-order Fourier approximation...

  20. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b... We illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther.