WorldWideScience

Sample records for unit normal approximation

  1. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    Full Text Available In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere and study the best approximation of functions in L^p spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces in terms of a modulus of smoothness.

  2. The triangular density to approximate the normal density: decision rules-of-thumb

    International Nuclear Information System (INIS)

    Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.

    2003-01-01

    In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that produces surprisingly accurate performance in many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extension of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that use of the approximation generates. Such rules-of-thumb, which are useful in risk, reliability, and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will prompt other practitioners to explore its possible use in various domains and applications.
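    The core idea, matching the triangular density's mean and variance to the normal's, can be sketched in a few lines of Python (an illustration, not the paper's construction; the flared and uniform variants are not shown):

    ```python
    import math

    def normal_pdf(x):
        """Standard normal density."""
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def triangular_pdf(x):
        """Symmetric triangular density with mean 0 and variance 1.
        A triangular density on [-a, a] has variance a**2 / 6, so
        matching unit variance gives a = sqrt(6)."""
        a = math.sqrt(6.0)
        d = abs(x)
        return (a - d) / (a * a) if d < a else 0.0

    # Largest pointwise discrepancy on a fine grid over [-4, 4].
    grid = [i / 100.0 for i in range(-400, 401)]
    max_err = max(abs(normal_pdf(x) - triangular_pdf(x)) for x in grid)
    print(max_err)
    ```

    The matched triangular density has half-width sqrt(6) ≈ 2.449; its peak overshoots the normal's by roughly 0.009, and the largest pointwise gap (on the order of 0.03) occurs out near the shoulders.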

  3. On the approximative normal values of multivalued operators in topological vector space

    International Nuclear Information System (INIS)

    Nguyen Minh Chuong; Khuat van Ninh

    1989-09-01

    In this paper the problem of approximating normal values of multivalued linear closed operators from a topological vector Mackey space into an E-space is considered. The existence of a normal value and the convergence of the approximate values to it are proved. (author). 4 refs

  4. Padé approximant for normal stress differences in large-amplitude oscillatory shear flow

    Science.gov (United States)

    Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.

    2018-04-01

    Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
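    Padé approximants themselves are straightforward to construct from Taylor coefficients. The sketch below is generic (not the paper's [3,4] approximant for the corotational Jeffreys fluid): it solves the standard linear system for the denominator with exact rational arithmetic, assuming the system is nondegenerate.

    ```python
    from fractions import Fraction

    def pade(coeffs, m, n):
        """Compute the [m, n] Pade approximant p(x)/q(x) from Taylor
        coefficients c_0 .. c_{m+n}, normalized so q(0) = 1.

        Solves  sum_j q_j * c_{m+k-j} = 0  for k = 1..n, then recovers
        p_i = sum_{j<=i} q_j * c_{i-j}.
        """
        c = [Fraction(x) for x in coeffs]
        # Build the n-by-n system A * [q_1 .. q_n]^T = b.
        A = [[c[m + k - j] if 0 <= m + k - j < len(c) else Fraction(0)
              for j in range(1, n + 1)] for k in range(1, n + 1)]
        b = [-c[m + k] for k in range(1, n + 1)]
        # Gaussian elimination with partial pivoting (exact arithmetic).
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, n):
                f = A[r][col] / A[col][col]
                for cc in range(col, n):
                    A[r][cc] -= f * A[col][cc]
                b[r] -= f * b[col]
        q = [Fraction(0)] * n
        for r in range(n - 1, -1, -1):
            s = b[r] - sum(A[r][cc] * q[cc] for cc in range(r + 1, n))
            q[r] = s / A[r][r]
        q = [Fraction(1)] + q
        p = [sum(q[j] * c[i - j] for j in range(min(i, n) + 1))
             for i in range(m + 1)]
        return p, q

    # [1,1] Pade of exp(x) from coefficients 1, 1, 1/2:
    p, q = pade([1, 1, Fraction(1, 2)], 1, 1)
    print(p, q)   # (1 + x/2) / (1 - x/2)
    ```

    For exp(x) this reproduces the classical results, e.g. the [2,2] approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12).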

  5. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

    The approximation of the normal distribution by means of a chaotic expression is achieved using the Weierstrass function; for a certain set of parameters, the density of the derived recurrence provides a good approximation of the bell curve.

  6. A simple approximation to the bivariate normal distribution with large correlation coefficient

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the

  7. Environmental assessment: Transfer of normal and low-enriched uranium billets to the United Kingdom, Hanford Site, Richland, Washington

    International Nuclear Information System (INIS)

    1995-11-01

    Under the auspices of an agreement between the U.S. and the United Kingdom, the U.S. Department of Energy (DOE) has an opportunity to transfer approximately 710,000 kilograms (1,562,000 pounds) of unneeded normal and low-enriched uranium (LEU) to the United Kingdom, thereby reducing long-term surveillance and maintenance burdens at the Hanford Site. The material, in the form of billets, is controlled by DOE's Defense Programs and is presently stored as surplus material in the 300 Area of the Hanford Site. The United Kingdom has expressed a need for the billets. The surplus uranium billets are currently stored in wooden shipping containers in secured facilities in the 300 Area at the Hanford Site (the 303-B and 303-G storage facilities). There are 482 billets at an enrichment level (based on uranium-235 content) of 0.71 weight-percent. This is normal uranium; that is, uranium containing uranium-235 at 0.711 percent by weight, as it occurs in nature. There are 3,242 billets at an enrichment level of 0.95 weight-percent (i.e., low-enriched uranium). This inventory represents a total of approximately 532 curies. The facilities are routinely monitored. The dose rate on contact with a uranium billet is approximately 8 millirem per hour. The dose rate on contact with a wooden shipping container holding 4 billets is approximately 4 millirem per hour. The dose rate at the exterior of the storage facilities is indistinguishable from background levels.

  8. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and multilevel Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g. modelling groundwater flow. Typical models use log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high-frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low-frequency error. This talk will address how the total error can be estimated by the computable error.

  9. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    Full Text Available This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform a selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, relative to the popular simple average, weighted average, and Bagging methods.
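    The constrained quadratic program for fusion weights has a closed form in the two-learner case. The following is a static, two-model analogue of the adaptive scheme described above (the error model and sample sizes are illustrative assumptions, not the paper's experiments):

    ```python
    import random

    def fusion_weights(m11, m22, m12):
        """Weights minimizing the combined mean-squared error of two
        predictors, subject to w1 + w2 = 1 (closed-form solution of
        the constrained quadratic program min w' M w)."""
        denom = m11 + m22 - 2.0 * m12
        w1 = (m22 - m12) / denom
        return w1, 1.0 - w1

    random.seed(42)
    # Errors of two hypothetical component approximators on 1000 points.
    e1 = [random.gauss(0.0, 1.0) for _ in range(1000)]
    e2 = [0.3 * a + random.gauss(0.0, 0.7) for a in e1]  # correlated errors

    m11 = sum(a * a for a in e1) / len(e1)               # second moments
    m22 = sum(b * b for b in e2) / len(e2)
    m12 = sum(a * b for a, b in zip(e1, e2)) / len(e1)

    w1, w2 = fusion_weights(m11, m22, m12)
    mse_combo = sum((w1 * a + w2 * b) ** 2 for a, b in zip(e1, e2)) / len(e1)
    print(w1, w2, mse_combo)
    ```

    Because the weights minimize the combined quadratic form over all convex-like combinations (including the endpoints (1, 0) and (0, 1)), the fused MSE can never exceed that of the better single model on the same sample.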

  10. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-01

    log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high-frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible

  11. Design of reciprocal unit based on the Newton-Raphson approximation

    DEFF Research Database (Denmark)

    Gundersen, Anders Torp; Winther-Almstrup, Rasmus; Boesen, Michael

    A design of a reciprocal unit based on Newton-Raphson approximation is described and implemented. We present two different designs for single precision; one of them is extremely fast, but the trade-off is an increase in area. The solution behind the fast design is that the design is fully...
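    The Newton-Raphson reciprocal iteration behind such units is x_{k+1} = x_k(2 − d·x_k), which roughly doubles the number of correct bits per step. A software sketch (using the classic 48/17 − 32/17·d seed for operands normalized to [0.5, 1); actual hardware designs differ in lookup tables and pipelining, which is where the speed/area trade-off arises):

    ```python
    def reciprocal(d, iterations=4):
        """Approximate 1/d for d normalized to [0.5, 1) using the
        Newton-Raphson iteration x_{k+1} = x_k * (2 - d * x_k).
        The relative error squares at each step, so a coarse seed plus
        a handful of iterations reaches double precision."""
        x = 48.0 / 17.0 - (32.0 / 17.0) * d   # classic linear seed
        for _ in range(iterations):
            x = x * (2.0 - d * x)
        return x

    print(reciprocal(0.8))   # close to 1.25
    ```

    With the 48/17 seed the initial relative error is at most 1/17, so four iterations suffice for IEEE double precision.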

  12. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects].

    Science.gov (United States)

    Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J

    Among the different techniques for motor unit number estimation (MUNE) is the statistical (Poisson) one, in which the activation of motor units is carried out by electrical stimulation and the estimation is performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken to provide an introduction to the Poisson MUNE technique, offering a comprehensible view of its methodology, and to obtain normal values in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for the whole sample was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group. MUNE correlated more strongly with age than CMAP amplitude did (0.5002 and 0.4142, respectively), in keeping with the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.

  13. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf
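    The variance-reduction idea of unit stratified sampling can be illustrated independently of the stochastic-programming setting (a generic sketch, not the paper's estimator; the integrand and sample size are arbitrary choices):

    ```python
    import random

    def plain_mc(f, n, rng):
        """Plain Monte Carlo estimate of the integral of f over (0, 1)."""
        return sum(f(rng.random()) for _ in range(n)) / n

    def stratified_mc(f, n, rng):
        """One uniform draw inside each of n equal-width strata of (0, 1).
        Sampling every stratum exactly once removes the between-strata
        component of the Monte Carlo variance."""
        return sum(f((i + rng.random()) / n) for i in range(n)) / n

    rng = random.Random(1)
    f = lambda x: x * x            # exact integral over (0, 1) is 1/3
    est_plain = plain_mc(f, 1000, rng)
    est_strat = stratified_mc(f, 1000, rng)
    print(est_plain, est_strat)
    ```

    For a smooth integrand the stratified error decays like n^(-3/2) rather than the plain n^(-1/2), which is visible even at this modest sample size.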

  14. Compact quantum group C*-algebras as Hopf algebras with approximate unit

    International Nuclear Information System (INIS)

    Do Ngoc Diep; Phung Ho Hai; Kuku, A.O.

    1999-04-01

    In this paper, we construct and study the representation theory of a Hopf C*-algebra with approximate unit, which constitutes a quantum analogue of a compact group C*-algebra. The construction is done by first introducing a convolution product on an arbitrary Hopf algebra H with integral, and then constructing the L^2- and C*-envelopes of H (with the new convolution product) when H is a compact Hopf *-algebra. (author)

  15. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792
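    The hierarchical charge partitioning idea, replacing distant charge clusters by a small number of pseudo-charges, can be sketched without the GPU or ALPB machinery. The toy below is a one-level, monopole-only approximation of the bare Coulomb potential (the published method uses multiple levels and a screened solvent model; the geometry and threshold here are illustrative assumptions):

    ```python
    import math
    import random

    def exact_potential(point, charges):
        """Direct sum of q / r over all point charges (Coulomb constant
        and solvent screening omitted for clarity)."""
        px, py, pz = point
        total = 0.0
        for q, x, y, z in charges:
            r = math.sqrt((px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2)
            total += q / r
        return total

    def hcp_potential(point, clusters, threshold):
        """HCP-style approximation: a cluster farther than `threshold`
        from the evaluation point is replaced by its net charge at its
        centroid; near clusters are summed exactly."""
        px, py, pz = point
        total = 0.0
        for cluster in clusters:
            qs = sum(q for q, *_ in cluster)
            cx = sum(x for _, x, _, _ in cluster) / len(cluster)
            cy = sum(y for _, _, y, _ in cluster) / len(cluster)
            cz = sum(z for _, _, _, z in cluster) / len(cluster)
            d = math.sqrt((px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2)
            if d > threshold:
                total += qs / d                           # far: pseudo-charge
            else:
                total += exact_potential(point, cluster)  # near: exact sum
        return total

    rng = random.Random(7)
    # Ten compact clusters of 50 positive charges, centres spread on a line.
    clusters = [[(rng.uniform(0.0, 1.0),
                  10.0 * c + rng.gauss(0.0, 0.3),
                  rng.gauss(0.0, 0.3),
                  rng.gauss(0.0, 0.3))
                 for _ in range(50)]
                for c in range(10)]
    point = (0.0, 0.0, 5.0)
    exact = exact_potential(point, [ch for cl in clusters for ch in cl])
    approx = hcp_potential(point, clusters, threshold=8.0)
    print(exact, approx)
    ```

    The near/far split is what makes the method both accurate and parallel-friendly: each cluster's contribution is independent, so the loop maps naturally onto GPU threads.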

  16. Persistence and failure of mean-field approximations adapted to a class of systems of delay-coupled excitable units

    Science.gov (United States)

    Franović, Igor; Todorović, Kristina; Vasović, Nebojša; Burić, Nikola

    2014-02-01

    We consider the approximations behind the typical mean-field model derived for a class of systems made up of type II excitable units influenced by noise and coupling delays. The formulation of the two approximations, referred to as the Gaussian and the quasi-independence approximation, as well as the fashion in which their validity is verified, are adapted to reflect the essential properties of the underlying system. It is demonstrated that the failure of the mean-field model associated with the breakdown of the quasi-independence approximation can be predicted by the noise-induced bistability in the dynamics of the mean-field system. As for the Gaussian approximation, its violation is related to the increase of noise intensity, but the actual condition for failure can be cast in qualitative, rather than quantitative terms. We also discuss how the fulfillment of the mean-field approximations affects the statistics of the first return times for the local and global variables, further exploring the link between the fulfillment of the quasi-independence approximation and certain forms of synchronization between the individual units.

  17. 77 FR 38857 - Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal...

    Science.gov (United States)

    2012-06-29

    Draft regulatory guide (DG) DG-1280, ``Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal Atmosphere Cleanup Systems in Light-Water-Cooled Nuclear Power Plants.''

  18. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    Science.gov (United States)

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  19. Midwives' experiences of facilitating normal birth in an obstetric-led unit: a feminist perspective.

    LENUS (Irish Health Repository)

    Keating, Annette

    2012-01-31

    OBJECTIVE: to explore midwives' experiences of facilitating normal birth in an obstetric-led unit. DESIGN: a feminist approach using semi-structured interviews focusing on midwives' perceptions of normal birth and their ability to facilitate this birth option in an obstetric-led unit. SETTING: Ireland. PARTICIPATION: a purposeful sample of 10 midwives with 6-30 years of midwifery experience. All participants had worked for a minimum of 6 years in a labour ward setting, and had been in their current setting for the previous 2 years. FINDINGS: the midwives' narratives related to the following four concepts of patriarchy: 'hierarchical thinking'...

  20. A parallel approximate string matching under Levenshtein distance on graphics processing units using warp-shuffle operations.

    Directory of Open Access Journals (Sweden)

    ThienLuan Ho

    Full Text Available Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using the warp-shuffle operation instead of accessing shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results for real DNA packages revealed that the proposed algorithm and its implementation achieved speed-ups of up to 122.64 and 1.53 times over a sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively.
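    The underlying k-differences recurrence (Sellers' dynamic program, whose table the GPU kernel parallelizes) looks like this on a CPU; a plain sketch, not the warp-shuffle implementation:

    ```python
    def approx_match(pattern, text, k):
        """Report 1-based end positions in `text` where `pattern`
        matches with at most k differences (insertions, deletions,
        substitutions).  Row 0 stays at cost 0 so a match may start
        anywhere in the text.  Plain O(m*n) time, O(m) space."""
        m = len(pattern)
        prev = list(range(m + 1))   # column for the empty text prefix
        hits = []
        for j, ch in enumerate(text, start=1):
            curr = [0] * (m + 1)
            for i in range(1, m + 1):
                cost = 0 if pattern[i - 1] == ch else 1
                curr[i] = min(prev[i - 1] + cost,  # match / substitution
                              prev[i] + 1,         # insertion in text
                              curr[i - 1] + 1)     # deletion from text
            if curr[m] <= k:
                hits.append(j)
            prev = curr
        return hits

    print(approx_match("abc", "xabcx", 0))   # [4]
    ```

    Each column depends only on the previous one, which is exactly the data-sharing pattern the paper moves into warp-shuffle registers instead of shared memory.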

  1. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Full Text Available Exponential Smooth Transition Autoregressive (ESTAR) models can capture nonlinear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear nonstationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases. However, Cook's test statistics are oversized. Researchers have found that using conventional tests is risky, though the best performance among them is achieved with a heteroscedasticity-consistent covariance matrix estimator (HCCME). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices is derived, and results are reported for various sample sizes in which the size distortion is reduced. The properties of estimates of ESTAR models are investigated when errors are assumed non-normal. We compare results obtained through nonlinear least squares fitting with those from quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.
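    The nonlinear adjustment that ESTAR models capture is easy to see from the noise-free skeleton of the process (an illustrative simulation; the parameter values are arbitrary, not taken from the paper):

    ```python
    import math

    def estar_skeleton(y0, gamma, theta, steps):
        """Noise-free skeleton of an ESTAR process,
            y_t = y_{t-1} + gamma * y_{t-1} * (1 - exp(-theta * y_{t-1}**2)).
        Far from equilibrium the transition weight approaches 1 and the
        process mean-reverts; near zero the weight vanishes and the
        process behaves like a unit root."""
        y = y0
        path = [y]
        for _ in range(steps):
            y = y + gamma * y * (1.0 - math.exp(-theta * y * y))
            path.append(y)
        return path

    path = estar_skeleton(y0=5.0, gamma=-0.5, theta=1.0, steps=50)
    print(path[0], path[-1])
    ```

    With gamma in (-2, 0) the skeleton contracts toward zero, but ever more slowly as it approaches the equilibrium, which is why linear unit-root tests lose power against this alternative.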

  2. The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss.

    Science.gov (United States)

    Bull, Rebecca; Marschark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A

    2018-06-01

    Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM of CHL were significantly poorer compared to CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study add? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children. The study addresses recent issues regarding the contradictory findings of the relationship of the approximate number system to math ability.

  3. Simulation of mineral dust aerosol with Piecewise Log-normal Approximation (PLA) in CanAM4-PAM

    Directory of Open Access Journals (Sweden)

    Y. Peng

    2012-08-01

    Full Text Available A new size-resolved dust scheme based on the numerical method of piecewise log-normal approximation (PLA) was developed and implemented in the fourth generation of the Canadian Atmospheric Global Climate Model with the PLA Aerosol Model (CanAM4-PAM). The total simulated annual global dust emission is 2500 Tg yr−1, and the dust mass load is 19.3 Tg for year 2000. Both are consistent with estimates from other models. Results from simulations are compared with multiple surface measurements near and away from dust source regions, validating the generation, transport and deposition of dust in the model. Most discrepancies between model results and surface measurements are due to unresolved aerosol processes; biases in long-range transport also contribute. Radiative properties of dust aerosol are derived from approximated parameters in two size modes using Mie theory. The simulated aerosol optical depth (AOD) is compared with satellite and surface remote sensing measurements and shows general agreement in terms of the dust distribution around sources. The model yields a dust AOD of 0.042 and a dust aerosol direct radiative forcing (ADRF) of −1.24 W m−2, which are consistent with model estimates from other studies.

  4. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    Science.gov (United States)

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

    Models based on ordinary differential equations (ODEs) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard nonlinear regression, but information can often be gained from between-subject variability. This makes it natural to use mixed-effects models to estimate population parameters. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations), devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model's quality of fit. First, we evaluate the properties of this algorithm in pharmacokinetics models and compare it with the FOCE and MCMC algorithms in simulations. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV-infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
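    The normal approximation of the posterior that this approach exploits is, in its simplest one-dimensional form, a Laplace approximation: Newton's method locates the MAP, and the negative inverse curvature of the log-posterior there gives the approximating variance. A generic sketch (not NIMROD's ODE machinery; an unnormalized Beta density serves as a stand-in posterior):

    ```python
    import math

    def laplace_approximation(log_post, x0, h=1e-5, iters=50):
        """Normal (Laplace) approximation to a 1-D posterior: find the
        MAP by Newton's method on the log-posterior, then return
        (MAP, variance) with variance = -1 / second derivative there.
        Derivatives are taken by central finite differences."""
        def d1(x):
            return (log_post(x + h) - log_post(x - h)) / (2.0 * h)
        def d2(x):
            return (log_post(x + h) - 2.0 * log_post(x) + log_post(x - h)) / (h * h)
        x = x0
        for _ in range(iters):
            step = d1(x) / d2(x)
            x -= step
            if abs(step) < 1e-12:
                break
        return x, -1.0 / d2(x)

    # Unnormalized Beta(40, 20) log-density as a stand-in log-posterior.
    a, b = 40.0, 20.0
    log_post = lambda t: (a - 1) * math.log(t) + (b - 1) * math.log(1 - t)
    mode, var = laplace_approximation(log_post, x0=0.5)
    print(mode, var)
    ```

    For this log-concave example the mode lands at (a−1)/(a+b−2) and the Laplace variance matches the analytic curvature, illustrating why the approximation works well for unimodal, roughly symmetric posteriors.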

  5. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian systems by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for

  6. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norms. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

  7. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  8. Complementary-relationship-based 30 year normals (1981-2010) of monthly latent heat fluxes across the contiguous United States

    Science.gov (United States)

    Szilagyi, Jozsef

    2015-11-01

    Thirty year normal (1981-2010) monthly latent heat fluxes (ET) over the conterminous United States were estimated by a modified Advection-Aridity model from North American Regional Reanalysis (NARR) radiation and wind as well as Parameter-Elevation Regressions on Independent Slopes Model (PRISM) air and dew-point temperature data. Mean annual ET values were calibrated with PRISM precipitation (P) and validated against United States Geological Survey runoff (Q) data. At the six-digit Hydrologic Unit Code level (sample size of 334) the estimated 30 year normal runoff (P - ET) had a bias of 18 mm yr-1, a root-mean-square error of 96 mm yr-1, and a linear correlation coefficient value of 0.95, making the estimates on par with the latest Land Surface Model results but without the need for soil and vegetation information or any soil moisture budgeting.

  9. An Integrable Approximation for the Fermi Pasta Ulam Lattice

    Science.gov (United States)

    Rink, Bob

    This contribution presents a review of results obtained from computations of approximate equations of motion for the Fermi-Pasta-Ulam lattice. These approximate equations are obtained as a finite-dimensional Birkhoff normal form. It turns out that in many cases, the Birkhoff normal form is suitable for application of the KAM theorem. In particular, this proves Nishida's 1971 conjecture stating that almost all low-energetic motions of the anharmonic Fermi-Pasta-Ulam lattice with fixed endpoints are quasi-periodic. The proof is based on the formal Birkhoff normal form computations of Nishida, the KAM theorem and discrete symmetry considerations.

  10. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
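The paper's Stirling-based approximation is not reproduced in the abstract, but the objects involved can be sketched: an exact negative binomial quantile computed by direct CDF summation, alongside a plain normal-approximation quantile using the same mean-dispersion parameterization (variance mu + mu^2/k). All parameter values below are illustrative:

```python
import math
from statistics import NormalDist

def nb_pmf(x, mu, k):
    # negative binomial with mean mu and dispersion k, so var = mu + mu^2/k
    lg = (math.lgamma(x + k) - math.lgamma(k) - math.lgamma(x + 1)
          + k * math.log(k / (k + mu)) + x * math.log(mu / (k + mu)))
    return math.exp(lg)

def nb_quantile(p, mu, k):
    """Smallest count x whose CDF reaches p, by direct summation."""
    cdf, x = 0.0, 0
    while cdf + nb_pmf(x, mu, k) < p:
        cdf += nb_pmf(x, mu, k)
        x += 1
    return x

def nb_quantile_normal(p, mu, k):
    """Plain normal approximation to the same quantile (no corrections)."""
    z = NormalDist().inv_cdf(p)
    return mu + z * math.sqrt(mu + mu * mu / k)

# illustrative fiber-count settings: mean count 25, dispersion 50
lo, hi = nb_quantile(0.025, 25, 50), nb_quantile(0.975, 25, 50)
```

The gap between `nb_quantile` and `nb_quantile_normal` at the upper tail is exactly the kind of discrepancy the paper's refined approximation is designed to close.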

  11. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations that are free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
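The specific DSR expansions are not reproduced here, but the key construction, a Padé approximant whose denominator is one order lower than its numerator, can be illustrated on the square root that underlies such wavefield extrapolators. The [2/1] approximant below is a generic textbook example, not the paper's formula:

```python
import math

def sqrt_taylor2(x):
    # second-order Taylor expansion of sqrt(1 + x) about x = 0
    return 1.0 + x / 2.0 - x * x / 8.0

def sqrt_pade21(x):
    # [2/1] Pade approximant of sqrt(1 + x): numerator one order higher
    # than the denominator, matching the Taylor series through x^3
    return (1.0 + x + x * x / 8.0) / (1.0 + x / 2.0)

x = 1.0
err_taylor = abs(sqrt_taylor2(x) - math.sqrt(1.0 + x))
err_pade = abs(sqrt_pade21(x) - math.sqrt(1.0 + x))
```

Even at x = 1, far from the expansion point, the Padé error is more than an order of magnitude smaller than the Taylor error, which is the practical appeal of rational approximants in this setting.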

  13. Clinical outcomes of the first midwife-led normal birth unit in China: a retrospective cohort study.

    Science.gov (United States)

    Cheung, Ngai Fen; Mander, Rosemary; Wang, Xiaoli; Fu, Wei; Zhou, Hong; Zhang, Liping

    2011-10-01

    to report the clinical outcomes of the first six months of operation of an innovative midwife-led normal birth unit (MNBU) in China in 2008, aiming to facilitate normal birth and enhance midwifery practice. an urban hospital with 2000-3000 deliveries per year. this study was part of a major action research project that led to implementation of the MNBU. A retrospective cohort and a questionnaire survey were used. The data were analysed thematically. the outcomes of the first 226 women accessing the MNBU were compared with a matched retrospective cohort of 226 women accessing standard care. In total, 128 participants completed a satisfaction questionnaire before discharge. mode of birth and model of care. the vaginal birth rate was 87.6% in the MNBU compared with 58.8% in the standard care unit. All women who accessed the MNBU were supported by both a midwife and a birth companion, referred to as 'two-to-one' care. None of the women labouring in the standard care unit were identified as having a birth companion. the concept of 'two-to-one' care emerged as fundamental to women's experiences and utilisation of midwives' skills to promote normal birth and decrease the likelihood of a caesarean section. the MNBU provides an environment where midwives can practice to the full extent of their role. The high vaginal birth rate in the MNBU indicates the potential of this model of care to reduce obstetric intervention and increase women's satisfaction with care within a context of extraordinarily high caesarean section rates. midwife-led care implies a separation of obstetric care from maternity care, which has been advocated in many European countries. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. The Normal Distribution

    Indian Academy of Sciences (India)

    An optimal way of choosing sample size in an opinion poll is indicated using the normal distribution. Introduction. In this article, the ubiquitous normal distribution is introduced as a convenient approximation for computing binomial probabilities for large values of n. Stirling's formula and the DeMoivre-Laplace theorem ...
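The DeMoivre-Laplace approximation mentioned above is easy to check numerically. A minimal sketch comparing the exact binomial CDF with its normal approximation, using the usual continuity correction:

```python
from math import comb, sqrt
from statistics import NormalDist

def binom_cdf(k, n, p):
    # exact binomial CDF by direct summation
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def demoivre_laplace_cdf(k, n, p):
    # DeMoivre-Laplace: Binomial(n, p) ~ Normal(np, np(1-p)),
    # with the continuity correction k -> k + 1/2
    return NormalDist(n * p, sqrt(n * p * (1 - p))).cdf(k + 0.5)

exact = binom_cdf(55, 100, 0.5)
approx = demoivre_laplace_cdf(55, 100, 0.5)
```

For n = 100 the two values already agree to about four decimal places, which is why the normal approximation is the standard tool for sample-size calculations in opinion polls.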

  15. Pore size determination using normalized J-function for different hydraulic flow units

    Directory of Open Access Journals (Sweden)

    Ali Abedini

    2015-06-01

    Pore size determination of hydrocarbon reservoirs is one of the main challenging areas in reservoir studies. Precise estimation of this parameter improves reservoir simulation, process evaluation, and further forecasting of reservoir behavior. Hence, it is of great importance to estimate the pore size of reservoir rocks with appropriate accuracy. In the present study, a modified J-function was developed and applied to determine the pore radius in one of the hydrocarbon reservoir rocks located in the Middle East. Capillary pressure data vs. water saturation (Pc-Sw) as well as routine reservoir core analysis, including porosity (φ) and permeability (k), were used to develop the J-function. First, the normalized porosity (φz), the rock quality index (RQI), and the flow zone indicator (FZI) concepts were used to categorize all data into discrete hydraulic flow units (HFUs) containing unique pore geometry and bedding characteristics. Thereafter, the modified J-function was used to normalize all capillary pressure curves corresponding to each of the predetermined HFUs. The results showed that the reservoir rock was classified into five separate rock types with definite HFUs and reservoir pore geometry. Eventually, the pore radius for each of these HFUs was determined using an equation obtained from the normalized J-function corresponding to each HFU. The proposed equation is a function of reservoir rock characteristics including φz, FZI, the lithology index (J*), and the pore size distribution index (ɛ). Using this methodology, the reservoir under study was classified into five discrete HFUs with unique equations for permeability, the normalized J-function, and pore size. The proposed technique can be applied to any reservoir to determine the pore size of the reservoir rock, especially reservoirs with a high degree of heterogeneity in rock properties.
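The φz, RQI, and FZI quantities used above have standard definitions in the flow-unit literature (Amaefule-type definitions assumed here, with k in mD and φ as a fraction). A minimal sketch of the rock-typing indicators on a hypothetical core plug:

```python
import math

def rock_typing(k_md, phi):
    """Flow-unit indicators, assuming the conventional definitions:
    RQI = 0.0314 * sqrt(k/phi)  (microns, k in mD, phi a fraction),
    phi_z = phi / (1 - phi),  FZI = RQI / phi_z."""
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    fzi = rqi / phi_z
    return rqi, phi_z, fzi

# hypothetical core plug: 120 mD permeability, 18% porosity
rqi, phi_z, fzi = rock_typing(120.0, 0.18)
```

Samples with similar FZI values are grouped into one HFU, and each HFU then gets its own normalized J-function fit.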

  16. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

    SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.

  17. Approximate reflection coefficients for a thin VTI layer

    KAUST Repository

    Hao, Qi

    2017-09-18

    We present an approximate method to derive simple expressions for the reflection coefficients of P- and SV-waves for a thin transversely isotropic layer with a vertical symmetry axis (VTI) embedded in a homogeneous VTI background. The layer thickness is assumed to be much smaller than the wavelengths of the P- and SV-waves inside it. The exact reflection and transmission coefficients are derived by the propagator matrix method. In the case of normal incidence, the exact reflection and transmission coefficients are expressed in terms of the impedances of vertically propagating P- and S-waves. For subcritical incidence, the approximate reflection coefficients are expressed in terms of the contrast in the VTI parameters between the layer and the background. Numerical examples are designed to analyze the reflection coefficients at normal and oblique incidence, and to investigate the influence of transverse isotropy on the reflection coefficients. Despite some numerical error, the approximate formulae are sufficiently simple to qualitatively analyze the variation of the reflection coefficients with the angle of incidence.
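The thin-layer formulae themselves are not reproduced in the abstract, but the impedance form of the normal-incidence coefficient it refers to is the standard two-medium expression. A minimal sketch with illustrative densities and velocities (not values from the paper):

```python
def normal_incidence_reflection(rho1, v1, rho2, v2):
    # standard normal-incidence reflection coefficient, built from the
    # vertical impedances Z = rho * v of the two media
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# illustrative shale-over-sand contrast (g/cc, m/s)
r = normal_incidence_reflection(2.40, 3300.0, 2.65, 4200.0)
```

Note the two sanity properties: zero contrast gives zero reflection, and swapping the media flips the sign of the coefficient.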

  18. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.

  19. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    Science.gov (United States)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  20. PWL approximation of nonlinear dynamical systems, part I: structural stability

    International Nuclear Information System (INIS)

    Storace, M; De Feo, O

    2005-01-01

    This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes the approximation method and applies it to some particularly significant dynamical systems (topological normal forms). The structural stability of the PWL approximations of such systems is investigated through a bifurcation analysis (via continuation methods)
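As a one-dimensional illustration of the PWL idea (a generic interpolation sketch, not the paper's circuit-oriented simplicial construction), a nonlinear function can be replaced by its piecewise-linear interpolant on a uniform grid:

```python
import math

def pwl_approx(f, a, b, n):
    """Piecewise-linear interpolant of f on [a, b] over n uniform segments."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]

    def g(x):
        if x <= a:
            return ys[0]
        if x >= b:
            return ys[-1]
        i = min(int((x - a) / (b - a) * n), n - 1)  # segment index
        t = (x - xs[i]) / (xs[i + 1] - xs[i])       # local coordinate in [0, 1]
        return (1.0 - t) * ys[i] + t * ys[i + 1]

    return g

g = pwl_approx(math.sin, 0.0, math.pi, 64)
max_err = max(abs(g(0.001 * j) - math.sin(0.001 * j)) for j in range(3000))
```

For a twice-differentiable f the interpolation error is bounded by h²·max|f''|/8 per segment, so refining the grid drives the PWL model toward the original vector field; the structural-stability question studied in the paper is whether the approximated dynamics preserve the bifurcation behavior of the original system.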

  1. The application of the piecewise linear approximation to the spectral neighborhood of soil line for the analysis of the quality of normalization of remote sensing materials

    Science.gov (United States)

    Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.

    2017-04-01

    The concept of the soil line can be used to describe the temporal distribution of spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply the temporal soil line (TSL). In order to create TSLs for 8000 regular lattice points over the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014, after applying a certain transformation to them. As Landsat images are matrices of spectral brightness values, this transformation is a normalization of the matrices. There are several methods of normalization that move, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of the soil line in order to assess the quality of normalization mathematically. This approach allowed us to rank normalization methods according to their quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create maps of the distribution of the a and b coefficients of the TSL. The map of the b coefficient is characterized by a high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).

  2. Normal foot and ankle

    International Nuclear Information System (INIS)

    Weissman, S.D.

    1989-01-01

    The foot may be thought of as a bag of bones tied tightly together and functioning as a unit. The bones are expected to maintain their alignment without causing symptomatology to the patient. The author discusses a normal radiograph. The bones must have normal shape and normal alignment. The density of the soft tissues should be normal, and there should be no fractures, tumors, or foreign bodies.

  3. Major Accidents (Gray Swans) Likelihood Modeling Using Accident Precursors and Approximate Reasoning.

    Science.gov (United States)

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2015-07-01

    Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well-established, partly due to the complexity of such accidents and partly due to low probabilities involved. The issue of low probabilities normally arises from the scarcity of major accidents' relevant data since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.

  4. Nuclear data processing, analysis, transformation and storage with Pade-approximants

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.

    1992-01-01

    A method is described to generate rational approximants of high order with applications to neutron data handling. The problems considered are: the approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing one to compute the error at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that experimental errors are independent and normally distributed; a method of simultaneous generation of a few rational approximants with an identical set of poles; functionals other than LSM; and two-dimensional approximation. (orig.)

  5. PWL approximation of nonlinear dynamical systems, part II: identification issues

    International Nuclear Information System (INIS)

    De Feo, O; Storace, M

    2005-01-01

    This paper and its companion address the problem of the approximation/identification of nonlinear dynamical systems depending on parameters, with a view to their circuit implementation. The proposed method is based on a piecewise-linear approximation technique. In particular, this paper describes a black-box identification method based on state space reconstruction and PWL approximation, and applies it to some particularly significant dynamical systems (two topological normal forms and the Colpitts oscillator)

  6. Multidimensional stochastic approximation using locally contractive functions

    Science.gov (United States)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
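A scalar Robbins-Monro sketch (one-dimensional rather than multidimensional, with an illustrative contractive map instead of the mixture-estimation application; gains and noise level are assumptions):

```python
import random

def robbins_monro(F, x0, n_steps, noise_sd=0.1, seed=0):
    """Robbins-Monro iteration toward the fixed point of a contractive F:
    x_{n+1} = x_n + a_n * (Y_n - x_n), where Y_n is a noisy observation of
    F(x_n) and the gains a_n = 1/(n+1) satisfy sum a_n = inf, sum a_n^2 < inf."""
    rng = random.Random(seed)
    x = x0
    for n in range(n_steps):
        y = F(x) + rng.gauss(0.0, noise_sd)
        a = 1.0 / (n + 1)
        x += a * (y - x)
    return x

# contractive map F(x) = 0.5*x + 1 has the unique fixed point x* = 2
x = robbins_monro(lambda t: 0.5 * t + 1.0, x0=0.0, n_steps=20000)
```

Despite never observing F exactly, the iterate settles near the fixed point because the decaying gains average out the observation noise.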

  7. Coefficients Calculation in Pascal Approximation for Passive Filter Design

    Directory of Open Access Journals (Sweden)

    George B. Kasapoglu

    2018-02-01

    The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has a non-equiripple magnitude, in contrast to most well-known approximations, such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method that calculates the coefficients of the Pascal function. Two examples are presented for the passive design to illustrate the advantages and the disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables, which are created to define the normalized values of these elements for the Pascal approximation, as Zverev had done for the Chebyshev, elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.

  8. EnviroAtlas - Average Direct Normal Solar resources kWh/m2/Day by 12-Digit HUC for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — The annual average direct normal solar resources by 12-Digit Hydrologic Unit (HUC) was estimated from maps produced by the National Renewable Energy Laboratory for...

  9. Nonlinear Schroedinger Approximations for Partial Differential Equations with Quadratic and Quasilinear Terms

    Science.gov (United States)

    Cummings, Patrick

    We consider the approximation of solutions of two complicated physical systems via the nonlinear Schrödinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have for solutions of the nonlinear Schrödinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the O(1/ε²) time scale.

  10. Slab-diffusion approximation from time-constant-like calculations

    International Nuclear Information System (INIS)

    Johnson, R.W.

    1976-12-01

    Two equations were derived which describe the quantity of any fluid diffused from a slab as a function of time. One equation is applicable to the initial stage of the process; the other to the final stage. Accuracy is 0.2 percent at the one point where both approximations apply and where the accuracy of either approximation is poorest. Characterizing other rate processes might be facilitated by the use of the concept of NOLOR (normal of the logarithm of the rate) and its time dependence.
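The report's own equations are not given in the abstract, but the standard plane-sheet analogues (e.g. Crank's short-time and long-time expressions, written here in the dimensionless time τ = Dt/l² with l the half-thickness) show the same two-regime structure, including close agreement near the crossover:

```python
import math

def uptake_early(tau):
    # short-time approximation for the fractional uptake/release M_t/M_inf
    # of a plane sheet: 2*sqrt(tau/pi), with tau = D*t/l^2
    return 2.0 * math.sqrt(tau / math.pi)

def uptake_late(tau):
    # long-time approximation: leading term of the eigenfunction series,
    # 1 - (8/pi^2)*exp(-pi^2*tau/4)
    return 1.0 - (8.0 / math.pi ** 2) * math.exp(-math.pi ** 2 * tau / 4.0)

tau = 0.2  # near the crossover the two forms agree to a few tenths of a percent
early, late = uptake_early(tau), uptake_late(tau)
```

The sub-percent mismatch at the crossover mirrors the 0.2 percent figure quoted for the slab equations in the abstract.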

  11. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    The classical set Bad of 'badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension.

  12. Investigation of the vibration spectrum of SbSI crystals in harmonic and in anharmonic approximations

    International Nuclear Information System (INIS)

    Audzijonis, A.; Zigas, L.; Vinokurova, I.V.; Farberovic, O.V.; Zaltauskas, R.; Cijauskas, E.; Pauliukas, A.; Kvedaravicius, A.

    2006-01-01

    The force constants of the SbSI crystal have been calculated by the pseudo-potential method. The frequencies and normal coordinates of the SbSI vibration modes along the c (z) direction have been determined in the harmonic approximation. The dependence of the potential energy of the SbSI normal modes on the normal coordinates along the c (z) direction, V(z), has been determined in the anharmonic approximation, taking into account the interaction between the phonons. It has been found that in the range of 30-120 cm⁻¹ the vibrational spectrum is determined by a double-well normal-mode potential V(z), whereas in the range of 120-350 cm⁻¹ it is determined by a single-well normal-mode potential V(z).

  13. On Born approximation in black hole scattering

    Science.gov (United States)

    Batic, D.; Kelkar, N. G.; Nowakowski, M.

    2011-12-01

    A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordström and Reissner-Nordström-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.

  14. Normalization and Implementation of Three Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.

    2016-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.

  15. The United States and Iran: Prospects for Normalization

    National Research Council Canada - National Science Library

    Harris, William

    1999-01-01

    .... Even then, normalization will not come quickly or easily. It will require steady, long-term US effort and will be complicated by two decades of hostility and by domestic political dynamics in both countries that hinder rational policy debate.

  16. Mode-field half-widths of Gaussian approximation for the fundamental mode of two kinds of optical waveguides

    International Nuclear Information System (INIS)

    Lian-Huang, Li; Fu-Yuan, Guo

    2009-01-01

    This paper analyzes the characteristics of the matching efficiency between the fundamental mode of two kinds of optical waveguides and its Gaussian approximate field. It then presents a new method in which the mode-field half-width of the Gaussian approximation for the fundamental mode is defined according to the maximal matching efficiency. The relationship between the mode-field half-width of the Gaussian approximate field obtained from the maximal matching efficiency and the normalized frequency is studied; furthermore, two formulas for the mode-field half-width as a function of normalized frequency are proposed.
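The paper's two fitted formulas are not reproduced in the abstract. The best-known analogue, shown here as an illustrative stand-in, is Marcuse's classic fit for the Gaussian spot size that maximizes matching efficiency for the fundamental mode of a step-index fiber:

```python
def marcuse_spot_size(a_um, V):
    """Marcuse's fit for the Gaussian mode-field radius w of the fundamental
    mode of a step-index fiber (maximum-matching-efficiency criterion):
        w/a = 0.65 + 1.619 * V**-1.5 + 2.879 * V**-6
    valid roughly for 0.8 < V < 2.5; a is the core radius."""
    return a_um * (0.65 + 1.619 * V ** -1.5 + 2.879 * V ** -6)

# illustrative single-mode fiber: core radius 4.1 um at V = 2.0
w = marcuse_spot_size(4.1, 2.0)
```

Formulas of this type let one read the optimal Gaussian half-width directly from the normalized frequency V without recomputing the matching-efficiency integral.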

  17. On normal modes of gas sheets and discs

    International Nuclear Information System (INIS)

    Drury, L.O'C.

    1980-01-01

    A method is described for calculating the reflection and transmission coefficients characterizing normal modes of the Goldreich-Lynden-Bell gas sheet. Two families of gas discs without self-gravity for which the normal modes can be found analytically are given and used to illustrate the validity of the sheet approximation. (author)

  18. Approximate Eigensolutions of the Deformed Woods-Saxon Potential via AIM

    International Nuclear Information System (INIS)

    Ikhdair, Sameer M.; Falaye Babatunde, J.; Hamzavi, Majid

    2013-01-01

    Using the Pekeris approximation, the Schrödinger equation is solved for the nuclear deformed Woods-Saxon potential within the framework of the asymptotic iteration method. The energy levels are worked out and the corresponding normalized eigenfunctions are obtained in terms of hypergeometric functions.

  19. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1.

    Science.gov (United States)

    Gibbons, Robert D.; And Others

    The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the n × n correlation matrix of the χ_i and the standardized multivariate…
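For the bivariate case, Sheppard's classical closed form for the positive orthant is available, which makes a convenient check on any numerical scheme for orthant probabilities. A Monte Carlo sketch (sample size and seed are illustrative):

```python
import math
import random

def orthant_prob_exact(rho):
    # Sheppard's closed form for the standard bivariate normal:
    # P(X > 0, Y > 0) = 1/4 + arcsin(rho) / (2*pi)
    return 0.25 + math.asin(rho) / (2.0 * math.pi)

def orthant_prob_mc(rho, n=200_000, seed=1):
    # Monte Carlo estimate via the Cholesky-style construction
    # X = Z1, Y = rho*Z1 + sqrt(1 - rho^2)*Z2 with Z1, Z2 iid N(0, 1)
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        if z1 > 0.0 and rho * z1 + s * z2 > 0.0:
            hits += 1
    return hits / n
```

Beyond two or three dimensions no such closed form exists in general, which is why approximation formulas of the kind studied in the report are needed.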

  20. Elasto-plastic stress/strain at notches, comparison of test and approximative computations

    International Nuclear Information System (INIS)

    Beste, A.; Seeger, T.

    1979-01-01

    The lifetime of cyclically loaded components is decisively determined by the value of the local load at the notch root. The determination of the elasto-plastic notch stress and strain is therefore an important element of recent methods of lifetime determination. These local loads are normally calculated with the help of approximation formulas, yet there are no details about their accuracy. The basic construction of the approximation formulas is presented, along with some particulars. The use of the approximations within the fully plastic range and for material laws which show non-linear stress-strain (σ-ε) behaviour from the beginning is explained. The use of the approximations for cyclic loads is particularly discussed. Finally, the approximations are evaluated in terms of their exactness, and the test results are compared with the results of the approximation calculations. (orig.)
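The specific approximation formulas evaluated in the paper are not reproduced in the abstract. The best-known member of this family is Neuber's rule, sketched here with an illustrative Ramberg-Osgood material (all constants hypothetical, not the paper's test data):

```python
def neuber_local_stress(Kt, S, E, Kp, n_exp):
    """Solve Neuber's rule  sigma * eps = (Kt*S)^2 / E  together with the
    Ramberg-Osgood law  eps = sigma/E + (sigma/Kp)**(1/n_exp)  by bisection.
    Kt: elastic stress concentration factor, S: nominal stress,
    E: Young's modulus, Kp/n_exp: cyclic strength coefficient/exponent."""
    target = (Kt * S) ** 2 / E
    lo, hi = 1e-6, Kt * S  # local stress cannot exceed the elastic estimate
    for _ in range(200):
        sigma = 0.5 * (lo + hi)
        eps = sigma / E + (sigma / Kp) ** (1.0 / n_exp)
        if sigma * eps < target:
            lo = sigma
        else:
            hi = sigma
    return sigma, eps

# illustrative steel-like constants: Kt = 3, S = 200 MPa, E = 206 GPa,
# K' = 1200 MPa, n' = 0.2
sigma, eps = neuber_local_stress(3.0, 200.0, 206000.0, 1200.0, 0.2)
```

The solution lies well below the purely elastic estimate Kt·S, which is exactly the plasticity effect such notch approximation formulas are meant to capture.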

  1. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  3. Examination of muscle composition and motor unit behavior of the first dorsal interosseous of normal and overweight children.

    Science.gov (United States)

    Miller, Jonathan D; Sterczala, Adam J; Trevino, Michael A; Herda, Trent J

    2018-05-01

    We examined differences between normal weight (NW) and overweight (OW) children aged 8-10 yr in strength, muscle composition, and motor unit (MU) behavior of the first dorsal interosseous. Ultrasonography was used to determine muscle cross-sectional area (CSA), subcutaneous fat (sFAT), and echo intensity (EI). MU behavior was assessed during isometric muscle actions at 20% and 50% of maximal voluntary contraction (MVC) by analyzing electromyography amplitude (EMG RMS) and relationships between mean firing rates (MFR), recruitment thresholds (RT), and MU action potential amplitudes (MUAP size) and durations (MUAP time). The OW group had significantly greater EI than the NW group (P = 0.002; NW, 47.99 ± 6.01 AU; OW, 58.90 ± 10.63 AU, where AU is arbitrary units), with no differences between groups for CSA (P = 0.688) or MVC force (P = 0.790). MUAP size was larger for NW than OW in relation to RT (P = 0.002) and for MUs expressing similar MFRs (P = 0.011). There were no significant differences (P = 0.279-0.969) between groups for slopes or y-intercepts from the MFR vs. RT relationships. MUAP time was larger in OW (P = 0.015) and EMG RMS was attenuated in OW compared with NW (P = 0.034); however, there were no significant correlations (P = 0.133-0.164, r = 0.270-0.291) between sFAT and EMG RMS. In a muscle that does not support body mass, the OW children had smaller MUAP size as well as greater EI, although anatomical CSA was similar. This contradicts previous studies examining larger limb muscles. Despite evidence of smaller MUs, the OW children had similar isometric strength compared with NW children. NEW & NOTEWORTHY Ultrasound data and motor unit action potential sizes suggest that overweight children have poorer muscle composition and smaller motor units in the first dorsal interosseous than normal weight children. Evidence is presented that suggests differences in action potential size cannot be explained

  4. Closure Report for Corrective Action Unit 110: Area 3 RWMS U-3ax/bl Disposal Unit, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    J. L. Smith

    2001-08-01

    This Closure Report (CR) has been prepared for the Area 3 Radioactive Waste Management Site (RWMS) U-3ax/bl Disposal Unit Corrective Action Unit (CAU) 110 in accordance with the reissued (November 2000) Resource Conservation and Recovery Act (RCRA) Part B operational permit NEV HW009 (Nevada Division of Environmental Protection [NDEP], 2000) and the Federal Facility Agreement and Consent Order (FFACO) (NDEP et al., 1996). CAU 110 consists of one Corrective Action Site, 03-23-04, described as the U-3ax/bl Subsidence Crater. Certifications of closure are located in Appendix A. The U-3ax/bl is a historic disposal unit within the Area 3 RWMS located on the Nevada Test Site (NTS). The unit, which was formed by excavating the area between two subsidence craters (U-3ax and U-3bl), was operationally closed in 1987. The U-3ax/bl disposal unit was closed under RCRA as a hazardous waste landfill. Existing records indicate that, from July 1968 to December 1987, U-3ax/bl received 2.3 × 10^5 cubic meters (m^3) (8.12 × 10^6 cubic feet [ft^3]) of waste. NTS atmospheric nuclear device testing generated approximately 95% of the total waste volume disposed of in U-3ax/bl; 80% of the total volume was generated from the Waste Consolidation Project. Area 3 is located in Yucca Flat, within the northeast quadrant of the NTS. The Yucca Flat watershed is a structurally closed basin encompassing an area of approximately 780 square kilometers (300 square miles). The structural geomorphology of Yucca Flat is typical of the Basin and Range Physiographic Province. Yucca Flat lies in one of the most arid regions of the country. Water balance calculations for Area 3 indicate that it is normally in a state of moisture deficit.

  5. Analysis of a Dynamic Viscoelastic Contact Problem with Normal Compliance, Normal Damped Response, and Nonmonotone Slip Rate Dependent Friction

    Directory of Open Access Journals (Sweden)

    Mikaël Barboteu

    2016-01-01

    We consider a mathematical model which describes the dynamic evolution of a viscoelastic body in frictional contact with an obstacle. The contact is modelled with a combination of a normal compliance and a normal damped response law associated with a slip rate-dependent version of Coulomb's law of dry friction. We derive a variational formulation, and an existence and uniqueness result for the weak solution of the problem is presented. Next, we introduce a fully discrete approximation of the variational problem based on a finite element method and on an implicit time integration scheme. We study this fully discrete approximation scheme and bound the errors of the approximate solutions. Under regularity assumptions imposed on the exact solution, optimal order error estimates are derived for the fully discrete solution. Finally, after recalling the solution of the frictional contact problem, some numerical simulations are provided in order to illustrate both the behavior of the solution related to the frictional contact conditions and the theoretical error estimate result.

  6. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    Science.gov (United States)

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of proteins' primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins presented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start with finding a point to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only. This is proved, as the core and the folds of the protein will have two identical sides for all short sequences.
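    For context, the quantity such HP-model approximation algorithms maximize is the number of H-H contacts of a lattice fold (the model energy is the negative of this count). A minimal scoring sketch on a 2D square lattice; the function name and the toy fold are illustrative, not taken from the paper:

```python
def hh_contacts(sequence, coords):
    """Count H-H contacts: pairs of H residues that are lattice
    neighbors but not consecutive in the chain."""
    positions = {tuple(c): i for i, c in enumerate(coords)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        # checking only +x and +y neighbors counts each pair once
        for nx, ny in ((x + 1, y), (x, y + 1)):
            j = positions.get((nx, ny))
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts

# A U-shaped fold of HHPPHH: the two H-H ends meet across the bend.
seq = "HHPPHH"
coords = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
print(hh_contacts(seq, coords))  # → 2
```

Here the fold brings residues 0-5 and 1-4 into contact, so the score is 2; a straight-line fold of the same sequence would score 0.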

  7. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    Science.gov (United States)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    Weak atmospheric turbulence in an optical wireless communication (OWC) system is captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed form approximation for the BER of an OWC system under weak turbulence is given. Computation of BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using Gauss-Hermite quadrature approximation and Monte Carlo simulations.
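    The averaging being approximated can be sketched numerically: take a log-normal fading gain h = exp(sigma*X) with X ~ N(0,1), and compute E[Q(snr*h)] both by Gauss-Hermite quadrature and by Monte Carlo. The SNR parameterization and function names below are illustrative assumptions, not the paper's expression:

```python
import math
import numpy as np

def q_func(x):
    """Gaussian Q-function, vectorized over numpy arrays."""
    return 0.5 * np.vectorize(math.erfc)(x / math.sqrt(2.0))

def ber_gauss_hermite(snr, sigma, n_nodes=20):
    """E[Q(snr * h)] with h = exp(sigma * X), X ~ N(0, 1), via
    Gauss-Hermite quadrature (substitution x = sqrt(2) * t)."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    h = np.exp(math.sqrt(2.0) * sigma * t)
    return float(np.sum(w * q_func(snr * h)) / math.sqrt(math.pi))

def ber_monte_carlo(snr, sigma, n=200_000, seed=1):
    """Brute-force check of the same average."""
    rng = np.random.default_rng(seed)
    h = np.exp(sigma * rng.standard_normal(n))
    return float(np.mean(q_func(snr * h)))

gh = ber_gauss_hermite(2.0, 0.25)
mc = ber_monte_carlo(2.0, 0.25)
print(gh, mc)  # the two estimates agree closely
```

With 20 quadrature nodes the Gauss-Hermite value matches the Monte Carlo average to well within the sampling noise, which is the benchmark role it plays in the paper.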

  8. Differing effects of attention in single-units and populations are well predicted by heterogeneous tuning and the normalization model of attention.

    Science.gov (United States)

    Hara, Yuko; Pestilli, Franco; Gardner, Justin L

    2014-01-01

    Single-unit measurements have reported many different effects of attention on contrast-response (e.g., contrast-gain, response-gain, additive-offset dependent on visibility), while functional imaging measurements have more uniformly reported increases in response across all contrasts (additive-offset). The normalization model of attention elegantly predicts the diversity of effects of attention reported in single-units well-tuned to the stimulus, but what predictions does it make for more realistic populations of neurons with heterogeneous tuning? Are predictions in accordance with population-scale measurements? We used functional imaging data from humans to determine a realistic ratio of attention-field to stimulus-drive size (a key parameter for the model) and predicted effects of attention in a population of model neurons with heterogeneous tuning. We found that within the population, neurons well-tuned to the stimulus showed a response-gain effect, while less-well-tuned neurons showed a contrast-gain effect. Averaged across the population, these disparate effects of attention gave rise to additive-offsets in contrast-response, similar to reports in human functional imaging as well as population averages of single-units. Differences in predictions for single-units and populations were observed across a wide range of model parameters (ratios of attention-field to stimulus-drive size and the amount of baseline response modifiable by attention), offering an explanation for disparity in physiological reports. Thus, by accounting for heterogeneity in tuning of realistic neuronal populations, the normalization model of attention can not only predict responses of well-tuned neurons, but also the activity of large populations of neurons. 
More generally, computational models can unify physiological findings across different scales of measurement, and make links to behavior, but only if factors such as heterogeneous tuning within a population are properly accounted for.

  9. Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, it is a critical question to understand the mathematical principle underlying the algorithm. Traditionally, people thought that the normalized min-sum algorithm is a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understand the normalized min-sum algorithm. The algorithm is derive...

  10. Symmetries of nth-Order Approximate Stochastic Ordinary Differential Equations

    OpenAIRE

    Fredericks, E.; Mahomed, F. M.

    2012-01-01

    Symmetries of nth-order approximate stochastic ordinary differential equations (SODEs) are studied. The determining equations of these SODEs are derived in an Itô calculus context. These determining equations are not stochastic in nature. SODEs are normally used to model nature (e.g., earthquakes) or for testing the safety and reliability of models in construction engineering when looking at the impact of random perturbations.

  11. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    Science.gov (United States)

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  12. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  13. Uniform approximation is more appropriate for Wilcoxon Rank-Sum Test in gene set analysis.

    Directory of Open Access Journals (Sweden)

    Zhide Fang

    Gene set analysis is widely used to facilitate biological interpretation in analyses of differential expression from high-throughput profiling data. The Wilcoxon Rank-Sum (WRS) test is one of the commonly used methods in gene set enrichment analysis. It compares the ranks of genes in a gene set against those of genes outside the gene set. This method is easy to implement and it eliminates the dichotomization of genes into significant and non-significant in a competitive hypothesis test. Due to the large number of genes being examined, it is impractical to calculate the exact null distribution for the WRS test. Therefore, the normal distribution is commonly used as an approximation. However, as we demonstrate in this paper, the normal approximation is problematic when a gene set with a relatively small number of genes is tested against the large number of genes in the complementary set. In this situation, a uniform approximation is substantially more powerful, more accurate, and less computationally intensive. We demonstrate the advantage of the uniform approximation in Gene Ontology (GO) term analysis using simulations and real data sets.
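    The small-set failure mode can be sketched numerically: compare the normal-approximation tail probability of a gene set's rank sum against a permutation estimate. The set size, gene count, and function names below are illustrative assumptions, not the paper's data or its uniform approximation:

```python
import math
import numpy as np

def wrs_normal_pvalue(set_ranks, n_total):
    """Upper-tail p-value for a gene set's rank sum via the usual
    normal approximation (no ties, no continuity correction)."""
    m = len(set_ranks)
    w = float(np.sum(set_ranks))
    mean = m * (n_total + 1) / 2.0
    sd = math.sqrt(m * (n_total - m) * (n_total + 1) / 12.0)
    z = (w - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def wrs_permutation_pvalue(set_ranks, n_total, n_perm=50_000, seed=0):
    """Monte Carlo permutation estimate of the same tail probability."""
    rng = np.random.default_rng(seed)
    m, w = len(set_ranks), float(np.sum(set_ranks))
    hits = 0
    for _ in range(n_perm):
        # draw m ranks without replacement from 1..n_total
        if rng.choice(n_total, size=m, replace=False).sum() + m >= w:
            hits += 1
    return hits / n_perm

# A 5-gene set holding the top 5 ranks out of 1000 genes
top5 = [996, 997, 998, 999, 1000]
p_norm = wrs_normal_pvalue(top5, 1000)
p_perm = wrs_permutation_pvalue(top5, 1000)
print(p_norm, p_perm)
```

For this extreme set the true tail probability is 1/C(1000, 5), around 1.2e-13, while the normal approximation reports roughly 6e-5: the approximated tail is orders of magnitude too heavy, which is the regime where the paper's uniform approximation is argued to be preferable.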

  14. Approximate dynamic programming solving the curses of dimensionality

    CERN Document Server

    Powell, Warren B

    2007-01-01

    Warren B. Powell, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored over 100 refereed publications on stochastic optimization, approximate dynamic programming, and dynamic resource management.

  15. Borders and border representations: Comparative approximations among the United States and Latin America

    Directory of Open Access Journals (Sweden)

    Marcos Cueva Perus

    2005-01-01

    This article uses a comparative approach regarding frontier symbols and myths among the United States, Latin America and the Caribbean. Although wars fought over frontiers have greatly diminished throughout the world, the conception of the frontier still held by the United States is that of a nationalist myth which embodies a semi-religious faith in the free market and democracy. On the other hand, Latin American and Caribbean countries, whose frontiers are far more complex, have shown extraordinary stability for several decades. This paper points out the risks involved in the spread of the United States' notions of the frontier which, in addition, go hand-in-hand with the problem of multicultural segmentation. Although Latin American and Caribbean frontiers may be stable, they are vulnerable to the infiltration of foreign frontier representations.

  16. Group C∗-algebras without the completely bounded approximation property

    DEFF Research Database (Denmark)

    Haagerup, U.

    2016-01-01

    It is proved that: (1) The Fourier algebra A(G) of a simple Lie group G of real rank at least 2 with finite center does not have a multiplier bounded approximate unit. (2) The reduced C∗-algebra C∗r of any lattice in a non-compact simple Lie group of real rank at least 2 with finite center does not have the completely bounded approximation property. Hence, the results obtained by de Cannière and the author for SOe(n, 1), n ≥ 2, and by Cowling for SU(n, 1) do not generalize to simple Lie groups of real rank at least 2. © 2016 Heldermann Verlag.

  17. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    Science.gov (United States)

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…

  18. Assessment of four shadow band correction models using beam normal irradiance data from the United Kingdom and Israel

    International Nuclear Information System (INIS)

    Lopez, G.; Muneer, T.; Claywell, R.

    2004-01-01

    Diffuse irradiance is a fundamental factor for all solar resource considerations. Diffuse irradiance is accurately determined by calculation from global and beam normal (direct) measurements. However, beam solar measurements and related support can be very expensive, and therefore, shadow bands are often used, along with pyranometers, to block the solar disk. The errors that result from the use of shadow bands are well known and have been studied by many authors. The thrust of this article is to examine four recognized techniques for correcting shadow band based, diffuse irradiance and statistically evaluate their individual performance using data culled from two contrasting sites within the United Kingdom and Israel

  19. Normal modes of weak colloidal gels

    Science.gov (United States)

    Varga, Zsigmond; Swan, James W.

    2018-01-01

    The normal modes and relaxation rates of weak colloidal gels are investigated in calculations using different models of the hydrodynamic interactions between suspended particles. The relaxation spectrum is computed for freely draining, Rotne-Prager-Yamakawa, and accelerated Stokesian dynamics approximations of the hydrodynamic mobility in a normal mode analysis of a harmonic network representing several colloidal gels. We find that the density of states and spatial structure of the normal modes are fundamentally altered by long-ranged hydrodynamic coupling among the particles. Short-ranged coupling due to hydrodynamic lubrication affects only the relaxation rates of short-wavelength modes. Hydrodynamic models accounting for long-ranged coupling exhibit a microscopic relaxation rate for each normal mode, λ, that scales as l^(-2), where l is the spatial correlation length of the normal mode. For the freely draining approximation, which neglects long-ranged coupling, the microscopic relaxation rate scales as l^(-γ), where γ varies between three and two with increasing particle volume fraction. A simple phenomenological model of the internal elastic response to normal mode fluctuations is developed, which shows that long-ranged hydrodynamic interactions play a central role in the viscoelasticity of the gel network. Dynamic simulations of hard spheres that gel in response to short-ranged depletion attractions are used to test the applicability of the density of states predictions. For particle concentrations up to 30% by volume, the power law decay of the relaxation modulus in simulations accounting for long-ranged hydrodynamic interactions agrees with predictions generated by the density of states of the corresponding harmonic networks as well as experimental measurements. For higher volume fractions, excluded volume interactions dominate the stress response, and the prediction from the harmonic network density of states fails. Analogous to the Zimm model in polymer
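    In the freely draining case, normal-mode analysis of a harmonic network reduces to diagonalizing the network's connectivity (graph Laplacian) matrix scaled by k/ζ. A minimal sketch on a linear bead-spring chain rather than a gel network (so only illustrative), checked against the known Rouse spectrum λ_p = 4(k/ζ)sin²(pπ/2N):

```python
import numpy as np

def rouse_relaxation_rates(n_beads, k_spring=1.0, zeta=1.0):
    """Freely draining normal-mode analysis of a linear bead-spring
    chain: relaxation rates are eigenvalues of (k/zeta) * L, where L
    is the graph Laplacian of the chain's connectivity."""
    L = np.zeros((n_beads, n_beads))
    for i in range(n_beads - 1):          # one spring per bond
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    rates = np.linalg.eigvalsh(L) * k_spring / zeta
    return np.sort(rates)

n = 50
rates = rouse_relaxation_rates(n)
# Analytic Rouse spectrum for a free chain of n beads
p = np.arange(n)
analytic = 4.0 * np.sin(p * np.pi / (2 * n)) ** 2
print(np.allclose(rates, np.sort(analytic)))  # → True
```

The zero eigenvalue is the rigid translation mode; for a gel, the same diagonalization would be applied to the (much larger and disordered) network Laplacian, or to the hydrodynamically coupled mobility-stiffness product.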

  20. The normal holonomy group

    International Nuclear Information System (INIS)

    Olmos, C.

    1990-05-01

    The restricted holonomy group of a Riemannian manifold is a compact Lie group and its representation on the tangent space is a product of irreducible representations and a trivial one. Each one of the non-trivial factors is either an orthogonal representation of a connected compact Lie group which acts transitively on the unit sphere or it is the isotropy representation of a single Riemannian symmetric space of rank ≥ 2. We prove that, all these properties are also true for the representation on the normal space of the restricted normal holonomy group of any submanifold of a space of constant curvature. 4 refs

  1. Normal vibrations in gallium arsenide

    International Nuclear Information System (INIS)

    Dolling, G.; Waugh, J.L.T.

    1964-01-01

    The triple axis crystal spectrometer at Chalk River has been used to observe coherent slow neutron scattering from a single crystal of pure gallium arsenide at 296 °K. The frequencies of normal modes of vibration propagating in the [ζ00], [ζζζ], and [0ζζ] crystal directions have been determined with a precision of between 1 and 2·5 per cent. A limited number of normal modes have also been studied at 95 and 184 °K. Considerable difficulty was experienced in obtaining well resolved neutron peaks corresponding to the two non-degenerate optic modes for very small wave-vector, particularly at 296 °K. However, from a comparison of results obtained under various experimental conditions at several different points in reciprocal space, frequencies (units 10^12 c/s) for these modes (at 296 °K) have been assigned: T 8·02 ± 0·08 and L 8·55 ± 0·2. Other specific normal modes, with their measured frequencies, are (a) (1, 0, 0): TO 7·56 ± 0·08, TA 2·36 ± 0·015, LO 7·22 ± 0·15, LA 6·80 ± 0·06; (b) (0·5, 0·5, 0·5): TO 7·84 ± 0·12, TA 1·86 ± 0·02, LO 7·15 ± 0·07, LA 6·26 ± 0·10; (c) (0, 0·65, 0·65): optic 8·08 ± 0·13, 7·54 ± 0·12 and 6·57 ± 0·11, acoustic 5·58 ± 0·08, 3·42 ± 0·06 and 2·36 ± 0·04. These results are generally slightly lower than the corresponding frequencies for germanium. An analysis in terms of various modifications of the dipole approximation model has been carried out. A feature of this analysis is that the charge on the gallium atom appears to be very small, about +0·04 e. The frequency distribution function has been derived from one of the force models. (author)
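    The split between acoustic and optic branches measured here can be illustrated with the textbook one-dimensional diatomic chain. The masses are roughly those of Ga and As in amu and the spring constant is arbitrary, so this is a toy model of how the branches arise, not a fit to the data above:

```python
import numpy as np

def diatomic_chain_branches(q, K=1.0, m1=69.7, m2=74.9):
    """Acoustic and optic dispersion branches of a 1D diatomic chain
    (lattice spacing a = 1, wavevector q in [0, pi])."""
    s = 1.0 / m1 + 1.0 / m2
    root = np.sqrt(s**2 - 4.0 * np.sin(q / 2.0) ** 2 / (m1 * m2))
    # clip to avoid tiny negative arguments from round-off at q = 0
    acoustic = np.sqrt(np.maximum(K * (s - root), 0.0))
    optic = np.sqrt(K * (s + root))
    return acoustic, optic

q = np.linspace(0.0, np.pi, 200)
ac, op = diatomic_chain_branches(q)
print(ac[0], op[0])  # acoustic branch vanishes at q = 0; optic stays finite
```

Because the Ga and As masses are so similar, the gap between the branches at the zone boundary is small, qualitatively consistent with the near-degenerate LO/LA frequencies reported above.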

  2. Non-linear adjustment to purchasing power parity: an analysis using Fourier approximations

    OpenAIRE

    Juan-Ángel Jiménez-Martín; M. Dolores Robles Fernández

    2005-01-01

    This paper estimates the dynamics of adjustment to long run purchasing power parity (PPP) using data for 18 major bilateral US dollar exchange rates, over the post-Bretton Woods period, in a non-linear framework. We use new unit root and cointegration tests that do not assume a specific non-linear adjustment process. Using a first-order Fourier approximation, we find evidence of non-linear mean reversion in deviations from both absolute and relative PPP. This first-order Fourier approximation...

  3. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the obtained earlier self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Pade approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Pade approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties
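    Since the factor approximants contain Padé approximants as a particular case, a quick Padé baseline is easy to sketch with SciPy's `pade` helper; the choice of exp(x) and of the [3/2] orders is illustrative, not from the paper:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to x^5
an = [1.0 / math.factorial(k) for k in range(6)]

# [3/2] Pade approximant: numerator degree 3, denominator degree 2
p, q = pade(an, 2)

# Compare reconstruction errors at x = 1 against the truncated series
x = 1.0
taylor = sum(c * x**k for k, c in enumerate(an))
print(abs(p(x) / q(x) - math.e), abs(taylor - math.e))
```

Built from the same six coefficients, the [3/2] Padé approximant reproduces e noticeably better than the fifth-order Taylor polynomial, which is the kind of resummation gain the factor approximants are claimed to improve upon further.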

  4. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour of f(x) approximately equal to g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)

  5. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  6. Rate-distortion functions of non-stationary Markoff chains and their block-independent approximations

    OpenAIRE

    Agarwal, Mukul

    2018-01-01

    It is proved that the limit of the normalized rate-distortion functions of block independent approximations of an irreducible, aperiodic Markoff chain is independent of the initial distribution of the Markoff chain and thus, is also equal to the rate-distortion function of the Markoff chain.

  7. Fructose intake at current levels in the United States may cause gastrointestinal distress in normal adults.

    Science.gov (United States)

    Beyer, Peter L; Caviar, Elena M; McCallum, Richard W

    2005-10-01

    Fructose intake has increased considerably in the United States, primarily as a result of increased consumption of high-fructose corn syrup, fruits and juices, and crystalline fructose. The purpose was to determine how often fructose, in amounts commonly consumed, would result in malabsorption and/or symptoms in healthy persons. Fructose absorption was measured using 3-hour breath hydrogen tests and symptom scores were used to rate subjective responses for gas, borborygmus, abdominal pain, and loose stools. The study included 15 normal, free-living volunteers from a medical center community and was performed in a gastrointestinal specialty clinic. Subjects consumed 25- and 50-g doses of crystalline fructose with water after an overnight fast on separate test days. Mean peak breath hydrogen, time of peak, area under the curve (AUC) for breath hydrogen and gastrointestinal symptoms were measured during a 3-hour period after subjects consumed both 25- and 50-g doses of fructose. Differences in mean breath hydrogen, AUC, and symptom scores between doses were analyzed using paired t tests. Correlations among peak breath hydrogen, AUC, and symptoms were also evaluated. More than half of the 15 adults tested showed evidence of fructose malabsorption after 25 g fructose and greater than two thirds showed malabsorption after 50 g fructose. AUC, representing overall breath hydrogen response, was significantly greater after the 50-g dose. Overall symptom scores were significantly greater than baseline after each dose, but scores were only marginally greater after 50 g than 25 g. Peak hydrogen levels and AUC were highly correlated, but neither was significantly related to symptoms. Fructose, in amounts commonly consumed, may result in mild gastrointestinal distress in normal people. Additional study is warranted to evaluate the response to fructose-glucose mixtures (as in high-fructose corn syrup) and fructose taken with food in both normal people and those with

  8. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained by the backward transformation of the standard normal ...
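    The forward direction of the Johnson S_U translation, z = a + b*asinh((x - loc)/scale), can be sketched with SciPy; the parameter values are illustrative, and the article's percentile-based parameter estimation step is not reproduced here:

```python
import numpy as np
from scipy import stats

# Draw a skewed sample from a known Johnson S_U distribution
a, b, loc, scale = -1.2, 1.5, 0.0, 1.0   # illustrative parameters
rng = np.random.default_rng(42)
x = stats.johnsonsu.rvs(a, b, loc=loc, scale=scale,
                        size=20_000, random_state=rng)

# The S_U translation maps the sample to (exactly) standard normal
z = a + b * np.arcsinh((x - loc) / scale)
print(z.mean(), z.std(), stats.skew(z))  # near 0, 1, 0
```

With the true parameters the transformed sample is standard normal by construction; in practice the parameters would first be estimated from sample percentiles, as the article describes.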

  9. The self-normalized Donsker theorem revisited

    OpenAIRE

    Parczewski, Peter

    2016-01-01

    We extend the Poincar\\'{e}--Borel lemma to a weak approximation of a Brownian motion via simple functionals of uniform distributions on n-spheres in the Skorokhod space $D([0,1])$. This approach is used to simplify the proof of the self-normalized Donsker theorem in Cs\\"{o}rg\\H{o} et al. (2003). Some notes on spheres with respect to $\\ell_p$-norms are given.

  10. Crate counter for normal operating loss

    International Nuclear Information System (INIS)

    Harlan, R.A.

    A lithium-loaded zinc sulfide scintillation counter was built to closely assay plutonium in waste packaged in 1.3 by 1.3 by 2.13 m crates. In addition to assays for normal operating loss accounting, the counter will allow safeguards verification immediately before the crates are shipped for burial. The counter should detect approximately 10 g of plutonium in 1000 kg of waste

  11. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, whereas SAMCMC can; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
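The full varying-truncation SAMCMC algorithm is beyond a short snippet, but the Robbins-Monro stochastic approximation update at its core can be sketched on a toy root-finding problem (all values hypothetical): iterate theta with a decreasing gain sequence against noisy observations of the target function.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_h(theta):
    """Noisy observation of h(theta) = theta - 2; the root is theta* = 2."""
    return (theta - 2.0) + rng.normal(scale=0.5)

theta = 10.0
for n in range(1, 5001):
    gain = 1.0 / n  # gain sequence a_n with sum a_n = inf, sum a_n**2 < inf
    theta -= gain * noisy_h(theta)

print(f"theta after 5000 steps: {theta:.3f}")
```

SAMCMC additionally replaces the exact noisy observation with an MCMC draw from the current model and restarts with a fresh gain sequence whenever the iterate leaves a truncation region.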

  12. Polarized constituent quarks in NLO approximation

    International Nuclear Information System (INIS)

    Khorramian, Ali N.; Tehrani, S. Atashbar; Mirjalili, A.

    2006-01-01

    The valon representation provides a bridge between hadrons and quarks, in terms of which the bound-state and scattering properties of hadrons can be unified and described. We studied polarized valon distributions, which play an important role in describing the spin dependence of parton distributions in the leading and next-to-leading order approximations. The convolution integral in the framework of the valon model was used as a useful tool in the polarized case. To obtain the polarized parton distributions in a proton, we need the polarized valon distributions in the proton and the polarized parton distributions inside the valon. We employed Bernstein polynomial averages to determine the unknown parameters of the polarized valon distributions by fitting to the available experimental data

  13. Errors due to the cylindrical cell approximation in lattice calculations

    Energy Technology Data Exchange (ETDEWEB)

    Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1960-06-15

    It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)

  14. Local facet approximation for image stitching

    Science.gov (United States)

    Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun

    2018-01-01

    Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.

  15. Improved Genetic Algorithm-Based Unit Commitment Considering Uncertainty Integration Method

    Directory of Open Access Journals (Sweden)

    Kyu-Hyung Jo

    2018-05-01

    Full Text Available In light of the dissemination of renewable energy connected to the power grid, it has become necessary to consider the uncertainty in the generation of renewable energy as a unit commitment (UC) problem. A methodology for solving the UC problem is presented by considering various uncertainties, which are assumed to have a normal distribution, by using a Monte Carlo simulation. Based on the constructed scenarios for load, wind, solar, and generator outages, a combination of scenarios is found that meets the reserve requirement to secure the power balance of the power grid. In those scenarios, the uncertainty integration method (UIM) identifies the best combination by minimizing the additional reserve requirements caused by the uncertainty of power sources. An integration process for uncertainties is formulated for stochastic unit commitment (SUC) problems and optimized by the improved genetic algorithm (IGA). The IGA is composed of five procedures and finds the optimal combination of unit status at the scheduled time, based on the determined source data. According to the number of unit systems, the IGA demonstrates better performance than the other optimization methods by applying reserve repairing and an approximation process. To validate the proposed method, various UC strategies are tested with a modified 24-h UC test system and compared.
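The scenario-construction step can be sketched as follows: normally distributed forecast errors for load, wind, and solar are sampled, and a reserve requirement is sized so that committed capacity covers most scenarios. All numbers are hypothetical, and the 99% coverage target stands in for the paper's reserve-requirement criterion.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 10_000

# Hypothetical hourly forecasts (MW) with normally distributed errors.
load  = rng.normal(loc=1000.0, scale=30.0, size=n_scenarios)
wind  = rng.normal(loc=150.0,  scale=40.0, size=n_scenarios).clip(min=0.0)
solar = rng.normal(loc=80.0,   scale=20.0, size=n_scenarios).clip(min=0.0)

net_load = load - wind - solar  # demand the thermal units must cover

# Reserve sized so that committed capacity covers 99% of scenarios.
expected = net_load.mean()
reserve = np.percentile(net_load, 99) - expected
print(f"expected net load = {expected:.0f} MW, reserve = {reserve:.0f} MW")
```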

  16. Public exposure from environmental release of radioactive material under normal operation of unit-1 Bushehr nuclear power plant

    International Nuclear Information System (INIS)

    Sohrabi, M.; Parsouzi, Z.; Amrollahi, R.; Khamooshy, C.; Ghasemi, M.

    2013-01-01

    Highlights: ► The unit-1 Bushehr nuclear power plant is a VVER type reactor with 1000 MWe power. ► Doses of public critical groups living around the plant were assessed under normal reactor operation conditions. ► The PC-CREAM 98 computer code developed by the HPA was applied to assess the public doses. ► Doses are comparable with those in the FSAR, in the ER and doses monitored. ► The doses assessed are lower than the dose constraint of 0.1 mSv/y associated with the plant. - Abstract: The Unit-1 Bushehr Nuclear Power Plant (BNPP-1), constructed at the Hallileh site near Bushehr located at the coast of the Persian Gulf, Iran, is a VVER type reactor with 1000 MWe power. According to standard practices, under normal operation conditions of the plant, radiological assessment of atmospheric and aquatic releases to the environment and assessment of public exposures are considered essential. This study was conducted to assess the individual and collective doses of the critical groups of the population who receive the highest dose from radioactive discharges into the environment (atmosphere and aquatic) under normal operation conditions. To assess the doses, the PC-CREAM 98 computer code developed by the Radiation Protection Division of the Health Protection Agency (HPA; formerly called NRPB) was applied. It uses a standard Gaussian plume dispersion model and comprises a suite of models and data for estimation of the radiological impact assessments of routine and continuous discharges from an NPP. The input data include a stack height of 100 m, the annual release of radionuclides in gaseous effluents from the stack and in liquid effluents from the heat removal system, meteorological data from the Bushehr local meteorological station, and data for agricultural products. To assess doses from marine discharges, consumption of sea fish, crustacean and mollusca were considered. According to calculation by the PC-CREAM 98 computer code, the highest individual

  17. Principles of applying Poisson units in radiology

    International Nuclear Information System (INIS)

    Benyumovich, M.S.

    2000-01-01

    The probability that radioactive particles hit particular spatial patterns (e.g. cells in the squares of a counting chamber net) or time intervals (e.g. radioactive particles hitting a given area per unit time) follows the Poisson distribution. The mean is the only parameter on which this distribution depends. A metrological basis for counting cells and radioactive particles is the property of the Poisson distribution that the standard deviation equals the square root of the mean (property 1). The application of Poisson units to counting blood formed elements and cultured cells was proposed by us (Russian Federation Patent No. 2126230). Poisson units relate to the means for which property 1 holds. In the case of cell counting, the square of these units is equal to 1/10 of the counting chamber net where the cells are counted. Thus one finds the mean from the single-cell count rate divided by 10. Finding the Poisson units when counting radioactive particles requires determining a number of particles sufficient to make equality 1 valid. To this end, one should subdivide the time interval used in counting the single-particle count rate into different numbers of equal portions (count numbers), and then pick out the count number ensuring that equality 1 is satisfied. Such a portion is taken as a Poisson unit in the radioactive particle count. If the flux of particles is controllable, one should set a count rate sufficient to make equality 1 valid. Operations with means obtained with the use of Poisson units are performed on the basis of the approximation of the Poisson distribution by a normal one. (author)
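Property 1 (standard deviation equals the square root of the mean) is easy to verify numerically for simulated Poisson counts; the rate value below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(lam=9.0, size=100_000)  # simulated particle counts

mean = counts.mean()
std = counts.std()
# For a Poisson distribution, std should be close to sqrt(mean) = 3 here.
print(f"mean = {mean:.3f}, std = {std:.3f}, sqrt(mean) = {np.sqrt(mean):.3f}")
```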

  18. The Wallner Normal Fault: A new major tectonic structure within the Austroalpine Units south of the Tauern Window (Kreuzeck, Eastern Alps, Austria)

    Science.gov (United States)

    Griesmeier, Gerit E. U.; Schuster, Ralf; Grasemann, Bernhard

    2017-04-01

    The polymetamorphic Austroalpine Units of the Eastern Alps were derived from the northern Adriatic continental margin and have been significantly reworked during the Eoalpine intracontinental subduction. Several major basement/cover nappe systems, which experienced a markedly different tectono-metamorphic history, characterize the complex internal structure of the Austroalpine Units. This work describes a new major tectonic structure in the Kreuzeck Mountains, south of the famous Tauern Window - the Wallner Normal Fault. It separates the so called Koralpe-Wölz Nappe System in the footwall from the Drauzug-Gurktal Nappe System in the hanging wall. The Koralpe-Wölz Nappe System below the Wallner Normal Fault is dominated by monotonous paragneisses and minor mica schists, which are locally garnet bearing. Subordinated amphibolite bodies can be observed. The schistosity is homogeneously dipping steeply to the S and the partly mylonitic stretching lineation is typically moderately dipping to the ESE. The Alpine metamorphic peak reached eclogite facies further in the north and amphibolite facies in the study area. The metamorphic peak occurred in the Late Cretaceous followed by rapid cooling. The Drauzug-Gurktal Nappe System above the Wallner Normal Fault consists of various subunits. (i) Paragneisses and micaschists subunit (Gaugen Complex) with numerous quartz mobilisates are locally intercalated with amphibolites. Several millimeter large garnets together with staurolite and kyanite have been identified in thin sections. Even though the main striking direction is E-W, polyphase refolding resulted in strong local variations of the orientation of the main foliation. (ii) Garnet micaschists subunit (Strieden Complex) with garnets up to 15 mm are intercalated with up to tens of meters thick amphibolites. The lithologies are intensely folded with folding axes dipping moderately to the SSW and axial planes dipping steeply to the NW. (iii) A phyllites-marble subunit

  19. Comparative Study of Various Normal Mode Analysis Techniques Based on Partial Hessians

    OpenAIRE

    GHYSELS, AN; VAN SPEYBROECK, VERONIQUE; PAUWELS, EWALD; CATAK, SARON; BROOKS, BERNARD R.; VAN NECK, DIMITRI; WAROQUIER, MICHEL

    2010-01-01

    Standard normal mode analysis becomes problematic for complex molecular systems, as a result of both the high computational cost and the excessive amount of information when the full Hessian matrix is used. Several partial Hessian methods have been proposed in the literature, yielding approximate normal modes. These methods aim at reducing the computational load and/or calculating only the relevant normal modes of interest in a specific application. Each method has its own (dis)advantages and...

  20. The thoracic paraspinal shadow: normal appearances.

    Science.gov (United States)

    Lien, H H; Kolbenstvedt, A

    1982-01-01

    The widths of the right and left thoracic paraspinal shadows were measured at all levels in 200 presumably normal individuals. The paraspinal shadow could be identified in nearly all cases on the left side and in approximately one third on the right. The range of variation was greater on the left side than on the right. The left paraspinal shadow was wider at the upper levels and in individuals above 40 years of age.

  1. The consequences of non-normality

    International Nuclear Information System (INIS)

    Hip, I.; Lippert, Th.; Neff, H.; Schilling, K.; Schroers, W.

    2002-01-01

    The non-normality of Wilson-type lattice Dirac operators has important consequences - the application of the usual concepts from textbook (hermitian) quantum mechanics should be reconsidered. This includes an appropriate definition of observables and the refinement of computational tools. We show that the truncated singular value expansion is the optimal approximation to the inverse operator D⁻¹, and we prove that, due to the γ5-hermiticity, it is equivalent to γ5 times the truncated eigenmode expansion of the hermitian Wilson-Dirac operator

  2. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  3. Normal vibrations in gallium arsenide

    Energy Technology Data Exchange (ETDEWEB)

    Dolling, G; Waugh, J L T

    1964-07-01

    The triple axis crystal spectrometer at Chalk River has been used to observe coherent slow neutron scattering from a single crystal of pure gallium arsenide at 296°K. The frequencies of normal modes of vibration propagating in the [ζ00], [ζζζ], and [0ζζ] crystal directions have been determined with a precision of between 1 and 2.5 per cent. A limited number of normal modes have also been studied at 95 and 184°K. Considerable difficulty was experienced in obtaining well resolved neutron peaks corresponding to the two non-degenerate optic modes for very small wave-vector, particularly at 296°K. However, from a comparison of results obtained under various experimental conditions at several different points in reciprocal space, frequencies (units 10¹² c/s) for these modes (at 296°K) have been assigned: T 8.02±0.08 and L 8.55±0.2. Other specific normal modes, with their measured frequencies, are (a) (1,0,0): TO 7.56±0.08, TA 2.36±0.015, LO 7.22±0.15, LA 6.80±0.06; (b) (0.5, 0.5, 0.5): TO 7.84±0.12, TA 1.86±0.02, LO 7.15±0.07, LA 6.26±0.10; (c) (0, 0.65, 0.65): optic 8.08±0.13, 7.54±0.12 and 6.57±0.11, acoustic 5.58±0.08, 3.42±0.06 and 2.36±0.04. These results are generally slightly lower than the corresponding frequencies for germanium. An analysis in terms of various modifications of the dipole approximation model has been carried out. A feature of this analysis is that the charge on the gallium atom appears to be very small, about +0.04 e.

  4. SPOKEN CUZCO QUECHUA, UNITS 1-6.

    Science.gov (United States)

    SOLA, DONALD F.; AND OTHERS

    THE MATERIALS IN THIS VOLUME COMPRISE SIX UNITS WHICH PRESENT BASIC ASPECTS OF CUZCO QUECHUA PHONOLOGY, MORPHOLOGY, AND SYNTAX FOR THE BEGINNING STUDENT. THE SIX UNITS ARE DESIGNED FOR APPROXIMATELY 120 HOURS OF SUPERVISED CLASS WORK WITH OUTSIDE PREPARATION EXPECTED OF THE STUDENT. EACH UNIT CONSISTS OF A DIALOGUE TO BE MEMORIZED, A DIALOGUE…

  5. Reflectance spectrometry of normal and bruised human skins: experiments and modeling

    International Nuclear Information System (INIS)

    Kim, Oleg; Alber, Mark; McMurdy, John; Lines, Collin; Crawford, Gregory; Duffy, Susan

    2012-01-01

    A stochastic photon transport model in multilayer skin tissue combined with reflectance spectroscopy measurements is used to study normal and bruised skins. The model is shown to provide a very good approximation to both normal and bruised real skin tissues by comparing experimental and simulated reflectance spectra. The sensitivity analysis of the skin reflectance spectrum to variations of skin layer thicknesses, blood oxygenation parameter and concentrations of main chromophores is performed to optimize model parameters. The reflectance spectrum of a developed bruise in a healthy adult is simulated, and the concentrations of bilirubin, blood volume fraction and blood oxygenation parameter are determined for different times as the bruise progresses. It is shown that bilirubin and blood volume fraction reach their peak values at 80 and 55 h after contusion, respectively, and the oxygenation parameter is lower than its normal value during 80 h after contusion occurred. The obtained time correlations of chromophore concentrations in developing contusions are shown to be consistent with previous studies. The developed model uses a detailed seven-layer skin approximation for contusion and allows one to obtain more biologically relevant results than those obtained with previous models using one- to three-layer skin approximations. A combination of modeling with spectroscopy measurements provides a new tool for detailed biomedical studies of human skin tissue and for age determination of contusions. (paper)

  6. Postprocedural pain in shoulder arthrography: differences between using preservative-free normal saline and normal saline with benzyl alcohol as an intraarticular contrast diluent.

    Science.gov (United States)

    Storey, Troy F; Gilbride, George; Clifford, Kelly

    2014-11-01

    The purpose of this study was to prospectively evaluate the effect of benzyl alcohol, a common preservative in normal saline, on postprocedural pain after intraarticular injection for direct shoulder MR arthrography. From April 2011 through January 2013, 138 patients underwent direct shoulder MR arthrography. Using the Wong-Baker Faces Pain Scale, patients were asked to report their shoulder pain level immediately before and immediately after the procedure and then were contacted by telephone 6, 24, and 48 hours after the procedure. Fourteen patients did not receive the prescribed amount of contrast agent for diagnostic reasons or did not complete follow-up. Sixty-two patients received an intraarticular solution including preservative-free normal saline (control group) and 62 patients received an intraarticular solution including normal saline with 0.9% benzyl alcohol as a contrast diluent (test group). Patients were randomized as to which intraarticular diluent they received. Fluoroscopic and MR images were reviewed for extracapsular contrast agent administration or extravasation, full-thickness rotator cuff tears, and adhesive capsulitis. The effect of preservative versus control on pain level was estimated with multiple regression, which included time after procedure as the covariate and accounted for repeated measures over patients. Pain scale scores were significantly (p = 0.0382) higher (0.79 units; 95% CI, 0.034-1.154) with benzyl alcohol preservative compared with control (saline). In both study arms, the pain scale scores decreased slightly after the procedure, increased by roughly 1 unit over baseline for the test group and 0.3 unit over baseline for the control group by 6 hours after the procedure, were 0.50 unit over baseline for the test group and 0.12 unit over baseline for the control group at 24 hours, then fell to be slightly greater than baseline at 48 hours with benzyl alcohol and slightly less than baseline without benzyl alcohol. 
These trends

  7. Visual attention and flexible normalization pools

    Science.gov (United States)

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
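The computation described here, divisive normalization with an attentional gain in the spirit of the Reynolds and Heeger (2009) model, can be illustrated with a toy numerical sketch. The function, parameter values, and gains below are hypothetical, chosen only to show the qualitative behavior.

```python
import numpy as np

def response(drive, attn_gain, pool_drives, sigma=1.0):
    """Divisive normalization: attended drive divided by the pooled drive."""
    excitatory = attn_gain * drive
    pool = np.sum(attn_gain * np.asarray(pool_drives))
    return excitatory / (sigma + pool)

center, surround = 10.0, 10.0

# Surround inside the normalization pool (center and surround deemed dependent):
r_pooled = response(center, attn_gain=1.0, pool_drives=[center, surround])
# Surround outside the pool (deemed statistically independent):
r_unpooled = response(center, attn_gain=1.0, pool_drives=[center])
# Attention accentuates the activations at the attended location:
r_attended = response(center, attn_gain=2.0, pool_drives=[center])

print(r_pooled, r_unpooled, r_attended)
```

Including the surround in the pool suppresses the center response, and attention raises it, matching the qualitative pattern the model is meant to capture.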

  8. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  9. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Normalized Atmospheric Deposition for 2002, Ammonium (NH4)

    Science.gov (United States)

    Wieczorek, Michael; LaMotte, Andrew E.

    2010-01-01

    This data set represents the average normalized atmospheric (wet) deposition, in kilograms, of Ammonium (NH4) for the year 2002 compiled for every catchment of NHDPlus for the conterminous United States. Estimates of NH4 deposition are based on National Atmospheric Deposition Program (NADP) measurements (B. Larsen, U.S. Geological Survey, written commun., 2007). De-trending methods applied to the year 2002 are described in Alexander and others, 2001. NADP site selection met the following criteria: stations must have records from 1995 to 2002 and have a minimum of 30 observations. The NHDPlus Version 1.1 is an integrated suite of application-ready geospatial datasets that incorporates many of the best features of the National Hydrography Dataset (NHD) and the National Elevation Dataset (NED). The NHDPlus includes a stream network (based on the 1:100,000-scale NHD), improved networking, naming, and value-added attributes (VAAs). NHDPlus also includes elevation-derived catchments (drainage areas) produced using a drainage enforcement technique first widely used in New England, and thus referred to as "the New England Method." This technique involves "burning in" the 1:100,000-scale NHD and, when available, building "walls" using the National Watershed Boundary Dataset (WBD). The resulting modified digital elevation model (HydroDEM) is used to produce hydrologic derivatives that agree with the NHD and WBD. Over the past two years, an interdisciplinary team from the U.S. Geological Survey (USGS), the U.S. Environmental Protection Agency (USEPA), and contractors found that this method produces the best quality NHD catchments using an automated process (USEPA, 2007). The NHDPlus dataset is organized by 18 Production Units that cover the conterminous United States. The NHDPlus version 1.1 data are grouped by the U.S. Geologic Survey's Major River Basins (MRBs, Crawford and others, 2006). MRB1, covering the New England and Mid-Atlantic River basins, contains NHDPlus

  10. Origin of quantum criticality in Yb-Al-Au approximant crystal and quasicrystal

    International Nuclear Information System (INIS)

    Watanabe, Shinji; Miyake, Kazumasa

    2016-01-01

    To gain insight into the mechanism of emergence of the unconventional quantum criticality observed in the quasicrystal Yb15Al34Au51, the approximant crystal Yb14Al35Au51 is analyzed theoretically. By constructing a minimal model for the approximant crystal, the heavy quasiparticle band is shown to emerge near the Fermi level because of the strong correlation of 4f electrons at Yb. We find that the charge-transfer mode between the 4f electron at Yb on the 3rd shell and the 3p electron at Al on the 4th shell in the Tsai-type cluster is considerably enhanced with almost flat momentum dependence. The mode-coupling theory shows that the magnetic as well as the valence susceptibility exhibits χ ∼ T^(-0.5) in the zero-field limit and is expressed as a single scaling function of the ratio of temperature to magnetic field T/B over four decades even in the approximant crystal when a certain condition is satisfied by varying parameters, e.g., by applying pressure. The key origin is clarified to be the strong locality of the critical Yb-valence fluctuation and the small Brillouin zone reflecting the large unit cell, giving rise to the extremely small characteristic energy scale. This also gives a natural explanation for the quantum criticality in the quasicrystal corresponding to the infinite limit of the unit-cell size. (author)

  11. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.

    2017-09-07

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n^2) to O(mn + kn log(n/m)), where k is the numerical rank of off-diagonal matrix blocks and m is the size of small diagonal blocks in the matrix that are not well-approximated by low rank factorizations and treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.
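The baseline computation can be sketched as plain Monte Carlo with a dense Cholesky factor, which is the O(n^2)-per-sample cost that the hierarchical decomposition reduces. The dimension, sample count, and covariance below are arbitrary illustrative choices, and the independent case provides an exact answer to check against.

```python
import numpy as np

def mvn_orthant_prob(cov, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(X_1 < 0, ..., X_n < 0) for X ~ N(0, cov).

    Each sample costs O(n^2) through the dense Cholesky factor; the
    hierarchical decomposition in the paper reduces this per-sample cost.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n_samples, cov.shape[0]))
    x = z @ L.T  # rows are correlated normal samples
    return float(np.mean(np.all(x < 0.0, axis=1)))

# Sanity check: with independent components the answer is 0.5**n exactly.
p_indep = mvn_orthant_prob(np.eye(4))
print(f"estimate = {p_indep:.4f}, exact = {0.5**4:.4f}")

# Positive correlation raises the joint probability.
cov = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)
p_corr = mvn_orthant_prob(cov)
print(f"correlated estimate = {p_corr:.4f}")
```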

  12. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.; Keyes, David E.; Turkiyyah, George

    2017-01-01

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n^2) to O(mn + kn log(n/m)), where k is the numerical rank of off-diagonal matrix blocks and m is the size of small diagonal blocks in the matrix that are not well-approximated by low rank factorizations and treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.

  13. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.

  14. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.

  15. Relating normalization to neuronal populations across cortical areas.

    Science.gov (United States)

    Ruff, Douglas A; Alberts, Joshua J; Cohen, Marlene R

    2016-09-01

    Normalization, which divisively scales neuronal responses to multiple stimuli, is thought to underlie many sensory, motor, and cognitive processes. In every study in which it has been investigated, neurons measured in the same brain area under identical conditions exhibit a range of normalization, from suppression by nonpreferred stimuli (strong normalization) to additive responses to combinations of stimuli (no normalization). Normalization has been hypothesized to arise from interactions between neuronal populations, either in the same or in different brain areas, but current models of normalization are not mechanistic and focus on trial-averaged responses. To gain insight into the mechanisms underlying normalization, we examined interactions between neurons that exhibit different degrees of normalization. We recorded from multiple neurons in three cortical areas while rhesus monkeys viewed superimposed drifting gratings. We found that neurons showing strong normalization shared less trial-to-trial variability with other neurons in the same cortical area, and more variability with neurons in other cortical areas, than did units with weak normalization. Furthermore, the cortical organization of normalization was not random: neurons recorded on nearby electrodes tended to exhibit similar amounts of normalization. Together, our results suggest that normalization reflects a neuron's role in its local network and that modulatory factors like normalization share the topographic organization typical of sensory tuning properties. Copyright © 2016 the American Physiological Society.
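Although the paper notes that current models are descriptive rather than mechanistic, the standard form of normalization is divisive. A hedged sketch of that textbook model, with an illustrative weight `w` interpolating between the two extremes described above (w = 0 gives additive responses, larger w suppresses toward an average):

```python
import numpy as np

# Textbook divisive-normalization model: the response to superimposed
# stimuli is the summed drive divided by (sigma + w * pool activity).
# The weight w is an illustrative knob, not a quantity from the paper:
# w = 0 reproduces additive responses (no normalization); larger w
# suppresses the combined response (strong normalization).
def normalized_response(drive, pool, w, sigma=1.0):
    return drive / (sigma + w * np.sum(pool))

r1, r2 = 10.0, 6.0   # hypothetical responses to each grating alone
additive = normalized_response(r1 + r2, [r1, r2], w=0.0)
suppressed = normalized_response(r1 + r2, [r1, r2], w=1.0)
```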

  16. Performance of HESCO Bastion Units Under Combined Normal and Cyclic Lateral Loading

    Science.gov (United States)

    2017-02-01

    HESCO bastion units consist of welded wire mesh (WWM) panels used with a geotextile liner; the units are set up onsite and filled with soil or sand, as available at the construction site. This report (ERDC/CERL TR-17-4) documents their performance under combined normal and cyclic lateral loading in the ERDC testing program. The Commander of ERDC was COL Bryan S. Green and the Director was Dr. Jeffery P. Holland.

  17. Approximating perfection a mathematician's journey into the world of mechanics

    CERN Document Server

    Lebedev, Leonid P

    2004-01-01

    This is a book for those who enjoy thinking about how and why Nature can be described using mathematical tools. Approximating Perfection considers the background behind mechanics as well as the mathematical ideas that play key roles in mechanical applications. Concentrating on the models of applied mechanics, the book engages the reader in the types of nuts-and-bolts considerations that are normally avoided in formal engineering courses: how and why models remain imperfect, and the factors that motivated their development. The opening chapter reviews and reconsiders the basics of c

  18. Longitudinal study of serum placental GH in 455 normal pregnancies

    DEFF Research Database (Denmark)

    Chellakooty, Marla; Skibsted, Lillian; Skouby, Sven O

    2002-01-01

    The study included women with normal singleton pregnancies, examined at approximately 19 and 28 wk gestation. Serum placental GH concentrations were measured by a highly specific immunoradiometric assay, and fetal size was measured by ultrasound. Data on birth weight, gender, prepregnancy body mass index (BMI), parity, and smoking…

  19. Correction of Bowtie-Filter Normalization and Crescent Artifacts for a Clinical CBCT System.

    Science.gov (United States)

    Zhang, Hong; Kong, Vic; Huang, Ke; Jin, Jian-Yue

    2017-02-01

    To present our experiences in understanding and minimizing bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a clinical cone beam computed tomography system. Bowtie-filter position and profile variations during gantry rotation were studied. Two previously proposed strategies (A and B) were applied to the clinical cone beam computed tomography system to correct bowtie-filter crescent artifacts. Physical calibration and analytical approaches were used to minimize the norm-phantom misalignment and to correct for bowtie-filter normalization artifacts. A combined procedure to reduce both artifact types was proposed, tested on a norm phantom, a CatPhan, and a patient, and evaluated using the standard deviation of Hounsfield units (HU) along a sampling line. The bowtie filter exhibited not only a translational shift but also an amplitude variation in its projection profile during gantry rotation. Strategy B was slightly better than strategy A in minimizing bowtie-filter crescent artifacts, possibly because it corrected the amplitude variation, suggesting that the amplitude variation plays a role in these artifacts. The physical calibration largely reduced the misalignment-induced bowtie-filter normalization artifacts, and the analytical approach reduced them further. The combined procedure minimized both artifact types, with HU standard deviations of 63.2, 45.0, 35.0, and 18.8 HU for the best correction approaches of none, crescent artifacts only, normalization artifacts only, and normalization plus crescent artifacts, respectively. The combined procedure also demonstrated reduction of both artifact types in a CatPhan and a patient.

  20. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
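A log-normality check of the kind described amounts to verifying that the logarithm of dissipation is approximately Gaussian, i.e. that its skewness and excess kurtosis are near zero. A sketch on synthetic data (the sampling parameters are illustrative, not from the simulations):

```python
import numpy as np

# If dissipation eps is log-normal, log(eps) is Gaussian, so the
# skewness and excess kurtosis of log(eps) should both be near zero.
# The synthetic field below is exactly log-normal by construction.
def log_moments(eps):
    z = np.log(eps)
    z = (z - z.mean()) / z.std()
    skew = float(np.mean(z ** 3))
    excess_kurtosis = float(np.mean(z ** 4) - 3.0)
    return skew, excess_kurtosis

rng = np.random.default_rng(1)
eps = np.exp(rng.normal(-1.0, 2.0, size=100_000))
skew, exkurt = log_moments(eps)   # both close to 0 for log-normal data
```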

  1. Antitissue Transglutaminase Normalization Postdiagnosis in Children With Celiac Disease.

    Science.gov (United States)

    Isaac, Daniela Migliarese; Rajani, Seema; Yaskina, Maryna; Huynh, Hien Q; Turner, Justine M

    2017-08-01

    Limited pediatric data exist examining the trend and predictors of antitissue transglutaminase (atTG) normalization over time in children with celiac disease (CD). We aimed to evaluate time to normalization of atTG in children after CD diagnosis, and to assess for independent predictors affecting this duration. A retrospective chart review was completed in pediatric patients with CD diagnosed from 2007 to 2014 at the Stollery Children's Hospital Celiac Clinic (Edmonton, Alberta, Canada). The clinical predictors assessed for impact on time to atTG normalization were initial atTG, Marsh score at diagnosis, gluten-free diet compliance (GFDC), age at diagnosis, sex, ethnicity, medical comorbidities, and family history of CD. Kaplan-Meier survival analysis was completed to assess time to atTG normalization, and Cox regression to assess for independent predictors of this time. A total of 487 patients met inclusion criteria. Approximately 80.5% of patients normalized atTG levels. Median normalization time was 407 days for all patients (95% confidence interval [CI]: 361-453), and 364 days for gluten-free diet compliant patients (95% CI: 335-393). Patients with type 1 diabetes mellitus (T1DM) took significantly longer to normalize, at 1204 days (95% CI: 199-2209). GFDC was a significant predictor of earlier normalization (OR = 13.91 [7.86-24.62]). Patients with T1DM are less likely to normalize atTG levels and have a longer normalization time. Additional research and education for higher-risk populations are needed.
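The Kaplan-Meier estimate of median time to normalization used above can be sketched in a few lines; the data below are synthetic, not the study cohort:

```python
# Product-limit (Kaplan-Meier) estimate of the median time to atTG
# normalization. Times are in days; event = 1 means the antibody
# normalized, event = 0 means the patient was censored. Data are
# synthetic, chosen only to exercise the estimator.
def km_median(times, events):
    survival = 1.0
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:
            survival *= (at_risk - 1) / at_risk
        at_risk -= 1
        if survival <= 0.5:
            return t
    return None   # median not reached within follow-up

median_days = km_median([100, 200, 250, 300, 400, 500],
                        [1, 1, 0, 1, 1, 0])
```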

  2. Gauss-Arnoldi quadrature for ⟨A^{-1}φ, φ⟩ and rational Padé-type approximation for Markov-type functions

    International Nuclear Information System (INIS)

    Knizhnerman, L A

    2008-01-01

    The efficiency of Gauss-Arnoldi quadrature for the calculation of the quantity ⟨A^{-1}φ, φ⟩ is studied, where A is a bounded operator in a Hilbert space and φ is a non-trivial vector in this space. A necessary condition and a sufficient condition are found for the efficiency of the quadrature in the case of a normal operator. An example of a non-normal operator for which this quadrature is inefficient is presented. It is shown that Gauss-Arnoldi quadrature is related in certain cases to rational Padé-type approximation (with poles at the Ritz numbers) for functions of Markov type and, in particular, can be used for the localization of the poles of a rational perturbation. Error estimates are found, which can also be used when classical Padé approximation does not work or may not be efficient. Theoretical results and conjectures are illustrated by numerical experiments. Bibliography: 44 titles

  3. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  4. COBE DMR-normalized open inflation cold dark matter cosmogony

    Science.gov (United States)

    Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.

    1995-01-01

    A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone do not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Ω_0 is approximately 0.3-0.4 and merits further study.

  5. THE FEATURES OF CONNEXINS EXPRESSION IN THE CELLS OF NEUROVASCLAR UNIT IN NORMAL CONDITIONS AND HYPOXIA IN VITRO

    Directory of Open Access Journals (Sweden)

    A. V. Morgun

    2014-01-01

    Full Text Available The aim of this research was to assess the role of connexin 43 (Cx43) and the associated molecule CD38 in the regulation of cell-cell interactions in the neurovascular unit (NVU) in vitro in physiological conditions and in hypoxia. Materials and methods. The study was done using an original in vitro neurovascular unit model. The NVU consisted of three cell types: neurons, astrocytes, and cerebral endothelial cells derived from rats. Hypoxia was induced by incubating cells with sodium iodoacetate for 30 min at 37 °C in standard culture conditions. Results. We investigated the role of connexin 43 in the regulation of cell interactions within the NVU in normal conditions and in hypoxic injury in vitro. We found that astrocytes were characterized by high levels of Cx43 expression and low levels of CD38 expression, whereas neurons demonstrated high levels of CD38 and low levels of Cx43. In hypoxic conditions, the expression of Cx43 and CD38 in astrocytes markedly increased while CD38 expression in neurons decreased; no changes were found in endothelial cells. Suppression of Cx43 activity resulted in down-regulation of CD38 in NVU cells, both in physiological conditions and in chemical hypoxia. Conclusion. Thus, the Cx-regulated intercellular NAD+-dependent communication and secretory phenotype of astroglial cells that are part of the blood-brain barrier is markedly changed in hypoxia.

  6. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those who want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  7. Kullback–Leibler Divergence of the γ–ordered Normal over t–distribution

    OpenAIRE

    Toulias, T-L.; Kitsos, C-P.

    2012-01-01

    The aim of this paper is to evaluate and study the Kullback–Leibler divergence of the γ–ordered Normal distribution, a generalization of the Normal distribution that emerged from the generalized Fisher's information measure, over the scaled t–distribution. We investigate this evaluation through a series of bounds and approximations, while the asymptotic behavior of the divergence is also studied. Moreover, we obtain a generalization of the known Kullback–Leibler information measure between…

  8. Comparison of different moment-closure approximations for stochastic chemical kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Schnoerr, David [School of Biological Sciences, University of Edinburgh, Edinburgh (United Kingdom); School of Informatics, University of Edinburgh, Edinburgh (United Kingdom); Sanguinetti, Guido [School of Informatics, University of Edinburgh, Edinburgh (United Kingdom); Grima, Ramon [School of Biological Sciences, University of Edinburgh, Edinburgh (United Kingdom)

    2015-11-14

    In recent years, moment-closure approximations (MAs) of the chemical master equation have become a popular method for the study of stochastic effects in chemical reaction systems. Several different MA methods have been proposed and applied in the literature, but it remains unclear how they perform with respect to each other. In this paper, we study the normal, Poisson, log-normal, and central-moment-neglect MAs by applying them to understand the stochastic properties of chemical systems whose deterministic rate equations show the properties of bistability, ultrasensitivity, and oscillatory behaviour. Our results suggest that the normal MA is favourable over the other studied MAs. In particular, we found that (i) the size of the region of parameter space where a closure gives physically meaningful results, e.g., positive mean and variance, is considerably larger for the normal closure than for the other three closures, (ii) the accuracy of the predictions of the four closures (relative to simulations using the stochastic simulation algorithm) is comparable in those regions of parameter space where all closures give physically meaningful results, and (iii) the Poisson and log-normal MAs are not uniquely defined for systems involving conservation laws in molecule numbers. We also describe the new software package MOCA which enables the automated numerical analysis of various MA methods in a graphical user interface and which was used to perform the comparative analysis presented in this paper. MOCA allows the user to develop novel closure methods and can treat polynomial, non-polynomial, as well as time-dependent propensity functions, thus being applicable to virtually any chemical reaction system.
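The idea behind a moment closure can be seen on the dimerization system 0 → X (rate k0), X + X → 0 (rate k1), whose equation for E[X²] involves E[X³]. Under the normal closure the third central moment is set to zero, E[X³] = 3 E[X] E[X²] − 2 E[X]³, which closes the system. A sketch with illustrative rate constants (this is not the MOCA package, just the closure written out):

```python
# Normal moment closure for 0 -> X (rate k0), X + X -> 0 (rate k1).
# The exact equation for M2 = E[X^2] involves M3 = E[X^3]; the normal
# closure sets the third central moment to zero, giving
# M3 = 3*M1*M2 - 2*M1**3 and closing the system. Rates, step size, and
# step count are illustrative; the ODEs are integrated by forward Euler
# from the deterministic steady state with a Poisson variance guess.
k0, k1 = 10.0, 0.1
M1 = (k0 / (2.0 * k1)) ** 0.5        # deterministic steady state, sqrt(50)
M2 = M1 ** 2 + M1                    # Poisson-like initial guess for E[X^2]
dt = 5e-4
for _ in range(60_000):
    M3 = 3.0 * M1 * M2 - 2.0 * M1 ** 3                    # the closure
    dM1 = k0 - 2.0 * k1 * (M2 - M1)                       # d E[X]/dt
    dM2 = k0 * (2.0 * M1 + 1.0) - 4.0 * k1 * (M3 - 2.0 * M2 + M1)
    M1 += dt * dM1
    M2 += dt * dM2

mean, variance = M1, M2 - M1 ** 2    # closed-system steady-state moments
```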

  9. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
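The Score method referenced above is the Wilson interval: the normal approximation is applied to the score statistic rather than to the observed proportion itself. A minimal sketch (the 40-of-50 example is illustrative):

```python
from math import sqrt

# Wilson (Score) interval for a binomial proportion. Applying the
# normal approximation to the score statistic keeps the interval
# inside [0, 1] and close to nominal coverage, which is why it fares
# well in the comparison described above.
def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_interval(40, 50)   # a raw score of 40 out of 50 items
```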

  10. Time-dependent Hartree approximation and time-dependent harmonic oscillator model

    International Nuclear Information System (INIS)

    Blaizot, J.P.

    1982-01-01

    We present an analytically soluble model for studying nuclear collective motion within the framework of the time-dependent Hartree (TDH) approximation. The model reduces the TDH equations to the Schroedinger equation of a time-dependent harmonic oscillator. Using canonical transformations and coherent states we derive a few properties of the time-dependent harmonic oscillator which are relevant for applications. We analyse the role of the normal modes in the time evolution of a system governed by TDH equations. We show how these modes couple together due to the anharmonic terms generated by the non-linearity of the theory. (orig.)

  11. Improved Root Normal Size Distributions for Liquid Atomization

    Science.gov (United States)

    2015-11-01


  12. Novel surgical performance evaluation approximates Standardized Incidence Ratio with high accuracy at simple means.

    Science.gov (United States)

    Gabbay, Itay E; Gabbay, Uri

    2013-01-01

    Excess adverse events may be attributable to poor surgical performance but also to case mix, which is controlled for through the Standardized Incidence Ratio (SIR). SIR calculations can be complicated, resource consuming, and unfeasible in some settings. This article suggests a novel method for SIR approximation. In order to evaluate a potential SIR surrogate measure we predefined acceptance criteria. We developed a new measure, the Approximate Risk Index (ARI). "Number Needed for Event" (NNE) is the theoretical number of patients needed "to produce" one adverse event. ARI is defined as the quotient of Ge, the group of patients needed for no observed events, by Ga, the total patients treated. Our evaluation compared 2500 surgical units and over 3 million heterogeneous-risk surgical patients generated through a computerized simulation. Each surgical unit's data were computed for SIR and ARI to evaluate compliance with the predefined criteria. Approximation was evaluated by correlation analysis and performance-prediction capability by Receiver Operating Characteristic (ROC) analysis. ARI strongly correlates with SIR (r² = 0.87), and ROC analysis showed excellent predictive capability (area under the curve > 0.9), with 87% sensitivity and 91% specificity. ARI provides good approximation of SIR and excellent prediction capability. ARI is simple and cost-effective as it requires thorough risk evaluation of only the adverse-event patients. ARI can provide a crucial screening and performance-evaluation quality-control tool. The ARI method may suit other clinical and epidemiological settings where a relatively small fraction of the entire population is affected. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
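For reference, the SIR itself is the ratio of observed adverse events to the events expected under each patient's case-mix-adjusted risk. A sketch with hypothetical risks (the exact construction of ARI from the group sizes Ge and Ga follows the paper and is not reproduced here):

```python
# Standardized Incidence Ratio: observed adverse events divided by the
# number expected under each patient's predicted (case-mix-adjusted)
# risk. The risks below are hypothetical; the ARI construction from
# the group sizes Ge and Ga is left to the paper.
def sir(observed_events, patient_risks):
    expected = sum(patient_risks)
    return observed_events / expected

risks = [0.01, 0.05, 0.02, 0.10, 0.02]   # expected events sum to 0.20
ratio = sir(observed_events=1, patient_risks=risks)
```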

  13. Semantic and phonological coding in poor and normal readers.

    Science.gov (United States)

    Vellutino, F R; Scanlon, D M; Spearing, D

    1995-02-01

    Three studies were conducted evaluating semantic and phonological coding deficits as alternative explanations of reading disability. In the first study, poor and normal readers in second and sixth grade were compared on various tests evaluating semantic development as well as on tests evaluating rapid naming and pseudoword decoding as independent measures of phonological coding ability. In a second study, the same subjects were given verbal memory and visual-verbal learning tasks using high and low meaning words as verbal stimuli and Chinese ideographs as visual stimuli. On the semantic tasks, poor readers performed below the level of the normal readers only at the sixth grade level, but, on the rapid naming and pseudoword learning tasks, they performed below the normal readers at the second as well as at the sixth grade level. On both the verbal memory and visual-verbal learning tasks, performance in poor readers approximated that of normal readers when the word stimuli were high in meaning but not when they were low in meaning. These patterns were essentially replicated in a third study that used some of the same semantic and phonological measures used in the first experiment, and verbal memory and visual-verbal learning tasks that employed word lists and visual stimuli (novel alphabetic characters) that more closely approximated those used in learning to read. It was concluded that semantic coding deficits are an unlikely cause of reading difficulties in most poor readers at the beginning stages of reading skills acquisition, but accrue as a consequence of prolonged reading difficulties in older readers. It was also concluded that phonological coding deficits are a probable cause of reading difficulties in most poor readers.

  14. Development of Normalization Factors for Canada and the United States and Comparison with European Factors

    DEFF Research Database (Denmark)

    Lautier, Anne; Rosenbaum, Ralph K.; Margni, Manuele

    2010-01-01

    In Life Cycle Assessment (LCA), normalization calculates the magnitude of an impact (midpoint or endpoint) relative to the total effect of a given reference. The goal of this work is to calculate normalization factors for Canada and the US and to compare them with existing European normalization factors. The differences between geographical areas were highlighted by identifying and comparing the main contributors to a given impact category in Canada, the US and Europe. This comparison verified that the main contributors in Europe and in the US are also present in the Canadian inventory. It also…

  15. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's linear theorem, which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev approximation are presented. Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  16. Gamma-Ray Burst Host Galaxies Have "Normal" Luminosities.

    Science.gov (United States)

    Schaefer

    2000-04-10

    The galactic environment of gamma-ray bursts can provide good evidence about the nature of the progenitor system, with two old arguments implying that the burst host galaxies are significantly subluminous. New data and new analysis have now reversed this picture: (1) Even though the first two known host galaxies are indeed greatly subluminous, the next eight hosts have absolute magnitudes typical for a population of field galaxies. A detailed analysis of the 16 known hosts (10 with redshifts) shows them to be consistent with a Schechter luminosity function with R* = -21.8 ± 1.0, as expected for normal galaxies. (2) Bright bursts from the Interplanetary Network are typically 18 times brighter than the faint bursts with redshifts; however, the bright bursts do not have galaxies inside their error boxes to limits deeper than expected if the luminosities of the two samples were identical. A new solution to this dilemma is that a broad burst luminosity function, along with a burst number density varying as the star formation rate, will require the average luminosity of the bright sample (>6×10^58 photons s^-1, or >1.7×10^52 ergs s^-1) to be much greater than the average luminosity of the faint sample (approximately 10^58 photons s^-1, or approximately 3×10^51 ergs s^-1). This places the bright bursts at distances for which host galaxies with a normal luminosity will not violate the observed limits. In conclusion, all current evidence points to gamma-ray burst host galaxies being normal in luminosity.

  17. Approximating the r-process on earth with thermonuclear explosions

    International Nuclear Information System (INIS)

    Becker, S.A.

    1992-01-01

    The astrophysical r-process can be approximately simulated in certain types of thermonuclear explosions. Between 1952 and 1969, twenty-three nuclear tests were fielded by the United States which had as one of their objectives the production of heavy transuranic elements. Of these tests, fifteen were at least partially successful. Some of these shots were conducted under the Plowshare Peaceful Nuclear Explosion Program as scientific research experiments. A review of the program, the target nuclei used, and the heavy-element yields achieved will be presented, as well as a discussion of plans for a new experiment in a future nuclear test

  18. Detecting altered postural control after cerebral concussion in athletes with normal postural stability

    OpenAIRE

    Cavanaugh, J; Guskiewicz, K; Giuliani, C; Marshall, S; Mercer, V; Stergiou, N

    2005-01-01

    Objective: To determine if approximate entropy (ApEn), a regularity statistic from non-linear dynamics, could detect changes in postural control during quiet standing in athletes with normal postural stability after cerebral concussion.

  19. Gestational age and birth weight centiles of singleton babies delivered normally following spontaneous labor, in Southern Sri Lanka

    Science.gov (United States)

    Attanayake, K; Munasinghe, S; Goonewardene, M; Widanapathirana, P; Sandeepani, I; Sanjeewa, L

    2018-03-31

    To estimate the gestational age and birth weight centiles of babies delivered normally, without any obstetric intervention, in women with uncomplicated singleton pregnancies establishing spontaneous onset of labour. Consecutive women with uncomplicated singleton pregnancies, attending the Academic Obstetrics and Gynecology Unit of the Teaching Hospital Mahamodara, Galle, Sri Lanka, with confirmed dates, establishing spontaneous onset of labor, and delivering vaginally between gestational ages of 34 and 41 weeks, without any obstetric intervention, during the period September 2013 to February 2014 were studied. The gestational age at spontaneous onset of labor and vaginal delivery and the birth weights of the babies were recorded. There were 3294 consecutive deliveries during this period, and of them 1602 (48.6%) met the inclusion criteria. Median gestational age at delivery was 275 days (range 238-291 days; IQR 269-280 days) and the median birth weight was 3000 g (range 1700-4350 g; IQR 2750-3250 g). The 10th, 50th and 90th birth weight centiles of the babies delivered at a gestational age of 275 days were approximately 2570 g, 3050 g and 3550 g respectively. The median gestational age among women with uncomplicated singleton pregnancies who established spontaneous onset of labor and delivered vaginally, without any obstetric intervention, was approximately five days shorter than the traditionally accepted 280 days. At a gestational age of 275 days, the mean birth weight was approximately 3038 g and the 50th centile of the birth weight of the babies delivered was approximately 3050 g.

  20. Circulating sex hormones and terminal duct lobular unit involution of the normal breast.

    Science.gov (United States)

    Khodr, Zeina G; Sherman, Mark E; Pfeiffer, Ruth M; Gierach, Gretchen L; Brinton, Louise A; Falk, Roni T; Patel, Deesha A; Linville, Laura M; Papathomas, Daphne; Clare, Susan E; Visscher, Daniel W; Mies, Carolyn; Hewitt, Stephen M; Storniolo, Anna Maria V; Rosebrock, Adrian; Caban, Jesus J; Figueroa, Jonine D

    2014-12-01

    Terminal duct lobular units (TDLU) are the predominant source of breast cancers. Lesser degrees of age-related TDLU involution have been associated with increased breast cancer risk, but factors that influence involution are largely unknown. We assessed whether circulating hormones, implicated in breast cancer risk, are associated with levels of TDLU involution using data from the Susan G. Komen Tissue Bank (KTB) at the Indiana University Simon Cancer Center (2009-2011). We evaluated three highly reproducible measures of TDLU involution, using normal breast tissue samples from the KTB (n = 390): TDLU counts, median TDLU span, and median acini counts per TDLU. RRs (for continuous measures), ORs (for categorical measures), 95% confidence intervals (95% CI), and P-trends were calculated to assess the association between tertiles of estradiol, testosterone, sex hormone-binding globulin (SHBG), progesterone, and prolactin with TDLU measures. All models were stratified by menopausal status and adjusted for confounders. Among premenopausal women, higher prolactin levels were associated with higher TDLU counts (RR for T3 vs. T1: 1.18; 95% CI: 1.07-1.31; P-trend = 0.0005), but higher progesterone was associated with lower TDLU counts (RR for T3 vs. T1: 0.80; 95% CI: 0.72-0.89; P-trend < 0.0001). Among postmenopausal women, higher levels of estradiol (RR for T3 vs. T1: 1.61; 95% CI: 1.32-1.97; P-trend < 0.0001) and testosterone (RR for T3 vs. T1: 1.32; 95% CI: 1.09-1.59; P-trend = 0.0043) were associated with higher TDLU counts. These data suggest that select hormones may influence breast cancer risk, potentially through delaying TDLU involution. Increased understanding of the relationship between circulating markers and TDLU involution may offer new insights into breast carcinogenesis. Cancer Epidemiol Biomarkers Prev; 23(12); 2765-73. ©2014 AACR.

  1. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline approximation; optimal reconstruction of functions and linear functionals. Many of the results are based…

  2. Motor unit number estimation in the quantitative assessment of severity and progression of motor unit loss in Hirayama disease.

    Science.gov (United States)

    Zheng, Chaojun; Zhu, Yu; Zhu, Dongqing; Lu, Feizhou; Xia, Xinlei; Jiang, Jianyuan; Ma, Xiaosheng

    2017-06-01

    To investigate motor unit number estimation (MUNE) as a method to quantitatively evaluate the severity and progression of motor unit loss in Hirayama disease (HD). Multipoint incremental MUNE was performed bilaterally on both abductor digiti minimi and abductor pollicis brevis muscles in 46 patients with HD and 32 controls, along with handgrip strength examination. MUNE was re-evaluated approximately 1 year after the initial examination in 17 patients with HD. The MUNE values were significantly lower in all the tested muscles in the HD group and decreased with disease duration; on re-evaluation, significant progression of motor unit loss was found in patients with HD within approximately 1 year, particularly in those with a disease duration under 4 years. A reduction in the functioning motor units was found in patients with HD compared with that in controls, even in the early asymptomatic stages. Moreover, the motor unit loss in HD progresses gradually as the disease advances. These results have provided evidence for the application of MUNE in estimating the reduction of motor units in HD and confirming the validity of MUNE for tracking the progression of HD in a clinical setting. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  3. Increase in the accuracy of approximating the profile of the erosion zone in planar magnetrons

    Science.gov (United States)

    Rogov, A. V.; Kapustin, Yu. V.

    2017-09-01

    It has been shown that the use of the survival function of the Weibull distribution shifted along the ordinate axis allows one to increase the accuracy of the approximation of the normalized profile of an erosion zone in the area from the axis to the maximum sputtering region compared with the previously suggested distribution function of the extremum values. The survival function of the Weibull distribution is used in the area from the maximum to the outer boundary of an erosion zone. The major advantage of using the new approximation is observed for magnetrons with a large central nonsputtered spot and for magnetrons with substantial sputtering in the paraxial zone.
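The shifted-Weibull fit described in this record can be sketched numerically. A minimal example, assuming a hypothetical normalized profile; the sample radii, noise level, and parameter values below are illustrative stand-ins, not data from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def shifted_weibull_survival(r, k, lam, y0):
    """Weibull survival function exp(-(r/lam)^k), shifted along the
    ordinate axis by y0 and rescaled so that S(0) = 1."""
    return y0 + (1.0 - y0) * np.exp(-(r / lam) ** k)

# Hypothetical normalized erosion-profile samples (radius vs. depth).
r = np.linspace(0.1, 3.0, 30)
rng = np.random.default_rng(0)
data = shifted_weibull_survival(r, 2.0, 1.5, 0.1) + rng.normal(0.0, 0.01, r.size)

# Fit the three parameters; bounds keep k and lam positive during the fit.
popt, _ = curve_fit(shifted_weibull_survival, r, data,
                    p0=[1.5, 1.0, 0.0],
                    bounds=([0.1, 0.1, -1.0], [10.0, 10.0, 1.0]))
k_fit, lam_fit, y0_fit = popt
```

In practice one such curve would be fitted from the axis to the sputtering maximum and a second from the maximum to the outer boundary of the erosion zone, as the record describes.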

  4. Simple eigenvectors of unbounded operators of the type “normal plus compact”

    Directory of Open Access Journals (Sweden)

    Michael Gil'

    2015-01-01

    The paper deals with operators of the form \(A=S+B\), where \(B\) is a compact operator in a Hilbert space \(H\) and \(S\) is an unbounded normal operator in \(H\) having a compact resolvent. We consider approximations of the eigenvectors of \(A\), corresponding to simple eigenvalues, by the eigenvectors of the operators \(A_n=S+B_n\) (\(n=1,2,\ldots\)), where \(B_n\) is an \(n\)-dimensional operator. In addition, we obtain an error estimate for the approximation.

  5. Development of Normalization Factors for Canada and the United States and Comparison with European Factors

    Science.gov (United States)

    In Life Cycle Assessment (LCA), normalization calculates the magnitude of an impact (midpoint or endpoint) relative to the total effect of a given reference. Using a country or a continent as a reference system is a first step towards global normalization. The goal of this wor...

  6. Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
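The simultaneous perturbation gradient idea underlying SPSA can be sketched as follows. This is a generic illustration using the common Bernoulli ±1 perturbations and standard gain sequences, not the optimal perturbation distribution derived in the paper, and the quadratic test loss is a hypothetical example:

```python
import numpy as np

def spsa_minimize(loss, theta0, a=0.1, c=0.1, n_iter=2000, seed=0):
    """Minimal SPSA: estimate the gradient from only two loss
    evaluations per iteration, perturbing all coordinates at once
    with a random simultaneous perturbation vector."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                 # standard gain decay rates
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Bernoulli +/-1
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Hypothetical test loss with minimum at (1, -2).
loss = lambda t: (t[0] - 1.0) ** 2 + (t[1] + 2.0) ** 2
theta = spsa_minimize(loss, [0.0, 0.0])
```

Because each iteration needs only two loss measurements regardless of dimension, SPSA scales well when gradients are unavailable.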

  7. Radioimmunoassay of erythropoietin: circulating levels in normal and polycythemic human beings

    International Nuclear Information System (INIS)

    Garcia, J.F.; Ebbe, S.N.; Hollander, L.; Cutting, H.O.; Miller, M.E.; Cronkite, E.P.

    1982-01-01

    Techniques are described in detail for the RIA of human Ep in unextracted plasma or serum. With 100 μl of sample, the assay is sensitive at an Ep concentration of approximately 4 mU/ml, and when required, the sensitivity can be increased to 0.4 mU/ml, a range considerably less than the concentration observed in normal human beings. This is approximately 100 times more sensitive than existing in vivo bioassays for this hormone. Studies concerned with the validation of the Ep RIA show a high degree of correlation with the polycythemic mouse bioassay. Dilutions of a variety of human serum samples show a parallel relationship with the standard reference preparation for Ep. Validation of the RIA is further confirmed by observations of appropriate increases or decreases of circulating Ep levels in physiological and clinical conditions known to be associated with stimulation or suppression of Ep secretion. Significantly different mean serum concentrations of 17.2 mU/ml for normal male subjects and 18.8 mU/ml for normal female subjects were observed. Mean plasma Ep concentrations in patients with polycythemia vera are significantly decreased, and those of patients with secondary polycythemia are significantly increased as compared to plasma levels in normal subjects. These results demonstrate an initial practical value of the Ep RIA in the hematology clinic, which will most certainly be expanded with its more extensive use.

  8. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon; Liang, Faming

    2014-01-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo

  9. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    Science.gov (United States)

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. 
Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement

  10. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
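The subspace-sampling idea that makes AKCL scalable can be illustrated with a generic Nyström sketch: approximate the full kernel matrix from a small random set of landmark points instead of computing all n × n entries. This shows the sampling principle only, not the authors' AKCL algorithm, and all data and parameters below are hypothetical:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, m, gamma=0.5, seed=0):
    """Nystrom approximation K ~= C W^+ C^T built from m random
    landmark points, avoiding most of the full kernel computation."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x m block
    W = C[idx]                         # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
K = rbf_kernel(X, X)                   # full matrix, for comparison only
K_hat = nystrom_approx(X, m=50)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

With only a quarter of the points used as landmarks, the low-rank reconstruction stays close to the full kernel matrix because RBF kernel spectra decay quickly.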

  11. Correlation of zero-point energy with molecular structure and molecular forces. 1. Development of the approximation

    International Nuclear Information System (INIS)

    Oi, T.; Ishida, T.

    1983-01-01

    An approximation formula for the zero-point energy (ZPE) has been developed on the basis of Lanczos' tau method in which the ZPE has been expressed in terms of the traces of positive integral powers of the FG matrix. It requires two approximation parameters, i.e., a normalization reference point in a domain of vibration eigenvalues and a range for the purpose of expansion. These parameters have been determined for two special cases as well as for general situation at various values of a weighting function parameter. The approximation method has been tested on water, carbon dioxide, formaldehyde, and methane. The relative errors are 3% or less for the molecules examined, and the best choice of the parameters moderately depends on the frequency distribution. 25 references, 2 figures, 9 tables

  12. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  13. Approximate cohomology in Banach algebras | Pourabbas ...

    African Journals Online (AJOL)

    We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...

  14. Traveltime approximations for transversely isotropic media with an inhomogeneous background

    KAUST Repository

    Alkhalifah, Tariq

    2011-05-01

    A transversely isotropic (TI) model with a tilted symmetry axis is regarded as one of the most effective approximations to the Earth subsurface, especially for imaging purposes. However, we commonly utilize this model by setting the axis of symmetry normal to the reflector. This assumption may be accurate in many places, but deviations from this assumption will cause errors in the wavefield description. Using perturbation theory and Taylor's series, I expand the solutions of the eikonal equation for 2D TI media with respect to the independent parameter θ, the angle the tilt of the axis of symmetry makes with the vertical, in a generally inhomogeneous TI background with a vertical axis of symmetry. I do an additional expansion in terms of the independent (anellipticity) parameter η in a generally inhomogeneous elliptically anisotropic background medium. These new TI traveltime solutions are given by expansions in η and θ with coefficients extracted from solving linear first-order partial differential equations. Padé approximations are used to enhance the accuracy of the representation by predicting the behavior of the higher-order terms of the expansion. A simplification of the expansion for homogeneous media provides nonhyperbolic moveout descriptions of the traveltime for TI models that are more accurate than other recently derived approximations. In addition, for 3D media, I develop traveltime approximations using Taylor's series type of expansions in the azimuth of the axis of symmetry. The coefficients of all these expansions can also provide us with the medium sensitivity gradients (Jacobian) for nonlinear tomographic-based inversion for the tilt in the symmetry axis. © 2011 Society of Exploration Geophysicists.

  15. Traveltime approximations for transversely isotropic media with an inhomogeneous background

    KAUST Repository

    Alkhalifah, Tariq

    2011-01-01

    A transversely isotropic (TI) model with a tilted symmetry axis is regarded as one of the most effective approximations to the Earth subsurface, especially for imaging purposes. However, we commonly utilize this model by setting the axis of symmetry normal to the reflector. This assumption may be accurate in many places, but deviations from this assumption will cause errors in the wavefield description. Using perturbation theory and Taylor's series, I expand the solutions of the eikonal equation for 2D TI media with respect to the independent parameter θ, the angle the tilt of the axis of symmetry makes with the vertical, in a generally inhomogeneous TI background with a vertical axis of symmetry. I do an additional expansion in terms of the independent (anellipticity) parameter η in a generally inhomogeneous elliptically anisotropic background medium. These new TI traveltime solutions are given by expansions in η and θ with coefficients extracted from solving linear first-order partial differential equations. Padé approximations are used to enhance the accuracy of the representation by predicting the behavior of the higher-order terms of the expansion. A simplification of the expansion for homogeneous media provides nonhyperbolic moveout descriptions of the traveltime for TI models that are more accurate than other recently derived approximations. In addition, for 3D media, I develop traveltime approximations using Taylor's series type of expansions in the azimuth of the axis of symmetry. The coefficients of all these expansions can also provide us with the medium sensitivity gradients (Jacobian) for nonlinear tomographic-based inversion for the tilt in the symmetry axis. © 2011 Society of Exploration Geophysicists.

  16. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV,  held April 7–10, 2013 in San Antonio, Texas. The proceedings contains surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  17. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  18. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  19. Initial startup and operations of Yonggwang Units 3 and 4

    International Nuclear Information System (INIS)

    Collier, T.J.; Chari, D.R.; Kiraly, F.

    1996-01-01

    A significant milestone in the nuclear power industry was achieved in 1995, when Yonggwang (YGN) Units 3 and 4 were accepted into commercial operation by Korea Electric Power Corporation (KEPCO). YGN Unit 3 was accepted into commercial operation on March 31, 1995, the original date established during project initiation. YGN Unit 4 was accepted into operation on January 1, 1996, 3 months ahead of schedule. Each YGN unit produces approximately 1,050 MWe and supplies approximately ten percent of the total electric power demand in the Republic of Korea (ROK). The overall plant efficiency is approximately 37%, which is at least 1% higher than most nuclear units. Since achieving commercial operation, YGN Unit 3 has operated at essentially full power, which has resulted in an annual performance rate in excess of 85%. YGN Unit 3 is the first of six pressurized water reactors which are currently under design and construction in the ROK and serves as the reference design for the Korean Standard Nuclear Power Plant program. Both YGN Units 3 and 4 include a System 80 Nuclear Steam Supply System (NSSS). The NSSS is rated at 2,815 MWt and is the ABB-CE standard design. The design includes numerous advanced design features which enhance plant safety, performance and operability. A well executed startup test program was successfully completed on both units prior to commercial operation. A summary of the YGN NSSS design features, the startup test program and selected test results demonstrating the performance of those features are presented in this paper.

  20. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

    A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as establishing the existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions.
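A fuzzy system of the kind discussed, with Gaussian membership functions, product inference and centroid defuzzification, reduces to a normalized weighted average of rule outputs. A minimal sketch, where the target function sin(x), the rule centers, and the width are illustrative choices:

```python
import numpy as np

def fuzzy_approximator(centers, y_vals, sigma):
    """Fuzzy system with Gaussian membership functions and centroid
    defuzzification: a normalized weighted average of rule outputs."""
    def f(x):
        x = np.atleast_1d(x)[:, None]
        mu = np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))
        return (mu * y_vals).sum(axis=1) / mu.sum(axis=1)
    return f

# One rule per center, with the rule output equal to the target value there.
centers = np.linspace(0.0, np.pi, 15)
f = fuzzy_approximator(centers, np.sin(centers), sigma=0.15)

xs = np.linspace(0.0, np.pi, 200)
max_err = np.abs(f(xs) - np.sin(xs)).max()
```

Denser rule centers, with widths matched to the spacing, drive the error down further, in line with the universal-approximation results the record cites.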

  1. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  2. Deficiency of normal galaxies among Markaryan galaxies

    International Nuclear Information System (INIS)

    Iyeveer, M.M.

    1986-01-01

    Comparison of the morphological types of Markaryan galaxies and other galaxies in the Uppsala catalog indicates a strong deficiency of normal ellipticals among the Markaryan galaxies, for which the fraction of type E galaxies is ≤ 1%, against 10% among the remaining galaxies. Among the Markaryan galaxies, an excess of barred galaxies is observed: among the Markaryan galaxies with types Sa-Scd, approximately half or more have bars, whereas among the remaining galaxies of the same types bars are found in about one-third.

  3. Iterative approximation of the solution of a monotone operator equation in certain Banach spaces

    International Nuclear Information System (INIS)

    Chidume, C.E.

    1988-01-01

    Let X = L_p (or l_p), p ≥ 2. The solution of the equation Ax = f, f ∈ X, is approximated in X by an iteration process in each of the following two cases: (i) A is a bounded linear mapping of X into itself which is also bounded below; and (ii) A is a nonlinear Lipschitz mapping of X into itself satisfying ⟨Ax - Ay, j(x - y)⟩ ≥ m‖x - y‖², for some constant m > 0 and for all x, y in X, where j is the single-valued normalized duality mapping of X into X* (the dual space of X). A related result deals with the iterative approximation of the fixed point of a Lipschitz strictly pseudocontractive mapping in X. (author). 12 refs
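For the Hilbert-space case p = 2, an iteration of this type can be sketched concretely. Here A is a linear map given by a positive-definite matrix, so the strong-monotonicity constant m is its smallest eigenvalue and the Lipschitz constant L its largest; all concrete values are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def iterate_monotone(A_apply, f, x0, m, L, n_iter=500):
    """Iteration x_{n+1} = x_n - lam*(A x_n - f). For a strongly
    monotone, Lipschitz operator in Hilbert space, lam = m / L**2
    makes the update a contraction, so x_n converges to the solution."""
    lam = m / L ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - lam * (A_apply(x) - f)
    return x

# Illustrative operator: A x = M x with M symmetric positive definite,
# so <Ax - Ay, x - y> >= m |x - y|^2 with m the smallest eigenvalue.
M = np.array([[3.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, -1.0])
eigs = np.linalg.eigvalsh(M)
x = iterate_monotone(lambda v: M @ v, f, np.zeros(2), m=eigs[0], L=eigs[-1])
```

After enough iterations the residual Ax - f is negligible, illustrating the convergence the record establishes in greater generality.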

  4. Cosmological applications of Padé approximant

    International Nuclear Information System (INIS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
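The claim that a Padé approximant often beats the truncated Taylor series of the same order can be checked numerically, e.g. for ln(1+x) at x = 1, where the Taylor series converges slowly. This generic check uses `scipy.interpolate.pade` and is not tied to the cosmological models in the record:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of ln(1+x) up to x^4: 0, 1, -1/2, 1/3, -1/4.
an = [0.0, 1.0, -1.0 / 2.0, 1.0 / 3.0, -1.0 / 4.0]
p, q = pade(an, 2)                    # [2/2] Pade approximant p(x)/q(x)

x = 1.0
taylor_val = np.polyval(an[::-1], x)  # truncated Taylor series at x = 1
pade_val = p(x) / q(x)
exact = np.log(2.0)

taylor_err = abs(taylor_val - exact)
pade_err = abs(pade_val - exact)
```

Here the fourth-order Taylor value 0.5833 misses ln 2 ≈ 0.6931 by about 0.11, while the [2/2] Padé approximant (x + x²/2)/(1 + x + x²/6), built from the same five coefficients, is accurate to about 8 × 10⁻⁴ at the same point.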

  5. Cosmological applications of Padé approximant

    Science.gov (United States)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function could be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant could be an useful tool in cosmology, and it deserves further investigation.

  6. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  7. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright

  8. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanovia, represent the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  9. Integral type operators from normal weighted Bloch spaces to QT,S spaces

    Directory of Open Access Journals (Sweden)

    Yongyi GU

    2016-08-01

    Operator theory is an important research area within the theory of analytic function spaces, and studying operators together with the function spaces they act on is an effective way to investigate both. Assume that φ is an analytic self-map of the unit disk Δ, and that the normal weighted Bloch space μ-B is a Banach space on the unit disk Δ. Define the composition operator Cφ by Cφ(f) = f∘φ for all f ∈ μ-B; the integral-type operators JhCφ and CφJh are generalized from the integral operator and the composition operator. The boundedness and compactness of the integral-type operator JhCφ acting from normal weighted Bloch spaces to QT,S spaces are discussed, as well as the boundedness of the integral-type operator CφJh acting from normal weighted Bloch spaces to QT,S spaces. The related sufficient and necessary conditions are given.

  10. Posterior urethral valves: Morphological normalization of posterior urethra after fulguration is a significant factor in prognosis.

    Science.gov (United States)

    Menon, Prema; Rao, K L N; Vijaymahantesh, S; Kanojia, R P; Samujh, R; Batra, Y K; Sodhi, K S; Saxena, A K; Bhattacharya, A; Mittal, B R

    2010-07-01

    To assess the changes in urethral morphology 3 months post fulguration of posterior urethral valves (PUVs) on micturating cystourethrogram (MCUG) and correlate these changes with the overall clinical status of the patient. A total of 217 children, managed for PUVs during a period of 6 years in a single surgical unit, were prospectively studied. The ratio of the diameters of the prostatic and bulbar urethras (PU/BU) was calculated on the pre- and post-fulguration MCUG films. They were categorized into three groups based on the degree of normalization of the posterior urethra (post-fulguration PU/BU ratio). Group A: Of the 133 patients, 131 had a normal urinary stream and 4 (3%) had nocturnal enuresis. Vesicoureteral reflux (VUR), initially seen in 83 units (31% of units), regressed completely at a mean duration of 6 months in 41 units (49%). Of the 152 non-VUR, hydroureteronephrosis (HUN) units, 11 were poorly functioning kidneys. Persistent slow but unobstructed drainage was seen in 23 units (16%) over a period of 1.5-5 years (mean 2.5 years). Group B: All the 11 patients had a normal stream. Four (36.4%) had daytime frequency for a mean duration of 1 year and one (9%) had nocturnal enuresis for 1 year. Grade IV-V VUR was seen in five patients (three bilateral), which regressed completely by 3 months in five units (62.5%). In the non-VUR, HUN patients, slow (but unobstructed) drainage was persistent in two units (14%) at 3 years. Group C: Of the 16 patients, only 5 (31.3%) were asymptomatic. Six patients (nine units) had persistent VUR for 6 months to 3 years. Of the 20 units with HUN, 17 (85%) were persistent at 1-4 years (mean 2 years). Eight patients (50%) required a second fulguration while 3 (18.7%) required urethral dilatation for stricture, following which all parameters improved. Adequacy of fulguration should be assessed by a properly performed MCUG. A post-fulguration PU/BU ratio >3 SD (1.92) should alert to an incomplete fulguration or stricture. Patients within normal range ratio

  11. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.
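The recursion described in this abstract can be sketched in a few lines. This is a toy illustration, not the paper's algorithm: the quadratic loss, the box constraint set, and the gain constants are invented for the example, and the projection here is onto a simple box rather than a general inequality-constrained region.

```python
import random

def project(theta, lo=-5.0, hi=5.0):
    # Projection onto the box constraint set {theta : lo <= theta_i <= hi}.
    return [min(max(t, lo), hi) for t in theta]

def spsa_minimize(loss, theta, iters=2000, a=0.1, c=0.1, seed=0):
    """Projected SPSA: each step needs only two loss evaluations,
    regardless of the problem dimension (no direct gradient required)."""
    rng = random.Random(seed)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602               # standard SPSA gain decay rates
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]   # Rademacher perturbation
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2.0 * ck)        # common difference quotient
        theta = project([t - ak * g / d for t, d in zip(theta, delta)])
    return theta

# Toy problem: minimize ||theta - (3, -2)||^2 subject to the box [-5, 5]^2.
est = spsa_minimize(lambda th: (th[0] - 3.0) ** 2 + (th[1] + 2.0) ** 2, [0.0, 0.0])
```

With these decaying gains the iterate settles near the constrained minimizer (3, -2) after a few thousand steps.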

  12. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.
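The subsample-per-iteration idea generalizes beyond geostatistics. Below is a deliberately simplified sketch (not the paper's estimator): a location parameter is estimated by Robbins-Monro stochastic approximation in which each update touches only a small random subsample of a large dataset, mirroring how the paper avoids full-data covariance computations. All data and constants are invented for the example.

```python
import random

def subsample_sa(data, batch=50, iters=3000, seed=1):
    """Stochastic-approximation estimate of a location parameter.
    Each iteration draws a small random subsample, so no single step
    ever requires a pass over (or a matrix built from) the full dataset."""
    rng = random.Random(seed)
    theta = 0.0
    for k in range(1, iters + 1):
        sub = rng.sample(data, batch)                  # small subsample only
        grad = sum(theta - x for x in sub) / batch     # gradient of 0.5*mean((theta-x)^2)
        theta -= (1.0 / k) * grad                      # Robbins-Monro step size 1/k
    return theta

rng = random.Random(0)
data = [rng.gauss(4.2, 1.0) for _ in range(100_000)]   # "large" dataset, true mean 4.2
est = subsample_sa(data)
```

With the 1/k step size this recursion is exactly an average of the subsample means, so it concentrates around the true location even though each step sees only 50 of the 100,000 points.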

  13. A numerical approximation to the elastic properties of sphere-reinforced composites

    Science.gov (United States)

    Segurado, J.; Llorca, J.

    2002-10-01

    Three-dimensional cubic unit cells containing 30 non-overlapping identical spheres randomly distributed were generated using a new, modified random sequential adsorption algorithm suitable for particle volume fractions of up to 50%. The elastic constants of the ensemble of spheres embedded in a continuous and isotropic elastic matrix were computed through the finite element analysis of the three-dimensional periodic unit cells, whose size was chosen as a compromise between the minimum size required to obtain accurate results in the statistical sense and the maximum one imposed by the computational cost. Three types of materials were studied: rigid spheres and spherical voids in an elastic matrix and a typical composite made up of glass spheres in an epoxy resin. The moduli obtained for different unit cells showed very little scatter, and the average values obtained from the analysis of four unit cells could be considered very close to the "exact" solution to the problem, in agreement with the results of Drugan and Willis (J. Mech. Phys. Solids 44 (1996) 497) referring to the size of the representative volume element for elastic composites. They were used to assess the accuracy of three classical analytical models: the Mori-Tanaka mean-field analysis, the generalized self-consistent method, and Torquato's third-order approximation.
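The basic (unmodified) random sequential adsorption step is simple to sketch: propose a random center, accept it only if the new sphere overlaps no previously placed sphere. The sketch below is not the paper's modified algorithm; it omits periodic images (spheres stay strictly inside the cube), so it reaches only modest volume fractions, and the radius and counts are invented for illustration.

```python
import random

def rsa_spheres(n, radius, seed=0, max_tries=200_000):
    """Random sequential adsorption of n equal, non-overlapping spheres
    in the unit cube: propose uniformly, reject on overlap."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        # Keep the whole sphere inside the cube (no periodic images here).
        c = [radius + rng.random() * (1 - 2 * radius) for _ in range(3)]
        if all(sum((a - b) ** 2 for a, b in zip(c, p)) >= (2 * radius) ** 2
               for p in centers):
            centers.append(c)
    return centers

centers = rsa_spheres(30, 0.08)
# Particle volume fraction of the resulting cell (cube volume = 1).
frac = 30 * (4.0 / 3.0) * 3.141592653589793 * 0.08 ** 3
# Smallest center-to-center distance, to verify the non-overlap invariant.
min_gap = min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
              for i, p in enumerate(centers) for q in centers[:i])
```

At this low volume fraction (about 6%) plain rejection sampling succeeds easily; the paper's modification is what makes fractions up to 50% attainable.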

  14. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

    This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered…

  15. Bounded-Degree Approximations of Stochastic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
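A toy version of the bounded in-degree idea can be sketched as follows. This is not the paper's method: plug-in mutual information stands in for directed information, and greedy top-k parent selection stands in for the optimal/near-optimal search; the processes and their dependence structure are invented for the example.

```python
import random
from math import log
from collections import Counter

def mutual_info(x, y):
    """Empirical (plug-in) mutual information, in nats, between two
    discrete sequences of equal length."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def bounded_indegree(series, k):
    """For each process, keep only its k highest-scoring candidate parents,
    yielding a sparse graph with specified in-degree k."""
    names = list(series)
    return {v: sorted((u for u in names if u != v),
                      key=lambda u: mutual_info(series[u], series[v]),
                      reverse=True)[:k]
            for v in names}

rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(4000)]
y = [b if rng.random() < 0.9 else 1 - b for b in x]   # y strongly depends on x
z = [rng.randint(0, 1) for _ in range(4000)]          # z independent of both
graph = bounded_indegree({"x": x, "y": y, "z": z}, k=1)
```

With in-degree 1, the strongly coupled pair x, y pick each other as parents, while the independent process z contributes essentially no score.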

  16. Hypervascular liver lesions in radiologically normal liver

    Energy Technology Data Exchange (ETDEWEB)

    Amico, Enio Campos; Alves, Jose Roberto; Souza, Dyego Leandro Bezerra de; Salviano, Fellipe Alexandre Macena; Joao, Samir Assi; Liguori, Adriano de Araujo Lima, E-mail: ecamic@uol.com.br [Hospital Universitario Onofre Lopes (HUOL/UFRN), Natal, RN (Brazil). Clinica Gastrocentro e Ambulatorios de Cirurgia do Aparelho Digestivo e de Cirurgia Hepatobiliopancreatica

    2017-09-01

    Background: The hypervascular liver lesions represent a diagnostic challenge. Aim: To identify risk factors for cancer in patients with non-hemangiomatous hypervascular hepatic lesions in radiologically normal liver. Method: This prospective study included patients with hypervascular liver lesions in radiologically normal liver. The diagnosis was made by biopsy or was presumed on the basis of radiologic stability over a follow-up period of one year. Cirrhotic patients and patients with typical imaging characteristics of haemangioma were excluded. Results: Eighty-eight patients were included. The average age was 42.4 years. The lesions were solitary and between 2-5 cm in size in most cases. Liver biopsy was performed in approximately 1/3 of cases. The lesions were benign or most likely benign in 81.8%, while cancer was diagnosed in 12.5% of cases. Univariate analysis showed that age >45 years (p<0.001), personal history of cancer (p=0.020), presence of >3 nodules (p=0.003) and elevated alkaline phosphatase (p=0.013) were significant risk factors for cancer. Conclusion: It is safe to observe hypervascular liver lesions in normal liver in patients up to 45 years of age with normal alanine aminotransferase, up to three nodules and no personal history of cancer. Lesion biopsies are safe in patients with atypical lesions and define the treatment to be established for most of these patients. (author)

  17. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  18. Investigation reactor D-2201 polypropylene production unit using nuclear technique

    International Nuclear Information System (INIS)

    Wibisono; Sugiharto; Jefri Simanjuntak

    2016-01-01

    The D-2201 reactor is a unit in the polypropylene production process at Pertamina Refinery Unit III Plaju. The reactor, with a capacity of 45 kiloliters, was not operating under normal conditions, and the validity of the unit's liquid level indicator was doubtful given the production quality. A 150 mCi Cobalt-60 gamma source and a scintillation detector were used to scan the outer wall of the reactor to detect the liquid level during operation at 40% capacity. Measurements were made along the reactor walls with a 25 mm scan resolution and a 5 second sampling time. The experiment showed that no liquid level could be observed at the 40% position or at the normal level position, and the investigation did not find the liquid level above normal. The D-2201 reactor was diagnosed as operating abnormally, with liquid exceeding the recommended limits. It is advised that the currently installed liquid level indicator be repaired or recalibrated. (author)

  19. Normal SPECT thallium-201 bull's-eye display: gender differences

    International Nuclear Information System (INIS)

    Eisner, R.L.; Tamas, M.J.; Cloninger, K.

    1988-01-01

    The bull's-eye technique synthesizes three-dimensional information from single photon emission computed tomographic 201Tl images into two dimensions so that a patient's data can be compared quantitatively against a normal file. To characterize the normal database and to clarify differences between males and females, clinical data and exercise electrocardiography were used to identify 50 males and 50 females with less than 5% probability of coronary artery disease. Results show inhomogeneity of the 201Tl distributions at stress and delay: septal to lateral wall count ratios are less than 1.0 in both females and males; anterior to inferior wall count ratios are greater than 1.0 in males but are approximately equal to 1.0 in females. Washout rate is faster in females than males at the same peak exercise heart rate and systolic blood pressure, despite lower exercise time. These important differences suggest that quantitative analysis of single photon emission computed tomographic 201Tl images requires gender-matched normal files

  20. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets can be realized for all functions in balls of the reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Lindhard's polarization parameter and atomic sum rules in the local plasma approximation

    DEFF Research Database (Denmark)

    Cabrera-Trujillo, R.; Apell, P.; Oddershede, J.

    2017-01-01

    In this work, we analyze the effects of the Lindhard polarization parameter, χ, on the sum rule, Sp, within the local plasma approximation (LPA), as well as on the logarithmic sum rule Lp = dSp/dp, in both cases for the system in an initial excited state. We show results for a hydrogenic atom in terms of a screened charge Z* for the ground state. Our study shows that by increasing χ, the sum rule for p<0 decreases while for p>0 it increases, and the value p=0 provides the normalization/closure relation which remains fixed to the number of electrons for the same initial state. When p is fixed…

  2. Prevalence of overweight misperception and weight control behaviors among normal weight adolescents in the United States

    Directory of Open Access Journals (Sweden)

    Kathleen S. Talamayan

    2006-01-01

    Weight perceptions and weight control behaviors have been documented for underweight and overweight adolescents, yet limited information is available on normal weight adolescents. This study investigates the prevalence of overweight misperceptions and weight control behaviors among normal weight adolescents in the U.S. by sociodemographic and geographic characteristics. We examined data from the 2003 Youth Risk Behavior Survey (YRBS). A total of 9,714 normal weight U.S. high school students were included in this study. Outcome measures included self-reported height and weight measurements, overweight misperceptions, and weight control behaviors. Weighted prevalence estimates and odds ratios were computed. There were 16.2% of normal weight students who perceived themselves as overweight. Females (25.3%) were more likely to perceive themselves as overweight than males (6.7%) (p < 0.05). Misperceptions of overweight were highest among white (18.3%) and Hispanic students (15.2%) and lowest among black students (5.8%). Females (16.8%) outnumbered males (6.8%) in practicing at least one unhealthy weight control behavior (use of diet pills, laxatives, or fasting) in the past 30 days. The percentage of students who practiced at least one weight control behavior was similar by ethnicity. There were no significant differences in overweight misperception and weight control behaviors by grade level, geographic region, or metropolitan status. A significant portion of normal weight adolescents misperceive themselves as overweight and are engaging in unhealthy weight control behaviors. These data suggest that obesity prevention programs should address weight misperceptions and the harmful effects of unhealthy weight control methods even among normal weight adolescents.

  3. A 3 Year-Old Male Child Ingested Approximately 750 Grams of Elemental Mercury

    Directory of Open Access Journals (Sweden)

    Metin Uysalol

    2016-08-01

    Background: The oral ingestion of elemental mercury is unlikely to cause systemic toxicity, as it is poorly absorbed through the gastrointestinal system. However, abnormal gastrointestinal function or anatomy may allow elemental mercury into the bloodstream and the peritoneal space. Systemic effects of massive oral intake of mercury have rarely been reported. Case Report: In this paper, we present the highest single oral intake of elemental mercury by a child aged 3 years. A Libyan boy aged 3 years ingested approximately 750 grams of elemental mercury and was still asymptomatic. Conclusion: The patient had no existing disease or abnormal gastrointestinal function or anatomy. The physical examination was normal. His serum mercury level was 91 μg/L (normal: <5 μg/L), and he showed no clinical manifestations. Exposure to mercury in children through different circumstances remains a likely occurrence.

  4. An Oblivious O(1)-Approximation for Single Source Buy-at-Bulk

    KAUST Repository

    Goel, Ashish

    2009-10-01

    We consider the single-source (or single-sink) buy-at-bulk problem with an unknown concave cost function. We want to route a set of demands along a graph to or from a designated root node, and the cost of routing x units of flow along an edge is proportional to some concave, non-decreasing function f such that f(0) = 0. We present a polynomial time algorithm that finds a distribution over trees such that the expected cost of a tree for any f is within an O(1)-factor of the optimum cost for that f. The previous best simultaneous approximation for this problem, even ignoring computation time, was O(log |D|), where D is the multi-set of demand nodes. We design a simple algorithmic framework using the ellipsoid method that finds an O(1)-approximation if one exists, and then construct a separation oracle using a novel adaptation of the Guha, Meyerson, and Munagala [10] algorithm for the single-sink buy-at-bulk problem that proves an O(1) approximation is possible for all f. The number of trees in the support of the distribution constructed by our algorithm is at most 1 + log |D|. © 2009 IEEE.

  5. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
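The voting scheme in this patent abstract can be illustrated with a small Boolean example. The functions below are invented for the sketch: three approximate circuits each drop one term of a reference majority function, so each is wrong on some input, yet no input makes a majority of them err at once, which is exactly the property the voter exploits.

```python
def reference(a, b, c):
    # Reference Boolean circuit: 1-bit majority of three inputs.
    return (a & b) | (a & c) | (b & c)

# Three approximate circuits; each omits one product term of the reference.
approx1 = lambda a, b, c: (a & b) | (b & c)   # drops a&c
approx2 = lambda a, b, c: (a & b) | (a & c)   # drops b&c
approx3 = lambda a, b, c: (a & c) | (b & c)   # drops a&b

def voter(bits):
    # Majority voter over the approximate circuits' output signals.
    return 1 if sum(bits) >= 2 else 0

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# The voted output matches the reference on every input...
results = [voter([f(a, b, c) for f in (approx1, approx2, approx3)])
           == reference(a, b, c) for (a, b, c) in inputs]
# ...even though each approximate circuit individually disagrees somewhere.
each_diff = [any(f(a, b, c) != reference(a, b, c) for (a, b, c) in inputs)
             for f in (approx1, approx2, approx3)]
```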

  6. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  7. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

    The method for experimental neutron-physical data analytical approximation by rational functions based on the Pade approximation is suggested. It is shown that the existence of the Pade approximation specific properties in polar zones is an extremely favourable analytical property essentially extending the convergence range and increasing its rate as compared with polynomial approximation. The Pade approximation is the particularly natural instrument for resonance curve processing as the resonances conform to the complex poles of the approximant. But even in a general case analytical representation of the data in this form is convenient and compact. Thus representation of the data on the neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions lead to approximately twenty fold reduction of the storaged numerical information as compared with the by-point calculation at the same accWracy

  8. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  9. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

    Hartree-Fock and Tamm-Dancoff approximations are tested for angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to 20Ne.

  10. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the performance…

  11. Decay hazard (Scheffer) index values calculated from 1971-2000 climate normal data

    Science.gov (United States)

    Charles G. Carll

    2009-01-01

    Climate index values for estimating decay hazard to wood exposed outdoors above ground (commonly known as Scheffer index values) were calculated for 280 locations in the United States (270 locations in the conterminous United States) using the most current climate normal data available from the National Climatic Data Center. These were data for the period 1971–2000. In...

  12. Simulation of motor unit recruitment and microvascular unit perfusion: spatial considerations.

    Science.gov (United States)

    Fuglevand, A J; Segal, S S

    1997-10-01

    Muscle fiber activity is the principal stimulus for increasing capillary perfusion during exercise. The control elements of perfusion, i.e., microvascular units (MVUs), supply clusters of muscle fibers, whereas the control elements of contraction, i.e., motor units, are composed of fibers widely scattered throughout muscle. The purpose of this study was to examine how the discordant spatial domains of MVUs and motor units could influence the proportion of open capillaries (designated as perfusion) throughout a muscle cross section. A computer model simulated the locations of perfused MVUs in response to the activation of up to 100 motor units in a muscle with 40,000 fibers and a cross-sectional area of 100 mm2. The simulation increased contraction intensity by progressive recruitment of motor units. For each step of motor unit recruitment, the percentage of active fibers and the number of perfused MVUs were determined for several conditions: 1) motor unit fibers widely dispersed and motor unit territories randomly located (which approximates healthy human muscle), 2) regionalized motor unit territories, 3) reversed recruitment order of motor units, 4) densely clustered motor unit fibers, and 5) increased size but decreased number of motor units. The simulations indicated that the widespread dispersion of motor unit fibers facilitates complete capillary (MVU) perfusion of muscle at low levels of activity. The efficacy by which muscle fiber activity induced perfusion was reduced 7- to 14-fold under conditions that decreased the dispersion of active fibers, increased the size of motor units, or reversed the sequence of motor unit recruitment. Such conditions are similar to those that arise in neuromuscular disorders, with aging, or during electrical stimulation of muscle, respectively.
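The central contrast of this simulation is easy to reproduce in one dimension. The sketch below is not the paper's model (which is two-dimensional with 40,000 fibers): fiber counts, MVU sizes, and the activation rule are invented for illustration. It shows only the key effect, namely that the same number of active fibers opens far more microvascular units when the fibers are dispersed than when they are clustered.

```python
import random

FIBERS = 1000        # fibers in the muscle cross-section (toy scale)
MVU_SIZE = 100       # fibers supplied by one microvascular unit (contiguous block)
UNIT_FIBERS = 100    # fibers belonging to the single recruited motor unit

def perfused_mvus(active_fibers):
    """Count microvascular units containing at least one active fiber;
    any activity is assumed to open the whole MVU's capillaries."""
    return len({f // MVU_SIZE for f in active_fibers})

rng = random.Random(0)
dispersed = rng.sample(range(FIBERS), UNIT_FIBERS)   # fibers scattered widely
clustered = list(range(UNIT_FIBERS))                 # same count, densely clustered

n_dispersed = perfused_mvus(dispersed)
n_clustered = perfused_mvus(clustered)
```

The clustered unit opens a single MVU, while the dispersed unit opens nearly all of them, mirroring the 7- to 14-fold efficacy differences reported in the abstract.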

  13. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
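The Kruskal-Wallis test recommended above is straightforward to compute by hand: pool the samples, rank them, and compare the rank-sum statistic against a chi-square cutoff. The sketch below is a minimal pure-Python version (no tie correction; the data are invented and chosen tie-free).

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction).
    H = 12/(n(n+1)) * sum(R_i^2 / n_i) - 3(n+1), where R_i is the
    rank sum of group i in the pooled sample of size n."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return (12.0 / (n * (n + 1))
            * sum(r * r / len(g) for r, g in zip(rank_sums, groups))
            - 3.0 * (n + 1))

# Three clearly shifted groups: H is compared to chi-square with k-1 = 2
# degrees of freedom (5% cutoff about 5.99).
h = kruskal_wallis_h([1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15])
```

For these maximally separated groups the rank sums are 15, 40 and 65, giving H = 12.5, well past the 5.99 cutoff.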

  14. A Stokes drift approximation based on the Phillips spectrum

    Science.gov (United States)

    Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2016-04-01

    A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.
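The monochromatic baseline profile that the Phillips-based profile is compared against can be written down directly: v(z) = v0 exp(2kz) for z <= 0, with k fixed by requiring that the profile integrate to the given transport. The values below are illustrative, not from the paper.

```python
from math import exp

def stokes_monochromatic(z, v0, transport):
    """Monochromatic Stokes drift profile v(z) = v0 * exp(2 k z), z <= 0.
    The wavenumber k is chosen so the depth-integrated transport matches:
    integral_{-inf}^{0} v0 * exp(2kz) dz = v0 / (2k)  =>  k = v0 / (2 * transport)."""
    k = v0 / (2.0 * transport)
    return v0 * exp(2.0 * k * z)

v0, V = 0.12, 0.6   # surface drift (m/s) and transport (m^2/s), illustrative values
surface = stokes_monochromatic(0.0, v0, V)

# Numerically confirm that the profile integrates back to the transport.
dz = 0.05
num_V = sum(stokes_monochromatic(-i * dz, v0, V) * dz for i in range(4000))
```

The profile is thus fully determined by the same two parameters the abstract mentions: the transport and the surface Stokes drift velocity. The Phillips-based profile differs mainly in having a stronger near-surface gradient.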

  15. How Do Volcanoes Affect Human Life? Integrated Unit.

    Science.gov (United States)

    Dayton, Rebecca; Edwards, Carrie; Sisler, Michelle

    This packet contains a unit on teaching about volcanoes. The following question is addressed: How do volcanoes affect human life? The unit covers approximately three weeks of instruction and strives to present volcanoes in a holistic form. The five subject areas of art, language arts, mathematics, science, and social studies are integrated into…

  16. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  17. Radix-16 Combined Division and Square Root Unit

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2011-01-01

    Division and square root, based on the digit-recurrence algorithm, can be implemented in a combined unit. Several implementations of combined division/square root units have been presented, mostly for radices 2 and 4. Here, we present a combined radix-16 unit obtained by overlapping two radix-4 result digit selection functions, as is normally done for division-only units. The latency of the unit is reduced by retiming, and low power methods are applied as well. The proposed unit is compared to a radix-4 combined division/square root unit, and to a radix-16 unit obtained by cascading two…
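The digit-recurrence algorithm underlying such units retires one radix-r quotient digit per iteration via the recurrence w[j+1] = r*w[j] - q[j+1]*d. The sketch below is a simplified software model, not the paper's hardware: it uses non-negative radix-4 digits {0,1,2,3} with a full comparison as the "selection function", whereas real units use redundant signed digit sets selected from a few bits of the residual (and the radix-16 design overlaps two such radix-4 selections).

```python
def radix4_divide(x, d, digits=8):
    """Digit-recurrence division in radix 4 (restoring-style scheme).
    Produces `digits` radix-4 fractional digits of x/d for 0 <= x < d."""
    assert 0 <= x < d
    q, w = 0, x
    for _ in range(digits):
        w *= 4                      # shift: one radix-4 digit per iteration
        digit = min(w // d, 3)      # digit selection (full comparison here)
        w -= digit * d              # subtract digit * divisor from the residual
        q = 4 * q + digit           # append the digit to the quotient
    return q / 4 ** digits

approx = radix4_divide(1, 3)        # 1/3 to 8 radix-4 digits (16 bits)
```

Each iteration retires two quotient bits; a radix-16 iteration retires four, which is exactly why overlapping two radix-4 selections per cycle halves the iteration count.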

  18. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, as well as tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be subject of future investigations on decision making under risk.

  19. Posterior urethral valves: Morphological normalization of posterior urethra after fulguration is a significant factor in prognosis

    Directory of Open Access Journals (Sweden)

    Menon Prema

    2010-01-01

    Full Text Available Aim: To assess the changes in urethral morphology 3 months post fulguration of posterior urethral valves (PUVs on micturating cystourethrogram (MCUG and correlate these changes with the overall clinical status of the patient. Materials and Methods: A total of 217 children, managed for PUVs during a period of 6 years in a single surgical unit were prospectively studied. The ratio of the diameters of the prostatic and bulbar urethras (PU/BU was calculated on the pre- and post-fulguration MCUG films. They were categorized into three groups based on the degree of normalization of posterior urethra (post-fulguration PU/BU ratio. Results: Group A: Of the 133 patients, 131 had normal urinary stream and 4 (3% had nocturnal enuresis. Vesicoureteral reflux (VUR, initially seen in 83 units (31% units, regressed completely at a mean duration of 6 months in 41 units (49%. Of the 152 non-VUR, hydroureteronephrosis (HUN units, 11 were poorly functioning kidneys. Persistent slow but unobstructed drainage was seen in 23 units (16% over a period of 1.5-5 years (mean 2.5 years. Group B: All the 11 patients had a normal stream. Four (36.4% had daytime frequency for a mean duration of 1 year and one (9% had nocturnal enuresis for 1 year. Grade IV-V VUR was seen in five patients (three bilateral, which regressed completely by 3 months in five units (62.5%. In the non-VUR, HUN patients, slow (but unobstructed drainage was persistent in two units (14% at 3 years. Group C: Of the 16 patients, only 5 (31.3% were asymptomatic. Six patients (nine units had persistent VUR for 6 months to 3 years. Of the 20 units with HUN, 17 (85% were persistent at 1-4 years (mean 2 years. Eight patients (50% required a second fulguration while 3 (18.7% required urethral dilatation for stricture following which all parameters improved. Conclusions: Adequacy of fulguration should be assessed by a properly performed MCUG. A postop PU/BU ratio >3 SD (1.92 should alert to an incomplete

  20. Control survey of normal reference ranges adopted for serum thyroxine binding globulin, thyroxine, triiodothyronine in Japan

    International Nuclear Information System (INIS)

    Sugisaki, Hajime; Kameyama, Mayumi; Shibata, Kyoko

    1985-01-01

    A survey using questionnaires was made on 152 facilities from July through September 1984 to examine normal reference ranges of serum thyroxine binding globulin (TBG), thyroxine (TT4), and triiodothyronine (TT3). Normal reference ranges of TBG were in good agreement with each other, with the exception of four facilities showing high upper limits. The average of the lower limits in 83 facilities was 13.7 ± 1.9 μg/ml, and that of the upper limits was 28.6 ± 2.8 μg/ml. Differences (approximately 10%) in coefficient of variation were comparable to those (5.7-9.6%) obtained from the previous survey. There were approximately 10% differences in coefficient of variation for both TT4 and TT3. (Namekawa, K.)

  1. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...

  2. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
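
The basic construction described in Part 1 can be sketched numerically. The following minimal Python example (illustrative, not the authors' implementation) fits noisy 1D data by linear least squares in a basis of ordinary polynomial terms plus truncated cubic polynomials, one per interior knot:

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Design matrix for a spline built from ordinary polynomial terms
    plus truncated polynomials (x - k)_+^degree at each interior knot."""
    cols = [x**p for p in range(degree + 1)]
    for k in knots:
        cols.append(np.where(x > k, (x - k)**degree, 0.0))
    return np.column_stack(cols)

# Noisy samples of a smooth curve (stand-in for a scanned 2D profile)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

knots = np.linspace(1.0, 9.0, 9)          # interior knots
A = truncated_power_basis(x, knots)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

rmse = np.sqrt(np.mean((y_hat - y)**2))
print(f"RMSE of spline approximation: {rmse:.3f}")
```

The truncated terms (x - k)^3 switch on only past their knot, which gives the spline its piecewise flexibility while keeping the whole fit a single linear least-squares problem.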

  3. The COBE normalization for standard cold dark matter

    Science.gov (United States)

    Bunn, Emory F.; Scott, Douglas; White, Martin

    1995-01-01

    The Cosmic Background Explorer Satellite (COBE) detection of microwave anisotropies provides the best way of fixing the amplitude of cosmological fluctuations on the largest scales. This normalization is usually given for an n = 1 spectrum, including only the anisotropy caused by the Sachs-Wolfe effect. This is certainly not a good approximation for a model containing any reasonable amount of baryonic matter. In fact, even tilted Sachs-Wolfe spectra are not a good fit to models like cold dark matter (CDM). Here, we normalize standard CDM (sCDM) to the two-year COBE data and quote the best amplitude in terms of the conventionally used measures of power. We also give normalizations for some specific variants of this standard model, and we indicate how the normalization depends on the assumed values of n, Omega_B and H_0. For sCDM we find the mean value of Q = 19.9 ± 1.5 μK, corresponding to sigma_8 = 1.34 ± 0.10, with the normalization at large scales being B = (8.16 ± 1.04) x 10^5 (Mpc/h)^4, and other numbers given in the table. The measured rms temperature fluctuation smoothed on 10 deg is a little low relative to this normalization. This is mainly due to the low quadrupole in the data: when the quadrupole is removed, the measured value of sigma(10 deg) is quite consistent with the best-fitting mean value of Q. The mean value of Q should be preferred over sigma(10 deg) when its value can be determined for a particular theory, since it makes full use of the data.

  4. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim; Rached, Nadhir B.; Kammoun, Abla; Tempone, Raul

    2018-01-01

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However

  5. Radiation dosimetry in cases of normal and emergency situations

    International Nuclear Information System (INIS)

    Morsi, T.M.

    2010-01-01

    The use of radioactive materials in various fields of medicine, industry, agriculture and research has been increasing steadily during the last few decades. A lot of radiation sources, radiopharmaceuticals, labeled compounds and other radioactive materials are sold and used throughout the world each year. Historically, accidents have occurred during the production, transport and use of radioactive materials. If an accident does occur, it is necessary to cope with it as soon as possible in order to control radiological human exposures and contamination of the environment and to restore normal conditions. Individuals who deal with radioactive isotopes should be examined, in nuclear medicine units as well as in other applications including radiotherapy units and gamma irradiation facilities. The present work covers the feasibility and efficiency of counting detectors for internal and external radiation dosimetry, and preparedness in normal and emergency situations. Furthermore, this study also deals with the use of thermoluminescent dosimeters for radiation dose estimation in gamma irradiation applications and a cobalt-60 treatment unit. Hence, the operator dose can be estimated in case of malfunction or sticking of the radioactive source. Three methods were used to measure the radiation dose: (1) TL dosimeters with a Harshaw (TLD-4000) reader were used for measurement of external exposures; (2) FASTSCAN and (3) ACCUSCAN II whole body counters were used for measurement of internal exposures.

  6. Motor unit firing intervals and other parameters of electrical activity in normal and pathological muscle

    DEFF Research Database (Denmark)

    Fuglsang-Frederiksen, Anders; Smith, T; Høgenhaven, H

    1987-01-01

    The analysis of the firing intervals of motor units has been suggested as a diagnostic tool in patients with neuromuscular disorders. Part of the increase in number of turns seen in patients with myopathy could be secondary to the decrease in motor unit firing intervals at threshold force...

  7. The application of EMI units for diagnosis of the liver diseases

    International Nuclear Information System (INIS)

    Maeda, Hiroko; Kawai, Takeshi; Kanasaki, Yoshiki; Akagi, Hiroaki

    1979-01-01

    The application of EMI units for diagnosis of liver diseases was studied. Cases in this report included 16 normal cases, 20 metastatic liver cancers, 20 primary liver cancers and 9 liver cysts. EMI units on a 320 x 320 matrix were converted to a 64 x 64 matrix by averaging 25 points with another computer system. Using the EMI units of the 64 x 64 matrix, the digital expression, histogram, and MAP expression in the region of interest (R.O.I.) were printed out automatically. Two kinds of R.O.I. were set up: R.O.I.-1 was the area of the liver including hepatic lesions, and R.O.I.-2 was the area of the spleen. The peak values of the EMI units were 0.23 ± 3.51 in the liver cysts, 13.9 ± 3.37 in the metastatic liver cancers, 16.9 ± 3.75 in the primary liver cancers, and 24.7 ± 2.98 in the normal cases. The peak value of the EMI units in the liver cysts was clearly separated from the others, but the peak values in the primary and metastatic cancers and normal cases overlapped. The EMI units of normal livers were higher than those of spleens, and those of hepatic lesions were lower, in the same slice of the CT scans. Therefore, if the EMI units of the liver were lower than those of the spleen, a hepatic abnormality was suspected. Correct diagnosis rates were 56.9% in readings of MAP expressions, 67.7% in CT images, and 76.9% in both. In conclusion, the correct diagnosis rate of CT images improves when combined with the expressions of EMI units. (author)

  8. Analytical formulae in fractionated irradiation of normal tissue

    International Nuclear Information System (INIS)

    Kozubek, S.

    1982-01-01

    A new conception of modeling cell tissue kinetics after fractionated irradiation is proposed. The formulae given earlier are compared with experimental data on various normal tissues and further adjustments are considered. The tissues are shown to exhibit several general patterns of behaviour. Repopulation, if it takes place, seems to start after some delay that is, to a first approximation, independent of fractionation, and can be treated as simple autogenesis. The results are compared with the commonly used NSD conception and the well-known Cohen cell tissue kinetic model

  9. Approximate optimal tracking control for near-surface AUVs with wave disturbances

    Science.gov (United States)

    Yang, Qing; Su, Hao; Tang, Gongyou

    2016-10-01

    This paper considers the optimal trajectory tracking control problem for near-surface autonomous underwater vehicles (AUVs) in the presence of wave disturbances. An approximate optimal tracking control (AOTC) approach is proposed. Firstly, a six-degrees-of-freedom (six-DOF) AUV model with its body-fixed coordinate system is decoupled and simplified and then a nonlinear control model of AUVs in the vertical plane is given. Also, an exosystem model of wave disturbances is constructed based on Hirom approximation formula. Secondly, the time-parameterized desired trajectory which is tracked by the AUV's system is represented by the exosystem. Then, the coupled two-point boundary value (TPBV) problem of optimal tracking control for AUVs is derived from the theory of quadratic optimal control. By using a recently developed successive approximation approach to construct sequences, the coupled TPBV problem is transformed into a problem of solving two decoupled linear differential sequences of state vectors and adjoint vectors. By iteratively solving the two equation sequences, the AOTC law is obtained, which consists of a nonlinear optimal feedback item, an expected output tracking item, a feedforward disturbances rejection item, and a nonlinear compensatory term. Furthermore, a wave disturbances observer model is designed in order to solve the physically realizable problem. Simulation is carried out by using the Remote Environmental Unit (REMUS) AUV model to demonstrate the effectiveness of the proposed algorithm.

  10. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
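
The Flory estimate itself is a one-line formula, nu = 3/(d + 2); a quick check against well-known reference exponents (the d = 3 value comes from numerical simulations) reproduces the few-percent accuracy quoted above:

```python
# Flory estimate of the self-avoiding-walk exponent nu in d dimensions:
# nu_Flory = 3 / (d + 2), exact for d = 1, 2, 4 and about 2% high for d = 3.
def flory_nu(d):
    return 3.0 / (d + 2)

best_known = {1: 1.0, 2: 0.75, 3: 0.5876, 4: 0.5}  # d=3 from simulations
for d, nu_ref in best_known.items():
    nu = flory_nu(d)
    print(f"d={d}: Flory nu={nu:.4f}, reference {nu_ref:.4f}, "
          f"relative error {abs(nu - nu_ref) / nu_ref:.1%}")
```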

  11. Impaired neural networks for approximate calculation in dyscalculic children: a functional MRI study

    Directory of Open Access Journals (Sweden)

    Dosch Mengia

    2006-09-01

    Full Text Available Abstract Background Developmental dyscalculia (DD) is a specific learning disability affecting the acquisition of mathematical skills in children with otherwise normal general intelligence. The goal of the present study was to examine cerebral mechanisms underlying DD. Methods Eighteen children with DD aged 11.2 ± 1.3 years and twenty age-matched typically achieving schoolchildren were investigated using functional magnetic resonance imaging (fMRI) during trials testing approximate and exact mathematical calculation, as well as magnitude comparison. Results Children with DD showed greater inter-individual variability and had weaker activation in almost the entire neuronal network for approximate calculation including the intraparietal sulcus, and the middle and inferior frontal gyrus of both hemispheres. In particular, the left intraparietal sulcus, the left inferior frontal gyrus and the right middle frontal gyrus seem to play crucial roles in correct approximate calculation, since brain activation correlated with accuracy rate in these regions. In contrast, no differences between groups could be found for exact calculation and magnitude comparison. In general, fMRI revealed similar parietal and prefrontal activation patterns in DD children compared to controls for all conditions. Conclusion In conclusion, there is evidence for a deficient recruitment of neural resources in children with DD when processing analog magnitudes of numbers.

  12. Normalization of energy-dependent gamma survey data.

    Science.gov (United States)

    Whicker, Randy; Chambers, Douglas

    2015-05-01

    Instruments and methods for normalization of energy-dependent gamma radiation survey data to a less energy-dependent basis of measurement are evaluated based on relevant field data collected at 15 different sites across the western United States along with a site in Mongolia. Normalization performance is assessed relative to measurements with a high-pressure ionization chamber (HPIC) due to its "flat" energy response and accurate measurement of the true exposure rate from both cosmic and terrestrial radiation. While analytically ideal for normalization applications, cost and practicality disadvantages have increased demand for alternatives to the HPIC. Regression analysis on paired measurements between energy-dependent sodium iodide (NaI) scintillation detectors (5-cm by 5-cm crystal dimensions) and the HPIC revealed highly consistent relationships among sites not previously impacted by radiological contamination (natural sites). A resulting generalized data normalization factor based on the average sensitivity of NaI detectors to naturally occurring terrestrial radiation (0.56 nGy h⁻¹ HPIC per nGy h⁻¹ NaI), combined with the calculated site-specific estimate of cosmic radiation, produced reasonably accurate predictions of HPIC readings at natural sites. Normalization against two potential alternative instruments (a tissue-equivalent plastic scintillator and an energy-compensated NaI detector) did not perform better than the sensitivity adjustment approach at natural sites. Each approach produced unreliable estimates of HPIC readings at radiologically impacted sites, though normalization against the plastic scintillator or energy-compensated NaI detector can address incompatibilities between different energy-dependent instruments with respect to estimation of soil radionuclide levels. The appropriate data normalization method depends on the nature of the site, expected duration of the project, survey objectives, and considerations of cost and practicality.
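
A minimal sketch of the normalization described above, using the paper's average sensitivity factor; the function name and example readings are illustrative:

```python
# Sketch of the normalization described above: scale the terrestrial part of
# an energy-dependent NaI reading by the average sensitivity factor, then add
# back an independent estimate of the cosmic-ray dose rate for the site.
NAI_TO_HPIC = 0.56  # nGy/h (HPIC) per nGy/h (NaI), terrestrial component

def predict_hpic(nai_reading_ngy_h, cosmic_ngy_h):
    """Predict an HPIC-equivalent exposure rate from a 5x5 cm NaI reading.

    Assumes the NaI detector responds mostly to terrestrial gamma radiation,
    so the cosmic contribution is supplied separately (e.g. from altitude).
    """
    return NAI_TO_HPIC * nai_reading_ngy_h + cosmic_ngy_h

# Hypothetical example: 120 nGy/h NaI reading, 45 nGy/h estimated cosmic dose
print(predict_hpic(120.0, 45.0))  # 0.56*120 + 45 = 112.2 nGy/h
```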

  13. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    Science.gov (United States)

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
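
A minimal numpy sketch of the proposed control-based normalization (illustrative, not the authors' code): features are z-scored with the control group's statistics only, so that between-group separation does not inflate the scale used for normalization.

```python
import numpy as np

def control_group_standardize(X, is_control):
    """Z-score features using the control group's mean and standard deviation
    rather than statistics pooled over both groups, so that between-group
    separation does not shrink the normalized disease signal."""
    mu = X[is_control].mean(axis=0)
    sd = X[is_control].std(axis=0, ddof=1)
    return (X - mu) / sd

rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, size=(50, 4))
patients = rng.normal(2.0, 1.0, size=(50, 4))      # shifted group
X = np.vstack([controls, patients])
is_control = np.arange(100) < 50

Z = control_group_standardize(X, is_control)
# Control rows now have zero mean and unit variance; patient rows keep
# their full separation instead of being shrunk by the pooled variance.
print(Z[is_control].mean(axis=0).round(2))
```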

  14. A General Mathematical Algorithm for Predicting the Course of Unfused Tetanic Contractions of Motor Units in Rat Muscle.

    Directory of Open Access Journals (Sweden)

    Rositsa Raikova

    Full Text Available An unfused tetanus of a motor unit (MU) evoked by a train of pulses at variable interpulse intervals is the sum of non-equal twitch-like responses to these stimuli. A tool for precise prediction of these successive contractions for MUs of different physiological types with different contractile properties is crucial for modeling whole-muscle behavior during various types of activity. The aim of this paper is to develop such a general mathematical algorithm for the MUs of the medial gastrocnemius muscle of rats. For this purpose, tetanic curves recorded for 30 MUs (10 slow, 10 fast fatigue-resistant and 10 fast fatigable) were mathematically decomposed into twitch-like contractions. Each contraction was modeled by the previously proposed 6-parameter analytical function, and the analysis of these six parameters allowed us to develop a prediction algorithm based on the following input data: parameters of the initial twitch, the maximum force of a MU and the series of pulses. A linear relationship was found between the normalized amplitudes of the successive contractions and the remainder between the actual force levels at which the contraction started and the maximum tetanic force. The normalization was made according to the amplitude of the first decomposed twitch. However, the respective approximation lines had different specific angles with respect to the ordinate. These angles had different and non-overlapping ranges for slow and fast MUs. A sensitivity analysis concerning this slope was performed and the dependence between the angles and the maximal fused tetanic force normalized to the amplitude of the first contraction was approximated by a power function. The normalized MU contraction and half-relaxation times were approximated by linear functions depending on the normalized actual force levels at which each contraction starts. The normalization was made according to the contraction time of the first contraction.
The actual force levels

  15. Determination of Lineaments of the Sea of Marmara using Normalized Derivatives and Analytic Signals

    International Nuclear Information System (INIS)

    Oruc, B.

    2007-01-01

    The normalized derivatives and analytic signals calculated from a magnetic anomaly map give useful results for structural interpretation. The effectiveness of the methods in delineating lineaments has been tested on the edges of a thin-plate model. For the field data, the magnetic anomaly map observed in the middle section of the Sea of Marmara has been used. Approximate solutions have been obtained for the lineaments of the area related to the North Anatolian Fault from the characteristic images of the normalized derivatives and horizontal-derivative analytic signals.
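
The analytic-signal amplitude used in this kind of interpretation can be sketched for a single profile. This Python example is illustrative of standard potential-field practice rather than the paper's exact procedure: the vertical derivative is obtained as the Hilbert transform of the horizontal derivative, and amplitude maxima then tend to image source edges (lineaments).

```python
import numpy as np
from scipy.signal import hilbert

def analytic_signal_amplitude(profile, dx=1.0):
    """Amplitude of the analytic signal of a potential-field profile:
    |A(x)| = sqrt((dT/dx)^2 + (dT/dz)^2), where the vertical derivative
    is obtained as the Hilbert transform of the horizontal derivative."""
    dT_dx = np.gradient(profile, dx)
    dT_dz = np.imag(hilbert(dT_dx))   # Hilbert transform of dT/dx
    return np.hypot(dT_dx, dT_dz)

# Synthetic smooth anomaly (illustrative stand-in for an observed profile)
x = np.linspace(-50.0, 50.0, 501)
anomaly = np.exp(-(x / 10.0)**2)
amp = analytic_signal_amplitude(anomaly, dx=x[1] - x[0])
print(amp.max())
```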

  16. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L^p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  17. Bent approximations to synchrotron radiation optics

    International Nuclear Information System (INIS)

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors

  18. A study to assess the domestic violence in mental illness & normal married women

    Directory of Open Access Journals (Sweden)

    Jyoti Srivastava, Indira Sharma, Anuradha Khanna

    2014-07-01

    Full Text Available Background: Domestic violence against women is the most pervasive human rights violation in the world today. According to UNiTE to End Violence against Women (2009) by UN Women, in the United States one-third of women murdered each year are killed by intimate partners, and in South Africa a woman is killed every 6 hours by an intimate partner. Objective: To assess the magnitude and causes of domestic violence among women with mental illness and normal women. Material & Methods: The sample of the study comprised 50 women with mental illness and 50 normal women. The women with mental illness, diagnosed with an Axis I psychiatric disorder according to DSM-IV-TR, were selected from the psychiatry OPD and ward of the S.S. Hospital, BHU, and the normal women were selected from among those accompanying patients of Sir Sunder Lal Hospital. The participants were assessed with a structured questionnaire on domestic violence. Results: Domestic violence was present in 72% of married women with mental illness and in 36% of normal women. Perceived causes of domestic violence were more numerous in married women with mental illness than in normal women. Conclusion: Health care personnel should be given an opportunity to update their knowledge regarding domestic violence, and there is a need for education on domestic violence and its cessation, so that they can help women to prevent domestic violence.

  19. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  20. Updated US and Canadian Normalization Factors for TRACI 2.1

    Science.gov (United States)

    The objective of this study is to update the normalization factors (NFs) of U.S. EPA's TRACI 2.1 LCIA method (Bare, 2012) for the United States (US) and US-Canadian (US-CA) regions. This is done for the reference year 2008. This was deemed necessary to maintain the representative...

  1. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting
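
For reference, the Darwin Lagrangian that both routes approximate, written to order 1/c² in Gaussian units (as in standard electrodynamics texts), is:

```latex
L_{\mathrm{Darwin}} = \sum_a \frac{1}{2} m_a v_a^2
  + \sum_a \frac{m_a v_a^4}{8 c^2}
  - \sum_{a<b} \frac{q_a q_b}{r_{ab}}
  + \sum_{a<b} \frac{q_a q_b}{2 c^2 r_{ab}}
    \left[ \mathbf{v}_a \cdot \mathbf{v}_b
      + (\mathbf{v}_a \cdot \hat{\mathbf{r}}_{ab})
        (\mathbf{v}_b \cdot \hat{\mathbf{r}}_{ab}) \right]
```

The v⁴ term is the lowest-order relativistic kinetic correction, and the bracketed term is the instantaneous magnetic (transverse) interaction that remains once retardation has been removed.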

  2. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
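
The computational virtue of circulant structure mentioned above is that a circulant matrix is diagonalized by the FFT, so matrix-vector products cost O(n log n) instead of O(n²). A small numpy sketch (illustrative; the paper uses multilevel circulant matrices, this shows the one-level case):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by x in
    O(n log n), using the FFT diagonalization C = F* diag(F c) F."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Gaussian kernel on a regular 1D grid: K[i, j] = exp(-(i-j)^2 / (2 s^2)).
# Such a stationary kernel is nearly circulant, so a circulant surrogate
# built from wrapped distances approximates it at quasi-linear cost.
n, s = 256, 4.0
idx = np.arange(n)
K = np.exp(-((idx[:, None] - idx[None, :])**2) / (2 * s**2))
wrapped = np.minimum(idx, n - idx)              # circular distance
c = np.exp(-(wrapped**2) / (2 * s**2))          # first column of surrogate

x = np.random.default_rng(2).standard_normal(n)
err = np.linalg.norm(circulant_matvec(c, x) - K @ x) / np.linalg.norm(K @ x)
print(f"relative matvec error of circulant surrogate: {err:.2e}")
```

The surrogate replaces |i - j| by the wrapped distance, so it agrees with the kernel matrix everywhere except where the wrap-around matters (the corners of K).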

  3. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
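
NCD itself is easy to compute with any standard compressor. The sketch below (illustrative; zlib stands in for whatever compressor one prefers) also shows why linearization matters: row-major and column-major orderings of the same pixels hand the compressor different strings.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    with C(.) the compressed length under a standard compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Linearize a tiny synthetic "image" (2D list) two ways before comparing:
# the linearization order changes the string the compressor sees, so it
# changes the NCD even though the pixel values are identical.
img = [[i * 16 + j for j in range(16)] for i in range(16)]
row_major = bytes(v for row in img for v in row)
col_major = bytes(img[i][j] for j in range(16) for i in range(16))

print(ncd(row_major, row_major))   # near 0: identical strings
print(ncd(row_major, col_major))   # larger: same pixels, different ordering
```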

  4. Mean field dynamics of networks of delay-coupled noisy excitable units

    Energy Technology Data Exchange (ETDEWEB)

    Franović, Igor, E-mail: franovic@ipb.ac.rs [Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Todorović, Kristina; Burić, Nikola [Department of Physics and Mathematics, Faculty of Pharmacy, University of Belgrade, Vojvode Stepe 450, Belgrade (Serbia); Vasović, Nebojša [Department of Applied Mathematics, Faculty of Mining and Geology, University of Belgrade, PO Box 162, Belgrade (Serbia)

    2016-06-08

    We use the mean-field approach to analyze the collective dynamics in macroscopic networks of stochastic FitzHugh-Nagumo units with delayed couplings. The conditions for validity of the two main approximations behind the model, called the Gaussian approximation and the Quasi-independence approximation, are examined. It is shown that the dynamics of the mean-field model may indicate in a self-consistent fashion the parameter domains where the Quasi-independence approximation fails. Apart from a network of globally coupled units, we also consider the paradigmatic setup of two interacting assemblies to demonstrate how our framework may be extended to hierarchical and modular networks. In both cases, the mean-field model can be used to qualitatively analyze the stability of the system, as well as the scenarios for the onset and the suppression of the collective mode. In quantitative terms, the mean-field model is capable of predicting the average oscillation frequency corresponding to the global variables of the exact system.

  5. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
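
The idea can be illustrated on a toy tomography-style least-squares problem (names, sizes and the solver details below are hypothetical sketches of the concept, not the embodiment described): the error, its gradient, and the line search use only a random subset of rays per iteration, yet the full residual still shrinks.

```python
import numpy as np

# Minimize the least-squares error ||Ax - b||^2, where each row of A is one
# "ray", but evaluate the approximate error/gradient on a random subset of
# rays at every iteration and minimize along a conjugate gradient direction.
rng = np.random.default_rng(3)
n_rays, n_vox = 400, 50
A = rng.standard_normal((n_rays, n_vox))
x_true = rng.standard_normal(n_vox)
b = A @ x_true                                # consistent system

x = np.zeros(n_vox)
d = np.zeros(n_vox)
g_prev = None
for _ in range(200):
    rows = rng.choice(n_rays, size=80, replace=False)   # subset of rays
    As, bs = A[rows], b[rows]
    g = As.T @ (As @ x - bs)                  # approximate gradient
    if g_prev is None or g_prev @ g_prev < 1e-30:
        beta = 0.0
    else:                                     # restarted Polak-Ribiere update
        beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev))
    d = -g + beta * d                         # conjugate gradient direction
    Ad = As @ d
    alpha = -(g @ d) / (Ad @ Ad + 1e-12)      # exact line search on the subset
    x = x + alpha * d
    g_prev = g

print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```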

  6. Insight into structural phase transitions from the decoupled anharmonic mode approximation.

    Science.gov (United States)

    Adams, Donat J; Passerone, Daniele

    2016-08-03

    We develop a formalism (decoupled anharmonic mode approximation, DAMA) that allows calculation of the vibrational free energy using density functional theory even for materials which exhibit negative curvature of the potential energy surface with respect to atomic displacements. We investigate vibrational modes beyond the harmonic approximation and approximate the potential energy surface with the superposition of the accurate potential along each normal mode. We show that the free energy can stabilize crystal structures at finite temperatures which appear dynamically unstable at T = 0. The DAMA formalism is computationally fast because it avoids statistical sampling through molecular dynamics calculations, and is in principle completely ab initio. It is free of statistical uncertainties and independent of model parameters, but can give insight into the mechanism of a structural phase transition. We apply the formalism to the perovskite cryolite, and investigate the temperature-driven phase transition from the P21/n to the Immm space group. We calculate a phase transition temperature between 710 and 950 K, in fair agreement with the experimental value of 885 K. This can be related to the underestimation of the interaction of the vibrational states. We also calculate the main axes of the thermal ellipsoid and can explain the experimentally observed increase of its volume for the fluorine by 200-300% throughout the phase transition. Our calculations suggest the appearance of tunneling states in the high temperature phase. The convergence of the vibrational DOS and of the critical temperature with respect to reciprocal space sampling is investigated using the polarizable-ion model.

  7. Computational modeling of fully-ionized, magnetized plasmas using the fluid approximation

    Science.gov (United States)

    Schnack, Dalton

    2005-10-01

    Strongly magnetized plasmas are rich in spatial and temporal scales, making a computational approach useful for studying these systems. The most accurate model of a magnetized plasma is based on a kinetic equation that describes the evolution of the distribution function for each species in six-dimensional phase space. However, the high dimensionality renders this approach impractical for computations for long time scales in relevant geometry. Fluid models, derived by taking velocity moments of the kinetic equation [1] and truncating (closing) the hierarchy at some level, are an approximation to the kinetic model. The reduced dimensionality allows a wider range of spatial and/or temporal scales to be explored. Several approximations have been used [2-5]. Successful computational modeling requires understanding the ordering and closure approximations, the fundamental waves supported by the equations, and the numerical properties of the discretization scheme. We review and discuss several ordering schemes, their normal modes, and several algorithms that can be applied to obtain a numerical solution. The implementation of kinetic parallel closures is also discussed [6]. [1] S. Chapman and T.G. Cowling, ``The Mathematical Theory of Non-Uniform Gases'', Cambridge University Press, Cambridge, UK (1939). [2] R.D. Hazeltine and J.D. Meiss, ``Plasma Confinement'', Addison-Wesley Publishing Company, Redwood City, CA (1992). [3] L.E. Sugiyama and W. Park, Physics of Plasmas 7, 4644 (2000). [4] J.J. Ramos, Physics of Plasmas 10, 3601 (2003). [5] P.J. Catto and A.N. Simakov, Physics of Plasmas 11, 90 (2004). [6] E.D. Held et al., Phys. Plasmas 11, 2419 (2004).

  8. Layers of Cold Dipolar Molecules in the Harmonic Approximation

    DEFF Research Database (Denmark)

    R. Armstrong, J.; Zinner, Nikolaj Thomas; V. Fedorov, D.

    2012-01-01

    We consider the N-body problem in a layered geometry containing cold polar molecules with dipole moments that are polarized perpendicular to the layers. A harmonic approximation is used to simplify the Hamiltonian, and bound state properties of the two-body inter-layer dipolar potential are used to adjust this effective interaction. To model the intra-layer repulsion of the polar molecules, we introduce a repulsive inter-molecule potential that can be parametrically varied. Single chains containing one molecule in each layer, as well as multi-chain structures in many layers, are discussed and their energies and radii determined. We extract the normal modes of the various systems as measures of their volatility and eventually of instability, and compare our findings to the excitations in crystals. We find modes that can be classified as either chains vibrating in phase or as layers vibrating against...

  9. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.
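As a schematic illustration of the nested structure mentioned in this abstract (not the authors' formulas), a continued-root approximant can be evaluated from the innermost level outward. The specific coefficients and exponents below are invented for demonstration; in practice they are determined from the asymptotic series being resummed, and continued fractions and Padé approximants arise for particular exponent choices.

```python
# Illustrative evaluator for a continued-root approximant of the schematic form
#   R(x) = a0 * (1 + A1*x*(1 + A2*x*( ... )**s3)**s2)**s1.

def continued_root(x, a0, coeffs, powers):
    """Evaluate a nested continued-root approximant at x."""
    assert len(coeffs) == len(powers)
    inner = 1.0
    for A, s in zip(reversed(coeffs), reversed(powers)):
        inner = (1.0 + A * x * inner) ** s
    return a0 * inner

# With a single level and s = 1 this degenerates to a linear polynomial:
print(continued_root(2.0, 1.0, [0.5], [1.0]))        # 1 + 0.5*2 = 2.0
print(continued_root(1.0, 1.0, [0.3, 0.2], [0.5, 0.5]))
```

Unlike a truncated power series, the nested roots remain finite as x grows, which is what allows extrapolation of an asymptotic expansion to finite values of the expansion parameter.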

  10. SI units for radiation measurements : for or against

    International Nuclear Information System (INIS)

    Nagaratnam, A.; Reddy, A.R.

    1975-01-01

    The historical evolution of the International System of Units (SI) is traced and concepts regarding radiation quantities and units as given by the ICRU are presented. Implications of the changeover to SI units for radiation measurement from the conventional system of familiar units like curie, roentgen, rad and rem are discussed. The familiar units will be kept for the time being along with SI units. In order to avoid confusion in the changeover period, new names, namely, becquerel and gray have been suggested by the authors for the SI units for activity and absorbed dose respectively. One becquerel will be 1 nuclear transformation per second and is approximately equal to 2.703 × 10⁻¹¹ Ci. One gray will be 1 joule per kilogram and is exactly equal to 100 rad. (M.G.B.)
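The conversions stated in this abstract follow directly from the definitions 1 Ci = 3.7 × 10¹⁰ Bq (exact) and 1 Gy = 100 rad (exact), and can be sketched as:

```python
CI_IN_BQ = 3.7e10          # 1 Ci = 3.7e10 Bq, exact by definition
RAD_PER_GRAY = 100.0       # 1 Gy = 1 J/kg = 100 rad, exact

def bq_to_ci(bq):
    """Activity in curies for a given activity in becquerels."""
    return bq / CI_IN_BQ

def gray_to_rad(gy):
    """Absorbed dose in rad for a given dose in grays."""
    return gy * RAD_PER_GRAY

print(bq_to_ci(1.0))       # ~2.703e-11 Ci, matching the abstract
print(gray_to_rad(1.0))    # 100.0 rad
```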

  11.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  12. Born approximation to a perturbative numerical method for the solution of the Schroedinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-01-01

    A step function perturbative numerical method (SF-PN method) is developed for the solution of the Cauchy problem for the second order linear differential equation in normal form. An important point stressed in the present paper, which seems to have been previously ignored in the literature devoted to the PN methods, is the close connection between the first order perturbation theory of the PN approach and the well-known Born approximation, and, in general, the connection between the various orders of the PN corrections and the Neumann series. (author)

  13. New Riemannian Priors on the Univariate Normal Model

    Directory of Open Access Journals (Sweden)

    Salem Said

    2014-07-01

    The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as "Riemannian priors". Precisely, if {pθ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao's Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.
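As a sketch of the ingredients described here (not the paper's code), Rao's distance between two univariate normals N(μ₁, σ₁²) and N(μ₂, σ₂²) can be computed via the standard identification of the Fisher-Rao geometry with the Poincaré half-plane (x = μ/√2, y = σ); the √2 scaling and closed form below are this commonly cited identification and should be treated as an assumption of the sketch. The unnormalized prior density then follows the exp(−d²/2γ²) form from the abstract.

```python
import math

def rao_distance(mu1, s1, mu2, s2):
    """Fisher-Rao distance between N(mu1, s1^2) and N(mu2, s2^2), s1, s2 > 0."""
    dx2 = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + dx2 / (2.0 * s1 * s2))

def riemannian_prior_density(mu, s, mu_bar, s_bar, gamma):
    """Unnormalized density of G((mu_bar, s_bar), gamma) at (mu, s)."""
    d = rao_distance(mu, s, mu_bar, s_bar)
    return math.exp(-d * d / (2.0 * gamma * gamma))

# The density peaks at the center theta_bar and decays with Rao distance:
print(riemannian_prior_density(0.0, 1.0, 0.0, 1.0, gamma=1.0))   # 1.0 at the center
print(riemannian_prior_density(1.0, 2.0, 0.0, 1.0, gamma=1.0))   # < 1.0 away from it
```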

  14. Exact and approximate multiple diffraction calculations

    International Nuclear Information System (INIS)

    Alexander, Y.; Wallace, S.J.; Sparrow, D.A.

    1976-08-01

    A three-body potential scattering problem is solved in the fixed scatterer model exactly and approximately to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to Glauber approximation, eikonal-expansion results and a noneikonal approximation

  15. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Let (U′;C′) be a subspace of a covering approximation space (U;C) and X ⊂ U′. In this paper, we show that … and B′(X) ⊂ B(X) ∩ U′. Also, … iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U′;C′) are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.
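A minimal sketch of the objects this abstract works with, using one common definition from the covering rough-set literature (definitions vary between papers): for a covering C of U, the lower approximation of X is the union of covering blocks contained in X, the upper approximation is the union of blocks meeting X, and the boundary B(X) is their difference.

```python
def lower(C, X):
    """Union of covering blocks entirely contained in X."""
    return set().union(*[K for K in C if K <= X])

def upper(C, X):
    """Union of covering blocks that meet X."""
    return set().union(*[K for K in C if K & X])

def boundary(C, X):
    """B(X) = upper(X) - lower(X): the region of uncertainty."""
    return upper(C, X) - lower(C, X)

U = {1, 2, 3, 4, 5}
C = [{1, 2}, {2, 3}, {4}, {4, 5}]       # a covering of U (blocks may overlap)
X = {1, 2, 4}

print(lower(C, X))     # {1, 2, 4}
print(upper(C, X))     # {1, 2, 3, 4, 5}
print(boundary(C, X))  # {3, 5}
```

A set X is definable when its boundary is empty; the subspace results in the abstract compare these operators computed in (U;C) with the same operators computed in the restricted space (U′;C′).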

  16. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal using polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
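For orientation, the double-square-root traveltime in its simplest (constant-velocity) setting is just the sum of the source leg and the receiver leg; the singular behavior the abstract refers to arises as the legs approach horizontal. A minimal sketch:

```python
import math

def dsr_traveltime(xs, xr, x, z, v):
    """DSR traveltime in a homogeneous medium of velocity v:
    t = sqrt(z^2 + (x - xs)^2)/v + sqrt(z^2 + (x - xr)^2)/v,
    for an image point at lateral position x and depth z."""
    return (math.hypot(x - xs, z) + math.hypot(x - xr, z)) / v

# Zero-offset check: source and receiver coincide above the image point,
# so the two legs are identical and t = 2*z/v.
print(dsr_traveltime(0.0, 0.0, 0.0, 1000.0, 2000.0))   # 1.0 s
print(dsr_traveltime(-500.0, 500.0, 0.0, 1000.0, 2000.0))
```

The expansions discussed in the abstract replace this closed form with series in reflection and dip angles so that the same structure can be evaluated in inhomogeneous backgrounds without the horizontal-ray singularity.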

  17. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  18. Growing Old in Public: A Modular Teaching Unit on Stereotypes.

    Science.gov (United States)

    Detzner, Daniel F.

    A college level unit which investigates stereotypes of aging in the United States is described. The three-class unit serves as an introduction to the study of social gerontology. Its purpose is to address issues of negative stereotypes of old age reinforced by the media and by our cultural roots; the lack of knowledge about the normal changes that…

  19. Asymptotic normalization coefficients for ¹⁰B → ⁹Be + p

    International Nuclear Information System (INIS)

    Mukhamedzhanov, A.M.; Clark, H.L.; Gagliardi, C.A.; Lui, Y.; Trache, L.; Tribble, R.E.; Xu, H.M.; Zhou, X.G.; Burjan, V.; Cejpek, J.; Kroha, V.; Carstoiu, F.

    1997-01-01

    The differential cross sections for the reactions ⁹Be(¹⁰B,¹⁰B)⁹Be and ⁹Be(¹⁰B,⁹Be)¹⁰B have been measured at an incident energy of 100 MeV. The elastic scattering data have been used to determine the optical model parameters for the ⁹Be+¹⁰B system at this energy. These parameters are then used in distorted-wave Born approximation (DWBA) calculations to predict the cross sections of the ⁹Be(¹⁰B,⁹Be)¹⁰B proton exchange reaction, populating the ground and low-lying states in ¹⁰B. By normalizing the theoretical DWBA proton exchange cross sections to the experimental ones, the asymptotic normalization coefficients (ANC's), defining the normalization of the tail of the ¹⁰B bound state wave functions in the two-particle channel ⁹Be+p, have been found. The ANC for the virtual decay ¹⁰B(g.s.)→⁹Be+p will be used in an analysis of the ¹⁰B(⁷Be,⁸B)⁹Be reaction to extract the ANC's for ⁸B→⁷Be+p. These ANC's determine the normalization of the ⁷Be(p,γ)⁸B radiative capture cross section at very low energies, which is crucially important for nuclear astrophysics. copyright 1997 The American Physical Society

  20. Comparison of the Born series and rational approximants in potential scattering. [Padé approximants, Yukawa and exponential potentials]

    Energy Technology Data Exchange (ETDEWEB)

    Garibotti, C R; Grinstein, F F [Rosario Univ. Nacional (Argentina). Facultad de Ciencias Exactas e Ingenieria

    1976-05-08

    The real utility of the Born series for the calculation of atomic collision processes in the Born approximation is discussed. It is suggested that Padé approximants be used, and it is shown that this approach provides very fast convergent sequences over the whole energy range studied. Yukawa and exponential potentials are explicitly considered, and the results are compared with high-order Born approximations.
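The resummation idea can be sketched generically (not with the paper's scattering amplitudes): build an [L/M] Padé approximant from the first L+M+1 series coefficients by solving the standard linear system for the denominator, then compare it with the truncated series itself. Here the exponential series stands in for a Born series.

```python
import numpy as np
from math import factorial, e

def pade(a, L, M):
    """[L/M] Pade approximant from Taylor coefficients a[0..L+M].
    Returns numerator p (ascending, len L+1) and denominator q (len M+1, q[0]=1)."""
    a = np.asarray(a, dtype=float)
    # Solve sum_{j=1..M} q[j]*a[L+k-j] = -a[L+k] for k = 1..M
    C = np.array([[a[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(C, -a[L + 1:L + M + 1])))
    p = np.array([sum(q[j] * a[i - j] for j in range(0, min(i, M) + 1))
                  for i in range(L + 1)])
    return p, q

def eval_ratio(p, q, x):
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

a = [1.0 / factorial(k) for k in range(5)]      # exp(x) Taylor coefficients
p, q = pade(a, 2, 2)
x = 1.0
taylor = sum(c * x**k for k, c in enumerate(a))
print(abs(eval_ratio(p, q, x) - e), abs(taylor - e))  # Pade error is smaller
```

With the same five coefficients, the [2/2] Padé approximant is noticeably closer to exp(1) than the truncated series, illustrating why rational resummation can outperform the raw Born expansion.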

  1. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    Science.gov (United States)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
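The recompression step described above can be sketched independently of the FMM context: given a (possibly overranked) low-rank factorization A ≈ U·V produced by a cross-approximation pass, orthogonalize both factors with QR and truncate via an SVD of the small core matrix. This is a generic sketch, not the authors' implementation.

```python
import numpy as np

def recompress(U, V, tol=1e-8):
    """Recompress a low-rank product U @ V (U: m x k, V: k x n) to numerical rank."""
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V.T)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)          # small (k x k) SVD only
    r = int(np.sum(s > tol * s[0]))              # numerical rank
    U2 = (Qu @ W[:, :r]) * s[:r]                 # absorb singular values
    V2 = Zt[:r, :] @ Qv.T
    return U2, V2

rng = np.random.default_rng(1)
B = rng.standard_normal((100, 5))
C = rng.standard_normal((5, 80))
U = np.hstack([B, B])                    # redundant rank-10 representation ...
V = np.vstack([0.5 * C, 0.5 * C])        # ... of the rank-5 matrix B @ C
U2, V2 = recompress(U, V)
print(U2.shape[1])                                   # 5: true rank recovered
print(np.linalg.norm(U2 @ V2 - B @ C))               # ~ 0
```

Because the SVD is applied only to the k × k core, the cost stays proportional to the (small) ACA rank rather than the full matrix dimensions, which is the point of the two-stage compression.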

  2. Use of a pitot-static probe for determining wing section drag in flight at Mach numbers from 0.5 to approximately 1.0

    Science.gov (United States)

    Montoya, L. C.; Economu, M. A.; Cissell, R. E.

    1974-01-01

    The use of a pitot-static probe to determine wing section drag at speeds from Mach 0.5 to approximately 1.0 was evaluated in flight. The probe unit is described and operational problems are discussed. Typical wake profiles and wing section drag coefficients are presented. The data indicate that the pitot-static probe gave reliable results up to Mach numbers of approximately 1.0.

  3. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximations are studied for two- and three-body nuclear collisions with the goal to extend the range of validity of this approximation to beam energies of 10 MeV/nucleon. Wallace's correction does not improve much the elastic-scattering cross sections obtained at the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes the impact parameter by a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility to analyze data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  4. United Nuclear Industries, Inc. reactor and fuel production facilities 1975 environmental release report

    International Nuclear Information System (INIS)

    Cucchiara, A.L.

    1976-01-01

    During calendar year 1975, an estimated total of 3,000,000 pounds of waste materials and approximately 150 curies of radionuclides were discharged to the environs in liquid effluent streams emanating from United Nuclear Industries, Inc., operated facilities. During the same period, approximately 1,700,000 pounds of reported waste materials, including 34,000 curies of reported radionuclides, were discharged to the atmosphere from United Nuclear Industries, Inc., operated facilities. Superscript numbers reference explanatory notes contained at the end of the report

  5. Sandstone-filled normal faults: A case study from central California

    Science.gov (United States)

    Palladino, Giuseppe; Alsop, G. Ian; Grippa, Antonio; Zvirtes, Gustavo; Phillip, Ruy Paulo; Hurst, Andrew

    2018-05-01

    Despite the potential of sandstone-filled normal faults to significantly influence fluid transmissivity within reservoirs and the shallow crust, they have to date been largely overlooked. Fluidized sand, forcefully intruded along normal fault zones, markedly enhances the transmissivity of faults and, in general, the connectivity between otherwise unconnected reservoirs. Here, we provide a detailed outcrop description and interpretation of sandstone-filled normal faults from different stratigraphic units in central California. Such faults commonly show limited fault throw, cm to dm wide apertures, poorly-developed fault zones and full or partial sand infill. Based on these features and inferences regarding their origin, we propose a general classification that defines two main types of sandstone-filled normal faults. Type 1 form as a consequence of the hydraulic failure of the host strata above a poorly-consolidated sandstone following a significant, rapid increase of pore fluid over-pressure. Type 2 sandstone-filled normal faults form as a result of regional tectonic deformation. These structures may play a significant role in the connectivity of siliciclastic reservoirs, and may therefore be crucial not just for investigation of basin evolution but also in hydrocarbon exploration.

  6. Limitations of the acoustic approximation for seismic crosshole tomography

    Science.gov (United States)

    Marelli, Stefano; Maurer, Hansruedi

    2010-05-01

    Modelling and inversion of seismic crosshole data is a challenging task in terms of computational resources. Even with the significant increase in power of modern supercomputers, full three-dimensional elastic modelling of high-frequency waveforms generated from hundreds of source positions in several boreholes is still an intractable task. However, it has been recognised that full waveform inversion offers substantially more information compared with traditional travel time tomography. A common strategy to reduce the computational burden for tomographic inversion is to approximate the true elastic wave propagation by acoustic modelling. This approximation assumes that the solid rock units can be treated like fluids (with no shear wave propagation) and is generally considered to be satisfactory so long as only the earliest portions of the recorded seismograms are considered. The main assumption is that most of the energy in the early parts of the recorded seismograms is carried by the faster compressional (P-) waves. Although a limited number of studies exist on the effects of this approximation for surface/marine synthetic reflection seismic data, and show it to be generally acceptable for models with low to moderate impedance contrasts, to our knowledge no comparable studies have been published on the effects for cross-borehole transmission data. An obvious question is whether transmission tomography should be less affected by elastic effects than surface reflection data when only short time windows are applied to primarily capture the first arriving wavetrains. To answer this question we have performed 2D and 3D investigations on the validity of the acoustic approximation for an elastic medium and using crosshole source-receiver configurations. In order to generate consistent acoustic and elastic data sets, we ran the synthetic tests using the same finite-difference time-domain elastic modelling code for both types of simulations. The acoustic approximation was

  7. Forest carbon management in the United States: 1600-2100

    Science.gov (United States)

    Richard A. Birdsey; Kurt Pregitzer; Alan Lucier

    2006-01-01

    This paper reviews the effects of past forest management on carbon stocks in the United States, and the challenges for managing forest carbon resources in the 21st century. Forests in the United States were in approximate carbon balance with the atmosphere from 1600-1800. Utilization and land clearing caused a large pulse of forest carbon emissions during the 19th...

  8. Description of a Normal-Force In-Situ Turbulence Algorithm for Airplanes

    Science.gov (United States)

    Stewart, Eric C.

    2003-01-01

    A normal-force in-situ turbulence algorithm for potential use on commercial airliners is described. The algorithm can produce information that can be used to predict hazardous accelerations of airplanes or to aid meteorologists in forecasting weather patterns. The algorithm uses normal acceleration and other measures of the airplane state to approximate the vertical gust velocity. That is, the fundamental, yet simple, relationship between normal acceleration and the change in normal force coefficient is exploited to produce an estimate of the vertical gust velocity. This simple approach is robust and produces a time history of the vertical gust velocity that would be intuitively useful to pilots. With proper processing, the time history can be transformed into the eddy dissipation rate that would be useful to meteorologists. Flight data for a simplified research implementation of the algorithm are presented for a severe turbulence encounter of the NASA ARIES Boeing 757 research airplane. The results indicate that the algorithm has potential for producing accurate in-situ turbulence measurements. However, more extensive tests and analysis are needed with an operational implementation of the algorithm to make comparisons with other algorithms or methods.
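A simplified version of the relationship this abstract exploits: the incremental normal load factor Δn (in g) produced by a vertical gust w satisfies Δn·W ≈ ½ρV²S·C_Nα·(w/V), which inverts to the estimate below. All numbers are illustrative stand-ins, not values from the report.

```python
def gust_velocity(dn, W, rho, V, S, CNa):
    """Estimate vertical gust velocity (m/s) from the load-factor increment dn (g)."""
    return 2.0 * dn * W / (rho * V * S * CNa)

# Illustrative transport-category numbers (assumptions, not flight data):
W = 80000.0 * 9.81      # weight, N
rho = 0.38              # air density at cruise altitude, kg/m^3
V = 230.0               # true airspeed, m/s
S = 180.0               # wing reference area, m^2
CNa = 5.0               # normal-force curve slope, 1/rad

print(gust_velocity(0.3, W, rho, V, S, CNa))  # a 0.3 g bump -> gust of a few m/s
```

Applied sample-by-sample to measured normal acceleration, this inversion yields the intuitive vertical-gust time history the abstract describes; further processing of that history gives the eddy dissipation rate.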

  9. Ancilla-approximable quantum state transformations

    International Nuclear Information System (INIS)

    Blass, Andreas; Gurevich, Yuri

    2015-01-01

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation

  10. Ancilla-approximable quantum state transformations

    Energy Technology Data Exchange (ETDEWEB)

    Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

  11. Trinucleon asymptotic normalization constants including Coulomb effects

    International Nuclear Information System (INIS)

    Friar, J.L.; Gibson, B.F.; Lehman, D.R.; Payne, G.L.

    1982-01-01

    Exact theoretical expressions for calculating the trinucleon S- and D-wave asymptotic normalization constants, with and without Coulomb effects, are presented. Coordinate-space Faddeev-type equations are used to generate the trinucleon wave functions, and integral relations for the asymptotic norms are derived within this framework. The definition of the asymptotic norms in the presence of the Coulomb interaction is emphasized. Numerical calculations are carried out for the s-wave NN interaction models of Malfliet and Tjon and the tensor force model of Reid. Comparison with previously published results is made. The first estimate of Coulomb effects for the D-wave asymptotic norm is given. All theoretical values are carefully compared with experiment and suggestions are made for improving the experimental situation. We find that Coulomb effects increase the ³He S-wave asymptotic norm by less than 1% relative to that of ³H, that Coulomb effects decrease the ³He D-wave asymptotic norm by approximately 8% relative to that of ³H, and that the distorted-wave Born approximation D-state parameter, D₂, is only 1% smaller in magnitude for ³He than for ³H due to compensating Coulomb effects

  12. Normalization of the collage regions of iterated function systems

    Science.gov (United States)

    Zhang, Zhengbing; Zhang, Wei

    2012-11-01

    Fractal graphics, generated with iterated function systems (IFS), have been applied in broad areas. Since the collage regions of different IFS may be different, it is difficult to respectively show the attractors of iterated function systems in a same region on a computer screen using one program without modifying the display parameters. An algorithm is proposed in this paper to solve this problem. A set of transforms are repeatedly applied to modify the coefficients of the IFS so that the collage region of the resulted IFS changes toward the unit square. Experimental results demonstrate that the collage region of any IFS can be normalized to the unit square with the proposed method.
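A minimal sketch of the normalization idea (the paper's algorithm applies repeated coefficient transforms; here a single conjugation is shown): estimate the attractor's bounding box by the chaos game, then conjugate every affine map w(x) = Ax + t by the affine change of coordinates T sending that box to the unit square. The conjugated IFS T∘w∘T⁻¹ has the same attractor rescaled into [0,1]². Maps are encoded as ((a, b, c, d), (e, f)), meaning (x, y) → (ax + by + e, cx + dy + f).

```python
import random

def apply(m, p):
    (a, b, c, d), (e, f) = m
    return (a * p[0] + b * p[1] + e, c * p[0] + d * p[1] + f)

def bounding_box(ifs, n=20000, seed=0):
    """Estimate the attractor's bounding box by random iteration (chaos game)."""
    random.seed(seed)
    p, pts = (0.0, 0.0), []
    for i in range(n):
        p = apply(random.choice(ifs), p)
        if i > 100:                       # discard the transient
            pts.append(p)
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)

def normalize(ifs):
    """Conjugate each map by T(x, y) = ((x - x0)/sx, (y - y0)/sy)."""
    x0, y0, x1, y1 = bounding_box(ifs)
    sx, sy = x1 - x0, y1 - y0
    out = []
    for (a, b, c, d), (e, f) in ifs:
        e2 = (a * x0 + b * y0 + e - x0) / sx
        f2 = (c * x0 + d * y0 + f - y0) / sy
        out.append(((a, b * sy / sx, c * sx / sy, d), (e2, f2)))
    return out

# A Sierpinski-like IFS whose attractor sits far from the unit square
ifs = [((0.5, 0, 0, 0.5), (10.0, 10.0)),
       ((0.5, 0, 0, 0.5), (11.0, 10.0)),
       ((0.5, 0, 0, 0.5), (10.5, 11.0))]
x0, y0, x1, y1 = bounding_box(normalize(ifs))
print(round(x0, 2), round(y0, 2), round(x1, 2), round(y1, 2))  # ≈ 0 0 1 1
```

After normalization, one display routine with fixed viewport [0,1]² can render the attractor of any IFS without per-fractal display parameters, which is the problem the abstract sets out to solve.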

  13. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  14. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations, For the normal mode analysis, the methods are based on the assumption that in low frequency modes, domain motions can be well approximated by

  15. Signs of Gas Trapping in Normal Lung Density Regions in Smokers.

    Science.gov (United States)

    Bodduluri, Sandeep; Reinhardt, Joseph M; Hoffman, Eric A; Newell, John D; Nath, Hrudaya; Dransfield, Mark T; Bhatt, Surya P

    2017-12-01

    A substantial proportion of subjects without overt airflow obstruction have significant respiratory morbidity and structural abnormalities as visualized by computed tomography. Whether regions of the lung that appear normal using traditional computed tomography criteria have mild disease is not known. To identify subthreshold structural disease in normal-appearing lung regions in smokers. We analyzed 8,034 subjects with complete inspiratory and expiratory computed tomographic data participating in the COPDGene Study, including 103 lifetime nonsmokers. The ratio of the mean lung density at end expiration (E) to end inspiration (I) was calculated in lung regions with normal density (ND) by traditional thresholds for mild emphysema (-910 Hounsfield units) and gas trapping (-856 Hounsfield units) to derive the ND-E/I ratio. Multivariable regression analysis was used to measure the associations between ND-E/I, lung function, and respiratory morbidity. The ND-E/I ratio was greater in smokers than in nonsmokers, and it progressively increased from mild to severe chronic obstructive pulmonary disease severity. A proportion of 26.3% of smokers without airflow obstruction had ND-E/I greater than the 90th percentile of normal. ND-E/I was independently associated with FEV₁ (adjusted β = -0.020; 95% confidence interval [CI], -0.032 to -0.007; P = 0.001), St. George's Respiratory Questionnaire scores (adjusted β = 0.952; 95% CI, 0.529 to 1.374; P … smokers without airflow obstruction, and it is associated with respiratory morbidity. Clinical trial registered with www.clinicaltrials.gov (NCT00608764).
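The ND-E/I measure described above can be sketched as follows: restrict to voxels that look "normal" by the traditional thresholds (denser than -856 HU at expiration and denser than -910 HU at inspiration), then take the ratio of mean expiratory to mean inspiratory density. The arrays here are synthetic stand-ins for registered CT volumes in Hounsfield units, not COPDGene data.

```python
import numpy as np

def nd_ei_ratio(insp_hu, exp_hu):
    """E/I ratio of mean lung density restricted to normal-density voxels."""
    normal = (insp_hu > -910) & (exp_hu > -856)
    return np.mean(exp_hu[normal]) / np.mean(insp_hu[normal])

rng = np.random.default_rng(0)
insp = rng.normal(-870.0, 30.0, size=(64, 64, 64))     # synthetic inspiration scan
expi = insp + rng.normal(120.0, 20.0, size=insp.shape) # lung densifies at expiration

print(nd_ei_ratio(insp, expi))
```

Because both means are negative HU values, the ratio is positive and below 1 when the lung empties normally; subthreshold gas trapping pushes it toward 1, which is the signal the study quantifies.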

  16. Design of the Acoustic Signal Receiving Unit of Acoustic Telemetry While Drilling

    Directory of Open Access Journals (Sweden)

    Li Zhigang

    2016-01-01

    Full Text Available The signal receiving unit is one of the core units of an acoustic telemetry system. A new type of acoustic signal receiving unit is designed to solve problems of the existing devices. The unit as a whole is a short joint. It can receive all the acoustic signals transmitted along the drill string without losing any signal, and it introduces no additional vibration or interference. In addition, an amplitude transformer is designed which amplifies the signal amplitude and improves the receiving efficiency. The wireless communication module allows the whole device to be used in the normal drilling process while the drill string is rotating, so it does not interfere with normal drilling operations.

  17. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
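
    The motivation for transforming response times can be made concrete: a log-normal variable is strongly right-skewed, while its logarithm is symmetric. A hypothetical NumPy check on synthetic data (what the paper models is precisely the case where such a transform does not fully restore normality, which this toy does not exhibit):

```python
import numpy as np

rng = np.random.default_rng(0)
rt = rng.lognormal(mean=-0.5, sigma=0.4, size=5000)  # synthetic response times (s)

def skew(a):
    # Standardized third moment: > 0 means right-skewed.
    z = (a - a.mean()) / a.std()
    return float(np.mean(z ** 3))

skew_raw = skew(rt)          # clearly right-skewed on the raw scale
skew_log = skew(np.log(rt))  # near zero: the log transform restores symmetry here
```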

  18. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...

  19. An approximation for kanban controlled assembly systems

    NARCIS (Netherlands)

    Topan, E.; Avsar, Z.M.

    2011-01-01

    An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated

  20. Estimated United States Transportation Energy Use 2005

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C A; Simon, A J; Belles, R D

    2011-11-09

    A flow chart depicting energy flow in the transportation sector of the United States economy in 2005 has been constructed from publicly available data and estimates of national energy use patterns. Approximately 31,000 trillion British Thermal Units (trBTUs) of energy were used throughout the United States in transportation activities. Vehicles used in these activities include automobiles, motorcycles, trucks, buses, airplanes, rail, and ships. The transportation sector is powered primarily by petroleum-derived fuels (gasoline, diesel and jet fuel). Biomass-derived fuels, electricity and natural gas-derived fuels are also used. The flow patterns represent a comprehensive systems view of energy used within the transportation sector.

  1. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  2. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  3. Normal gravity field in relativistic geodesy

    Science.gov (United States)

    Kopeikin, Sergei; Vlasov, Igor; Han, Wen-Biao

    2018-02-01

    Modern geodesy is subject to a dramatic change from the Newtonian paradigm to Einstein's theory of general relativity. This is motivated by the ongoing advance in development of quantum sensors for applications in geodesy including quantum gravimeters and gradiometers, atomic clocks and fiber optics for making ultra-precise measurements of the geoid and multipolar structure of the Earth's gravitational field. At the same time, very long baseline interferometry, satellite laser ranging, and global navigation satellite systems have achieved an unprecedented level of accuracy in measuring 3-d coordinates of the reference points of the International Terrestrial Reference Frame and the world height system. The main geodetic reference standard to which gravimetric measurements of the Earth's gravitational field are referred is a normal gravity field represented in the Newtonian gravity by the field of a uniformly rotating, homogeneous Maclaurin ellipsoid whose mass and quadrupole moment are equal to the total mass and (tide-free) quadrupole moment of Earth's gravitational field. The present paper extends the concept of the normal gravity field from the Newtonian theory to the realm of general relativity. We focus our attention on the calculation of the post-Newtonian approximation of the normal field that is sufficient for current and near-future practical applications. We show that in general relativity the level surface of a homogeneous and uniformly rotating fluid is no longer described by the Maclaurin ellipsoid in the most general case but represents an axisymmetric spheroid of the fourth order with respect to the geodetic Cartesian coordinates. At the same time, admitting a post-Newtonian inhomogeneity of the mass density in the form of concentric elliptical shells allows one to preserve the level surface of the fluid as an exact ellipsoid of rotation. We parametrize the mass density distribution and the level surface with two parameters which are

  4. Noncontrast computed tomographic Hounsfield unit evaluation of cerebral venous thrombosis: a quantitative evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Besachio, David A. [University of Utah, Department of Radiology, Salt Lake City (United States); United States Navy, Bethesda, MD (United States); Quigley, Edward P.; Shah, Lubdha M.; Salzman, Karen L. [University of Utah, Department of Radiology, Salt Lake City (United States)

    2013-08-15

    Our objective is to determine the utility of noncontrast Hounsfield unit values, Hounsfield unit values corrected for the patient's hematocrit, and venoarterial Hounsfield unit difference measurements in the identification of intracranial venous thrombosis on noncontrast head computed tomography. We retrospectively reviewed noncontrast head computed tomography exams performed in both normal patients and those with cerebral venous thrombosis, acquiring Hounsfield unit values in normal and thrombosed cerebral venous structures. Also, we acquired Hounsfield unit values in the internal carotid artery for comparison to thrombosed and nonthrombosed venous structures and compared the venous Hounsfield unit values to the patient's hematocrit. A significant difference is identified between Hounsfield unit values in thrombosed and nonthrombosed venous structures. Applying Hounsfield unit threshold values of greater than 65, a Hounsfield unit to hematocrit ratio of greater than 1.7, and venoarterial difference values greater than 15 alone and in combination, the majority of cases of venous thrombosis are identifiable on noncontrast head computed tomography. Absolute Hounsfield unit values, Hounsfield unit to hematocrit ratios, and venoarterial Hounsfield unit value differences are a useful adjunct in noncontrast head computed tomographic evaluation of cerebral venous thrombosis. (orig.)
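
    The three thresholds quoted above lend themselves to a simple decision rule. A minimal sketch (not from the paper; combining the criteria with a logical OR, and expressing the hematocrit as a percentage, are assumptions for illustration):

```python
def suggests_venous_thrombosis(venous_hu: float, arterial_hu: float,
                               hematocrit_pct: float) -> bool:
    """Flag possible cerebral venous thrombosis on noncontrast head CT
    using the three threshold criteria from the abstract above.
    Treating any single positive criterion as a flag is an assumption."""
    return (venous_hu > 65.0                        # absolute HU threshold
            or venous_hu / hematocrit_pct > 1.7     # HU-to-hematocrit ratio
            or venous_hu - arterial_hu > 15.0)      # venoarterial difference
```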

  5. Noncontrast computed tomographic Hounsfield unit evaluation of cerebral venous thrombosis: a quantitative evaluation

    International Nuclear Information System (INIS)

    Besachio, David A.; Quigley, Edward P.; Shah, Lubdha M.; Salzman, Karen L.

    2013-01-01

    Our objective is to determine the utility of noncontrast Hounsfield unit values, Hounsfield unit values corrected for the patient's hematocrit, and venoarterial Hounsfield unit difference measurements in the identification of intracranial venous thrombosis on noncontrast head computed tomography. We retrospectively reviewed noncontrast head computed tomography exams performed in both normal patients and those with cerebral venous thrombosis, acquiring Hounsfield unit values in normal and thrombosed cerebral venous structures. Also, we acquired Hounsfield unit values in the internal carotid artery for comparison to thrombosed and nonthrombosed venous structures and compared the venous Hounsfield unit values to the patient's hematocrit. A significant difference is identified between Hounsfield unit values in thrombosed and nonthrombosed venous structures. Applying Hounsfield unit threshold values of greater than 65, a Hounsfield unit to hematocrit ratio of greater than 1.7, and venoarterial difference values greater than 15 alone and in combination, the majority of cases of venous thrombosis are identifiable on noncontrast head computed tomography. Absolute Hounsfield unit values, Hounsfield unit to hematocrit ratios, and venoarterial Hounsfield unit value differences are a useful adjunct in noncontrast head computed tomographic evaluation of cerebral venous thrombosis. (orig.)

  6. Grandeurs, dimensions, et conversions d'unités Quantities, Dimensions and Units Conversions

    Directory of Open Access Journals (Sweden)

    Bronner C.

    2006-11-01

    Full Text Available It no longer needs to be proved that the International System of Units (SI) is a fact. It is normally taught in schools and universities, scientists use it systematically in their publications, and new books and databases are commonly built on it. Meanwhile, some industries (oil companies among others) keep on using technical systems of units, and old data and plant specifications made in other systems can still be found in archives. It is not uncommon, in professional life, to encounter a problem in which a unit must be converted. This article aims to help people who have to overcome this kind of difficulty by giving an overview of the different unit systems that have historically existed. A description of traditional and unusual units also accompanies the article. The most valuable help will no doubt be found in the program included on a diskette: a complete Windows application that allows anyone to solve unit-conversion problems without error. The conversion technique used in the program, based on dimensional analysis, is also explained.

  7. Simultaneous misalignment correction for approximate circular cone-beam computed tomography

    International Nuclear Information System (INIS)

    Kyriakou, Y; Hillebrand, L; Ertel, D; Kalender, W A; Lapp, R M

    2008-01-01

    Currently, CT scanning is often performed using flat detectors which are mounted on C-arm units or dedicated gantries as in radiation therapy or micro CT. For perspective cone-beam backprojection of the Feldkamp type (FDK) the geometry of an approximately circular scan trajectory has to be available for reconstruction. If the system or the scan geometry is afflicted with geometrical instabilities, referred to as misalignment, the result is an imperfect, approximately circular scan. Reconstructing a misaligned scan without knowledge of the true trajectory results in severe artefacts in the CT images. Unlike current methods which use a pre-scan calibration of the geometry for defined scan protocols and calibration phantoms, we propose a real-time iterative restoration of reconstruction geometry by means of entropy minimization. Entropy minimization is performed by combining a simplex algorithm for multi-parameter optimization and iterative graphics card (GPU)-based FDK reconstructions. Images reconstructed with the misaligned geometry were used as input for the entropy minimization algorithm. A simplex algorithm changes the geometrical parameters of the source and detector with respect to the reduction of entropy. In order to reduce the size of the high-dimensional space required for minimization, the trajectory was described by only eight fixed points. A virtual trajectory is generated for each iteration using a least-mean-squares algorithm to calculate an approximately circular path including these points. Entropy was minimal for the ideal dataset, whereas strong misalignment resulted in a higher entropy value. For the datasets used in this study, the simplex algorithm required 64-200 iterations to achieve an entropy value equivalent to the ideal dataset, depending on the degree of misalignment, using random initialization conditions.
The use of the GPU reduced the time per iteration as compared to a quad core CPU-based backprojection by a factor of 10 resulting in a total

  8. Collagenase chemonucleolysis: a long term radiographic study in normal dogs

    International Nuclear Information System (INIS)

    Atilola, M.A.O.; Cockshutt, J.R.; Mclaughlin, R.; Cochrane, S.M.; Pennock, P.W.

    1993-01-01

    Five clinically normal, five-year-old dogs were used in this study. From a randomized table, intervertebral discs were each injected with either collagenase or calcium chloride diluent. The surgically exposed cervical discs were injected with 50 units, whereas thoracic and lumbar discs were injected under fluoroscopic guidance with 100 units of the enzyme. Postinjection radiographs revealed significant (p ≤ 0.05) disc space narrowing in enzyme-injected discs. The cervical discs had the highest frequency of radiographic narrowing (87%), followed by the thoracic (70%) and lumbar (53%) discs. Spondylosis deformans developed at the sites of cervical enzyme injections. None of the dogs had neurologic abnormalities one year postinjection.

  9. Cloning and characterization of the complementary DNA for the B chain of normal human serum C1q.

    Science.gov (United States)

    Reid, K B; Bentley, D R; Wood, K J

    1984-09-06

    Normal human C1q is a serum glycoprotein of 460 kDa containing 18 polypeptide chains (6A, 6B, 6C) each 226 amino acids long and each containing an N-terminal collagen-like domain and a C-terminal globular domain. Two unusual forms of C1q have been described: a genetically defective form, which has a molecular mass of approximately 160 kDa and is found in the sera of homozygotes for the defect who show a marked susceptibility to immune complex related disease; a fibroblast form, shown to be synthesized and secreted, in vitro, with a molecular mass of about 800 kDa and with chains approximately 16 kDa greater than those of normal C1q. A higher than normal molecular mass form of C1q has also been described in human colostrum and a form of C1q has been claimed to represent one of the types of Fc receptor on guinea-pig macrophages. To initiate studies, at the genomic level, on these various forms of C1q, and to investigate the possible relation between the C1q genes and the procollagen genes, the complementary DNA corresponding to the B chain of normal C1q has been cloned and characterized.

  10. Experimental Investigation of Double Effect Evaporative Cooling Unit

    Directory of Open Access Journals (Sweden)

    Ahmed Abd Mohammad Saleh

    2018-03-01

    Full Text Available This work presents an experimental investigation of a double-effect evaporative cooling unit with an approximate capacity of 7 kW. The unit consisted of two stages: a sensible heat exchanger and a cooling tower composing the external indirect regenerative evaporative cooling stage, and a direct evaporative cooler as the second stage. Testing results showed maximum capacity and the lowest supplied-air temperature when the water flow rate in the heat exchanger was 0.1 L/s. The experiment recorded daily unit readings at two airflow rates (0.425 m3/s and 0.48 m3/s). The readings show that the unit inlet DBT positively affects the unit wet-bulb effectiveness and unit COP at constant humidity ratio. The air extraction ratio positively affected the unit wet-bulb effectiveness within a certain limit; the maximum COP recorded was 11.4 at an extraction ratio of 40%.

  11. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
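
    The single-level building block the abstract starts from can be sketched in a few lines: a weighted least squares projection onto a polynomial space from random samples. In this toy, uniform sampling and uniform weights replace the optimal distribution discussed in the paper, and the multilevel combination is omitted:

```python
import numpy as np

# Weighted least squares fit of exp(t) onto a degree-8 Legendre space
# from 200 random sample locations (no discretization error in this toy).
rng = np.random.default_rng(1)
f = lambda t: np.exp(t)

x = rng.uniform(-1.0, 1.0, 200)          # random sample locations on [-1, 1]
coef = np.polynomial.legendre.legfit(x, f(x), deg=8, w=np.ones_like(x))

# The projection is near-optimal: the max error on a fine grid is tiny.
xx = np.linspace(-1.0, 1.0, 1001)
max_err = np.max(np.abs(np.polynomial.legendre.legval(xx, coef) - f(xx)))
```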

  12. Improved Dutch Roll Approximation for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Liang-Liang Yin

    2014-06-01

    Full Text Available An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which causes large errors in the conventional approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for a sample hypersonic vehicle; the results show that the approximations work well, with errors below 10%.

  13. A compilation of radioelement concentrations in granitic rocks of the contiguous United States

    International Nuclear Information System (INIS)

    Stuckless, J.S.; VanTrump, G. Jr.

    1982-01-01

    Concentration data for uranium, thorium, and potassium have been compiled for approximately 2,500 granitic samples from the contiguous United States. Uranium and thorium concentrations and ratios involving these elements exhibit a log-normal distribution. In order to check for a bias in the results due to high concentrations of data in anomalous or heavily sampled areas, the data were reevaluated by averaging all analyses within a 0.5° latitude by 0.5° longitude grid. The resulting data set contains 330 entries for which radioelements are log-normally distributed. Mean values are not significantly different from those of the ungridded data, but standard deviations are lower by as much as nearly 50 percent. The areal distribution of anomalously high values (more than one standard deviation greater than the geometric mean) does not delineate large uranium districts by either treatment of the data. There is sufficient information for approximately 1,500 samples to permit subdivision of the granites by degree of alumina saturation. Relative to the six variables listed above, peraluminous samples have slightly lower mean values, but the differences are not statistically significant. Standard deviations are also largest for the peraluminous granites, with σ for Th/U nearly 3 times larger for peraluminous granite than for metaluminous granite. Examination of the variations in Th/U ratios for a few specific granites for which isotopic data are available suggests that variability is caused by late-stage magmatic or secondary processes that may be associated with ore-forming processes. Therefore, although anomalous radioelement concentrations in granitic rocks do not seem to be useful in delineating large uranium provinces with sediment-hosted deposits, highly variable uranium concentrations or Th/U ratios in granitic rocks may be helpful in the search for uranium deposits.
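
    The grid-averaging step described above is easy to reproduce on synthetic data: oversampling an anomalous district inflates the raw statistics, and averaging all analyses within 0.5° by 0.5° cells damps that bias. A hypothetical illustration (all coordinates and concentrations are synthetic):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)

# Background: log-normally distributed U concentrations (ppm) over 1 x 1 degrees.
lat = rng.uniform(30.0, 31.0, 400)
lon = rng.uniform(-110.0, -109.0, 400)
u = rng.lognormal(1.0, 0.6, 400)

# A heavily sampled anomalous district inside a single grid cell.
lat = np.concatenate([lat, rng.uniform(30.2, 30.3, 600)])
lon = np.concatenate([lon, rng.uniform(-109.60, -109.55, 600)])
u = np.concatenate([u, rng.lognormal(2.0, 0.6, 600)])

# Ungridded geometric mean: every analysis weighted equally.
gm_all = np.exp(np.log(u).mean())

# Gridded statistics: average all analyses within each 0.5 x 0.5 degree cell.
cells = defaultdict(list)
for la, lo, c in zip(lat, lon, u):
    cells[(int(np.floor(la / 0.5)), int(np.floor(lo / 0.5)))].append(c)
cell_means = np.array([np.mean(v) for v in cells.values()])
gm_grid = np.exp(np.log(cell_means).mean())
# gm_grid sits closer to the background level because the oversampled
# district now contributes only one of the grid entries.
```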

  14. Studies of normal deformation in {sup 151}Dy

    Energy Technology Data Exchange (ETDEWEB)

    Nisius, D.; Janssens, R.V.F.; Crowell, B. [and others]

    1995-08-01

    The wealth of data collected in the study of superdeformation in {sup 151}Dy allowed for new information to be obtained on the normally deformed structures in this nucleus. At high spin several new yrast states have been identified for the first time. They were associated with single-particle excitations. Surprisingly, a sequence was identified with energy spacings characteristic of a rotational band of normal ({beta}2 {approximately} 0.2) deformation. The bandhead spin appears to be 15/2{sup -} and the levels extend up to a spin of 87/2{sup -}. A clear backbend is present at intermediate spins. While a similar band based on a bandhead of 6{sup +} is known in {sup 152}Dy, calculations suggest that this collective prolate band should not be seen in {sup 151}Dy. In the experiment described earlier in this report that is aimed at determining the deformations associated with the SD bands in this nucleus and {sup 152}Dy, the deformation associated with this band will be determined. This will provide further insight into the origin of this band.

  15. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \\(k\\)-nearest neighbors regression (\\(k\\)-NNR), and more generally, local polynomial kernel regression. Unlike \\(k\\)-NNR, however, SPARROW can adapt the number of regressors to use based...

  16. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
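
    The linearization at the heart of the approach: requiring lo_i ≤ p(x_i)/q(x_i) ≤ hi_i with q(x_i) > 0 is equivalent to the linear constraints lo_i·q(x_i) ≤ p(x_i) ≤ hi_i·q(x_i). The sketch below finds some rational function intersecting every interval via a linear-programming feasibility problem; the paper's strictly convex quadratic objective, which makes the answer unique, is omitted, and the degrees and positivity margin are arbitrary choices:

```python
import numpy as np
from scipy.optimize import linprog

# Samples of 1/(1+x) with an uncertainty interval around each observation.
x = np.linspace(0.0, 1.0, 11)
lo = 1.0 / (1.0 + x) - 0.05
hi = 1.0 / (1.0 + x) + 0.05

# Seek r(x) = p0 / (1 + b1*x) with lo_i <= r(x_i) <= hi_i for all i.
# Multiplying through by q(x_i) = 1 + b1*x_i > 0 gives constraints that are
# linear in the unknowns (p0, b1), so a feasible point is found by an LP
# with a zero objective.
A_ub, b_ub = [], []
for xi, li, ui in zip(x, lo, hi):
    A_ub.append([-1.0, li * xi]); b_ub.append(-li)  # p(x_i) >= li * q(x_i)
    A_ub.append([1.0, -ui * xi]); b_ub.append(ui)   # p(x_i) <= ui * q(x_i)
    A_ub.append([0.0, -xi]);      b_ub.append(0.9)  # q(x_i) >= 0.1 (positivity)
res = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
p0, b1 = res.x
r = p0 / (1.0 + b1 * x)  # intersects every uncertainty interval
```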

  17. On Nash-Equilibria of Approximation-Stable Games

    Science.gov (United States)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We show furthermore there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  18. Approximating Multivariate Normal Orthant Probabilities Using the Clark Algorithm.

    Science.gov (United States)

    1987-07-15

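
    The abstract for this record was lost in extraction (only distribution-list text survived). As a generic, hypothetical illustration of the quantity named in the title, and not the Clark algorithm itself, the bivariate normal orthant probability has the closed form 1/4 + arcsin(ρ)/(2π), which can be checked against SciPy's numerical CDF:

```python
import numpy as np
from scipy.stats import multivariate_normal

rho = 0.5
cov = [[1.0, rho], [rho, 1.0]]

# Numerical orthant probability P(X <= 0, Y <= 0) for a standard bivariate normal.
p_num = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(np.array([0.0, 0.0]))

# Closed form in two dimensions; equals exactly 1/3 at rho = 0.5.
p_exact = 0.25 + np.arcsin(rho) / (2.0 * np.pi)
```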

  19. Position and width of normal adult optic chiasm as measured in coronal MRI

    International Nuclear Information System (INIS)

    Kim, Myung Soon; Park, Jin Sook

    1994-01-01

    To evaluate the position and transverse dimension of the optic chiasm in normal Korean adults, the authors analysed 3D coronal volume images (TR/TE = 30/13, flip angle = 30°) of 136 normal adult subjects without known visual abnormality. All MRI examinations were performed using a 0.5T system. MRI was reviewed retrospectively to determine the position (horizontal or tilted) of the optic chiasm, and the transverse dimension of the optic chiasm was measured. Seventy-five (55%) of the 136 normal subjects had a horizontal position, and sixty-one (45%) had a tilted position. Thirty-eight (62%) of the 61 with a tilted position were higher on the right side, and twenty-three (38%) were higher on the left side. The average transverse dimension (mean ± SD) was 15.2 ± 0.7 mm in men and 14.6 ± 1.0 mm in women; the difference between men and women was statistically significant. A tilted position of the adult optic chiasm on coronal MRI was seen in approximately half of normal adults, and the average transverse dimension of the normal optic chiasm was 15 mm.

  20. Multi-source waveform inversion of marine streamer data using the normalized wavefield

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    Even though the encoded multi-source approach dramatically reduces the computational cost of waveform inversion, it is generally not applicable to marine streamer data. This is because the simultaneous-sources modeled data cannot be muted to comply with the configuration of the marine streamer data, which causes differences in the number of stacked-traces, or energy levels, between the modeled and observed data. Since the conventional L2 norm does not account for the difference in energy levels, multi-source inversion based on the conventional L2 norm does not work for marine streamer data. In this study, we propose the L2, approximated L2, and L1 norm using the normalized wavefields for the multi-source waveform inversion of marine streamer data. Since the normalized wavefields mitigate the different energy levels between the observed and modeled wavefields, the multi-source waveform inversion using the normalized wavefields can be applied to marine streamer data. We obtain the gradient of the objective functions using the back-propagation algorithm. To conclude, the gradient of the L2 norm using the normalized wavefields is exactly the same as that of the global correlation norm. In the numerical examples, the new objective functions using the normalized wavefields generate successful results whereas conventional L2 norm does not.
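
    The equivalence stated in the conclusion follows from the identity (1/2)·‖d̂ − û‖² = 1 − ⟨d,u⟩/(‖d‖·‖u‖) for unit-normalized d̂ and û. A small numeric check on synthetic traces (not the seismic inversion itself) also shows why the normalized misfit is insensitive to the energy-level difference the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(3)
d = rng.standard_normal(500)                  # synthetic "observed" trace
u = 3.7 * d + 0.1 * rng.standard_normal(500)  # "modeled" trace, different energy level

def normalized_l2(a, b):
    # L2 misfit of the unit-normalized wavefields.
    an, bn = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return 0.5 * np.sum((an - bn) ** 2)

# Global-correlation-style misfit: 1 minus the normalized inner product.
corr_misfit = 1.0 - np.dot(d, u) / (np.linalg.norm(d) * np.linalg.norm(u))
```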

  1. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.

  2. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  3. Local density approximations for relativistic exchange energies

    International Nuclear Information System (INIS)

    MacDonald, A.H.

    1986-01-01

    The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented.

  4. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well-known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕᵀMϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but are actually intrinsic properties of the pair of matrices K, M; that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but it has in turn interesting theoretical implications.

  5. The organization of human epidermis: functional epidermal units and phi proportionality.

    Science.gov (United States)

    Hoath, Steven B; Leahy, D G

    2003-12-01

    The concept that mammalian epidermis is structurally organized into functional epidermal units has been proposed on the basis of stratum corneum (SC) architecture, proliferation kinetics, melanocyte:keratinocyte ratios (1:36), and, more recently, Langerhans cell:epidermal cell ratios (1:53). This article examines the concept of functional epidermal units in human skin in which the maintenance of phi (1.618034) proportionality provides a central organizing principle. The following empirical measurements were used: 75,346 nucleated epidermal cells per mm², 1394 Langerhans cells per mm², 1999 melanocytes per mm², 16 SC layers, 900-µm² corneocyte surface area, 17,778 corneocytes per mm², 14-d SC turnover time, and 93,124 total epidermal cells per mm². Given these empirical data: (1) the number of corneocytes is a mean proportional between the sum of the Langerhans cell + melanocyte populations and the number of epidermal cells, 3393/17,778 ≈ 17,778/93,124; (2) the ratio of nucleated epidermal cells over corneocytes is phi proportional, 75,346/17,778 ≈ phi³; (3) assuming similar 14-d turnover times for the SC and Malpighian epidermis, the number of corneocytes results from subtraction of a cellular fraction equal to approximately 2/phi² × the number of living cells, 75,346 − (2/phi² × 75,346) ≈ 17,778; and (4) if total epidermal turnover time equals SC turnover time × the ratio of living/dead cells, then compartmental turnover times are unequal (14 d for the SC versus 45.3 d for the nucleated epidermis, a ratio of approximately 1/(2 phi)) and cellular replacement rates are 52.9 corneocytes/69.3 keratinocytes per mm² per h ≈ 2/phi². These empirically derived equivalences provide logicomathematical support for the presence of functional epidermal units in human skin. Validation of a phi-proportional unit architecture in human epidermis will be important for tissue engineering of skin and the design of instruments for skin measurement.
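    The arithmetic behind the reported equivalences can be checked directly from the abstract's empirical counts. The sketch below (variable names are ours) reproduces relations (1)-(3) to within the stated "approximately":

```python
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio, ~1.618034

nucleated = 75_346                    # nucleated epidermal cells per mm^2
corneocytes = 17_778                  # corneocytes per mm^2
total = 93_124                        # total epidermal cells per mm^2
lang_mel = 1_394 + 1_999              # Langerhans cells + melanocytes per mm^2

# (1) corneocytes as mean proportional: 3393/17,778 ~= 17,778/93,124
print(lang_mel / corneocytes, corneocytes / total)

# (2) nucleated/corneocytes ~= phi^3
print(nucleated / corneocytes, phi ** 3)

# (3) nucleated - (2/phi^2) * nucleated ~= corneocytes
print(nucleated * (1 - 2 / phi ** 2))
```

    The relations hold only approximately (to two or three significant figures), which is the level of agreement the article itself claims.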

  6. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    Science.gov (United States)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by another log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity offers better resistance to the channel fading caused by spatial correlation.
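    Wilkinson's method is a two-moment match: the mean and second moment of the sum are computed exactly and a single log-normal with the same two moments is fitted. A minimal sketch, assuming jointly normal log-amplitudes with a given correlation matrix (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def wilkinson(mu, sigma, corr):
    """Fit a log-normal to the first two moments of S = sum_i exp(X_i),
    where X ~ N(mu, diag(sigma) corr diag(sigma)) (Wilkinson's method)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    u1 = np.exp(mu + sigma ** 2 / 2).sum()            # E[S]
    cov = corr * np.outer(sigma, sigma)               # Cov(X_i, X_j)
    # E[S^2] = sum_{i,j} E[exp(X_i + X_j)], each term a log-normal moment
    u2 = np.exp(np.add.outer(mu, mu)
                + (np.add.outer(sigma ** 2, sigma ** 2) + 2 * cov) / 2).sum()
    s2 = np.log(u2 / u1 ** 2)                         # matched sigma_Z^2
    return np.log(u1) - s2 / 2, np.sqrt(s2)          # (mu_Z, sigma_Z)

# two sub-channels with correlation coefficient 0.3 (illustrative values)
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
mu_z, s_z = wilkinson([0.0, 0.0], [0.5, 0.5], corr)
print(mu_z, s_z)
```

    By construction the fitted log-normal reproduces E[S] and E[S²] exactly; the approximation lies in assuming the sum itself is log-normally distributed.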

  7. SEE rate estimation based on diffusion approximation of charge collection

    Science.gov (United States)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  8. Skin flora: Differences between people affected by Albinism and those with normally pigmented skin in Northern Tanzania - cross sectional study.

    Science.gov (United States)

    Kiprono, Samson K; Masenga, John E; Chaula, Baraka M; Naafs, Bernard

    2012-07-30

    Skin flora varies from one site of the body to another; an individual's health, age and gender determine the type and density of skin flora. A 1 cm² area of skin on the sternum was rubbed with a sterile cotton swab soaked in 0.9% normal saline and plated on blood agar, which was cultured at 35 °C. The bacteria were identified by culturing on MacConkey agar, coagulase test, catalase test and Gram staining. Swabs were obtained from 66 individuals affected by albinism and 31 individuals with normal skin pigmentation; those with normal skin were either relatives of, or living with, the individuals affected by albinism who were recruited for the study. The mean age of the 97 recruited individuals was 30.6 (SD ± 14.9) years. The mean colony count was 1580.5 colony forming units (CFU) per cm². Those affected by albinism had a significantly higher mean colony count (1680 CFU per cm²) compared with 453.5 CFU per cm² in those with normally pigmented skin (p = 0.023). Skin type and the severity of sun-damaged skin were significantly associated with a higher number of colony forming units (p = 0.038). Individuals affected by albinism have a higher number of colony forming units, which is associated with sun-damaged skin.

  9. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with cases where the approximation numbers decay rapidly. An estimate relating entropy and approximation numbers for noncompact maps is also given.

  10. Loss of residual heat removal system: Diablo Canyon, Unit 2, April 10, 1987

    International Nuclear Information System (INIS)

    1987-06-01

    This report presents the findings of an NRC Augmented Inspection Team (AIT) investigation into the circumstances associated with the loss of residual heat removal (RHR) system capability for a period of approximately one and one-half hours at the Diablo Canyon, Unit 2 reactor facility on April 10, 1987. This event occurred while the Diablo Canyon, Unit 2, a pressurized water reactor, was shut down with the reactor coolant system (RCS) water level drained to approximately mid-level of the hot leg piping. The reactor containment building equipment hatch was removed at the time of the event, and plant personnel were in the process of removing the primary side manways to gain access into the steam generator channel head areas. Thus, two fission product barriers were breached throughout the event. The RCS temperature increased from approximately 87°F to bulk boiling conditions without RCS temperature indication available to the plant operators. The RCS was subsequently pressurized to approximately 7 to 10 psig. The NRC AIT members concluded that the Diablo Canyon, Unit 2 plant was, at the time of the event, in a condition not previously analyzed by the NRC staff. The AIT findings from this event appear significant and generic to other pressurized water reactor facilities licensed by the NRC.

  11. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...

  12. Approximating centrality in evolving graphs: toward sublinearity

    Science.gov (United States)

    Priest, Benjamin W.; Cybenko, George

    2017-05-01

    The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.

  13. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
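    The abstract notes that with only summarized data (mean ± standard deviation) a log-normal BMD can only be approximated. A standard method-of-moments step, which is a common first move in this situation but not necessarily the exact procedure of this study, converts the reported arithmetic moments to the log-scale parameters of a log-normal:

```python
import math

def lognormal_params_from_summary(m, s):
    """Method-of-moments conversion: given arithmetic mean m and SD s,
    return the (mu, sigma) of the log-normal with those moments."""
    sigma2 = math.log(1 + (s / m) ** 2)
    mu = math.log(m) - sigma2 / 2
    return mu, math.sqrt(sigma2)

# illustrative dose-group summary, e.g. body weight in grams (made-up numbers)
mu, sigma = lognormal_params_from_summary(250.0, 25.0)
print(mu, sigma)

# round-trip: the implied arithmetic mean/SD recover the inputs exactly
mean_back = math.exp(mu + sigma ** 2 / 2)
sd_back = mean_back * math.sqrt(math.exp(sigma ** 2) - 1)
print(mean_back, sd_back)
```

    The round-trip is exact by construction; the approximation error the abstract refers to enters later, when the fitted log-normal stands in for the unavailable individual-animal responses.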

  14. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more

  15. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators, and IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations that produce the same operators; IVF rough approximation operators are then characterized by axioms.

  16. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2016-08-26

    In this paper, we propose a definition of an approximation property, called the metric invariant translation approximation property, for a countable discrete metric space. Moreover, we use ...

  17. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.

    2008-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  18. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.; Holub, J.; Zdárek, J.

    2006-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  19. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    The approximation of a function f ∈ C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz' result to the case in which the approximating function is α e^(−ux) ∑_{k=N}^{∞} (ux)^(kα+β−1)/Γ(kα+β) · f(kα/u). The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.

  20. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  1. 'LTE-diffusion approximation' for arc calculations

    International Nuclear Information System (INIS)

    Lowke, J J; Tanaka, M

    2006-01-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations which include diffusion of charges agree with experimental results for current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  2. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  3. Normal modes of Bardeen discs

    International Nuclear Information System (INIS)

    Verdaguer, E.

    1983-01-01

    The short wavelength normal modes of self-gravitating rotating polytropic discs in the Bardeen approximation are studied. The discs' oscillations can be seen in terms of two types of modes: the p-modes whose driving forces are pressure forces and the r-modes driven by Coriolis forces. As a consequence of differential rotation coupling between the two takes place and some mixed modes appear, their properties can be studied under the assumption of weak coupling and it is seen that they avoid the crossing of the p- and r-modes. The short wavelength analysis provides a basis for the classification of the modes, which can be made by using the properties of their phase diagrams. The classification is applied to the large wavelength modes of differentially rotating discs with strong coupling and to a uniformly rotating sequence with no coupling, which have been calculated in previous papers. Many of the physical properties and qualitative features of these modes are revealed by the analysis. (author)

  4. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...

  5. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are

  6. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Background: Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results: It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, and not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion: The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
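    The "very easily implementable algorithm" reads naturally as a greedy set cover over haplotype pairs, which achieves the 1 + ln of set-cover theory. A sketch under that reading (helper names are ours, not the paper's):

```python
from itertools import combinations

def greedy_tagging(haplotypes):
    """Greedy SNP selection: repeatedly pick the SNP (column) that
    distinguishes the most still-unresolved haplotype pairs, until every
    pair differs at some chosen SNP.  This is the classical greedy
    set-cover heuristic, matching the ~1 + ln((n^2 - n)/2) guarantee."""
    n, m = len(haplotypes), len(haplotypes[0])
    # pairs to separate; identical rows can never be distinguished, skip them
    uncovered = {(i, j) for i, j in combinations(range(n), 2)
                 if haplotypes[i] != haplotypes[j]}
    chosen = []
    while uncovered:
        best = max(range(m), key=lambda c: sum(
            haplotypes[i][c] != haplotypes[j][c] for i, j in uncovered))
        uncovered -= {(i, j) for i, j in uncovered
                      if haplotypes[i][best] != haplotypes[j][best]}
        chosen.append(best)
    return chosen

haps = ["0011", "0101", "1001", "1110"]
tags = greedy_tagging(haps)
print(tags)  # a small set of column indices that distinguishes all pairs
```

    Each iteration scans all m columns against the at most (n² − n)/2 uncovered pairs, which is where the O(m(n² − n)/2)-type running time in the abstract comes from.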

  7. Theory of normal and superconducting properties of fullerene-based solids

    International Nuclear Information System (INIS)

    Cohen, M.L.

    1992-10-01

    Recent experiments on the normal-state and superconducting properties of fullerene-based solids are used to constrain the proposed theories of the electronic nature of these materials. In general, models of superconductivity based on electron pairing induced by phonons are consistent with electronic band theory. These experiments also yield estimates of the parameters characterizing these type II superconductors. It is argued that, at this point, a "standard model" of phonons interacting with itinerant electrons may be a good first approximation for explaining the properties of the metallic fullerenes.

  8. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The purpose of this work is the determination of an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method, some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; with these approximations, and taking into account the case of the narrow resonances, substitution into the adjoint neutron balance equation for the fuel yields an analytical approximation for the adjoint flux. The results obtained in this work were compared to those generated with the reference method, demonstrating that the analytical approximation reproduces the adjoint neutron flux accurately for the narrow resonances. (author)

  9. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  10. A comparison of various "housekeeping" probes for northern analysis of normal and osteoarthritic articular cartilage RNA.

    Science.gov (United States)

    Matyas, J R; Huang, D; Adams, M E

    1999-01-01

    Several approaches are commonly used to normalize variations in RNA loading on Northern blots, including: ethidium bromide (EthBr) fluorescence of 18S or 28S rRNA or autoradiograms of radioactive probes hybridized with constitutively expressed RNAs such as elongation factor-1alpha (ELF), glyceraldehyde-3-phosphate dehydrogenase (G3PDH), actin, 18S or 28S rRNA, or others. However, in osteoarthritis (OA) the amount of total RNA changes significantly and none of these RNAs has been clearly demonstrated to be expressed at a constant level, so it is unclear if any of these approaches can be used reliably for normalizing RNA extracted from osteoarthritic cartilage. Total RNA was extracted from normal and osteoarthritic cartilage and assessed by EthBr fluorescence. RNA was then transferred to a nylon membrane hybridized with radioactive probes for ELF, G3PDH, Max, actin, and an oligo-dT probe. The autoradiographic signal across the six lanes of a gel was quantified by scanning densitometry. When compared on the basis of total RNA, the coefficient of variation was lowest for 28S ethidium bromide fluorescence and oligo-dT (approximately 7%), followed by 18S ethidium bromide fluorescence and G3PDH (approximately 13%). When these values were normalized to DNA concentration, the coefficient of variation exceeded 50% for all signals. Total RNA and the signals for 18S, 28S rRNA, and oligo-dT all correlated highly. These data indicate that osteoarthritic chondrocytes express similar ratios of mRNA to rRNA and mRNA to total RNA as do normal chondrocytes. Of all the "housekeeping" probes, G3PDH correlated best with the measurements of RNA. All of these "housekeeping" probes are expressed at greater levels by osteoarthritic chondrocytes when compared with normal chondrocytes. Thus, while G3PDH is satisfactory for evaluating the amount of RNA loaded, its level of expression is not the same in normal and osteoarthritic chondrocytes.

  11. Neuro-fuzzy modelling of hydro unit efficiency

    International Nuclear Information System (INIS)

    Iliev, Atanas; Fushtikj, Vangel

    2003-01-01

    This paper presents a neuro-fuzzy method for modeling hydro unit efficiency. The proposed method uses the characteristics of fuzzy systems as universal function approximators, as well as the ability of neural networks to adapt the parameters of the membership functions and of the rules in the consequent part of the developed fuzzy system. The developed method is applied in practice to model the efficiency of a unit to be installed in the Kozjak hydro power plant. The performance of the derived neuro-fuzzy method is also compared with several classical polynomial models. (Author)

  12. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-01-01

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the famous Kalman update formula is a particular case of this update.
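    The relationship to the Kalman update can already be seen in the scalar linear-Gaussian case, where the Bayesian update has a closed form. A minimal sketch, not the paper's polynomial-chaos construction (all names here are illustrative):

```python
# Scalar Kalman update: posterior of x ~ N(m, s2) after observing
# y = x + e with e ~ N(0, r2). This is the linear-Gaussian special
# case of a Bayesian update; names are ours, not the paper's.

def kalman_update(m, s2, y, r2):
    """Return the posterior mean and variance of a scalar Gaussian state."""
    k = s2 / (s2 + r2)          # Kalman gain
    m_post = m + k * (y - m)    # posterior mean
    s2_post = (1.0 - k) * s2    # posterior variance
    return m_post, s2_post

# Prior N(0, 4), observation y = 2 with noise variance 1
m_post, s2_post = kalman_update(m=0.0, s2=4.0, y=2.0, r2=1.0)
```

Here the gain is k = 4/5, so the posterior mean moves 80% of the way toward the observation.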

  13. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…

  14. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the famous Kalman update formula is a particular case of this update.

  15. Beaver Valley Unit 1, United States of America, 2007. Annex III. Description of Selected Open Phase Events

    International Nuclear Information System (INIS)

    2016-01-01

    On 27 Nov. 2007, during a non-routine walkdown of the off-site switchyard to investigate line voltage differences, the licensee discovered that the Phase A conductor of a 138 kV off-site power circuit of the Beaver Valley Power Station Unit 1 had broken off in the switchyard. This break occurred between the off-site feeder breaker and the line running on-site to the A train system station service transformer, located inside the site security fence. The terminal broke on the switchyard side of a revenue-metering current transformer/voltage transformer installed in 2006 to track the station’s power usage through this line. During normal power operation, no appreciable current goes through this 138 kV line because the unit generator normally powers the station buses (loads). The licensee determined that the break on the 138 kV Phase A had occurred 26 days earlier and, therefore, had not been restored within 72 h as required by technical specifications.

  16. Forward coronary flow normally seen in systole is the result of both forward and concealed back flow

    NARCIS (Netherlands)

    Spaan, J. A.; Breuls, N. P.; Laird, J. D.

    1981-01-01

    Normally systolic coronary blood flow is almost entirely forward. As perfusion pressure was lowered through the autoregulatory range in open-chest dogs, net systolic back flow appeared at approximately 70 mm Hg. Imposing a series resistance (Rs), which impedes both forward and back flow, abolished

  17. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution models the stochastic nature of the photon arrival process, while readout/thermal noise is commonly approximated by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Although additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models for approximating the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN, and we suggest a further modification to the Poisson model that may improve it.
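    The two candidate models can be contrasted directly in simulation: both match the signal-dependent mean and variance, but Poisson samples are integer-valued and skewed while SD-AWGN samples are continuous and symmetric. A minimal sketch (the signal level and sample count are illustrative, not from the paper):

```python
import math
import random

random.seed(0)

def poisson_sample(lam):
    """Knuth's multiplicative method; adequate for modest lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

lam, n = 20.0, 50000
poisson = [poisson_sample(lam) for _ in range(n)]
# SD-AWGN counterpart: Gaussian with the same signal-dependent variance
sd_awgn = [random.gauss(lam, math.sqrt(lam)) for _ in range(n)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

p_mean, p_var = mean_var(poisson)   # both near lam
g_mean, g_var = mean_var(sd_awgn)   # both near lam
```

Both models reproduce mean ≈ variance ≈ 20 here; the differences the paper exploits lie in higher-order structure (discreteness, skew), not in these first two moments.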

  18. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  19. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions that matches the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
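    For context, the exact GI/M/1 characteristics that such approximations are benchmarked against follow from the root sigma of sigma = A*(mu(1 - sigma)), where A* is the Laplace-Stieltjes transform of the interarrival distribution and mu is the service rate. A minimal sketch of that classical computation (not the paper's proposed methods):

```python
def gim1_sigma(lst, mu, tol=1e-12, max_iter=10000):
    """Solve sigma = A*(mu * (1 - sigma)) by fixed-point iteration, where
    lst(s) is the Laplace-Stieltjes transform of the interarrival time."""
    sigma = 0.5
    for _ in range(max_iter):
        new = lst(mu * (1.0 - sigma))
        if abs(new - sigma) < tol:
            return new
        sigma = new
    return sigma

# Sanity check with Poisson arrivals (M/M/1): sigma must reduce to rho.
lam, mu = 0.5, 1.0
sigma = gim1_sigma(lambda s: lam / (lam + s), mu)   # exponential LST
mean_wait = sigma / (mu * (1.0 - sigma))            # mean wait in queue
```

With lam = 0.5 and mu = 1 this recovers sigma = rho = 0.5 and the M/M/1 mean queueing delay of 1.0, confirming the fixed point.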

  20. The evolution of voids in the adhesion approximation

    Science.gov (United States)

    Sahni, Varun; Sathyaprakah, B. S.; Shandarin, Sergei F.

    1994-08-01

    We apply the adhesion approximation to study the formation and evolution of voids in the universe. Our simulations, carried out using 128^3 particles in a cubical box with side 128 Mpc, indicate that the void spectrum evolves with time and that the mean void size in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) model with h50 = 1 scales approximately as D̄(z) = D̄₀/(1+z)^(1/2), where D̄₀ ≈ 10.5 Mpc. Interestingly, we find a strong correlation between the sizes of voids and the value of the primordial gravitational potential at void centers. This observation could, in principle, pave the way toward reconstructing the form of the primordial potential from a knowledge of the observed void spectrum. Studying the void spectrum at different cosmological epochs, for spectra with a built-in k-space cutoff, we find that the number of voids in a representative volume evolves with time. The mean number of voids first increases until a maximum value is reached (indicating that the formation of cellular structure is complete), and then begins to decrease as clumps and filaments merge, leading to hierarchical clustering and the subsequent elimination of small voids. The cosmological epoch characterizing the completion of cellular structure occurs when the length scale going nonlinear approaches the mean distance between peaks of the gravitational potential. A central result of this paper is that voids can be populated by substructure such as mini-sheets and filaments, which run through voids. The number of such mini-pancakes that pass through a given void can be measured by the genus characteristic of an individual void, which is an indicator of the topology of a given void in initial (Lagrangian) space. Large voids have on average a larger genus measure than smaller voids, indicating more substructure within larger voids relative to smaller ones. We find that the topology of individual voids is strongly epoch dependent.

  1. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
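    The core construction can be sketched with a Cholesky factor of the correlation matrix: correlated standard normals are built from independent ones, and correlated log-normals follow by exponentiating. This is a generic illustration of the idea, not the paper's algorithm; all names and parameter values are ours:

```python
import math
import random

random.seed(1)

def correlated_normal_pair(mu1, mu2, s1, s2, rho):
    """One draw from a bivariate normal, via the 2x2 Cholesky factor of the
    correlation matrix. math.exp of the outputs gives correlated log-normals."""
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x1, x2

rho_target = 0.8
pairs = [correlated_normal_pair(0.0, 0.0, 1.0, 1.0, rho_target)
         for _ in range(100000)]

# Empirical correlation should approach the target
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
vx = sum((x - mx) ** 2 for x, _ in pairs) / len(pairs)
vy = sum((y - my) ** 2 for _, y in pairs) / len(pairs)
rho_hat = cov / math.sqrt(vx * vy)
```

The same factorization generalizes to any number of variables by Cholesky-decomposing the full correlation matrix.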

  2. Normalized Paper Credit Assignment: A Solution for the Ethical Dilemma Induced by Multiple Important Authors.

    Science.gov (United States)

    Fang, Hui

    2017-09-21

    With the growth of research collaboration, the average number of authors per article and the incidence of equally important authorships have increased. Equally important authorships reflect the approximately equal importance of the authors, arising both from the difficulty of comparing authors' contributions to a paper and from research evaluation practices that (approximately) assign full paper credit only to the most important authors. A mechanism for indicating that various authors contributed equally is required to maintain and strengthen collaboration. However, the phenomenon of multiple important authors can make comparisons of the research contributions and abilities of authors of different papers unfair, and this loophole may be exploited. Normalizing the credit assigned to a given paper's authors is a simple way to resolve this ethical dilemma. This approach enables fair comparisons of the contributions of authors of different articles and suppresses unethical behaviour in author listings. Bibliometric researchers have proposed mature methods of normalized paper credit assignment that would be easy to use given the current level of computer adoption.
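    Two common normalized credit schemes from the bibliometric literature, fractional counting (equal split) and harmonic counting (rank-weighted), can be sketched as follows. The record does not prescribe a specific formula, so these are illustrative choices; in both, each paper distributes exactly one unit of credit among its authors:

```python
def fractional_credit(n_authors):
    """Equal split: each author gets 1/n, so credits sum to one paper."""
    return [1.0 / n_authors] * n_authors

def harmonic_credit(n_authors):
    """Rank-weighted split (harmonic counting): author at rank i gets
    (1/i) / H_n, where H_n is the n-th harmonic number."""
    h = sum(1.0 / i for i in range(1, n_authors + 1))
    return [(1.0 / i) / h for i in range(1, n_authors + 1)]

equal = fractional_credit(4)     # [0.25, 0.25, 0.25, 0.25]
ranked = harmonic_credit(4)      # first author gets the largest share
```

Fractional counting treats all authors as equally important; harmonic counting preserves an ordered byline while still normalizing total paper credit to one.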

  3. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  4. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  5. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
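    The "only numerical optimization and simple algebra" route can be illustrated with a generic Laplace-style approximation: locate the posterior mode by Newton iteration on the log-posterior, then take the negative inverse curvature at the mode as the approximate variance. This is a sketch of the general technique under our own toy model, not the authors' specific formulas:

```python
import math

def laplace_approx(logpost, x0, h=1e-4, iters=100):
    """Newton search for the posterior mode using finite-difference
    derivatives, then N(mode, -1/l''(mode)) as the approximate posterior."""
    x = x0
    for _ in range(iters):
        d1 = (logpost(x + h) - logpost(x - h)) / (2.0 * h)
        d2 = (logpost(x + h) - 2.0 * logpost(x) + logpost(x - h)) / (h * h)
        step = d1 / d2
        x -= step
        if abs(step) < 1e-12:
            break
    d2 = (logpost(x + h) - 2.0 * logpost(x) + logpost(x - h)) / (h * h)
    return x, math.sqrt(-1.0 / d2)

# Conjugate check: y_i ~ N(theta, 1) with prior theta ~ N(0, 1); the exact
# posterior is N(sum(y)/(n+1), 1/(n+1)), which Laplace should reproduce
# since the log-posterior is quadratic.
data = [1.0, 2.0, 3.0]
logpost = lambda t: -0.5 * t * t - 0.5 * sum((y - t) ** 2 for y in data)
mode, sd = laplace_approx(logpost, x0=0.0)   # expect mode 1.5, sd 0.5
```

For this quadratic log-posterior the approximation is exact; for mildly non-Gaussian posteriors it remains a cheap and often adequate substitute for MCMC.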

  6. Dosimetry of normal and wedge fields for a cobalt-60 teletherapy unit

    International Nuclear Information System (INIS)

    Tripathi, U.B.; Kelkar, N.Y.

    1980-01-01

    A simple analytical method for computing dose distributions for normal and wedge fields is described, and the use of the method in planning radiation treatment is outlined. Formulae have been given to compute: (1) the depth dose along the central axis of a cobalt-60 beam, (2) the dose to off-axis points, and (3) the dose distribution for a wedge field. Good agreement has been found between theoretical and experimental values. With the help of these formulae, the dose at any point can be easily and accurately calculated, and radiotherapy can be planned for tumours of very odd shapes and sizes. The limitation of the method is that the formulae have been derived for 50% field definition. For a cobalt-60 machine having any other field definition, appropriate correction factors have to be applied. (M.G.B.)

  7. Improved radiative corrections for (e,e'p) experiments: Beyond the peaking approximation and implications of the soft-photon approximation

    International Nuclear Information System (INIS)

    Weissbach, F.; Hencken, K.; Rohe, D.; Sick, I.; Trautmann, D.

    2006-01-01

    Analyzing (e,e'p) experimental data involves corrections for radiative effects which change the interaction kinematics and which have to be carefully considered in order to obtain the desired accuracy. Missing momentum and energy due to bremsstrahlung have so far often been incorporated into the simulations and the experimental analyses using the peaking approximation. It assumes that all bremsstrahlung is emitted in the direction of the radiating particle. In this article we introduce a full angular Monte Carlo simulation method which overcomes this approximation. As a test, the angular distribution of the bremsstrahlung photons is reconstructed from H(e,e'p) data. Its width is found to be underestimated by the peaking approximation and described much better by the approach developed in this work. The impact of the soft-photon approximation on the photon angular distribution is found to be minor as compared to the impact of the peaking approximation. (orig.)

  8. Solutions to the linearized Navier-Stokes equations for channel flow via the WKB approximation

    Science.gov (United States)

    Leonard, Anthony

    2017-11-01

    Progress on determining semi-analytical solutions to the linearized Navier-Stokes equations for incompressible channel flow, laminar and turbulent, is reported. Use of the WKB approximation yields, e.g., solutions to the initial-value problem for the inviscid Orr-Sommerfeld equation in terms of the Bessel functions J_{1/3}, J_{-1/3}, J_1, and Y_1 and their modified counterparts, for any given wave speed c = ω/kx and k⊥ (k⊥² = kx² + kz²). Of particular note, to be discussed, is a sequence i = 1, 2, ... of homogeneous inviscid solutions with complex k⊥i for each speed c (0 < c ≤ Umax) in the downstream direction. These solutions for the velocity component normal to the wall, v, are localized in the plane parallel to the wall. In addition, for a limited range of negative c (-c* ≤ c ≤ 0), we have found upstream-traveling homogeneous solutions with real k⊥(c). In both cases the solutions for v serve as a source for corresponding solutions to the inviscid Squire equation for the vorticity component normal to the wall, ωy.

  9. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  10. Design parameters for toroidal and bobbin magnetics. [conversion from English to metric units

    Science.gov (United States)

    Mclyman, W. T.

    1974-01-01

    The adoption by NASA of the metric system for dimensioning to replace long-used English units imposes a requirement on the U.S. transformer designer to convert from the familiar units to the less familiar metric equivalents. Material is presented to assist in that transition in the field of transformer design and fabrication. The conversion data makes it possible for the designer to obtain a fast and close approximation of significant parameters such as size, weight, and temperature rise. Nomographs are included to provide a close approximation for breadboarding purposes. For greater convenience, derivations of some of the parameters are also presented.

  11. Mochovce Unit 3 and 4 Completion

    International Nuclear Information System (INIS)

    Aquilanti, G.

    2007-01-01

    The purpose of the feasibility study was to define in detail all technical, economic, financial, legal and authorization aspects of the completion of Mochovce NPP Units 3 and 4, in order to provide the top management of Slovenske Elektrarne, a. s. (SE) and ENEL with all the information necessary for a final decision on Mochovce Units 3 and 4. The feasibility study started in January 2006. SE was committed to completing the feasibility study within 12 months of the closing of the SE acquisition (April 2007). In order not to delay completion of Mochovce Units 3 and 4, SE decided to perform, in parallel with the feasibility study, all design and permitting activities required for the completion of the plant. This involved anticipating expenses of approximately 700 MSKK (approx. 20 MEuro). SE was able to announce the positive decision on completion of Mochovce NPP Units 3 and 4 two months ahead of the deadline.

  12. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

    We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data

  13. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with the help of a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
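    The idea of "selecting a small set of representative" rows and columns can be illustrated in its simplest, rank-1 form by a skeleton (cross) approximation. This generic sketch is not the authors' wave-extrapolation algorithm; the test matrix and names are ours:

```python
def skeleton_rank1(A):
    """Skeleton/cross approximation: pick the entry of largest magnitude as
    pivot (i0, j0) and approximate A by (column j0) * (row i0) / pivot.
    Exact when rank(A) = 1; the rank-k analogue underlies lowrank
    approximations of large propagator matrices."""
    n, m = len(A), len(A[0])
    i0, j0 = max(((i, j) for i in range(n) for j in range(m)),
                 key=lambda ij: abs(A[ij[0]][ij[1]]))
    piv = A[i0][j0]
    return [[A[i][j0] * A[i0][j] / piv for j in range(m)] for i in range(n)]

# Rank-1 test matrix built as an outer product u v^T
u, v = [1.0, 2.0, 3.0], [4.0, 5.0]
A = [[ui * vj for vj in v] for ui in u]
B = skeleton_rank1(A)
err = max(abs(A[i][j] - B[i][j]) for i in range(3) for j in range(2))
```

The payoff is storage and cost: a rank-k skeleton keeps k columns and k rows instead of the full matrix, which is the economy the abstract describes for the space-wavenumber propagator.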

  14. Do brain image databanks support understanding of normal ageing brain structure? A systematic review

    International Nuclear Information System (INIS)

    Dickie, David Alexander; Job, Dominic E.; Wardlaw, Joanna M.; Poole, Ian; Ahearn, Trevor S.; Staff, Roger T.; Murray, Alison D.

    2012-01-01

    To document accessible magnetic resonance (MR) brain images, metadata and statistical results from normal older subjects that may be used to improve diagnoses of dementia. We systematically reviewed published brain image databanks (print literature and Internet) concerned with normal ageing brain structure. From nine eligible databanks, there appeared to be 944 normal subjects aged ≥60 years. However, many subjects were in more than one databank and not all were fully representative of normal ageing clinical characteristics. Therefore, there were approximately 343 subjects aged ≥60 years with metadata representative of normal ageing, but only 98 subjects were openly accessible. No databank had the range of MR image sequences, e.g. T2*, fluid-attenuated inversion recovery (FLAIR), required to effectively characterise the features of brain ageing. No databank supported random subject retrieval; therefore, manual selection bias and errors may occur in studies that use these subjects as controls. Finally, no databank stored results from statistical analyses of its brain image and metadata that may be validated with analyses of further data. Brain image databanks require open access, more subjects, metadata, MR image sequences, searchability and statistical results to improve understanding of normal ageing brain structure and diagnoses of dementia. (orig.)

  15. Effect of glycation of albumin on its renal clearance in normal and diabetic rats

    International Nuclear Information System (INIS)

    Layton, G.J.; Jerums, G.

    1988-01-01

    Two independent techniques have been used to study the renal clearances of nonenzymatically glycated albumin and nonglycated albumin in normal and streptozotocin-induced diabetic rats, 16 to 24 weeks after the onset of diabetes. In the first technique, serum and urinary endogenous glycated and nonglycated albumin were separated using m-aminophenylboronate affinity chromatography and subsequently quantified by radioimmunoassay. Endogenous glycated albumin was cleared approximately twofold faster than nonglycated albumin in normal and diabetic rats. However, no difference was observed in the glycated albumin/nonglycated albumin clearance ratios (Cga/Calb) in normal and diabetic rats, respectively (2.18 +/- 0.39 vs 1.83 +/- 0.22, P greater than 0.05). The second technique measured the renal clearance of injected 125I-labelled glycated albumin and 125I-labelled albumin. The endogenous results were supported by the finding that 125I-labelled glycated albumin was cleared more rapidly than 125I-labelled albumin in normal (P less than 0.01) and diabetic (P less than 0.05) rats. The Cga/Calb ratio calculated for the radiolabelled albumins was 1.4 and 2.0 in normal and diabetic rats, respectively. This evidence suggests that nonenzymatic glycation of albumin increases its renal clearance to a similar degree in normal and diabetic rats

  16. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    Science.gov (United States)

    Van Dyke, Michael B.

    2013-01-01

    Presents preliminary work using lumped parameter models to approximate the dynamic response of electronic units to random vibration; derives a general N-DOF model for application to electronic units; illustrates the parametric influence of model parameters; discusses the implications of coupled dynamics for unit/board design; and demonstrates use of the model to infer printed wiring board (PWB) dynamics from external chassis test measurements.
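    As a minimal lumped parameter illustration, a two-mass chain (chassis on its mounts, board on its standoffs) has closed-form undamped natural frequencies. This is a generic 2-DOF sketch, not the presentation's N-DOF model; all parameter values are hypothetical:

```python
import math

def two_dof_frequencies(m1, m2, k1, k2):
    """Undamped natural frequencies (rad/s) of the chain
    ground --k1-- m1 (chassis) --k2-- m2 (board).
    Solves det(K - w^2 M) = 0, a quadratic in L = w^2:
    m1*m2*L^2 - (m1*k2 + m2*(k1 + k2))*L + k1*k2 = 0."""
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lams = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
    return [math.sqrt(l) for l in lams]

# Unit masses and stiffnesses (hypothetical): the coupled modes split
# around the uncoupled frequency sqrt(k/m) = 1 rad/s.
w = two_dof_frequencies(m1=1.0, m2=1.0, k1=1.0, k2=1.0)
```

The split between the two coupled modes is what makes chassis-level test data informative about board-level response, which is the inference step the presentation describes.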

  17. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding edges of polygons is a version of art gallery problem.The goal is finding the minimum number of guards to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms just for simple polygons. In this paper we present two approximation algorithms for guarding ...

  18. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2010 Mathematics Subject Classification. 46L07. 1. Introduction. Given a countable discrete group G, some nice approximation properties for the reduced. C∗-algebras C∗ r (G) can give us the approximation properties of G. For example, Lance. [7] proved that the nuclearity of C∗ r (G) is equivalent to the amenability of G; ...

  19. Testing of the Engineering Model Electrical Power Control Unit for the Fluids and Combustion Facility

    Science.gov (United States)

    Kimnach, Greg L.; Lebron, Ramon C.; Fox, David A.

    1999-01-01

    The John H. Glenn Research Center at Lewis Field (GRC) in Cleveland, OH, and the Sundstrand Corporation in Rockford, IL, have designed and developed an Engineering Model (EM) Electrical Power Control Unit (EPCU) for the Fluids and Combustion Facility (FCF) experiments to be flown on the International Space Station (ISS). The EPCU will be used as the power interface to the ISS power distribution system for the FCF space experiments' test and telemetry hardware. Furthermore, it is proposed to be the common power interface for all experiments. The EPCU is a three-kilowatt 120 Vdc-to-28 Vdc converter utilizing three independent Power Converter Units (PCUs), each rated at 1 kWe (36 Adc @ 28 Vdc), which are paralleled and synchronized. Each converter may be fed from one of two ISS power channels. The 28 Vdc loads are connected to the EPCU output via 48 solid-state, current-limiting switches, rated at 4 Adc each. These switches may be paralleled to supply any given load up to the 108 Adc normal operational limit of the paralleled converters. The EPCU was designed in this manner to maximize allocated-power utilization, to shed loads autonomously, to provide fault tolerance, and to provide a flexible power converter and control module to meet various ISS load demands. Tests of the EPCU in the Power Systems Facility testbed at GRC reveal that the overall converted-power efficiency is approximately 89% with a nominal input voltage of 120 Vdc and a total load in the range of 40% to 110% of the rated 28 Vdc load (the PCUs alone have an efficiency of approximately 94.5%). Furthermore, the EM unit passed all flight-qualification level (and beyond) vibration tests, met ISS EMI (conducted, radiated, and susceptibility) requirements, successfully operated for extended periods in a thermal/vacuum chamber, was integrated with a proto-flight experiment, and passed all stability and functional requirements.

  20. De novo deposition of laminin-positive basement membrane in vitro by normal hepatocytes and during hepatocarcinogenesis

    DEFF Research Database (Denmark)

    Albrechtsen, R; Wewer, U M; Thorgeirsson, S S

    1988-01-01

    De novo formation of laminin-positive basement membranes was found to be a distinct morphologic feature of diethylnitrosamine/phenobarbital-induced hepatocellular carcinomas of the rat. The first appearance of extracellularly located laminin occurred in the preneoplastic liver lesions (corresponding to neoplastic nodules), and this feature became successively more prominent during the course of hepatocellular carcinoma development. Most groups of tumor cells were surrounded by laminin-positive basement membrane material. The laminin-positive material was also deposited along the sinusoids, a location where no laminin was seen in normal rat liver. The amount of extractable laminin from hepatocellular carcinomas was significantly higher (approximately 100 ng per mg tissue) than that of normal liver tissue (less than 20 ng per mg). In vitro experiments demonstrated that normal and preneoplastic...

  1. Delayed growth in two German shepherd dog littermates with normal serum concentrations of growth hormone, thyroxine, and cortisol.

    Science.gov (United States)

    Randolph, J F; Miller, C L; Cummings, J F; Lothrop, C D

    1990-01-01

    Four German Shepherd Dogs from a litter of 10 were evaluated because of postnatal onset of proportionate growth stunting that clinically resembled well-documented hypopituitary dwarfism in that breed. Although 2 pups had histologic evidence of hypopituitarism, the remaining 2 pups had normal serum growth hormone concentration and adrenocorticotropin secretory capability, and normal adrenal function test and thyroid function study results. Furthermore, the initially stunted German Shepherd Dogs grew at a steady rate until, at 1 year of age, body weight and shoulder height approximated normal measurements. Seemingly, delayed growth in these pups may represent one end of a clinical spectrum associated with hypopituitarism in German Shepherd Dogs.

  2. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
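The deflected-gradient iteration described above can be sketched for a toy one-dimensional, two-component, unit-variance normal mixture. The data, starting values, and step size are made up for illustration; the update is the familiar EM map relaxed by a step size ω, with ω = 1 recovering the plain successive-approximations (EM) procedure.

```python
import math
import random

random.seed(0)
# Synthetic 1-D data from two unit-variance normal components with means -2 and 2.
data = [random.gauss(-2, 1) for _ in range(200)] + [random.gauss(2, 1) for _ in range(200)]

def em_step(w, mu1, mu2):
    """One application of the EM map for a two-component unit-variance mixture:
    returns updated (weight of component 1, mean 1, mean 2)."""
    def pdf(x, mu):
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)
    r = [w * pdf(x, mu1) / (w * pdf(x, mu1) + (1 - w) * pdf(x, mu2)) for x in data]
    s = sum(r)
    return (s / len(data),
            sum(ri * x for ri, x in zip(r, data)) / s,
            sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s))

# Deflected-gradient iteration: theta <- theta + omega * (EM(theta) - theta).
# The paper's result: local convergence for step sizes between 0 and 2.
w, mu1, mu2 = 0.3, -1.0, 1.0
omega = 1.5
for _ in range(100):
    w_new, m1_new, m2_new = em_step(w, mu1, mu2)
    w += omega * (w_new - w)
    mu1 += omega * (m1_new - mu1)
    mu2 += omega * (m2_new - mu2)

print(round(w, 2), round(mu1, 1), round(mu2, 1))  # estimates near the true (0.5, -2, 2)
```

For well-separated components like these, the over-relaxed iteration still converges to the maximum-likelihood estimate, illustrating the step-size freedom the paper establishes.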

  3. Approximate number word knowledge before the cardinal principle.

    Science.gov (United States)

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge (understanding the relation between the count words and the approximate magnitudes of sets) is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge: before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology.
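The core idea, replacing likelihood evaluation by simulation and comparison of summary statistics, can be shown with a minimal rejection-ABC sketch. This is a textbook illustration on synthetic data (all values made up), not code from the article.

```python
import random

random.seed(1)
# "Observed" data: 100 draws from a normal with unknown mean (truth: 3.0).
observed = [random.gauss(3.0, 1.0) for _ in range(100)]
obs_mean = sum(observed) / len(observed)

epsilon = 0.15              # tolerance on the summary statistic
accepted = []
while len(accepted) < 200:
    theta = random.uniform(-10.0, 10.0)                  # draw parameter from the prior
    sim = [random.gauss(theta, 1.0) for _ in range(50)]  # simulate data under the model
    if abs(sum(sim) / len(sim) - obs_mean) < epsilon:    # accept if summaries are close
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 1))  # close to the true mean of 3.0
```

No likelihood is ever computed: the accepted parameters approximate the posterior, and the quality of the approximation depends on the tolerance and the choice of summary statistic, exactly the assumptions the article cautions must be assessed.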

  5. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, James A.; Heinemann, Klaus [New Mexico Univ., Albuquerque, NM (United States). Dept. of Mathematics and Statistics; Vogt, Mathias [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Gooden, Matthew [North Carolina State Univ., Raleigh, NC (United States). Dept. of Physics

    2013-03-15

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wavelength λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in

  6. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    International Nuclear Information System (INIS)

    Ellison, James A.; Heinemann, Klaus; Gooden, Matthew

    2013-03-01

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wavelength λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in the

  7. Transport and magnetic resonance in normal and superfluid Fermi liquids

    International Nuclear Information System (INIS)

    Smith, H.

    1976-10-01

    This thesis provides a framework for a series of 19 papers published by the author in a study of transport and magnetic resonance in normal and superfluid Fermi liquids. The Boltzmann equation and methods for its solution are discussed. Electron-electron scattering in metals, with particular emphasis on alkali metals, is considered. Transport in a normal uncharged Fermi liquid, such as pure ³He at temperatures well below its degeneracy temperature of approximately 1 K, or mixtures of ³He in ⁴He with degeneracy temperatures ranging typically from 100 to 200 mK, is discussed with emphasis on comparison with experiments, with the aim of testing models of the particle-particle scattering amplitude. Transport and magnetic resonance in superfluid ³He is considered. The phenomenological treatment of relaxation is reviewed and the magnitude of the phenomenological relaxation time close to Tc is derived for the case of longitudinal resonance. Comments are made on non-linear magnetic resonance and textures and spin waves. (B.R.H.)

  8. Pawlak algebra and approximate structure on fuzzy lattice.

    Science.gov (United States)

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
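The classical Pawlak approximation operators that this framework generalizes can be stated compactly. The sketch below uses a crisp partition (an equivalence relation on a finite universe) rather than the paper's fuzzy-lattice setting; it is the standard rough-set construction, shown for orientation.

```python
def lower_approximation(target, partition):
    """Union of the equivalence classes entirely contained in the target set."""
    return set().union(*([c for c in partition if c <= target] or [set()]))

def upper_approximation(target, partition):
    """Union of the equivalence classes that intersect the target set."""
    return set().union(*([c for c in partition if c & target] or [set()]))

# Universe {1..6} partitioned into indiscernibility classes.
partition = [{1, 2}, {3, 4}, {5, 6}]
target = {1, 2, 3}

print(sorted(lower_approximation(target, partition)))  # [1, 2]
print(sorted(upper_approximation(target, partition)))  # [1, 2, 3, 4]
```

The target set is "rough" because the two approximations differ: element 3 is indiscernible from 4, so it can be in the upper but not the lower approximation.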

  9. Dynamical cluster approximation plus semiclassical approximation study for a Mott insulator and d-wave pairing

    Science.gov (United States)

    Kim, SungKun; Lee, Hunpyo

    2017-06-01

    Via a dynamical cluster approximation with Nc = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that the d-wave superconducting order parameters persist even in the highly doped region, unlike the results of the CDMFT+CTQMC approach. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with Nc = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in a semiclassical approximation (SCA) approach are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is an initial study, because the SCA can manage long-range spatial fluctuations in feasible computational times beyond the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on the fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.

  10. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in the approximation theory. The articles of this collection were originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  11. Device for controlling the hydraulic lifter of drilling unit

    Energy Technology Data Exchange (ETDEWEB)

    Kraskov, P N

    1981-04-10

    A device is suggested for controlling the hydraulic lifter of a drilling unit. It contains a throttling valve with a cylinder for servocontrol, a mechanism for setting the program of lowering velocity connected to the power cylinder, and an oil tank. In order to improve the reliability of the drilling unit by making it possible to halt descent when the string lands on a projection in the well, the device is equipped with a normally open, two-position, hydraulically controlled distributor with spring return, connected to the working cavity of the power cylinder, and with a valve implementing the logical OR function for hydraulic control of the normally open two-position distributor. The latter connects the working cavity of the servocontrol cylinder with the oil tank.

  12. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  13. Uniform analytic approximation of Wigner rotation matrices

    Science.gov (United States)

    Hoffmann, Scott E.

    2018-02-01

    We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements, d^j_{m1 m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.

  14. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  15. Approximation Properties of Certain Summation Integral Type Operators

    Directory of Open Access Journals (Sweden)

    Patel P.

    2015-03-01

    Full Text Available In the present paper, we study approximation properties of a family of linear positive operators and establish direct results, asymptotic formula, rate of convergence, weighted approximation theorem, inverse theorem and better approximation for this family of linear positive operators.

  16. Frequency of hepatitis C viral RNA in anti-hepatitis C virus non-reactive blood donors with normal alanine aminotransferase

    International Nuclear Information System (INIS)

    Ali, N.; Moinuddin, A.; Ahmed, S.A.

    2010-01-01

    To determine the frequency of HCV RNA in an anti-HCV non-reactive blood donor population with normal ALT, and its cost effectiveness. Study Design: An observational study. Place and Duration of Study: Baqai Institute of Haematology, Baqai Medical University, Karachi, and Combined Military Hospital, Malir Cantt, Karachi, from May 2006 to April 2008. Methodology: After initial interview and mini-medical examination, demographic data of blood donors was recorded, and anti-HCV, HBsAg and HIV were screened by third generation ELISA. Those reactive to anti-HCV, HBsAg and/or HIV were excluded. Four hundred consecutive donors with ALT within the reference range of 15-41 units/L were included in the study. HCV RNA RT-PCR was performed on 5-sample mini-pools using Bio-Rad real-time PCR equipment. Results: All 400 donors were male, with a mean age of 27 ± 6.2 years. ALT of blood donors varied between 15-41 U/L with a mean of 31.5 ± 6.4 U/L. HCV RNA was detected in 2/400 (0.5%) blood donors. Screening one blood bag for HCV RNA costs Rs. 4,000.00 (equivalent to approximately 50 US dollars), while screening through 5-sample mini-pools costs Rs. 800.00 (equivalent to approximately 10 US dollars). Conclusion: HCV RNA frequency was 0.5% (2/400) in the studied anti-HCV non-reactive normal ALT blood donors. Screening through mini-pools is more cost-effective. (author)
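The mini-pool economics can be checked with Dorfman's classic pooled-testing formula: one pool test per k donors, plus k individual retests whenever a pool is positive. This is a back-of-envelope sketch using the abstract's figures (Rs. 4,000 per test, pools of 5, observed prevalence 0.5%); the retest policy is the standard one, not necessarily the study's exact protocol.

```python
def expected_tests_per_donor(prevalence, pool_size):
    """Dorfman pooling: 1/k pool tests per donor, plus k individual retests
    (i.e. one test per donor) whenever the pool is positive."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

cost_per_test = 4000          # Rs., individual HCV RNA PCR
prevalence = 2 / 400          # 0.5% observed in this donor population
k = 5

tests = expected_tests_per_donor(prevalence, k)
print(round(tests * cost_per_test))  # ~Rs. 899 expected cost per donor, with retests
```

Even after accounting for retesting positive pools, the expected cost per donor stays near the Rs. 800 pool figure quoted above, far below the Rs. 4,000 individual-testing cost.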

  17. Semiclassical initial value approximation for Green's function.

    Science.gov (United States)

    Kay, Kenneth G

    2010-06-28

    A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.

  18. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes include functions for estimating doses not only under normal conditions but also in accident scenarios, in which nuclides may leak and spread into the environment by atmospheric diffusion, the user needs special knowledge and experience. In this presentation, with a view to providing a method by which a person in charge of transportation can calculate doses under normal conditions, we extract the main parameters on which the dose depends and estimate the dose for a unit of transportation. (J.P.N.)
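As a hedged illustration of the kind of simplified estimate such a method produces, the sketch below treats the package as a point source and scales the dose rate by the inverse square of distance. This is a common first approximation, not the authors' parameterization, and the numbers are hypothetical.

```python
def dose_rate_at_distance(rate_at_1m_usv_h, distance_m):
    """Inverse-square scaling of dose rate from a package treated as a point source."""
    return rate_at_1m_usv_h / distance_m ** 2

def dose_to_worker(rate_at_1m_usv_h, distance_m, hours):
    """Integrated dose for a worker at a fixed distance for a given exposure time."""
    return dose_rate_at_distance(rate_at_1m_usv_h, distance_m) * hours

# Hypothetical package: 10 uSv/h at 1 m (transport index 1.0); worker at 2 m for 8 h.
print(dose_to_worker(10.0, 2.0, 8.0))  # 20.0 uSv
```

The full codes refine this with source geometry, shielding, stop times, and traffic models; the point-source model is only the leading term such simplified methods build on.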

  19. Synchronization of low- and high-threshold motor units.

    Science.gov (United States)

    Defreitas, Jason M; Beck, Travis W; Ye, Xin; Stock, Matt S

    2014-04-01

    We examined the degree of synchronization for both low- and high-threshold motor unit (MU) pairs at high force levels. MU spike trains were recorded from the quadriceps during high-force isometric leg extensions. Short-term synchronization (between -6 and 6 ms) was calculated for every unique MU pair for each contraction. At high force levels, earlier-recruited (low-threshold) motor unit pairs demonstrated relatively low levels of short-term synchronization (approximately 7.3% more firings than would be expected by chance). However, the magnitude of synchronization increased significantly and linearly with mean recruitment threshold (reaching 22.1% extra firings for motor unit pairs recruited above 70% MVC). Three potential mechanisms that could explain the observed differences in synchronization across motor unit types are proposed and discussed. Copyright © 2013 Wiley Periodicals, Inc.

  20. The adiabatic approximation in multichannel scattering

    International Nuclear Information System (INIS)

    Schulte, A.M.

    1978-01-01

    Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus the Coriolis coupling is neglected and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)

  1. Minimal entropy approximation for cellular automata

    International Nuclear Information System (INIS)

    Fukś, Henryk

    2014-01-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)
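For context, the simplest member of the hierarchy that the local structure theory refines is the one-site mean-field approximation, which for elementary rule 26 (the paper's test case) can be computed directly. This sketch is ours; the paper's minimal-entropy and local-structure calculations are more refined than this mean-field level.

```python
RULE = 26  # rule table: bit n of 26 is the output for neighborhood n = 4a + 2b + c

def mean_field_map(p):
    """Probability of a 1 after one step, assuming neighboring sites are independent."""
    total = 0.0
    for n in range(8):
        if (RULE >> n) & 1:                 # neighborhoods that map to 1
            ones = bin(n).count("1")
            total += p ** ones * (1 - p) ** (3 - ones)
    return total

# Iterate the mean-field density map to its fixed point.
p = 0.5
for _ in range(1000):
    p = mean_field_map(p)

print(round(p, 3))  # 0.382, the fixed point (3 - sqrt(5)) / 2 of the mean-field map
```

For rule 26 the map is f(p) = 2p(1-p)² + p²(1-p), whose stable fixed point is (3 - √5)/2 ≈ 0.382; comparing such predicted densities against simulation is exactly the kind of density-response test the paper uses.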

  2. Function approximation using combined unsupervised and supervised learning.

    Science.gov (United States)

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.

  3. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...
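The problem itself is easy to state even though (as the paper shows) it is hard to approximate. The brute-force sketch below is ours, on a made-up toy graph: with a budget of one edge removal, the adversary deletes the edge whose removal minimizes the maximum s-t flow.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow; capacity is a dict {u: {v: cap}}."""
    res = {}
    for u, vs in capacity.items():          # residual graph with reverse edges
        for v, c in vs.items():
            res.setdefault(u, {})[v] = res.get(u, {}).get(v, 0) + c
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:     # BFS for an augmenting path
            u = queue.popleft()
            for v, c in res.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                      # trace the path back from t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:                    # augment along the path
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

def without(capacity, edge):
    """Copy of the capacity dict with one edge interdicted (removed)."""
    return {u: {v: c for v, c in vs.items() if (u, v) != edge}
            for u, vs in capacity.items()}

cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}, 't': {}}
edges = [(u, v) for u, vs in cap.items() for v in vs]
best_edge = min(edges, key=lambda e: max_flow(without(cap, e), 's', 't'))
best_flow = max_flow(without(cap, best_edge), 's', 't')
print(max_flow(cap, 's', 't'), best_edge, best_flow)  # 5 ('s', 'a') 2
```

Exhaustive search is exponential in the interdiction budget; the paper's hardness result explains why no good polynomial-time approximation is known.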

  4. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures,etc.) fuzzy functions (spectra and depth profiles) and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
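The spectral-library matching step can be illustrated with a toy sketch (membership values and spectra below are made up): each spectrum is treated as a fuzzy set over wavelength channels, and library entries are ranked by the ratio of fuzzy intersection to fuzzy union, built from the simple min/max operations mentioned above.

```python
def similarity(a, b):
    """Ratio of the cardinalities of fuzzy intersection (min) and fuzzy union (max)."""
    inter = sum(min(x, y) for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    return inter / union

# Hypothetical library of reference spectra, as membership grades per channel.
library = {
    "compound_A": [0.1, 0.9, 0.8, 0.1],
    "compound_B": [0.7, 0.2, 0.1, 0.6],
}
sample = [0.2, 0.8, 0.7, 0.1]  # unknown sample spectrum

best = max(library, key=lambda name: similarity(library[name], sample))
print(best)  # compound_A
```

Ranking by a fuzzy similarity index tolerates imprecise peak intensities and wavelengths, which is exactly where crisp equality matching fails.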

  5. Face Recognition using Approximate Arithmetic

    DEFF Research Database (Denmark)

    Marso, Karol

    Face recognition is an image processing technique which aims to identify human faces, and it has found use in various fields, for example security. Throughout the years this field has evolved, and there are many approaches and many different algorithms which aim to make face recognition as effective...... processing applications the results do not need to be completely precise, and the use of approximate arithmetic can lead to reductions in delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.
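One generic example of approximate arithmetic (a sketch of the idea, not necessarily the circuits examined in the thesis) is a truncated adder that ignores the k lowest bits of its operands, trading a small, bounded error for reduced hardware cost.

```python
def approx_add(a, b, k=4):
    """Truncated addition: zero out the k lowest bits of each operand before adding.
    In hardware this removes the low-order adder cells, saving area, delay and power."""
    mask = ~((1 << k) - 1)
    return (a & mask) + (b & mask)

print(approx_add(1000, 523))     # 1504, vs the exact sum 1523
print(approx_add(1000, 523, 0))  # 1523, k = 0 recovers exact addition
```

The error is at most 2^(k+1) - 2, which is often negligible relative to pixel-intensity sums in image-processing pipelines such as Eigenfaces projections.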

  6. Multiobjective Optimization of a Counterrotating Type Pump-Turbine Unit Operated at Turbine Mode

    Directory of Open Access Journals (Sweden)

    Jin-Hyuk Kim

    2014-05-01

    Full Text Available A multiobjective optimization for improving the turbine output and efficiency of a counterrotating type pump-turbine unit operated at turbine mode was carried out in this work. The blade geometry of both the runners was optimized using a hybrid multiobjective evolutionary algorithm coupled with a surrogate model. Three-dimensional Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model were discretized by finite volume approximations and solved on hexahedral grids to analyze the flow in the pump-turbine unit. As major hydrodynamic performance parameters, the turbine output and efficiency were selected as objective functions with two design variables related to the hub profiles of both the runner blades. These objectives were numerically assessed at twelve design points selected by Latin hypercube sampling in the design space. Response surface approximation models for the objectives were constructed based on the objective function values at the design points. A fast nondominated sorting genetic algorithm for the local search coupled with the response surface approximation models was applied to determine the global Pareto-optimal solutions. The trade-off between the two objectives was determined and described with respect to the Pareto-optimal solutions. The results of this work showed that the turbine outputs and efficiencies of optimized pump-turbine units were simultaneously improved in comparison to the reference unit.
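The Pareto-optimal trade-off described above rests on nondominated sorting, which can be illustrated with a minimal Pareto filter. The design points below are made up (not values from the paper); both objectives, turbine output and efficiency, are to be maximized.

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the nondominated points (the Pareto-optimal set)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (turbine output [kW], efficiency) pairs for five candidate designs.
designs = [(100.0, 0.80), (110.0, 0.78), (105.0, 0.82), (95.0, 0.85), (98.0, 0.79)]
front = pareto_front(designs)
print(sorted(front))  # [(95.0, 0.85), (105.0, 0.82), (110.0, 0.78)]
```

Along the resulting front, no design can improve one objective without sacrificing the other, which is exactly the trade-off the optimized runner geometries trace out.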

  7. Correlated random-phase approximation from densities and in-medium matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Trippel, Richard; Roth, Robert [Institut fuer Kernphysik, Technische Universitaet Darmstadt (Germany)

    2016-07-01

    The random-phase approximation (RPA) as well as the second RPA (SRPA) are established tools for the study of collective excitations in nuclei. Addressing the well known lack of correlations, we derived a universal framework for a fully correlated RPA based on the use of one- and two-body densities. We apply densities from coupled cluster theory and investigate the impact of correlations. As an alternative approach to correlations we use matrix elements transformed via in-medium similarity renormalization group (IM-SRG) in combination with RPA and SRPA. We find that within SRPA the use of IM-SRG matrix elements leads to the disappearance of instabilities of low-lying states. For the calculations we use normal-ordered two- plus three-body interactions derived from chiral effective field theory. We apply different Hamiltonians to a number of doubly-magic nuclei and calculate electric transition strengths.

  8. Normal Strength Steel Fiber Reinforced Concrete Subjected to Explosive Loading

    OpenAIRE

    Mohammed Alias Yusof; Norazman Norazman; Ariffin Ariffin; Fauzi Mohd Zain; Risby Risby; CP Ng

    2011-01-01

    This paper presents the results of an experimental investigation on the behavior of plain reinforced concrete and normal strength steel fiber reinforced concrete (SFRC) panels subjected to explosive loading. The experiments were performed by the Blast Research Unit, Faculty of Engineering, University Pertahanan Nasional Malaysia. A total of 8 reinforced concrete panels of 600 mm x 600 mm x 100 mm were tested. The steel fiber reinforced concrete panels incorporated three different volume fractions, 0...

  9. Stochastic quantization and mean field approximation

    International Nuclear Information System (INIS)

    Jengo, R.; Parga, N.

    1983-09-01

    In the context of the stochastic quantization we propose factorized approximate solutions for the Fokker-Planck equation for the XY and Zsub(N) spin systems in D dimensions. The resulting differential equation for a factor can be solved and it is found to give in the limit of t→infinity the mean field or, in the more general case, the Bethe-Peierls approximation. (author)

  10. Approximative solutions of stochastic optimization problem

    Czech Academy of Sciences Publication Activity Database

    Lachout, Petr

    2010-01-01

    Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf

  11. Quantum teleportation via noisy bipartite and tripartite accelerating quantum states: beyond the single mode approximation

    Science.gov (United States)

    Zounia, M.; Shamirzaie, M.; Ashouri, A.

    2017-09-01

In this paper, quantum teleportation of an unknown quantum state via noisy maximally entangled bipartite (Bell) and tripartite (Greenberger-Horne-Zeilinger (GHZ)) states is investigated. We suppose that one of the observers who would receive the sent state accelerates uniformly with respect to the sender. The interactions of the quantum system with its environment during the teleportation process impose noises. These (unital and nonunital) noises are: phase damping, phase flip, amplitude damping and bit flip. In expressing the modes of the Dirac field used as qubits in the accelerating frame, the so-called single mode approximation is not imposed. We calculate the fidelities of teleportation and discuss their behaviors using suitable plots. The effects of noise, acceleration and going beyond the single mode approximation are discussed. Although the Bell states yield higher fidelities than the GHZ states, the global behaviors of the two quantum systems with respect to some noise types, and therefore their fidelities, are different.

  12. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  13. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
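
    The well-known direction of this equivalence, Chebyshev approximation reduced to a linear program, is easy to sketch: minimizing the maximum residual introduces one extra variable t that bounds every residual from above and below. The data points below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Data: approximate y over x by a line, minimizing the maximum (Chebyshev) error.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])

# Variables z = (c0, c1, t): minimize t subject to |c0 + c1*x_i - y_i| <= t.
c = np.array([0.0, 0.0, 1.0])
A_ub = np.vstack([
    np.column_stack([ np.ones_like(x),  x, -np.ones_like(x)]),   #  r_i <= t
    np.column_stack([-np.ones_like(x), -x, -np.ones_like(x)]),   # -r_i <= t
])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
c0, c1, t = res.x
```

    For these alternating points the best uniform error is t = 0.5, achieved by the horizontal line c0 = 0.5, c1 = 0.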

  14. Fragmentation of eastern United States forest types

    Science.gov (United States)

    Kurt H. Riitters; John W. Coulston

    2013-01-01

    Fragmentation is a continuing threat to the sustainability of forests in the Eastern United States, where land use changes supporting a growing human population are the primary driver of forest fragmentation (Stein and others 2009). While once mostly forested, approximately 40 percent of the original forest area has been converted to other land uses, and most of the...

  15. Thin-wall approximation in vacuum decay: A lemma

    Science.gov (United States)

    Brown, Adam R.

    2018-05-01

    The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.

  16. Parametric design studies of toroidal magnetic energy storage units

    International Nuclear Information System (INIS)

    Herring, J.S.

    1990-01-01

Superconducting magnetic energy storage (SMES) units have a number of advantages as storage devices. Electrical current is the input, output and stored medium, allowing for completely solid-state energy conversion. The magnets themselves have no moving parts. The round-trip efficiency is higher than those for batteries, compressed air or pumped hydro. Output power can be very high, allowing complete discharge of the unit within a few seconds. Finally, the unit can be designed for a very large number of cycles, limited basically by fatigue in the structural components. A small systems code has been written to produce and evaluate self-consistent designs for toroidal superconducting energy storage units. The units can use either low temperature or high temperature superconductors. The coils have a 'D' shape in which the conductor and its stabilizer/structure are loaded only in tension and the centering forces are borne by a bucking cylinder. The coils are convectively cooled from a cryogenic reservoir in the bore of the coils. The coils are suspended in a cylindrical metal shell which protects the magnet during rail, automotive or shipboard use. It is important to note that the storage unit does not rely on its surroundings for structural support, other than normal gravity and inertial loads. This paper presents designs for toroidal energy storage units produced by the systems code. A wide range of parameters has been considered, resulting in units storing from 1 MJ to 72 GJ. Maximum fields range from 5 T to 20 T. The masses and volumes of the coils, bucking cylinder, coolant, insulation and outer shell are calculated. For unattended use, the allowable operating time using only the boiloff of the cryogenic fluid for refrigeration is calculated. For larger units, the coils have been divided into modules suitable for normal truck or rail transport. 8 refs., 5 tabs

  17. Parametric design studies of toroidal magnetic energy storage units

    Science.gov (United States)

    Herring, J. Stephen

Superconducting magnetic energy storage (SMES) units have a number of advantages as storage devices. Electrical current is the input, output and stored medium, allowing for completely solid-state energy conversion. The magnets themselves have no moving parts. The round-trip efficiency is higher than those for batteries, compressed air or pumped hydro. Output power can be very high, allowing complete discharge of the unit within a few seconds. Finally, the unit can be designed for a very large number of cycles, limited basically by fatigue in the structural components. A small systems code was written to produce and evaluate self-consistent designs for toroidal superconducting energy storage units. The units can use either low temperature or high temperature superconductors. The coils have a D shape in which the conductor and its stabilizer/structure is loaded only in tension and the centering forces are borne by a bucking cylinder. The coils are convectively cooled from a cryogenic reservoir in the bore of the coils. The coils are suspended in a cylindrical metal shell which protects the magnet during rail, automotive or shipboard use. It is important to note that the storage unit does not rely on its surroundings for structural support, other than normal gravity and inertial loads. Designs are presented for toroidal energy storage units produced by the systems code. A wide range of parameters was considered, resulting in units storing from 1 MJ to 72 GJ. Maximum fields range from 5 T to 20 T. The masses and volumes of the coils, bucking cylinder, coolant, insulation and outer shell are calculated. For unattended use, the allowable operating time using only the boiloff of the cryogenic fluid for refrigeration is calculated. For larger units, the coils were divided into modules suitable for normal truck or rail transport.

  18. The contemporary cement cycle of the United States

    Science.gov (United States)

    Kapur, A.; Van Oss, H. G.; Keoleian, G.; Kesler, S.E.; Kendall, A.

    2009-01-01

A country-level stock and flow model for cement, an important construction material, was developed based on a material flow analysis framework. Using this model, the contemporary cement cycle of the United States was constructed by analyzing production, import, and export data for different stages of the cement cycle. The United States currently supplies approximately 80% of its cement consumption through domestic production and the rest is imported. The average annual net addition of in-use new cement stock over the period 2000-2004 was approximately 83 million metric tons and amounts to 2.3 tons per capita of concrete. Nonfuel carbon dioxide emissions (42 million metric tons per year) from the calcination phase of cement manufacture account for 62% of the total 68 million tons per year of cement production residues. The end-of-life cement discards are estimated to be 33 million metric tons per year, of which between 30% and 80% is recycled. A significant portion of the infrastructure in the United States is reaching the end of its useful life and will need to be replaced or rehabilitated; this could require far more cement than might be expected from economic forecasts of demand for cement. © 2009 Springer Japan.

  19. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
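
    A minimal sketch of the linear-systems view of training (not the paper's full cascade algorithm): fix a random hidden layer, so that the output-weight equations for a batch of smooth data become a single linear system solvable by least squares. All sizes, scales, and the target function below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Smooth batch data: samples of f(x) = sin(x) on [0, pi].
x = np.linspace(0.0, np.pi, 20)[:, None]
y = np.sin(x).ravel()

# One hidden sigmoid layer with fixed random input weights; with as many
# hidden units as training samples, the output-weight equations form a
# square linear system, so exact or near-exact matching is possible.
n_hidden = 20
W = rng.normal(scale=2.0, size=(1, n_hidden))
b = rng.normal(scale=2.0, size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # hidden activations, shape (20, 20)

v, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights from linear algebra
y_hat = H @ v
```

    No gradient descent is involved; the "training" is one linear solve, which is the sense in which algebraic training trades iterative optimization for linear algebra.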

  20. An improved saddlepoint approximation.

    Science.gov (United States)

    Gillespie, Colin S; Renshaw, Eric

    2007-08-01

Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second-order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and, making subtle changes to the target cumulants and then optimising via the simplex algorithm.
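
    The basic saddlepoint density approximation is easy to state for a case with a closed-form cumulant generating function. For a Gamma(n, 1) variable (a sum of n unit-rate exponentials), K(s) = -n log(1 - s), the saddlepoint solving K'(s) = x is s = 1 - n/x, and the approximate density is exp(K(s) - s x) / sqrt(2 pi K''(s)). This is the generic textbook construction, not the authors' improved method; n is an illustrative choice.

```python
import numpy as np
from scipy.stats import gamma

# Saddlepoint approximation to the density of a sum of n unit-rate
# exponentials (a Gamma(n, 1) variable), from its CGF K(s) = -n*log(1-s).
n = 5

def saddlepoint_density(x):
    s_hat = 1.0 - n / x                  # solves K'(s) = n/(1-s) = x
    K = -n * np.log(1.0 - s_hat)
    K2 = n / (1.0 - s_hat)**2            # K''(s) at the saddlepoint
    return np.exp(K - s_hat * x) / np.sqrt(2.0 * np.pi * K2)

x = np.linspace(1.0, 15.0, 50)
approx = saddlepoint_density(x)
exact = gamma.pdf(x, a=n)
```

    For the Gamma family the relative error is a constant Stirling-type factor (about 1.7% for n = 5), so renormalizing the approximation to integrate to one recovers the density exactly; for general moment sets no such closed form exists, which is where the issues discussed above arise.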

  1. Topology, calculus and approximation

    CERN Document Server

    Komornik, Vilmos

    2017-01-01

    Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...

  2. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM and approximators based on it, approximate a relation in data by applying interpolation between so-called support vectors, being a limited number of samples that have been

  3. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that makes it possible to shift the estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function approximates data points to be adapted at run-time.
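
    The bounded-interval KF baseline that RBA improves on can be sketched as follows: the state is the vector of B-spline coefficients on a fixed knot span, and each data point contributes a scalar Kalman measurement update. Knots, noise levels, and the target curve are illustrative choices; the shift operation itself is not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis on [0, 2*pi] with clamped uniform knots.
k = 3
interior = np.linspace(0.0, 2*np.pi, 9)[1:-1]
t = np.r_[[0.0]*(k+1), interior, [2*np.pi]*(k+1)]
n_coef = len(t) - k - 1

def basis_row(x):
    """Row vector of all B-spline basis function values at x."""
    return np.array([BSpline(t, np.eye(n_coef)[j], k)(x) for j in range(n_coef)])

# Kalman filter with a static state (the coefficient vector) and scalar
# measurements; no process noise, so this is recursive least squares.
c = np.zeros(n_coef)            # state estimate
P = np.eye(n_coef) * 1e3        # state covariance (vague prior)
R = 1e-4                        # measurement noise variance

rng = np.random.default_rng(2)
xs = rng.uniform(0.0, 2*np.pi - 1e-9, 300)
for x in xs:
    h = basis_row(x)
    S = h @ P @ h + R           # innovation variance
    K = P @ h / S               # Kalman gain
    c = c + K * (np.sin(x) - h @ c)
    P = P - np.outer(K, h) @ P

grid = np.linspace(0.1, 2*np.pi - 0.1, 50)
fit = np.array([basis_row(x) @ c for x in grid])
```

    Data points arriving outside [0, 2*pi] would be lost with this fixed basis; RBA's contribution is precisely the shift of coefficients in the state vector that moves the supported interval along with the data.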

  4. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also the availability of large scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact of accuracy for both the algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
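
    Loop perforation, one of the heuristics named above, can be illustrated on PageRank by simply dropping a fraction of the power-iteration sweeps. The toy graph and perforation schedule below are illustrative, not the paper's benchmark setup.

```python
import numpy as np

# Tiny directed graph as an adjacency list: node -> out-neighbors.
graph = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4
d = 0.85  # damping factor

def pagerank(iters, skip_every=0):
    """Power iteration; if skip_every > 0, perforate (skip) every k-th sweep."""
    r = np.full(n, 1.0 / n)
    for it in range(iters):
        if skip_every and (it + 1) % skip_every == 0:
            continue                        # loop perforation: drop this sweep
        nxt = np.full(n, (1.0 - d) / n)
        for u, outs in graph.items():
            for v in outs:
                nxt[v] += d * r[u] / len(outs)
        r = nxt
    return r / r.sum()

exact = pagerank(60)
approx = pagerank(60, skip_every=4)         # 25% of the work perforated
```

    Skipping every fourth sweep removes a quarter of the work, yet both runs converge to the same ranking with only a small numerical difference, which is the quality-for-performance trade-off the paper quantifies at scale.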

  5. Conditional Density Approximations with Mixtures of Polynomials

    DEFF Research Database (Denmark)

    Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre

    2015-01-01

    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...

  6. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  7. On Love's approximation for fluid-filled elastic tubes

    International Nuclear Information System (INIS)

    Caroli, E.; Mainardi, F.

    1980-01-01

    A simple procedure is set up to introduce Love's approximation for wave propagation in thin-walled fluid-filled elastic tubes. The dispersion relation for linear waves and the radial profile for fluid pressure are determined in this approximation. It is shown that the Love approximation is valid in the low-frequency regime. (author)

  8. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB, or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, followed by a detailed way of solving them. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
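
    The step-size-1 special case discussed above is the familiar EM-style iteration for a normal mixture. A minimal two-component, one-dimensional sketch (synthetic data and illustrative initial values) is:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sample from a two-component normal mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])

# Successive-approximations iteration (step-size 1) for the likelihood
# equations of a two-component normal mixture.
w   = np.array([0.5, 0.5])    # mixing proportions
mu  = np.array([-1.0, 1.0])   # component means
sig = np.array([1.0, 1.0])    # component standard deviations
for _ in range(200):
    # E-step: posterior component responsibilities for each observation.
    dens = w * np.exp(-0.5*((x[:, None]-mu)/sig)**2) / (sig*np.sqrt(2*np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((resp * (x[:, None]-mu)**2).sum(axis=0) / nk)
```

    The paper's deflected-gradient generalization would scale the M-step update by a step-size between 0 and 2; the value 1 used here recovers the standard procedure.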

  10. Healthcare Needs of Homeless Youth in the United States

    OpenAIRE

    TERRY, Marisa J; BEDI, Gurpreet; PATEL, Neil

    2010-01-01

Approximately 1.6 - 2.8 million youth at any given time in the United States are considered homeless and at high risk for poor social and health outcomes. It is estimated that in the United States homelessness overall is expected to rise 10-20 percent in the next year. While governmental and private programs exist to address the tribulations faced by homeless persons, youth continue to be underserved. The 2009 $787 billion economic stimulus package includes $1.5 billion to address issues...

  11. A Radix-10 Digit-Recurrence Division Unit: Algorithm and Architecture

    DEFF Research Database (Denmark)

    Lang, Tomas; Nannarelli, Alberto

    2007-01-01

    In this work, we present a radix-10 division unit that is based on the digit-recurrence algorithm. The previous decimal division designs do not include recent developments in the theory and practice of this type of algorithm, which were developed for radix-2^k dividers. In addition to the adaptat...... dynamic range of significant) and it has a shorter latency than a radix-10 unit based on the Newton-Raphson approximation....

  12. SFU-driven transparent approximation acceleration on GPUs

    NARCIS (Netherlands)

    Li, A.; Song, S.L.; Wijtvliet, M.; Kumar, A.; Corporaal, H.

    2016-01-01

    Approximate computing, the technique that sacrifices certain amount of accuracy in exchange for substantial performance boost or power reduction, is one of the most promising solutions to enable power control and performance scaling towards exascale. Although most existing approximation designs

  13. Calculation of normal modes of the closed waveguides in general vector case

    Science.gov (United States)

    Malykh, M. D.; Sevastianov, L. A.; Tiutiunnik, A. A.

    2018-04-01

The article is devoted to the calculation of normal modes of closed waveguides with an arbitrary filling ɛ, μ in the computer algebra system Sage. The Maxwell equations in the cylinder are reduced to a system of two bounded Helmholtz equations, the notion of a weak solution of this system is given, and the system is then investigated as a system of ordinary differential equations. The normal modes of this system are eigenvectors of a matrix pencil. We suggest calculating the matrix elements approximately and truncating the matrix in the usual way, but then solving the truncated eigenvalue problem exactly in the field of algebraic numbers. This approach allows one to keep the symmetry of the initial problem and, in particular, the multiplicity of the eigenvalues. Some results of these calculations are presented.
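
    After truncation, the normal modes are eigenpairs of a matrix pencil, i.e. solutions of A v = λ B v. A purely numerical (floating-point, rather than exact algebraic-number) sketch with a small illustrative symmetric-definite pencil:

```python
import numpy as np
from scipy.linalg import eigh

# A small symmetric-definite pencil standing in for the truncated
# waveguide matrices (illustrative values only).
rng = np.random.default_rng(4)
M = rng.normal(size=(5, 5))
A = M + M.T                        # symmetric "stiffness-like" block
B = M @ M.T + 5.0*np.eye(5)        # symmetric positive-definite "mass-like" block

# Generalized symmetric eigenproblem A v = lambda B v.
vals, vecs = eigh(A, B)
```

    Floating-point eigensolvers can split a multiple eigenvalue into nearby distinct ones; solving the truncated pencil exactly over the algebraic numbers, as the article proposes, is what preserves the multiplicities coming from the symmetry of the original problem.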

  14. Reaffirming normal: the high risk of pathologizing healthy adults when interpreting the MMPI-2-RF.

    Science.gov (United States)

    Odland, Anthony P; Lammy, Andrew B; Perle, Jonathan G; Martin, Phillip K; Grote, Christopher L

    2015-01-01

    Monte Carlo simulations were utilized to determine the proportion of the normal population expected to have scale elevations on the MMPI-2-RF when multiple scores are interpreted. Results showed that when all 40 MMPI-2-RF scales are simultaneously considered, approximately 70% of normal adults are likely to have at least one scale elevation at or above 65 T, and as many as 20% will have five or more elevated scales. When the Restructured Clinical (RC) Scales are under consideration, 34% of normal adults have at least one elevated score. Interpretation of the Specific Problem Scales and Personality Psychopathology Five Scales--Revised also yielded higher than expected rates of significant scores, with as many as one in four normal adults possibly being miscategorized as having features of a personality disorder by the latter scales. These findings are consistent with the growing literature on rates of apparently abnormal scores in the normal population due to multiple score interpretation. Findings are discussed in relation to clinical assessment, as well as in response to recent work suggesting that the MMPI-2-RF's multiscale composition does not contribute to high rates of elevated scores.
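
    The multiple-comparison effect behind these results is easy to reproduce with a small Monte Carlo sketch. It assumes T-scores with mean 50 and SD 10 and, for simplicity, independent scales; real MMPI-2-RF scales are correlated, which pulls the 40-scale rate down toward the approximately 70% reported above.

```python
import numpy as np

rng = np.random.default_rng(5)

# T-scores are scaled to mean 50, SD 10; "elevated" means >= 65T (z >= 1.5).
# Rate of at least one elevation when 40 (assumed independent) scales are
# interpreted simultaneously, versus interpreting a single scale.
n_sim, n_scales, cut = 100_000, 40, 1.5
z = rng.standard_normal((n_sim, n_scales))

single = (z[:, 0] >= cut).mean()
any_of_40 = (z >= cut).any(axis=1).mean()
```

    Analytically, a single scale is elevated for about 6.7% of normal adults, while under independence at least one of 40 scales is elevated for 1 - (1 - 0.067)^40, roughly 94%, of them; inter-scale correlation reduces this, but the inflation from multiple score interpretation remains large.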

  15. Approximate Networking for Universal Internet Access

    Directory of Open Access Journals (Sweden)

    Junaid Qadir

    2017-12-01

Full Text Available Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance, global Internet. In this paper, we argue that instead of an exclusive focus on a utopian goal of universally accessible “ideal networking” (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing “approximate networking” through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage the advances in the emerging trend of “approximate computing”, which relies on relaxing the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards various knobs available at the network layers. Approximate networking can be used to provision “Global Access to the Internet for All” (GAIA in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.

  16. Variational Gaussian approximation for Poisson data

    Science.gov (United States)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
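
    For a single observation the variational Gaussian approximation reduces to a two-parameter optimization with a closed-form lower bound. The sketch below uses x ~ N(0, 1) a priori and y | x ~ Poisson(exp(x)), an illustrative scalar instance rather than the paper's inverse-problem setting, and maximizes the ELBO over the mean and log-standard-deviation of q.

```python
import numpy as np
from scipy.optimize import minimize

# Model: x ~ N(0,1) prior, y | x ~ Poisson(exp(x)); observe a single count y.
# Variational family q = N(m, s^2); maximize the evidence lower bound (ELBO).
y = 7

def neg_elbo(params):
    m, log_s = params
    s2 = np.exp(2.0 * log_s)
    expect_rate = np.exp(m + 0.5 * s2)          # E_q[exp(x)]
    kl = 0.5 * (s2 + m*m - 1.0 - 2.0*log_s)     # KL(q || prior), up to const
    # ELBO = E_q[log p(y|x)] - KL, dropping the constant -log(y!) term.
    return -(y*m - expect_rate - kl)

res = minimize(neg_elbo, x0=[0.0, 0.0])
m_opt, s_opt = res.x[0], np.exp(res.x[1])
```

    The optimal variance is smaller than the prior variance, reflecting posterior contraction; the KL term acts exactly like the covariance-penalizing Tikhonov variant mentioned in the abstract.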

  17. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

As we know, normalization is a pre-processing stage for any type of problem statement. Normalization plays an especially important role in fields such as soft computing and cloud computing for the manipulation of data, e.g. scaling the range of data down or up before it is used in a further stage. There are many normalization techniques, namely Min-Max normalization, Z-score normalization and Decimal scaling normalization. So by referring to these normalization techniques we are ...
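
    The three techniques named above are each a one-line transformation; a minimal sketch (function names and sample data are mine):

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    """Rescale linearly so the data span [lo, hi]."""
    x = np.asarray(x, dtype=float)
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

def z_score(x):
    """Center to mean 0 and scale to standard deviation 1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Divide by the smallest power of 10 that maps all values into [-1, 1]."""
    x = np.asarray(x, dtype=float)
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / 10**j

data = [12.0, 45.0, 78.0, 101.0, 399.0]
mm, z, d = min_max(data), z_score(data), decimal_scaling(data)
```

    Min-Max is sensitive to outliers (a single extreme value compresses the rest of the range), Z-score is not bounded, and decimal scaling preserves relative magnitudes exactly; which to use depends on the downstream stage.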

  18. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    W. Romeijnders; L. Stougie (Leen); M. van der Vlerk

    2014-01-01

Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value.

  19. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    Romeijnders, W.; Stougie, L.; van der Vlerk, M.H.

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value. However,

  20. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    Science.gov (United States)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

Deriving the unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. In most cases, the hourly stream flow measurements needed to derive a unit hydrograph are not available. Hence, one needs methods for deriving the unit hydrograph of an ungauged watershed. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics; these are usually referred to as synthetic unit hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
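
    The gamma-pdf form of a synthetic unit hydrograph can be sketched directly; the shape, scale, and time step below are illustrative placeholders, not the calibrated Citarum values:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

# Gamma-pdf synthetic unit hydrograph: u(t) proportional to a Gamma(k, theta)
# density in time, normalized so the hydrograph encloses unit runoff volume.
k, theta = 3.0, 2.0        # hypothetical shape and scale (hours)
dt = 0.25                  # time step (hours)
t = np.arange(0.0, 48.0, dt)

u = gamma_dist.pdf(t, a=k, scale=theta)
u = u / (u.sum() * dt)     # enforce unit volume on the discrete grid

t_peak = t[np.argmax(u)]   # mode of the Gamma density: (k - 1) * theta
```

    The two free parameters directly control peak timing ((k - 1) * theta) and peak sharpness, which is what makes the family convenient for fitting peak discharge and time-to-peak with an optimizer such as PSO.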

  1. Magnus approximation in the adiabatic picture

    International Nuclear Information System (INIS)

    Klarsfeld, S.; Oteo, J.A.

    1991-01-01

    A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs

  2. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

Full Text Available We construct a stochastic model of real estate pricing. The pricing construction is based on a sequential comparison of supply prices. We prove that, under standard assumptions imposed upon the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit has the log-normal law of distribution. We verify the accordance of empirical price distributions with the theoretically obtained log-normal distribution using extensive statistical data on real estate prices from Saint Petersburg (Russia). To establish this accordance we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Based on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean of a log-normal distribution exceeds the mode, the most probable value, it follows that prices valued by the mathematical expectation are systematically overstated.
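
    The log-normal limit and the Kolmogorov-Smirnov check can be illustrated on synthetic data: a price formed as a product of many independent comparison factors is approximately log-normal, so its logarithm should pass a KS test against a fitted normal. The factor distribution and sizes are illustrative, not the paper's model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic "supply prices" from a multiplicative comparison process: a
# product of many independent positive factors is asymptotically log-normal.
factors = rng.uniform(0.9, 1.1, size=(5000, 200))
prices = 100.0 * factors.prod(axis=1)

# Kolmogorov-Smirnov goodness of fit of log-prices against a fitted normal.
log_p = np.log(prices)
stat, p_value = stats.kstest(log_p, 'norm', args=(log_p.mean(), log_p.std()))
```

    One caveat: with the normal parameters estimated from the same sample, the nominal KS p-value is optimistic (the Lilliefors correction applies), though for clearly log-normal data the conclusion is unaffected.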

  3. How Long Is a Normal Labor?

    DEFF Research Database (Denmark)

    Hildingsson, Ingegerd; Blix, Ellen; Hegaard, Hanne

    2015-01-01

    OBJECTIVE: Normal progress of labor is a subject for discussion among professionals. The aim of this study was to assess the duration of labor in women with a planned home birth and spontaneous onset who gave birth at home or in hospital after transfer. METHODS: This is a population-based study...... of home births in four Nordic countries (Denmark, Iceland, Norway, and Sweden). All midwives assisting at a home birth from 2008 to 2013 were asked to provide information about home births using a questionnaire. RESULTS: Birth data from 1,612 women, from Denmark (n = 1,170), Norway (n = 263), Sweden (n...... = 138), and Iceland (n = 41) were included. The total median duration from onset of labor until the birth of the baby was approximately 14 hours for primiparas and 7.25 hours for multiparas. The duration of the different phases varied between countries. Blood loss more than 1,000 mL and perineal...

  4. Space-efficient path-reporting approximate distance oracles

    DEFF Research Database (Denmark)

    Elkin, Michael; Neiman, Ofer; Wulff-Nilsen, Christian

    2016-01-01

    We consider approximate path-reporting distance oracles, distance labeling and labeled routing with extremely low space requirements, for general undirected graphs. For distance oracles, we show how to break the nlog⁡n space bound of Thorup and Zwick if approximate paths rather than distances need...

  5. Calculation of the Normal Contribution of a Pension Program Assuming the Interest Rate Follows the Vasicek Model

    Directory of Open Access Journals (Sweden)

    I Nyoman Widana

    2017-12-01

    Full Text Available Labor has a very important role in national development. One way to optimize labor productivity is to guarantee income after retirement; the government and the private sector must therefore have programs that ensure the sustainability of this financial support, and a pension plan is one option. The purpose of this study is to calculate the normal cost with the interest rate assumed to follow the Vasicek model, and to analyze the normal contributions of the pension program participants. The Vasicek model is used to match actual conditions. The methods used are the Projected Unit Credit method and the Entry Age Normal method. The data source is lecturers of FMIPA Unud; in addition, secondary data are used in the form of Bank Indonesia interest rates for the period January 2006 to December 2015. The results indicate that the older the participant's age when starting the pension program, the greater the first-year normal cost and the smaller the benefit he or she will receive. Normal cost under a constant interest rate is greater than normal cost under the Vasicek rate: the Vasicek model predicts interest rates between 4.8879% and 6.8384%, whereas the constant rate is only 4.25%. In addition, using a normal cost proportional to salary, it is found that the older the participant, the greater the proportion of salary devoted to normal cost.
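The Vasicek short-rate dynamics, dr = a(b - r) dt + sigma dW, can be simulated with a simple Euler-Maruyama scheme. The sketch below is illustrative only; apart from the 4.25% starting level quoted in the abstract, the parameter values are hypothetical:

```python
import random

def simulate_vasicek(r0, a, b, sigma, dt, n_steps, rng):
    """Euler-Maruyama path of the Vasicek model dr = a*(b - r)dt + sigma dW.
    All parameter values used below are hypothetical illustrations."""
    rates = [r0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)          # Brownian increment
        rates.append(rates[-1] + a * (b - rates[-1]) * dt + sigma * dw)
    return rates

rng = random.Random(42)
path = simulate_vasicek(r0=0.0425, a=0.2, b=0.055, sigma=0.01,
                        dt=1 / 12, n_steps=120, rng=rng)
# Mean reversion pulls the rate from r0 toward the long-run level b
print(len(path))  # 121 monthly rates over ten years
```

Discounting pension benefits along such paths, rather than at a flat 4.25%, is what drives the lower Vasicek normal cost reported above.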

  6. The impact of normal saline on the incidence of exposure keratopathy in patients hospitalized in intensive care units

    Directory of Open Access Journals (Sweden)

    Zohreh Davoodabady

    2018-01-01

    Full Text Available Background: Patients in the intensive care unit (ICU) have impaired ocular protective mechanisms that lead to an increased risk of ocular surface diseases, including exposure keratopathy (EK). This study was designed to evaluate the effect of normal saline (NS) on the incidence and severity of EK in critically ill patients. Materials and Methods: This single-blind randomized controlled trial was conducted on 50 patients admitted to ICUs, selected through purposive sampling. In each patient, one eye was randomly allocated to the control group (standard care) and the other to the intervention group, which received NS every 6 h in addition to standard care. The presence and severity of keratopathy were assessed daily until day 7 of hospitalization using fluorescein and an ophthalmoscope with a cobalt blue filter. The chi-square test was used for statistical analysis in SPSS software. Results: Before the study (day 1) there were no statistically significant differences in the incidence and severity of EK between the groups. Although the incidence and severity of EK at the end of the study (day 7) were higher in the intervention group than in the control group, the differences were not statistically significant. The incidence and severity of EK increased in both groups from day 1 to day 7, but this increase was statistically significant only in the intervention (NS) group. Conclusions: The use of NS as eye care in patients hospitalized in ICUs can increase the incidence and severity of EK and is not recommended.

  7. Aspects of three field approximations: Darwin, frozen, EMPULSE

    International Nuclear Information System (INIS)

    Boyd, J.K.; Lee, E.P.; Yu, S.S.

    1985-01-01

    The traditional approach used to study high energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability

  8. On Convex Quadratic Approximation

    NARCIS (Netherlands)

    den Hertog, D.; de Klerk, E.; Roos, J.

    2000-01-01

    In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of

  9. All-Norm Approximation Algorithms

    NARCIS (Netherlands)

    Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik

    2002-01-01

    A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ p norms. We address this problem by introducing the concept of an All-norm ρ-approximation
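To see why different l_p measures can disagree, and hence why an all-norm guarantee is non-trivial, consider two schedules with equal total load. A minimal sketch with hypothetical machine loads:

```python
# Two hypothetical 3-machine schedules with the same total load (l_1 norm),
# where the balanced one is strictly better in l_2 and l_infinity.
loads_a = [4.0, 4.0, 4.0]   # balanced schedule
loads_b = [6.0, 3.0, 3.0]   # same total load, worse makespan

def lp_norm(xs, p):
    """l_p norm of a load vector."""
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

print(lp_norm(loads_a, 1) == lp_norm(loads_b, 1))   # True: equal total load
print(lp_norm(loads_a, 2) < lp_norm(loads_b, 2))    # True: A better in l_2
print(max(loads_a) < max(loads_b))                  # True: A better in l_infinity
```

An all-norm rho-approximation must be within factor rho of the optimum for every such p simultaneously.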

  10. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  11. Making nuclear 'normal'

    International Nuclear Information System (INIS)

    Haehlen, Peter; Elmiger, Bruno

    2000-01-01

    The mechanics of the Swiss NPPs' 'come and see' programme 1995-1999 were illustrated in our contributions to all PIME workshops since 1996. Now, after four annual 'waves', all the country has been covered by the NPPs' invitation to dialogue. This makes PIME 2000 the right time to shed some light on one particular objective of this initiative: making nuclear 'normal'. The principal aim of the 'come and see' programme, namely to give the Swiss NPPs 'a voice of their own' by the end of the nuclear moratorium 1990-2000, has clearly been attained and was commented on during earlier PIMEs. It is, however, equally important that Swiss nuclear energy not only made progress in terms of public 'presence', but also in terms of being perceived as a normal part of industry, as a normal branch of the economy. The message that Swiss nuclear energy is nothing but a normal business involving normal people, was stressed by several components of the multi-prong campaign: - The speakers in the TV ads were real - 'normal' - visitors' guides and not actors; - The testimonials in the print ads were all real NPP visitors - 'normal' people - and not models; - The mailings inviting a very large number of associations to 'come and see' activated a typical channel of 'normal' Swiss social life; - Spending money on ads (a new activity for Swiss NPPs) appears to have resulted in being perceived by the media as a normal branch of the economy. Today we feel that the 'normality' message has well been received by the media. In the controversy dealing with antinuclear arguments brought forward by environmental organisations journalists nowadays as a rule give nuclear energy a voice - a normal right to be heard. As in a 'normal' controversy, the media again actively ask themselves questions about specific antinuclear claims, much more than before 1990 when the moratorium started. The result is that in many cases such arguments are discarded by journalists, because they are, e.g., found to be

  12. Approximate Noether symmetries and collineations for regular perturbative Lagrangians

    Science.gov (United States)

    Paliathanasis, Andronikos; Jamal, Sameerah

    2018-01-01

    Regular perturbative Lagrangians that admit approximate Noether symmetries and approximate conservation laws are studied. Specifically, we investigate the connection between approximate Noether symmetries and collineations of the underlying manifold. In particular we determine the generic Noether symmetry conditions for the approximate point symmetries and we find that for a class of perturbed Lagrangians, Noether symmetries are related to the elements of the Homothetic algebra of the metric which is defined by the unperturbed Lagrangian. Moreover, we discuss how exact symmetries become approximate symmetries. Finally, some applications are presented.

  13. Probability of Regenerating a Normal Limb After Bite Injury in the Mexican Axolotl (Ambystoma mexicanum).

    Science.gov (United States)

    Thompson, Sierra; Muzinic, Laura; Muzinic, Christopher; Niemiller, Matthew L; Voss, S Randal

    2014-06-01

    Multiple factors are thought to cause limb abnormalities in amphibian populations by altering processes of limb development and regeneration. We examined adult and juvenile axolotls ( Ambystoma mexicanum ) in the Ambystoma Genetic Stock Center (AGSC) for limb and digit abnormalities to investigate the probability of normal regeneration after bite injury. We observed that 80% of larval salamanders show evidence of bite injury at the time of transition from group housing to solitary housing. Among 717 adult axolotls that were surveyed, which included solitary-housed males and group-housed females, approximately half presented abnormalities, including examples of extra or missing digits and limbs, fused digits, and digits growing from atypical anatomical positions. Bite injury likely explains these limb defects, and not abnormal development, because limbs with normal anatomy regenerated after performing rostral amputations. We infer that only 43% of AGSC larvae will present four anatomically normal looking adult limbs after incurring a bite injury. Our results show regeneration of normal limb anatomy to be less than perfect after bite injury.

  14. Power enhancing by reversing mode sequence in tuned mass-spring unit attached vibration energy harvester

    Directory of Open Access Journals (Sweden)

    Jae Eun Kim

    2013-07-01

    Full Text Available We propose a vibration energy harvester consisting of an auxiliary frequency-tuned mass unit and a piezoelectric vibration energy harvesting unit for enhancing output power. The proposed integrated system is so configured that its out-of-phase mode can appear at the lowest eigenfrequency unlike in the conventional system using a tuned unit. Such an arrangement makes the resulting system distinctive: enhanced output power at or near the target operating frequency and very little eigenfrequency separation, not observed in conventional eigenfrequency-tuned vibration energy harvesters. The power enhancement of the proposed system is theoretically examined with and without tip mass normalization or footprint area normalization.

  15. Development of waste unit for use in shallow land burial

    International Nuclear Information System (INIS)

    Brodersen, K.

    1986-01-01

    A hexagonal waste unit has been developed for use in shallow land burial of low- and medium-level radioactive waste. The waste units, used as overpacks on empty standard 210 l drums, have been tested for tightness and mechanical resistance. Experimental burial of 21 empty full-size units has demonstrated the emplacement of the containers and the sealing of the crevices between them with molten bitumen. The development of the experimental burial over time is being followed. Three different conceptual designs for advanced burial systems using the hexagonal standard units are described; the outer barrier is a thick concrete structure covered by 2, 10 or 20 m of soil, respectively. The waste units were cast from a normal high-quality concrete as well as from Densit, a new, very strong and impermeable type of concrete prepared by the combined use of silica fume (microsilica) and a superplasticizer as additives. The migration of Cl⁻, Cs⁺ and tritiated water was found to be much slower in Densit than in normal concrete. In combination with leaching measurements for Cs⁺ from the same materials, the results are used to present some theoretical considerations concerning transport through solution-filled pore systems as dependent on pore-size distribution, tortuosity, etc. A method based on neutron-activated cement cast in the form of thin plates has been developed and used to study the dissolution chemistry of concrete. A preliminary model is presented, and indications of precipitation mechanisms were obtained. Densit was demonstrated to ensure a high degree of corrosion protection for steel reinforcement, mainly because of its high electrical resistivity combined with low diffusive transport in the material. The pozzolanic reaction results in somewhat lower pH in the pore water than in normal concrete, but the effect is not so pronounced that the passivation of steel reinforcement is endangered

  16. A new mathematical approximation of sunlight attenuation in rocks for surface luminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Laskaris, Nikolaos, E-mail: nick.laskaris@gmail.com [University of the Aegean, Department of Mediterranean Studies, Laboratory of Archaeometry, 1 Demokratias Avenue, Rhodes 85100 (Greece); Liritzis, Ioannis, E-mail: liritzis@rhodes.aegean.gr [University of the Aegean, Department of Mediterranean Studies, Laboratory of Archaeometry, 1 Demokratias Avenue, Rhodes 85100 (Greece)

    2011-09-15

    The attenuation of sunlight through different rock surfaces, and the resetting of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residual clock by sunlight-induced eviction of electrons from electron traps, is a prerequisite criterion for potential dating. Modeling the change of residual luminescence as a function of two variables, the solar radiation path length (or depth) and the exposure time, offers further insight into the dating concept. The double exponential function modeling based on the Lambert-Beer law, valid under certain assumptions and constructed by a quasi-manual equation, fails to offer a general and statistically sound expression of the best fit for most rock types. A cumulative log-normal distribution fitting provides a most satisfactory mathematical approximation for marbles, marble schists and granites, where the absorption coefficient and residual luminescence parameters are defined for each type of rock or marble quarry. The new model is applied to available data and age determination tests. - Highlights: > Study of attenuation of sunlight through different rock surfaces. > Study of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals as a function of depth. > A cumulative log-normal distribution fitting provides the most satisfactory modeling for marbles, marble schists and granites. > The new model (cumulative log-normal fitting) is applied to available data and age determination tests.
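A minimal sketch of the model family described here: residual luminescence as a function of depth expressed through a cumulative log-normal distribution. The parameter values below are hypothetical, not fitted values for any actual rock type:

```python
import math

def lognorm_cdf(x, mu, sigma):
    """Cumulative log-normal distribution, written with the error function."""
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def residual_luminescence(depth_mm, L0, mu, sigma):
    """Residual luminescence vs depth modeled as L0 times a cumulative
    log-normal in depth (L0, mu, sigma are hypothetical parameters)."""
    return L0 * lognorm_cdf(depth_mm, mu, sigma)

depths = [0.5, 1.0, 2.0, 4.0, 8.0]
profile = [residual_luminescence(d, L0=1.0, mu=0.7, sigma=0.6) for d in depths]
# Residual luminescence rises monotonically with depth into the rock surface:
# shallow layers are well bleached, deep layers retain their signal.
print(all(a < b for a, b in zip(profile, profile[1:])))  # True
```

Fitting mu and sigma per rock type (or per quarry) is what replaces the quasi-manual double exponential of the Lambert-Beer approach.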

  17. A new mathematical approximation of sunlight attenuation in rocks for surface luminescence dating

    International Nuclear Information System (INIS)

    Laskaris, Nikolaos; Liritzis, Ioannis

    2011-01-01

    The attenuation of sunlight through different rock surfaces, and the resetting of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residual clock by sunlight-induced eviction of electrons from electron traps, is a prerequisite criterion for potential dating. Modeling the change of residual luminescence as a function of two variables, the solar radiation path length (or depth) and the exposure time, offers further insight into the dating concept. The double exponential function modeling based on the Lambert-Beer law, valid under certain assumptions and constructed by a quasi-manual equation, fails to offer a general and statistically sound expression of the best fit for most rock types. A cumulative log-normal distribution fitting provides a most satisfactory mathematical approximation for marbles, marble schists and granites, where the absorption coefficient and residual luminescence parameters are defined for each type of rock or marble quarry. The new model is applied to available data and age determination tests. - Highlights: → Study of attenuation of sunlight through different rock surfaces. → Study of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals as a function of depth. → A cumulative log-normal distribution fitting provides the most satisfactory modeling for marbles, marble schists and granites. → The new model (cumulative log-normal fitting) is applied to available data and age determination tests.

  18. Square well approximation to the optical potential

    International Nuclear Information System (INIS)

    Jain, A.K.; Gupta, M.C.; Marwadi, P.R.

    1976-01-01

    Approximations for obtaining T-matrix elements for a sum of several potentials in terms of the T-matrices for the individual potentials are studied. Based on model calculations for the S-wave for a sum of two separable non-local potentials with Yukawa-type form factors, and for a sum of two delta function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all energy regions by the sum of the T-matrices for the individual potentials. Based on this, an approximate method is presented for finding the T-matrix for any local potential by approximating it by a sum of a suitable number of square wells. This provides an interesting way to calculate the T-matrix for any arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to Saxon-Woods potentials and good agreement with exact results is found. (author)
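The geometric core of the method, replacing a smooth local potential by a stack of square wells, can be illustrated with a piecewise-constant approximation of a Woods-Saxon (Saxon-Woods) shape. The parameter values are illustrative only, and no T-matrix is computed here:

```python
import math

def woods_saxon(r, v0=50.0, radius=4.0, a=0.5):
    """Woods-Saxon potential V(r) = -V0 / (1 + exp((r - R)/a)) in MeV;
    depth, radius and diffuseness values are illustrative."""
    return -v0 / (1.0 + math.exp((r - radius) / a))

def square_well_approx(r, n_wells=20, r_max=10.0):
    """Depth of the square well whose radial bin contains r, taken as the
    smooth potential evaluated at the bin midpoint."""
    width = r_max / n_wells
    i = min(int(r / width), n_wells - 1)
    midpoint = (i + 0.5) * width
    return woods_saxon(midpoint)

# The staircase converges to the smooth potential as the number of wells grows
errs = [max(abs(woods_saxon(0.01 * j) - square_well_approx(0.01 * j, n))
            for j in range(1000))
        for n in (10, 40, 160)]
print(errs[0] > errs[1] > errs[2])  # True
```

In the paper's scheme each constant segment then contributes a T-matrix known in closed form (via Bessel functions), and the segments are summed.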

  19. Sonographic findings after total hip arthroplasty: normal and complications

    International Nuclear Information System (INIS)

    Lee, Kyoung Rok; Seon, Young Seok; Choi, Ji He; Kim, Sun Su; Kim, Se Jong; Park, Byong Lan; Kim, Byoung Geun

    2002-01-01

    The purpose of this study was to determine the efficacy of sonography in the evaluation of normal pseudocapsular morphology and the detection of complications after total hip arthroplasty. Between January 1997 and June 2000, 47 patients (35 men and 12 women aged 24 to 84 (mean, 61) years) were examined using real-time linear-array and convex US units with 3.5-MHz and 10-MHz transducers. Normal capsular morphology was studied in 30 patients with total hip replacements who had been asymptomatic for at least one year, and the prosthetic joint infection demonstrated in six of the 17 symptomatic patients was confirmed at surgery or by US-guided aspiration. Sonograms indicated that a normal pseudocapsule lay straight over the neck of the prosthesis or was slightly convex toward the neck, and that the mean bone-to-pseudocapsule distance was 2.9 mm. In the 11 symptomatic patients in whom cultures revealed no evidence of infection, the mean distance was 4.7 mm; in the remaining six patients, whose joints were infected (a condition strongly suggested by the presence of extracapsular fluid), the mean distance was 5.5 mm, with no significant difference between the two groups. Sonography can be used to evaluate normal capsular morphology after total hip replacement and to diagnose infection around hip prostheses. In all patients in whom sonography revealed the presence of extra-articular fluid, infection had occurred

  20. Sonographic findings after total hip arthroplasty: normal and complications

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyoung Rok; Seon, Young Seok; Choi, Ji He; Kim, Sun Su; Kim, Se Jong; Park, Byong Lan; Kim, Byoung Geun [Kwangju Christian Hospital, Kwangju (Korea, Republic of)

    2002-04-01

    The purpose of this study was to determine the efficacy of sonography in the evaluation of normal pseudocapsular morphology and the detection of complications after total hip arthroplasty. Between January 1997 and June 2000, 47 patients (35 men and 12 women aged 24 to 84 (mean, 61) years) were examined using real-time linear-array and convex US units with 3.5-MHz and 10-MHz transducers. Normal capsular morphology was studied in 30 patients with total hip replacements who had been asymptomatic for at least one year, and the prosthetic joint infection demonstrated in six of the 17 symptomatic patients was confirmed at surgery or by US-guided aspiration. Sonograms indicated that a normal pseudocapsule lay straight over the neck of the prosthesis or was slightly convex toward the neck, and that the mean bone-to-pseudocapsule distance was 2.9 mm. In the 11 symptomatic patients in whom cultures revealed no evidence of infection, the mean distance was 4.7 mm; in the remaining six patients, whose joints were infected (a condition strongly suggested by the presence of extracapsular fluid), the mean distance was 5.5 mm, with no significant difference between the two groups. Sonography can be used to evaluate normal capsular morphology after total hip replacement and to diagnose infection around hip prostheses. In all patients in whom sonography revealed the presence of extra-articular fluid, infection had occurred.

  1. Analysis of Advancement and Attrition in the Military Ceremonial Units

    National Research Council Canada - National Science Library

    Hostetler, Elizabeth

    1997-01-01

    ... their normal career progression on hold. Information on individuals who entered the military service during fiscal years 1986 to 1995 and were assigned to one of the ceremonial units was collected...

  2. Synthesis of a posterior indicator protein in normal embryos and double abdomens of Smittia sp. (Chironomidae, Diptera).

    Science.gov (United States)

    Jäckle, H; Kalthoff, K

    1980-01-01

    In embryos of the chironomid midge Smittia, synthesis of a posterior indicator protein designated PI1 (Mr approximately 50,000; pI approximately 5.5) forecasts development of an abdomen as opposed to head and thorax. The protein is synthesized several hours before germ anlage formation. In normal embryos at early blastoderm stages, synthesis of PI1 is restricted to posterior embryonic fragments but not to pole cells. In "double-abdomen" embryos, a mirror-image duplication of the abdomen is formed by cells that would otherwise develop into head and thorax. Embryos were programmed for double-abdomen development by UV irradiation of the anterior pole, and half of them were reprogrammed for normal development by subsequent exposure to visible light (photoreversal). Correspondingly, PI1 was synthesized in anterior fragments of UV-irradiated embryos but not after photoreversal. In a control experiment, UV irradiation of the posterior pole caused neither double-abdomen formation nor PI1 synthesis in anterior fragments. The identity of PI1 formed in anterior fragments of prospective double abdomens with the protein found in posterior fragments was revealed by two-dimensional gel electrophoresis and limited proteolysis. Suppression of PI1 synthesis in anterior fragments of normal embryos is ascribed to the activity of cytoplasmic ribonucleoprotein particles thought to act as anterior determinants. PMID:6935679

  3. Perturbations and quasi-normal modes of black holes in Einstein-Aether theory

    International Nuclear Information System (INIS)

    Konoplya, R.A.; Zhidenko, A.

    2007-01-01

    We develop a new method for calculation of quasi-normal modes of black holes, when the effective potential, which governs black hole perturbations, is known only numerically in some region near the black hole. This method can be applied to perturbations of a wide class of numerical black hole solutions. We apply it to the black holes in the Einstein-Aether theory, a theory where general relativity is coupled to a unit time-like vector field, in order to observe local Lorentz symmetry violation. We found that in the non-reduced Einstein-Aether theory, real oscillation frequency and damping rate of quasi-normal modes are larger than those of Schwarzschild black holes in the Einstein theory

  4. Uncertainty relations for approximation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-05-27

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  5. Uncertainty relations for approximation and estimation

    International Nuclear Information System (INIS)

    Lee, Jaeha; Tsutsui, Izumi

    2016-01-01

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  6. Diophantine approximation

    CERN Document Server

    Schmidt, Wolfgang M

    1980-01-01

    "In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approxi- mations to algebraic numbers. The present volume is an ex- panded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)

  7. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  8. Approximation of the inverse G-frame operator

    Indian Academy of Sciences (India)

    ... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximation of the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of a G-frame operator can be approximated as close as we like using finite-dimensional linear algebra.

  9. Salt dependence of compression normal forces of quenched polyelectrolyte brushes

    Science.gov (United States)

    Hernandez-Zapata, Ernesto; Tamashiro, Mario N.; Pincus, Philip A.

    2001-03-01

    We obtained mean-field expressions for the compression normal forces between two identical opposing quenched polyelectrolyte brushes in the presence of monovalent salt. The brush elasticity is modeled using the entropy of ideal Gaussian chains, while the entropy of the microions and the electrostatic contribution to the grand potential is obtained by solving the non-linear Poisson-Boltzmann equation for the system in contact with a salt reservoir. For the polyelectrolyte brush we considered both a uniformly charged slab as well as an inhomogeneous charge profile obtained using a self-consistent field theory. Using the Derjaguin approximation, we related the planar-geometry results to the realistic two-crossed-cylinders experimental setup. Theoretical predictions are compared to experimental measurements (Marc Balastre's abstract, APS March 2001 Meeting) of the salt dependence of the compression normal forces between two quenched polyelectrolyte brushes formed by the adsorption of diblock copolymers poly(tert-butyl styrene)-sodium poly(styrene sulfonate) [PtBs/NaPSS] onto an octadecyltriethoxysilane (OTE) hydrophobically modified mica, as well as onto bare mica.

  10. Optical approximation in the theory of geometric impedance

    International Nuclear Information System (INIS)

    Stupakov, G.; Bane, K.L.F.; Zagorodnov, I.

    2007-02-01

    In this paper we introduce an optical approximation into the theory of impedance calculation, one valid in the limit of high frequencies. This approximation neglects diffraction effects in the radiation process, and is conceptually equivalent to the approximation of geometric optics in electromagnetic theory. Using this approximation, we derive equations for the longitudinal impedance for arbitrary offsets, with respect to a reference orbit, of source and test particles. With the help of the Panofsky-Wenzel theorem we also obtain expressions for the transverse impedance (also for arbitrary offsets). We further simplify these expressions for the case of the small offsets that are typical for practical applications. Our final expressions for the impedance, in the general case, involve two dimensional integrals over various cross-sections of the transition. We further demonstrate, for several known axisymmetric examples, how our method is applied to the calculation of impedances. Finally, we discuss the accuracy of the optical approximation and its relation to the diffraction regime in the theory of impedance. (orig.)

  11. APPROXIMATION OF FREE-FORM CURVE – AIRFOIL SHAPE

    Directory of Open Access Journals (Sweden)

    CHONG PERK LIN

    2013-12-01

    Full Text Available Approximation of free-form shapes is essential in numerous engineering applications, particularly in the automotive and aircraft industries. Commercial CAD software for the approximation of free-form shapes is based almost exclusively on parametric polynomials and rational parametric polynomials. A parametric curve is defined by a vector function of one independent variable, R(u) = (x(u), y(u), z(u)), where 0 ≤ u ≤ 1. The Bézier representation is one of these parametric functions and is widely used in the approximation of free-form shapes. Given a string of points, assumed sufficiently dense to characterise the airfoil shape, it is desirable to approximate the shape with a Bézier representation. The expectation is that the representing function is close to the shape within an acceptable working tolerance. In this paper, the aim is to explore the use of manual and automated methods for approximating the section curve of an airfoil with a Bézier representation.
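
Bézier evaluation of the kind used in such approximations can be sketched with de Casteljau's algorithm; the control points below are hypothetical, chosen only to suggest an airfoil-like upper surface, and are not taken from the paper.

```python
def de_casteljau(points, u):
    """Evaluate a Bezier curve R(u), 0 <= u <= 1, defined by its control
    points, via repeated linear interpolation (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Replace each adjacent pair of points by its interpolation at u.
        pts = [tuple((1.0 - u) * a + u * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical control points loosely suggesting an airfoil upper surface.
ctrl = [(0.0, 0.0), (0.1, 0.08), (0.5, 0.12), (1.0, 0.0)]
mid = de_casteljau(ctrl, 0.5)  # the curve interpolates only the end points
```

The curve passes through the first and last control points exactly, while the interior points only pull the shape towards them, which is what makes fitting a Bézier representation to a dense string of sample points a non-trivial approximation problem.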

  12. A 1.5 GFLOPS Reciprocal Unit for Computer Graphics

    DEFF Research Database (Denmark)

    Nannarelli, Alberto; Rasmussen, Morten Sleth; Stuart, Matthias Bo

    2006-01-01

    The reciprocal operation 1/d is a frequent operation performed in graphics processors (GPUs). In this work, we present the design of a radix-16 reciprocal unit based on an algorithm combining the traditional digit-by-digit algorithm and the approximation of the reciprocal by one Newton-Raphson iteration...
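
The Newton-Raphson refinement mentioned in the abstract can be illustrated in isolation. The sketch below assumes a textbook linear seed over a normalised operand in place of the digit-by-digit stage of the actual hardware unit; the constants and structure are illustrative, not the paper's radix-16 design.

```python
def reciprocal(d, iterations=3):
    """Approximate 1/d with Newton-Raphson: x_{k+1} = x_k * (2 - d * x_k).
    The linear seed below stands in for the table/digit-recurrence stage a
    hardware unit would use; it is an illustrative assumption."""
    assert d > 0.0
    # Normalise d into [0.5, 1) so the seed error is small enough to converge.
    e = 0
    while d >= 1.0:
        d *= 0.5
        e += 1
    while d < 0.5:
        d *= 2.0
        e -= 1
    x = 48.0 / 17.0 - (32.0 / 17.0) * d  # classic minimax seed on [0.5, 1)
    for _ in range(iterations):
        x = x * (2.0 - d * x)  # each iteration roughly doubles the precision
    return x * 2.0 ** (-e)

print(reciprocal(7.0))  # close to 1/7
```

The quadratic convergence of the iteration is what makes combining it with a digit-recurrence first stage attractive: a modest-precision partial result is enough for one final iteration to reach full working precision.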

  13. On the non-existence of a Bartlett correction for unit root tests

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet; Wood, Andrew T.A.

    1997-01-01

    There has been considerable recent interest in testing for a unit root in autoregressive models, especially in the context of cointegration models in econometrics. The likelihood ratio test for a unit root has non-standard asymptotic behaviour. In particular, when the errors are Gaussian, the lim...... for improved distributional approximations, and the question of whether W admits a Bartlett correction is of interest. In this note we establish that a Bartlett correction does not exist in the simplest unit root model. © 1997 Elsevier Science B.V....

  14. Conference on Abstract Spaces and Approximation

    CERN Document Server

    Szökefalvi-Nagy, B; Abstrakte Räume und Approximation; Abstract spaces and approximation

    1969-01-01

    The present conference took place at Oberwolfach, July 18-27, 1968, as a direct follow-up on a meeting on Approximation Theory [1] held there from August 4-10, 1963. The emphasis was on theoretical aspects of approximation, rather than the numerical side. Particular importance was placed on the related fields of functional analysis and operator theory. Thirty-nine papers were presented at the conference and one more was subsequently submitted in writing. All of these are included in these proceedings. In addition there is a report on new and unsolved problems based upon a special problem session and later communications from the participants. A special role is played by the survey papers also presented in full. They cover a broad range of topics, including invariant subspaces, scattering theory, Wiener-Hopf equations, interpolation theorems, contraction operators, approximation in Banach spaces, etc. The papers have been classified according to subject matter into five chapters, but it needs littl...

  15. Normal Pressure Hydrocephalus (NPH)

    Science.gov (United States)

    Normal pressure hydrocephalus is a brain disorder that occurs when excess cerebrospinal fluid ...

  16. Unit root tests for cross-sectionally dependent panels : The influence of observed factors

    NARCIS (Netherlands)

    Becheri, I.G.; Drost, F.C.; van den Akker, R.

    This paper considers a heterogeneous panel unit root model with cross-sectional dependence generated by a factor structure—the factor common to all units being an observed covariate. The model is shown to be Locally Asymptotically Mixed Normal (LAMN), with the random part of the limiting Fisher

  17. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    Science.gov (United States)

    Weinberg, G V

    2016-01-01

    Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible not only to assess when the approximation is valid, but also to determine, for a given Pareto model, the optimal Exponential approximation.
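
As a hedged sketch of the idea (not the paper's derivation), the divergence can be estimated by quadrature for a Lomax (Pareto type II) clutter model matched in mean to a unit-mean exponential; the shape values used below are arbitrary.

```python
import math

def kl_lomax_vs_exponential(shape, n=200000, upper=200.0):
    """Trapezoidal estimate of KL(f || g), where f is a unit-mean Lomax
    (Pareto type II) density with the given shape (> 1) and g is the
    unit-mean exponential density."""
    scale = shape - 1.0                 # Lomax mean = scale/(shape - 1) = 1
    def integrand(x):
        f = (shape / scale) * (1.0 + x / scale) ** (-(shape + 1.0))
        g = math.exp(-x)
        return f * math.log(f / g)
    h = upper / n
    total = 0.5 * (integrand(0.0) + integrand(upper))
    for i in range(1, n):
        total += integrand(i * h)
    return total * h

# A heavier-tailed Lomax (small shape) is farther from the exponential.
kl_small = kl_lomax_vs_exponential(3.0)
kl_large = kl_lomax_vs_exponential(20.0)
```

For this mean-matched pair the divergence reduces to the closed form ln(shape/(shape-1)) - 1/shape, which the quadrature reproduces; the divergence shrinking as the shape grows is exactly the regime in which the simpler exponential-clutter detector is justified.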

  18. Diagonal Pade approximations for initial value problems

    International Nuclear Information System (INIS)

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
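
The lowest diagonal Padé approximant of the evolution operator, exp(z) ≈ (1 + z/2)/(1 - z/2), gives the familiar Crank-Nicolson step. The scalar test problem below is only a toy illustration of the idea, not the polynomial-factoring scheme of the paper.

```python
import math

def pade11_step(y, a, h):
    """Advance dy/dt = a*y by one step of size h using the [1/1] diagonal
    Pade approximant of the propagator: exp(a*h) ~ (1 + a*h/2)/(1 - a*h/2)."""
    z = a * h
    return y * (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

def integrate(a, t, steps):
    """Repeatedly apply the Pade step starting from y(0) = 1 up to time t."""
    h = t / steps
    y = 1.0
    for _ in range(steps):
        y = pade11_step(y, a, h)
    return y

approx = integrate(-1.0, 1.0, 100)  # tracks exp(-1) to second order in h
```

Because |(1 + z/2)/(1 - z/2)| < 1 whenever Re z < 0, the step remains bounded even for stiff decay rates, which is one reason diagonal Padé approximants are favoured for initial value problems.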

  19. Frictional response of simulated faults to normal stresses perturbations probed with ultrasonic waves

    Science.gov (United States)

    Shreedharan, S.; Riviere, J.; Marone, C.

    2017-12-01

    We report on a suite of laboratory friction experiments conducted on saw-cut Westerly Granite surfaces to probe frictional response to step changes in normal stress and loading rate. The experiments are conducted to illuminate the fundamental processes that yield friction rate and state dependence. We quantify the microphysical frictional response of the simulated fault surfaces to normal stress steps, ranging from 1% to 600% step increases and decreases from a nominal baseline normal stress. We measure directly the fault slip rate and account for changes in slip rate with changes in normal stress, and complement mechanical data acquisition by continuously probing the faults with ultrasonic pulses. We conduct the experiments at room temperature and humidity conditions in a servo-controlled biaxial testing apparatus in the double direct shear configuration. The samples are sheared over a range of velocities, from 0.02 to 100 μm/s. We report observations of transient shear stress and friction evolution with step increases and decreases in normal stress. Specifically, we show that, at low shear velocities and small increases in normal stress ( 5% increases), the shear stress evolves immediately with normal stress. We show that the excursions in slip rate resulting from the changes in normal stress must be accounted for in order to predict fault strength evolution. Ultrasonic wave amplitudes first increase immediately in response to normal stress steps, then decrease approximately linearly to a new steady-state value, in part due to changes in fault slip rate. Previous descriptions of frictional state evolution during normal stress perturbations have not adequately accounted for the effect of large slip velocity excursions. Here, we attempt to do so by using the measured ultrasonic amplitudes as a proxy for frictional state during transient shear stress evolution. Our work aims to improve understanding of induced and triggered seismicity with focus on

  20. Capacitive digital-to-analogue converters with least significant bit down in differential successive approximation register ADCs

    Directory of Open Access Journals (Sweden)

    Lei Sun

    2014-01-01

    Full Text Available This Letter proposes a least-significant-bit-down switching scheme for the capacitive digital-to-analogue converters (CDACs) of a successive approximation register analogue-to-digital converter (ADC). For the same unit capacitor, the chip area and the switching energy are halved without increasing the complexity of the logic circuits. Compared with a conventional CDAC, when the scheme is applied to one of the most efficient switching schemes, the Vcm-based structure, it achieves 93% less switching energy and 75% less chip area with the same differential non-linearity (DNL)/integral non-linearity (INL) performance.

  1. A test of the adhesion approximation for gravitational clustering

    Science.gov (United States)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  2. Hydrogen: Beyond the Classic Approximation

    International Nuclear Information System (INIS)

    Scivetti, Ivan

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.

  3. Myocardial thallium-201 kinetics in normal and ischemic myocardium

    International Nuclear Information System (INIS)

    Grunwald, A.M.; Watson, D.D.; Holzgrefe, H.H. Jr.; Irving, J.F.; Beller, G.A.

    1981-01-01

    The net myocardial accumulation of thallium-201 after injection depends upon the net balance between continuing myocardial extraction from low levels of recirculating thallium in the blood compartment and the net rate of efflux of thallium from the myocardium into the extracardiac blood pool. These experiments were designed to measure separately the myocardial extraction and intrinsic myocardial efflux of thallium-201 at normal and at reduced rates of myocardial blood flow. The average myocardial extraction fraction at normal blood flow in 10 anesthetized dogs was 82 +/- 6% (+/- SD) at normal coronary arterial perfusion pressures and increased insignificantly, to 85 +/- 7%, at coronary perfusion pressures of 10-35 mm Hg. At normal coronary arterial perfusion pressures in 12 additional dogs, the intrinsic thallium washout in the absence of systemic recirculation had a half-time (T 1/2) of 54 +/- 7 minutes. The intrinsic cellular washout began to slow as distal perfusion pressures fell below 60 mm Hg, the T 1/2 increasing markedly to 300 minutes at perfusion pressures of 25-30 mm Hg. A second, more rapid component of intrinsic thallium washout (T 1/2 2.5 minutes), representing approximately 7% of the total initially extracted myocardial thallium, was observed. The faster washout component is presumed to be due to washout of interstitial thallium unextracted by myocardial cells, whereas the slower component is presumed due to intracellular washout. The net clearance time of thallium measured after i.v. injection is much longer than the intrinsic myocardial cellular washout time because of continuous replacement of myocardial thallium from systemic recirculation. Myocardial redistribution of thallium-201 in states of chronically reduced perfusion cannot be the result of increased myocardial extraction efficiency, but rather is the result of the slower intrinsic cellular washout rate at reduced perfusion levels.

  4. Approximate soil-structure interaction with separation of base mat from soil (lifting-off)

    International Nuclear Information System (INIS)

    Wolf, J.P.

    1975-01-01

    In reactor buildings having a shield-building (outer concrete shell) with a large mass, which is particularly the case if the plant is designed for airplane crash, large overturning moments are developed by earthquake loading. In this paper, the standard linear elastic half-space theory is used in the soil-structure interaction model. For a circular base mat, if the overturning moment exceeds the product of the normal force (dead weight minus the effect of the vertical earthquake) and one-third of the radius, then tension will occur in the area of contact, assuming distribution of stress as in the static case. For a strip foundation the same occurs if the eccentricity of the normal force exceeds a quarter of the total width. As tension is incompatible with the constitutive law of soils, the base mat will become partially separated from the foundation. Assuming that only normal stresses in compression and corresponding shear stresses (friction) can occur in the area of contact, a method of analyzing soil-structure interaction including lifting-off is derived, which otherwise is based on elastic behaviour of the soil. First a rigorous iterative procedure is outlined based on (complex) dynamic influence matrices of displacements on the surface of an elastic half-space at a certain distance from a rigid disc or strip. A similar, approximate method is then developed which is used throughout the paper. As an example the dynamic response of the reactor building of a 1000 Megawatt plant to earthquake motion is calculated. The results of the analysis, including lift-off, are compared to those of the linear case. (Auth.)

  5. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods

  6. On transparent potentials: a Born approximation study

    International Nuclear Information System (INIS)

    Coudray, C.

    1980-01-01

    In the frame of the scattering inverse problem at fixed energy, a class of potentials transparent in the Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Amongst them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in the Born approximation are then stated, and bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy and for large values of the angular momentum, are found to be independent of the energy.

  7. An environmental assessment of United States drinking water watersheds

    Science.gov (United States)

    James Wickham; Timothy Wade; Kurt Riitters

    2011-01-01

    There is an emerging recognition that natural lands and their conservation are important elements of a sustainable drinking water infrastructure. We conducted a national, watershed-level environmental assessment of 5,265 drinking water watersheds using data on land cover, hydrography and conservation status. Approximately 78% of the conterminous United States...

  8. Approximate supernova remnant dynamics with cosmic ray production

    Science.gov (United States)

    Voelk, H. J.; Drury, L. O.; Dorfi, E. A.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of cosmic rays. Recent shock acceleration models treating the cosmic rays (CRs) as test particles in a prescribed supernova remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the interstellar medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test particle approximation.

  9. Approximate supernova remnant dynamics with cosmic ray production

    International Nuclear Information System (INIS)

    Voelk, H.J.; Drury, L.O.; Dorfi, E.A.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of cosmic rays. Recent shock acceleration models treating the cosmic rays (CRs) as test particles in a prescribed supernova remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the interstellar medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test particle approximation

  10. Data Normalization of (1)H NMR Metabolite Fingerprinting Data Sets in the Presence of Unbalanced Metabolite Regulation.

    Science.gov (United States)

    Hochrein, Jochen; Zacharias, Helena U; Taruttis, Franziska; Samol, Claudia; Engelmann, Julia C; Spang, Rainer; Oefner, Peter J; Gronwald, Wolfram

    2015-08-07

    Data normalization is an essential step in NMR-based metabolomics. Conducted properly, it improves data quality and removes unwanted biases. The choice of the appropriate normalization method is critical and depends on the inherent properties of the data set in question. In particular, the presence of unbalanced metabolic regulation, where the different specimens and cohorts under investigation do not contain approximately equal shares of up- and down-regulated features, may strongly influence data normalization. Here, we demonstrate the suitability of the Shapiro-Wilk test to detect such unbalanced regulation. Next, employing a Latin-square design consisting of eight metabolites spiked into a urine specimen at eight different known concentrations, we show that commonly used normalization and scaling methods fail to retrieve true metabolite concentrations in the presence of increasing amounts of glucose added to simulate unbalanced regulation. However, by learning the normalization parameters on a subset of nonregulated features only, Linear Baseline Normalization, Probabilistic Quotient Normalization, and Variance Stabilization Normalization were found to account well for different dilutions of the samples without distorting the true spike-in levels even in the presence of marked unbalanced metabolic regulation. Finally, the methods described were applied successfully to a real world example of unbalanced regulation, namely, a set of plasma specimens collected from patients with and without acute kidney injury after cardiac surgery with cardiopulmonary bypass use.
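
A minimal sketch of Probabilistic Quotient Normalization as described above, assuming the feature-wise median spectrum as the reference; the spectra below are fabricated solely to show a dilution being undone.

```python
from statistics import median

def pqn(samples):
    """Probabilistic Quotient Normalization: each sample (a list of feature
    intensities) is divided by the median of its feature-wise quotients
    against a reference spectrum, taken here as the feature-wise median
    across all samples."""
    n_features = len(samples[0])
    reference = [median(s[i] for s in samples) for i in range(n_features)]
    normalized = []
    for s in samples:
        # The median quotient estimates the sample's overall dilution factor.
        dilution = median(si / ri for si, ri in zip(s, reference) if ri > 0)
        normalized.append([si / dilution for si in s])
    return normalized

# Fabricated spectra: a base, a 2x dilution of it, and a 1.5x concentration.
base = [10.0, 4.0, 7.0, 1.0, 3.0]
restored = pqn([base, [v / 2 for v in base], [v * 1.5 for v in base]])
```

Because the dilution factor is a median over all features, a few strongly regulated metabolites do not distort it; the unbalanced-regulation case studied in the paper is precisely where this robustness breaks down unless the quotients are learned on non-regulated features only.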

  11. Geometric convergence of some two-point Pade approximations

    International Nuclear Information System (INIS)

    Nemeth, G.

    1983-01-01

    The geometric convergence of some two-point Pade approximations is investigated on the real positive axis and on certain infinite sets of the complex plane. Some theorems concerning the geometric convergence of Pade approximations are proved, and bounds on geometric convergence rates are given. The results may be of interest for applications both in numerical computations and in approximation theory. As a specific case, the numerical calculations connected with the plasma dispersion function may be performed. (D.Gy.)

  12. Standard filter approximations for low power Continuous Wavelet Transforms.

    Science.gov (United States)

    Casson, Alexander J; Rodriguez-Villegas, Esther

    2010-01-01

    Analogue domain implementations of the Continuous Wavelet Transform (CWT) have proved popular in recent years as they can be implemented at very low power consumption levels. This is essential for use in wearable, long-term physiological monitoring systems. Present analogue CWT implementations rely on taking a mathematical approximation of the wanted mother wavelet function to give a filter transfer function that is suitable for circuit implementation. This paper investigates the use of standard filter approximations (Butterworth, Chebyshev, Bessel) as an alternative wavelet approximation technique. This extends the number of approximation techniques available for generating analogue CWT filters. An example ECG analysis shows that signal information can be successfully extracted using these CWT approximations.

  13. Ordering, symbols and finite-dimensional approximations of path integrals

    International Nuclear Information System (INIS)

    Kashiwa, Taro; Sakoda, Seiji; Zenkin, S.V.

    1994-01-01

    We derive a general form of finite-dimensional approximations of path integrals for both bosonic and fermionic canonical systems in terms of symbols of operators determined by operator ordering. We argue that for a system with a given quantum Hamiltonian such approximations are independent of the type of symbols up to terms of O(ε), where ε is the infinitesimal time interval determining the accuracy of the approximations. A new class of such approximations is found for both c-number and Grassmannian dynamical variables. The actions determined by the approximations are non-local and have no classical continuum limit except in the cases of pq- and qp-ordering. As an explicit example the fermionic oscillator is considered in detail. (author)

  14. Hardness of approximation for strip packing

    DEFF Research Database (Denmark)

    Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin

    2017-01-01

    Strip packing is a classical packing problem, where the goal is to pack a set of rectangular objects into a strip of a given width, while minimizing the total height of the packing. The problem has multiple applications, for example, in scheduling and stock-cutting, and has been studied extensively......)-approximation by two independent research groups [FSTTCS 2016, WALCOM 2017]. This raises a question whether strip packing with polynomially bounded input data admits a quasi-polynomial time approximation scheme, as is the case for related two-dimensional packing problems like maximum independent set of rectangles or two...

  15. Adaptive control using neural networks and approximate models.

    Science.gov (United States)

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.

  16. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, which show how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Estrogen Receptor and Progesterone Receptor Expression in Normal Terminal Duct Lobular Units Surrounding Invasive Breast Cancer

    Science.gov (United States)

    Yang, Xiaohong R.; Figueroa, Jonine D.; Hewitt, Stephen M.; Falk, Roni T.; Pfeiffer, Ruth M.; Lissowska, Jolanta; Peplonska, Beata; Brinton, Louise A.; Garcia-Closas, Montserrat; Sherman, Mark E.

    2014-01-01

    Introduction Molecular and morphological alterations related to carcinogenesis have been found in terminal duct lobular units (TDLUs), the microscopic structures from which most breast cancer precursors and cancers develop, and therefore, analysis of these structures may reveal early changes in breast carcinogenesis and etiologic heterogeneity. Accordingly, we evaluated relationships of breast cancer risk factors and tumor pathology to estrogen receptor (ER) and progesterone receptor (PR) expression in TDLUs surrounding breast cancers. Methods We analyzed 270 breast cancer cases included in a population-based breast cancer case-control study conducted in Poland. TDLUs were mapped in relation to breast cancer: within the same block as the tumor (TDLU-T), proximal to tumor (TDLU-PT), or distant from (TDLU-DT). ER/PR was quantitated using image analysis of immunohistochemically stained TDLUs prepared as tissue microarrays. Results In surgical specimens containing ER-positive breast cancers, ER and PR levels were significantly higher in breast cancer cells than in normal TDLUs, and higher in TDLU-T than in TDLU-DT or TDLU-PT, which showed similar results. Analyses combining DT-/PT TDLUs within subjects demonstrated that ER levels were significantly lower in premenopausal women vs. postmenopausal women (odds ratio [OR]=0.38, 95% confidence interval [CI]=0.19, 0.76, P=0.0064) and among recent or current menopausal hormone therapy users compared with never users (OR=0.14, 95% CI=0.046–0.43, Ptrend=0.0006). Compared with premenopausal women, TDLUs of postmenopausal women showed lower levels of PR (OR=0.90, 95% CI=0.83–0.97, Ptrend=0.007). ER and PR expression in TDLUs was associated with epidermal growth factor receptor (EGFR) expression in invasive tumors (P=0.019 for ER and P=0.03 for PR), but not with other tumor features. Conclusions Our data suggest that TDLUs near breast cancers reflect field effects, whereas those at a distance demonstrate influences of breast

  18. Two-year outcome of normal-birth-weight infants admitted to a Singapore neonatal intensive care unit.

    Science.gov (United States)

    Lian, W B; Yeo, C L; Ho, L Y

    2002-03-01

    To describe the characteristics, the immediate and short-term outcome and predictors of mortality in normal-birth-weight (NBW) infants admitted to a tertiary neonatal intensive care unit (NICU) in Singapore. We retrospectively reviewed the medical records of 137 consecutive NBW infants admitted to the NICU of the Singapore General Hospital from January 1991 to December 1992. Data on the diagnoses, clinical presentation of illness, intervention received, complications and outcome, as well as follow-up patterns for the first 2 years of life, were collected and analysed. NBW NICU infants comprised 1.8% of births in our hospital and 40.8% of all NICU admissions. The main reasons for NICU admission were respiratory disorders (61.3%), congenital anomalies (15.3%) and asphyxia neonatorum (11.7%). Respiratory support was necessary in 81.8%. Among those ventilated, the only factor predictive of mortality was the mean inspired oxygen concentration. The mortality rate was 11.7%. Causes of death included congenital anomalies (43.75%), asphyxia neonatorum (31.25%) and pulmonary failure secondary to meconium aspiration syndrome (12.5%). The median hospital stay among survivors (88.3%) was 11.0 (range, 4 to 70) days. Of 42 patients (out of 117 survivors) who received follow-up for at least 6 months, 39 infants did not have evidence of any major neurodevelopmental abnormalities at their last follow-up visit, prior to or at 2 years of age. Despite their short hospital stay (compared to very-low-birth-weight infants), the high volume of NBW admissions makes the care of this population an important area for review, to enhance advances in, and hence reduce the cost of, NICU care. With improved antenatal diagnostic techniques (allowing earlier and more accurate diagnosis of congenital malformations) and better antenatal and perinatal care (allowing better management of at-risk pregnancies), it is anticipated that there should be a reduction in such admissions with better

  19. The Issue of Unit Constraints and the Non-Confiscatory Electricity Market

    DEFF Research Database (Denmark)

    Haji Bashi, Mazaher; Rahmati, Iman; Bak, Claus Leth

    2017-01-01

    Security-constrained unit commitment is devised to derive the generation unit schedule in a deregulated environment. Generation bids, transmission system constraints and generation unit constraints are thoroughly considered in this optimization problem. It is acceptable that the transmission system...... normal condition constraints may affect the economic opportunities of the generation companies in the electricity market. Transmission system limitations are the inherent limits of the market environment, but this is not true for the generation unit constraints. It means that the generation unit...... constraint of a certain player should not affect the economic opportunities of its rivals. If this happens, generation units can appeal to the electricity market regulatory board. In this paper the effect of generation unit constraints on the market outcome is discussed. A fair mechanism is introduced in which...

  20. The Hartree-Fock seniority approximation

    International Nuclear Information System (INIS)

    Gomez, J.M.G.; Prieto, C.

    1986-01-01

    A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme, respectively. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)

  1. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim

    2018-04-04

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions. These regions are of primordial interest as small probability values have to be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most of the existing approaches have focused on estimating the right-tail of the sum of log-normal random variables (RVs). Here, we instead consider the left-tail of the sum of correlated log-normal variates with Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of the left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performance of the proposed estimator in comparison with existing ones.
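
    To see why variance reduction matters in this setting, the left-tail probability of a sum of Gaussian-copula log-normals can be sketched with a plain Monte Carlo estimator (not the paper's importance-sampling/control-variate scheme); the dimension, correlation and threshold below are illustrative. In the deep tail, naive sampling typically scores few or no hits, which is the failure mode the proposed estimator addresses.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 4                                  # number of summed log-normal variates (illustrative)
    rho = 0.5                              # assumed common correlation
    cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)
    L = np.linalg.cholesky(cov)

    n = 200_000
    z = rng.standard_normal((n, d)) @ L.T  # correlated Gaussians (Gaussian copula)
    s = np.exp(z).sum(axis=1)              # sum of correlated log-normals

    gamma = 1.0                            # left-tail threshold: estimate P(S <= gamma)
    p_hat = np.mean(s <= gamma)
    se = np.sqrt(p_hat * (1 - p_hat) / n)  # naive-MC standard error
    print(f"P(S <= {gamma}) ~ {p_hat:.2e} +/- {1.96 * se:.1e}")
    ```

    The relative error of this naive estimator blows up as gamma decreases, whereas the paper's combined estimator has asymptotically vanishing relative error.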

  2. HYPERVASCULAR LIVER LESIONS IN RADIOLOGICALLY NORMAL LIVER.

    Science.gov (United States)

    Amico, Enio Campos; Alves, José Roberto; Souza, Dyego Leandro Bezerra de; Salviano, Fellipe Alexandre Macena; João, Samir Assi; Liguori, Adriano de Araújo Lima

    2017-01-01

    Hypervascular liver lesions represent a diagnostic challenge. The aim was to identify risk factors for cancer in patients with non-hemangiomatous hypervascular hepatic lesions in radiologically normal liver. This prospective study included patients with hypervascular liver lesions in radiologically normal liver. The diagnosis was made by biopsy or was presumed on the basis of radiologic stability over a follow-up period of one year. Patients with cirrhosis or with typical imaging characteristics of haemangioma were excluded. Eighty-eight patients were included. The average age was 42.4 years. The lesions were solitary and between 2-5 cm in size in most cases. Liver biopsy was performed in approximately 1/3 of cases. The lesions were benign or most likely benign in 81.8%, while cancer was diagnosed in 12.5% of cases. Univariate analysis showed that age >45 years, >3 nodules (p=0.003) and elevated alkaline phosphatase (p=0.013) were significant risk factors for cancer. It is safe to observe hypervascular liver lesions in normal liver in patients up to 45 years of age with normal alanine aminotransferase, up to three nodules and no personal history of cancer. Lesion biopsies are safe in patients with atypical lesions and define the treatment to be established for most of these patients. [Translated from Portuguese:] Hypervascular liver lesions represent a diagnostic challenge. The aim was to identify risk factors for cancer in patients with a non-hemangiomatous hypervascular hepatic lesion in radiologically normal liver. This prospective study included patients with hypervascular liver lesions in whom the final diagnosis was obtained by histopathological examination or presumed after a minimum follow-up of one year. A previous diagnosis of cirrhosis or a radiological diagnosis of haemangioma was an exclusion criterion. Eighty-eight patients were included. The female-to-male ratio was 5.3:1. The mean age was 42.4 years. In most cases the liver lesions were solitary and

  3. Appearance of normal brain maturation on 1.5-T MR images

    International Nuclear Information System (INIS)

    Barkovich, A.J.; Kjos, B.; Jackson, D.E. Jr.; Norman, D.

    1987-01-01

    To investigate the pattern of normal white-matter maturation as demonstrated by high-field-strength MR imaging, 82 normal infants were examined using a 1.5-T unit with spin-echo T1-weighted and T2-weighted pulse sequences. The infants ranged in age from 4 days to 2 years. The scans were assessed for qualitative changes of white matter relative to gray matter and correlated with the patient's age in 14 anatomic areas of the brain. The MR images showed that changes of brain maturation occur in an orderly manner, commencing in the brain stem and progressing to the cerebellum and the cerebrum. Changes from brain myelination were seen earlier on T1-weighted images than on T2-weighted images, possibly because of T1 shortening by the components of the developing myelin sheaths. The later changes on the T2-weighted images correlated best with the development of myelination, as demonstrated by histochemical methods. T1-weighted images were most useful to monitor normal brain development in the first 6 to 8 months of life; T2-weighted images were more useful after 6 months. The milestones in the MR appearance of normal maturation of the brain are presented. Persistent areas of long T2 relaxation times are seen superior and dorsal to the ventricular trigone in all infants examined and should not be mistaken for ischemic change

  4. Quasi-fractional approximation to the Bessel functions

    International Nuclear Information System (INIS)

    Guerrero, P.M.L.

    1989-01-01

    In this paper the author presents a simple Quasi-Fractional Approximation for the Bessel functions J_ν(x), (−1 ≤ ν < 0.5). It has been obtained by extending a previously published method which uses power series and asymptotic expansions simultaneously. The exact and approximated functions coincide in at least two digits for positive x and ν between −1 and 0.4
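
    The two regimes such an approximation bridges can be checked numerically. The sketch below (restricted to ν = 0 for brevity; the paper covers −1 ≤ ν < 0.5) evaluates J_0 from its integral representation and compares it with the leading large-x asymptotic term that a quasi-fractional form must reproduce:

    ```python
    import numpy as np

    def j0_exact(x, n=20000):
        # integral representation J_0(x) = (1/pi) * int_0^pi cos(x sin t) dt,
        # evaluated with a midpoint rule over [0, pi]
        t = (np.arange(n) + 0.5) * np.pi / n
        return np.cos(x * np.sin(t)).mean()

    def j0_asymptotic(x):
        # leading large-x asymptotic term: sqrt(2/(pi x)) * cos(x - pi/4)
        return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - np.pi / 4.0)

    for x in (0.5, 5.0, 20.0):
        print(f"x={x:5.1f}  J0={j0_exact(x):+.6f}  asymptotic={j0_asymptotic(x):+.6f}")
    ```

    For small x the power series dominates and the asymptotic form is poor; by x = 20 the two agree to about three decimals, which is the behaviour a blended power-series/asymptotic approximation exploits.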

  5. Scattering theory and effective medium approximations to heterogeneous materials

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1977-01-01

    The formal analogy existing between problems studied in the microscopic theory of disordered alloys and problems concerned with the effective (macroscopic) behavior of heterogeneous materials is discussed. Attention is focused on (1) analogous approximations (effective medium approximations) developed for the microscopic problems by scattering theory concepts and techniques, but for the macroscopic problems principally by intuitive means, (2) the link, provided by scattering theory, of the intuitively developed approximations to a well-defined perturbative analysis, (3) the possible presence of conditionally convergent integrals in effective medium approximations

  6. An Investigation into the Cost of Unit Testing on an Embedded System

    OpenAIRE

    Qiu, Wensi

    2011-01-01

    The quality of embedded software is important, especially for life-critical and mission-critical embedded systems. And software testing is a key activity to ensure the quality of embedded software. Both system testing and unit testing are vital to test embedded software. Unit testing is probably more important to ensure there are no latent faults. System testing is almost invariably done on a target system, but unit testing is normally done on a host system, as standard test frame...

  7. Approximate modal analysis using Fourier decomposition

    International Nuclear Information System (INIS)

    Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana

    2010-01-01

    The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in the eigenvalue calculation. After the calculation, the eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated in a way that some parts of the matrix that should not be approximated are not transformed but are fully preserved. In this paper we present a formulation that preserves central or edge parts of the matrix and compare it with the formulation that performs the transform on the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three approaches to matrix reduction is recommended.
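
    The reduction idea, projecting the matrix onto a subset of Fourier modes and solving a much smaller eigenproblem, can be illustrated on a case where the retained modes are exact: the 1D discrete Laplacian, whose eigenvectors are discrete sine (Fourier) modes. This is a toy sketch of the principle, not the authors' formulation:

    ```python
    import numpy as np

    n, k = 200, 20                       # full problem size, retained frequencies
    # Stiffness-like matrix: 1D discrete Laplacian with fixed ends
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    # Discrete sine modes (Fourier basis for fixed-fixed boundary conditions)
    j = np.arange(1, n + 1)
    Phi = np.array([np.sin(np.pi * m * j / (n + 1)) for m in range(1, k + 1)]).T
    Phi, _ = np.linalg.qr(Phi)           # orthonormalize the retained modes

    Kr = Phi.T @ K @ Phi                 # reduced k x k eigenproblem
    lam_red = np.sort(np.linalg.eigvalsh(Kr))
    lam_full = np.sort(np.linalg.eigvalsh(K))[:k]
    print(np.max(np.abs(lam_red - lam_full)))   # lowest modes are recovered
    ```

    Here the 200x200 eigenproblem collapses to a 20x20 one while reproducing the 20 lowest eigenvalues to machine precision; for general structural matrices the retained-frequency subspace is only approximate, which is what the paper's experiments quantify.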

  8. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
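
    The regression machinery underneath GAP, Gaussian process regression of reference energies on environment descriptors, can be sketched in one dimension. Everything here is a toy stand-in: a squared-exponential kernel on a scalar "descriptor" and synthetic "energies"; the real potential uses the SOAP descriptor and DFT training data.

    ```python
    import numpy as np

    def k(a, b, ell=0.5, sig=1.0):
        # squared-exponential kernel between descriptor values (toy choice)
        return sig**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    rng = np.random.default_rng(0)
    xt = np.linspace(-2.0, 2.0, 15)                       # "training configurations"
    yt = np.sin(2 * xt) + 0.01 * rng.standard_normal(15)  # noisy "reference energies"

    K = k(xt, xt) + 1e-4 * np.eye(15)   # kernel matrix plus noise/jitter term
    alpha = np.linalg.solve(K, yt)      # fitted GP weights

    xs = np.linspace(-2.0, 2.0, 7)
    pred = k(xs, xt) @ alpha            # posterior-mean "energy" prediction
    print(np.max(np.abs(pred - np.sin(2 * xs))))
    ```

    The prediction is a kernel-weighted sum over training points, which is how GAP expresses each atomic energy as a function of its local environment.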

  9. Development of the relativistic impulse approximation

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1985-01-01

    This talk contains three parts. Part I reviews the developments which led to the relativistic impulse approximation for proton-nucleus scattering. In Part II, problems with the impulse approximation in its original form - principally the low energy problem - are discussed and traced to pionic contributions. Use of pseudovector covariants in place of pseudoscalar ones in the NN amplitude provides more satisfactory low energy results, however, the difference between pseudovector and pseudoscalar results is ambiguous in the sense that it is not controlled by NN data. Only with further theoretical input can the ambiguity be removed. Part III of the talk presents a new development of the relativistic impulse approximation which is the result of work done in the past year and a half in collaboration with J.A. Tjon. A complete NN amplitude representation is developed and a complete set of Lorentz invariant amplitudes are calculated based on a one-meson exchange model and appropriate integral equations. A meson theoretical basis for the important pair contributions to proton-nucleus scattering is established by the new developments. 28 references

  10. Local approximation of a metapopulation's equilibrium.

    Science.gov (United States)

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
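
    For reference, the Levins equilibrium that the bounds are anchored to is p* = 1 − e/c for colonization rate c and extinction rate e. A quick numerical check with illustrative rates, Euler-integrating the Levins dynamics dp/dt = c·p·(1 − p) − e·p:

    ```python
    c, e = 1.0, 0.4          # colonization and extinction rates (illustrative)
    p_star = 1.0 - e / c     # Levins equilibrium occupation probability

    p, dt = 0.05, 0.01       # start far from equilibrium, small Euler step
    for _ in range(10_000):
        p += dt * (c * p * (1.0 - p) - e * p)

    print(p_star, p)         # the dynamics settle at p_star
    ```

    In the paper, the occupation probability of a patch at z is shown to be close to this p* computed with the local colonization and extinction rates appropriate to z.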

  11. Periodontal Disease and Oral Hygiene Among Children. United States.

    Science.gov (United States)

    National Center for Health Statistics (DHEW/PHS), Hyattsville, MD.

    Statistical data presented on periodontal disease and oral hygiene among noninstitutionalized children, aged 6-11, in the United States are based on a probability sample of approximately 7,400 children involved in a national health survey during 1963-65. The report contains estimates of the Periodontal Index (PI) and the Simplified Oral Hygiene…

  12. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  13. Hierarchical Bayes Small Area Estimation under a Unit Level Model with Applications in Agriculture

    Directory of Open Access Journals (Sweden)

    Nageena Nazir

    2016-09-01

    Full Text Available We study the Bayesian aspect of small area estimation using a unit-level model. In this paper we propose and evaluate a new prior distribution for the ratio of variance components in the unit-level model, in place of the uniform prior. To approximate the posterior moments of small area means, the Laplace approximation method is applied. This choice of prior avoids the extreme skewness usually present in the posterior distribution of variance components. This property leads to a more accurate Laplace approximation. We apply the proposed model to the analysis of horticultural data, and results from the model are compared with the frequentist approach and with the uniform-prior Bayesian model in terms of average relative bias, average squared relative bias and average absolute bias. The numerical results obtained highlight the superiority of the proposed prior over the uniform prior. Thus the Bayes estimators of small area means (with the new prior) have good frequentist properties such as MSE and ARB compared to other traditional methods, viz., direct, synthetic and composite estimators.
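
    The Laplace step can be illustrated on a toy posterior with a known answer (a Beta density, not the paper's unit-level model): expand the log-posterior around its mode and use the negative inverse of the second derivative as the variance of the approximating Gaussian.

    ```python
    import numpy as np

    a, b = 15.0, 7.0
    exact_mean = a / (a + b)                       # exact posterior mean

    logp = lambda t: (a - 1) * np.log(t) + (b - 1) * np.log(1 - t)  # log-density + const
    d2   = lambda t: -(a - 1) / t**2 - (b - 1) / (1 - t)**2         # its second derivative

    mode = (a - 1) / (a + b - 2)                   # analytic posterior mode
    var  = -1.0 / d2(mode)                         # Laplace (Gaussian) variance

    assert logp(mode) > max(logp(mode - 0.05), logp(mode + 0.05))   # mode is a maximum
    print(f"exact mean {exact_mean:.4f}, Laplace approximation N({mode:.4f}, {var:.5f})")
    ```

    The accuracy of this Gaussian surrogate degrades when the posterior is strongly skewed, which is exactly why the paper's prior, chosen to avoid extreme skewness in the variance-ratio posterior, yields a more accurate Laplace approximation.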

  14. Pion-nucleus cross sections approximation

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Polanski, A.; Sosnin, A.N.

    1990-01-01

    An analytical approximation of pion-nucleus elastic and inelastic interaction cross-sections is suggested, which can be applied at energies above several tens of MeV for nuclei heavier than beryllium. 3 refs.; 4 tabs.

  15. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization.

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-28

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  16. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-01

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  17. Investigation of behavior of the dynamic contact angle on the basis of the Oberbeck-Boussinesq approximation of the Navier-Stokes equations

    Directory of Open Access Journals (Sweden)

    Goncharova Olga

    2016-01-01

    Full Text Available Flows of a viscous incompressible liquid with a thermocapillary boundary are investigated numerically on the basis of the mathematical model that consists of the Oberbeck-Boussinesq approximation of the Navier-Stokes equations, kinematic and dynamic conditions at the free boundary and of the slip boundary conditions at solid walls. We assume that the constant temperature is kept on the solid walls. On the thermocapillary gas-liquid interface the condition of the third order for temperature is imposed. The numerical algorithm based on a finite-difference scheme of the second order approximation on space and time has been constructed. The numerical experiments are performed for water under conditions of normal and low gravity for different friction coefficients and different values of the interphase heat transfer coefficient.

  18. An approximate analytical solution for describing surface runoff and sediment transport over hillslope

    Science.gov (United States)

    Tao, Wanghai; Wang, Quanjiu; Lin, Henry

    2018-03-01

    Soil and water loss from farmland causes land degradation and water pollution, thus continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of parameter β (in the kinematic wave model) approximately two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating excess rainfall intensity as constant over a small time interval. For the sediment model, the recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) have been given in this study so that it is easier to use the model. These recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient method of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
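
    The steady-state kinematic-wave picture behind the overland flow model can be sketched directly: continuity with constant excess rainfall gives a unit discharge that grows linearly downslope, and a rating curve q = α·h^β with β ≈ 2 (the transitional regime used in the paper) gives the depth profile. The rainfall rate, rating coefficient and slope length below are illustrative values, not the paper's calibration.

    ```python
    import numpy as np

    r     = 1.0e-5   # excess rainfall intensity, m/s (illustrative)
    alpha = 2.0      # rating coefficient, depends on slope and roughness (assumed)
    beta  = 2.0      # transitional-regime exponent, as in the paper
    slope_len = 50.0 # slope length, m

    x = np.linspace(0.0, slope_len, 6)
    q = r * x                          # continuity: unit discharge grows linearly downslope
    h = (q / alpha) ** (1.0 / beta)    # flow depth from the rating curve q = alpha * h**beta
    for xi, qi, hi in zip(x, q, h):
        print(f"x={xi:5.1f} m   q={qi:.2e} m^2/s   h={hi:.2e} m")
    ```

    Treating the excess rainfall as constant over each small time interval, as the paper does, extends this steady-state picture to natural rainfall.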

  19. Construction: first of St. Lucie unit 2 successes

    International Nuclear Information System (INIS)

    Conway, W.F.

    1989-01-01

    The Nuclear Regulatory Commission (NRC) granted a full power operating license for St. Lucie Unit 2 on June 10, 1983, just six years after construction began. The industry average for nuclear power plant construction during this time was approximately ten years. The rate of completion had a positive effect on the cost of the facility. The price of the unit was $1.42 billion as compared to the $2 billion to $5 billion range experienced by other utilities for nuclear plants. These accomplishments were not serendipitous but the result of management techniques and personnel attitudes involved in the construction of the unit. More importantly, many of these same techniques and attitudes have now become part of a quality improvement program at St. Lucie and are reflected in its performance indicators. This paper analyzes the construction success of St. Lucie Unit 2 and demonstrates that excellent performance in the construction phase can be carried over to the operation of a facility

  20. Gemfibrozil-induced myositis in a patient with normal renal function.

    Science.gov (United States)

    Hahn, Martin; Sriharan, Kalavally; McFarland, M Shawn

    2010-01-01

    To describe a case of gemfibrozil monotherapy-induced myositis in a patient with normal renal function. A 68-year-old white man presented to his primary care clinic complaining of a 6-month history of total body pain. His past medical history was significant for hypertension, diabetes mellitus, hyperlipidemia, gastroesophageal reflux disease, benign prostatic hypertrophy, arthritis, impotence, and pancreatic cancer that required excision of part of his pancreas. His home drug regimen included bupropion 75 mg twice daily, gemfibrozil 600 mg twice daily for the past 8 months, glimepiride 1 mg daily, insulin glargine 5 units at bedtime, insulin aspart 5 units in the evening, lisinopril 10 mg daily, omeprazole 40 mg daily, pregabalin 100 mg daily, and sildenafil 100 mg as needed. Laboratory test results were significant for elevated aspartate aminotransferase (AST) 78 U/L (reference range 15-46 U/L), alanine aminotransferase (ALT) 83 U/L (13-69 U/L), and creatine kinase (CK) 3495 U/L (55-170 U/L). Serum creatinine was normal at 1.19 mg/dL. The physician determined that the elevated CK indicated myositis secondary to gemfibrozil use, and gemfibrozil was subsequently discontinued. The patient returned 1 week later to repeat the laboratory tests. Results were CK 220 U/L, AST 26 U/L, ALT 43 U/L, and serum creatinine 1.28 mg/dL. The patient was asked to return in 3 weeks to repeat the laboratory tests. At that time, CK had continued to decrease to 142 U/L, and the AST and ALT had returned to normal, at 22 and 29 U/L, respectively. The patient reported complete resolution of total body pain 3 weeks after discontinuation of gemfibrozil. Follow-up 5 weeks after discontinuation revealed no change compared to the 3-week follow-up. Myositis most often produces weakness and elevated CK levels more than 10 times the upper limit of normal. The risk of developing myositis, myopathy, or rhabdomyolysis is low (1%) when fibrates such as gemfibrozil are used as monotherapy. 
Evaluation of
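
    A quick check of the reported labs against the "more than 10 times the upper limit of normal" criterion quoted above, using the CK values stated in the case:

    ```python
    ck_peak = 3495.0   # U/L, creatine kinase at presentation (from the case report)
    ck_uln = 170.0     # U/L, upper limit of the stated reference range
    fold = ck_peak / ck_uln
    print(f"CK is {fold:.1f}x the upper limit of normal")   # ~20.6x, well above the 10x threshold
    ```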

  1. Optimal Placement of Phasor Measurement Units with New Considerations

    DEFF Research Database (Denmark)

    Su, Chi; Chen, Zhe

    2010-01-01

    Conventional phasor measurement unit (PMU) placement methods normally use the number of PMU installations as the objective function to be minimized. However, the cost of one PMU installation is not always the same in different locations; it depends on a number of factors. One of these factors, the number of branches adjacent to the PMU-located buses, is taken into account in the placement method proposed in this paper. The concept of full topological observability is adopted and a version of the binary particle swarm optimization (PSO) algorithm is utilized. Results from...
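
    The cost-weighted placement problem can be sketched on a toy system. Here a bus counts as topologically observable if it hosts a PMU or neighbours one (zero-injection rules are ignored), each candidate's cost grows with its number of adjacent branches, and a brute-force search stands in for the paper's binary PSO; the 7-bus topology and cost weights are hypothetical.

    ```python
    import itertools

    edges = [(0, 1), (1, 2), (1, 3), (3, 4), (4, 5), (5, 6), (3, 6)]  # toy 7-bus system
    n = 7
    adj = {b: set() for b in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)

    # installation cost rises with the number of adjacent branches (assumed weighting)
    cost = {b: 1.0 + 0.2 * len(adj[b]) for b in range(n)}

    def observable(pmus):
        # full topological observability: every bus hosts or neighbours a PMU
        seen = set()
        for b in pmus:
            seen |= {b} | adj[b]
        return len(seen) == n

    best = min(
        (sum(cost[b] for b in combo), combo)
        for k in range(1, n + 1)
        for combo in itertools.combinations(range(n), k)
        if observable(combo)
    )
    print(best)   # cheapest fully-observable placement: cost and chosen buses
    ```

    With branch-dependent costs, the cheapest placement need not coincide with the placement minimizing the PMU count alone, which is the point of weighting the objective.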

  2. Approximal morphology as predictor of approximal caries in primary molar teeth

    DEFF Research Database (Denmark)

    Cortes, A; Martignon, S; Qvist, V

    2018-01-01

    consent was given, participated. Upper and lower molar teeth of one randomly selected side received a 2-day temporary separation. Bitewing radiographs and silicone impressions of the interproximal area (IPA) were obtained. The procedures were repeated at one year in 52 children (84%). The morphology of the distal...... surfaces of the first molar teeth and the mesial surfaces on the second molar teeth (n=208) was scored from the occlusal aspect on images from the baseline resin models, resulting in four IPA variants: concave-concave; concave-convex; convex-concave, and convex-convex. Approximal caries on the surface...

  3. Finite Element Approximation of the FENE-P Model

    OpenAIRE

    Barrett , John ,; Boyaval , Sébastien

    2017-01-01

    We extend our analysis on the Oldroyd-B model in Barrett and Boyaval [1] to consider the finite element approximation of the FENE-P system of equations, which models a dilute polymeric fluid, in a bounded domain $D \subset \mathbb{R}^d$, $d = 2$ or $3$, subject to no flow boundary conditions. Our schemes are based on approximating the pressure and the symmetric conformation tensor by either (a) piecewise constants or (b) continuous piecewise linears. In case (a) the velocity field is approximated by c...

  4. Bicervical normal uterus with normal vagina | Okeke | Annals of ...

    African Journals Online (AJOL)

    To the best of our knowledge, only a few cases of bicervical normal uterus with normal vagina exist in the literature; one of the cases had an anterior-posterior disposition. This form of uterine abnormality is not explicable by the existing classical theory of mullerian anomalies and suggests that a complex interplay of events ...

  5. Lattice quantum chromodynamics with approximately chiral fermions

    International Nuclear Information System (INIS)

    Hierl, Dieter

    2008-05-01

    In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ + pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  6. Lattice quantum chromodynamics with approximately chiral fermions

    Energy Technology Data Exchange (ETDEWEB)

    Hierl, Dieter

    2008-05-15

    In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  7. Renal glucose metabolism in normal physiological conditions and in diabetes.

    Science.gov (United States)

    Alsahli, Mazen; Gerich, John E

    2017-11-01

    The kidney plays an important role in glucose homeostasis via gluconeogenesis, glucose utilization, and glucose reabsorption from the renal glomerular filtrate. After an overnight fast, 20-25% of glucose released into the circulation originates from the kidneys through gluconeogenesis. In this post-absorptive state, the kidneys utilize about 10% of all glucose utilized by the body. After glucose ingestion, renal gluconeogenesis increases and accounts for approximately 60% of endogenous glucose release in the postprandial period. Each day, the kidneys filter approximately 180g of glucose and virtually all of this is reabsorbed into the circulation. Hormones (most importantly insulin and catecholamines), substrates, enzymes, and glucose transporters are some of the various factors influencing the kidney's role. Patients with type 2 diabetes have an increased renal glucose uptake and release in the fasting and the post-prandial states. Additionally, glucosuria in these patients does not occur at plasma glucose levels that would normally produce glucosuria in healthy individuals. The major abnormality of renal glucose metabolism in type 1 diabetes appears to be impaired renal glucose release during hypoglycemia. Copyright © 2017 Elsevier B.V. All rights reserved.
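
    The "approximately 180 g of glucose filtered per day" figure follows from textbook-typical values, a glomerular filtration rate of about 125 mL/min and a normal fasting plasma glucose of about 100 mg/dL; the back-of-envelope arithmetic:

    ```python
    gfr_ml_min = 125.0      # typical glomerular filtration rate, mL/min
    glucose_mg_dl = 100.0   # normal fasting plasma glucose, mg/dL

    filtrate_l_day = gfr_ml_min * 60 * 24 / 1000             # minutes/day, mL -> L
    filtered_g_day = filtrate_l_day * (glucose_mg_dl / 100)  # 100 mg/dL = 1 g/L
    print(filtrate_l_day, filtered_g_day)                    # 180.0 180.0
    ```

    Virtually all of this filtered load is normally reabsorbed, which is why glucosuria appears only when plasma glucose exceeds the reabsorption capacity.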

  8. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  9. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and in the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)

  10. Methods of Approximation Theory in Complex Analysis and Mathematical Physics

    CERN Document Server

    Saff, Edward

    1993-01-01

The book incorporates research papers and surveys written by participants of an International Scientific Programme on Approximation Theory jointly supervised by the Institute for Constructive Mathematics of the University of South Florida at Tampa, USA, and the Euler International Mathematical Institute at St. Petersburg, Russia. The aim of the Programme was to present new developments in Constructive Approximation Theory. The topics of the papers are: asymptotic behaviour of orthogonal polynomials, rational approximation of classical functions, quadrature formulas, theory of n-widths, nonlinear approximation in Hardy algebras, numerical results on best polynomial approximations, wavelet analysis. FROM THE CONTENTS: E.A. Rakhmanov: Strong asymptotics for orthogonal polynomials associated with exponential weights on R.- A.L. Levin, E.B. Saff: Exact Convergence Rates for Best Lp Rational Approximation to the Signum Function and for Optimal Quadrature in Hp.- H. Stahl: Uniform Rational Approximation of x.- M. Rahman, S.K. ...

  11. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...... functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators......, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals...

  12. Vacancy-rearrangement theory in the first Magnus approximation

    International Nuclear Information System (INIS)

    Becker, R.L.

    1984-01-01

    In the present paper we employ the first Magnus approximation (M1A), a unitarized Born approximation, in semiclassical collision theory. We have found previously that the M1A gives a substantial improvement over the first Born approximation (B1A) and can give a good approximation to a full coupled channels calculation of the mean L-shell vacancy probability per electron, p/sub L/, when the L-vacancies are accompanied by a K-shell vacancy (p/sub L/ is obtained experimentally from measurements of K/sub α/-satellite intensities). For sufficiently strong projectile-electron interactions (sufficiently large Z/sub p/ or small v) the M1A ceases to reproduce the coupled channels results, but it is accurate over a much wider range of Z/sub p/ and v than the B1A. 27 references
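The sense in which the M1A is a "unitarized Born approximation" can be illustrated on a toy two-level system (an invented example, not the paper's calculation): with A = -i ∫ V(t) dt, the first Born approximation truncates U ≈ 1 + A, while the first Magnus approximation exponentiates, U = exp(A), which stays unitary, so transition probabilities can never exceed 1.

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
V0 = 1.5                            # strong coupling strength (arbitrary)
c = V0 * np.sqrt(np.pi)             # time integral of a Gaussian pulse V0*exp(-t^2)
A = -1j * c * sigma_x               # A = -i * integral V(t) dt

U_born = np.eye(2) + A              # first Born approximation (B1A)
# exp(-i*c*sigma_x) has the closed form cos(c)*I - i*sin(c)*sigma_x (M1A)
U_magnus = np.cos(c) * np.eye(2) - 1j * np.sin(c) * sigma_x

p_born = abs(U_born[1, 0]) ** 2     # |<1|U|0>|^2
p_magnus = abs(U_magnus[1, 0]) ** 2

print(f"B1A transition probability: {p_born:.3f}")    # exceeds 1: unphysical
print(f"M1A transition probability: {p_magnus:.3f}")  # bounded by 1
```

For weak coupling the two agree; the unitarization matters precisely in the strong-interaction regime (large Z_p or small v) discussed above.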

  13. Minimax rational approximation of the Fermi-Dirac distribution

    Science.gov (United States)

    Moussa, Jonathan E.

    2016-10-01

Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ɛ⁻¹)) poles to achieve an error tolerance ɛ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. This is particularly beneficial when Δ ≫ Δocc, such as in electronic structure calculations that use a large basis set.
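To see why the pole count matters, here is a much simpler pole-based rational approximation than the paper's minimax construction: the truncated Matsubara expansion of the Fermi-Dirac function (in β = 1 units). Accuracy improves as poles are added; a cleverer (minimax) placement of poles buys the same accuracy with far fewer of them.

```python
import numpy as np

def fermi_exact(x):
    return 1.0 / (np.exp(x) + 1.0)

def fermi_poles(x, n_poles):
    """Truncated Matsubara expansion: each term is a simple rational function
    with poles at the fermionic Matsubara frequencies (beta = 1 units)."""
    n = np.arange(n_poles)
    w = (2 * n + 1) * np.pi
    return 0.5 - np.sum(2.0 * x / (x**2 + w**2))

x = 1.0
for n_poles in (10, 100, 2000):
    err = abs(fermi_poles(x, n_poles) - fermi_exact(x))
    print(f"{n_poles:5d} poles -> error {err:.1e}")
```

The slow 1/N decay of the truncation error is exactly what motivates optimized pole placements such as the minimax scheme above.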

  14. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximation exists to the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary piecewise smoothly. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
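The baseline idea being improved upon here is the least-squares sparse approximate inverse: minimize ||AM - I||_F column by column, with each column of M restricted to a prescribed sparsity pattern. A minimal dense-storage sketch (illustrative only; the wavelet compression step of the abstract is not shown):

```python
import numpy as np

def spai(A, pattern):
    """Sparse approximate inverse: for each column j, solve the small
    least-squares problem min ||A m - e_j|| over the allowed nonzeros."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        idx = np.nonzero(pattern[:, j])[0]   # allowed nonzeros in column j
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, idx], e, rcond=None)
        M[idx, j] = m
    return M

# 1D Laplacian as a model elliptic-PDE matrix; pattern = pattern of A itself
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = spai(A, pattern=(A != 0))

# M reduces ||AM - I|| relative to no preconditioning (M = I)
print(np.linalg.norm(A @ M - np.eye(n)))
```

Each column's least-squares problem is independent, which is why this family of preconditioners parallelizes so readily.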

  15. Approximate Coulomb effects in the three-body scattering problem

    International Nuclear Information System (INIS)

    Haftel, M.I.; Zankel, H.

    1981-01-01

    From the momentum space Faddeev equations we derive approximate expressions which describe the Coulomb-nuclear interference in the three-body elastic scattering, rearrangement, and breakup problems and apply the formalism to p-d elastic scattering. The approximations treat the Coulomb interference as mainly a two-body effect, but we allow for the charge distribution of the deuteron in the p-d calculations. Real and imaginary parts of the Coulomb correction to the elastic scattering phase shifts are described in terms of on-shell quantities only. In the case of pure Coulomb breakup we recover the distorted-wave Born approximation result. Comparing the derived approximation with the full Faddeev p-d elastic scattering calculation, which includes the Coulomb force, we obtain good qualitative agreement in S and P waves, but disagreement in repulsive higher partial waves. The on-shell approximation investigated is found to be superior to other current approximations. The calculated differential cross sections at 10 MeV raise the question of whether there is a significant Coulomb-nuclear interference at backward angles

  16. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
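The core SAO loop that such a framework organizes can be sketched in a few lines of Python (a minimal illustration, not the paper's framework): build a cheap local approximation of the expensive objective, minimize it within move limits, and repeat from the approximate optimum.

```python
import numpy as np

class SAO:
    """Minimal 1-D sequential approximate optimization: a local quadratic
    approximation from finite differences, minimized under move limits."""

    def __init__(self, f, x0, move_limit=1.0, h=1e-4):
        self.f, self.x, self.move, self.h = f, float(x0), move_limit, h

    def step(self):
        x, h = self.x, self.h
        f0, fp, fm = self.f(x), self.f(x + h), self.f(x - h)
        g = (fp - fm) / (2 * h)               # approximate first derivative
        H = (fp - 2 * f0 + fm) / h ** 2       # approximate second derivative
        # minimize the local quadratic model; fall back to steepest descent
        step = -g / H if H > 0 else -np.sign(g) * self.move
        self.x = x + float(np.clip(step, -self.move, self.move))
        return self.x

opt = SAO(lambda x: (x - 2.0) ** 4 + x, x0=0.0)   # toy "expensive" objective
for _ in range(30):
    opt.step()
print(f"approximate optimum: {opt.x:.4f}")
```

A real framework would let the approximation model, move-limit strategy, and subproblem solver each be swapped out independently, which is what the object-oriented design described above is for.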

  17. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    Science.gov (United States)

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children aged 6 to 59 months and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different from that of the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or LOESS smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the
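The normality-test-then-transform workflow described above can be sketched with standard library routines. This uses simulated right-skewed data as a stand-in (an assumption for illustration, not the survey datasets): test with Shapiro-Wilk, apply a Box-Cox power transformation, and re-test.

```python
import numpy as np
from scipy import stats

# Simulated right-skewed sample standing in for a "non-normal" MUAC
# distribution (values in mm are invented for illustration).
rng = np.random.default_rng(42)
skewed = rng.lognormal(mean=np.log(130), sigma=0.15, size=500)

stat_before, p_before = stats.shapiro(skewed)        # Shapiro-Wilk test
transformed, lam = stats.boxcox(skewed)              # lambda by max. likelihood
stat_after, p_after = stats.shapiro(transformed)

print(f"Shapiro-Wilk p before: {p_before:.4f}, after Box-Cox: {p_after:.4f}")
print(f"estimated Box-Cox lambda: {lam:.2f}")
```

The same pattern (test, transform, re-test) applies after the Spline or LOESS smoothing steps discussed in the abstract.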

  18. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  19. Compression-rate-dependent nonlinear mechanics of normal and impaired porcine knee joints.

    Science.gov (United States)

    Rodriguez, Marcel Leonardo; Li, LePing

    2017-11-14

The knee joint performs mechanical functions with various loading and unloading processes. Past studies have focused on the kinematics and elastic response of the joint, with less understanding of the rate-dependent load response associated with viscoelastic and poromechanical behaviors. Forty-five fresh porcine knee joints were used in the present study to determine the loading-rate-dependent force-compression relationship, creep and relaxation of normal, dehydrated and meniscectomized joints. The mechanical tests of all normal intact joints showed similar strong compression-rate-dependent behavior: for a given compression magnitude of up to 1.2 mm, the reaction force varied by a factor of 6 across compression rates. While the static response was essentially linear, the nonlinear behavior was boosted with increased compression rate, approaching an asymptote or limit at approximately 2 mm/s. On the other hand, the joint stiffness varied by a factor of approximately 3 across different joints, when accounting for the maturity and breed of the animals. Both a loss of joint hydration and a total meniscectomy greatly compromised the load support in the joint, resulting in a reduction of load support of as much as 60% from the corresponding intact joint. However, the former only weakened the transient load support, whereas the latter also greatly weakened the equilibrium load support. A total meniscectomy did not, however, diminish the compression-rate dependence of the joint. These findings are consistent with the fluid-pressurization loading mechanism, which may have a significant implication in the joint mechanical function and cartilage mechanobiology.
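The qualitative behavior reported above, the same compression magnitude producing a much larger reaction force at a faster loading rate, with the rate sensitivity saturating, is characteristic of viscoelastic models. A standard-linear-solid sketch with invented parameters (not fitted to the porcine data):

```python
import numpy as np

def ramp_force(rate_mm_s, x_max=1.2, k_e=100.0, k_m=500.0, tau=0.5, dt=1e-3):
    """Peak force after ramping to x_max (mm) at a constant rate (mm/s).
    Standard linear solid: equilibrium spring k_e in parallel with a
    Maxwell branch (spring k_m in series with a dashpot, time constant tau).
    All parameter values are illustrative assumptions."""
    x, f_maxwell = 0.0, 0.0
    while x < x_max:
        x += rate_mm_s * dt
        # Maxwell branch loads with the ramp and relaxes with time constant tau
        f_maxwell += (k_m * rate_mm_s - f_maxwell / tau) * dt
    return k_e * x_max + f_maxwell            # equilibrium + transient force

for rate in (0.01, 0.1, 2.0):
    print(f"{rate:5.2f} mm/s -> peak force {ramp_force(rate):7.1f} (arb. units)")
```

At slow rates the Maxwell branch fully relaxes and only the equilibrium spring contributes (the essentially linear static response); at fast rates the transient branch dominates, and the gain saturates, mirroring the asymptote near 2 mm/s seen experimentally.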

  20. Normal and obsessional jealousy: a study of a population of young adults.

    Science.gov (United States)

    Marazziti, Donatella; Di Nasso, Elena; Masala, Irene; Baroni, Stefano; Abelli, Marianna; Mengali, Francesco; Mungai, Francesco; Rucci, Paola

    2003-05-01

Jealousy is a heterogeneous emotion ranging from normality to pathology. Several problems still exist in the distinction between normal and pathological jealousy. With the present study, we aimed to contribute to the definition of the boundary between obsessional and normal jealousy by means of a specific self-report questionnaire developed by us. The questionnaire, called the "Questionnaire on the Affective Relationships" (QAR) and consisting of 30 items, was administered to 400 university students of both sexes and to 14 outpatients affected by obsessive-compulsive disorder (OCD) whose main obsession was jealousy. The total scores and single items were analysed and compared. Two hundred and forty-five questionnaires, approximately 61%, were returned. The statistical analyses showed that patients with OCD had higher total scores than healthy subjects; in addition, it was possible to identify an intermediate group of subjects, corresponding to 10% of the total, who were troubled by jealous thoughts about the partner, but to a lower degree than patients, and whom we called "healthy jealous subjects" because they had no other psychopathological trait. Significant differences were also observed for single items in the three groups. Our study showed that 10% of a population of university students, albeit normal, have jealous thoughts about the partner, as emerged from the specific questionnaire developed by us. This instrument made it possible to clearly distinguish these subjects from patients with OCD and from healthy subjects with no jealousy concerns.