WorldWideScience

Sample records for sample average approximation

  1. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  2. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying random functions...

  3. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of an optimal value from solving the transformed model and show that, with probability approaching one at exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
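
    The core SAA step above replaces an expectation by an empirical mean over sampled scenarios. A minimal Python sketch of that generic idea (the newsvendor-style cost, distribution, and parameter values are illustrative assumptions, not the paper's supply-chain model):

        import numpy as np

        def saa_objective(x, scenarios, cost):
            # Sample average approximation: replace E[cost(x, xi)]
            # by the mean over N drawn scenarios xi_1, ..., xi_N.
            return np.mean([cost(x, xi) for xi in scenarios])

        # Hypothetical newsvendor-style cost (not the paper's model):
        # holding cost for leftover stock, shortage cost for unmet demand.
        def cost(x, xi, c_hold=1.0, c_short=4.0):
            return c_hold * max(x - xi, 0.0) + c_short * max(xi - x, 0.0)

        rng = np.random.default_rng(0)
        demand = rng.gamma(shape=5.0, scale=20.0, size=2000)  # N sampled scenarios

        grid = np.linspace(0.0, 300.0, 601)
        x_best = grid[int(np.argmin([saa_objective(x, demand, cost) for x in grid]))]
        print("SAA-optimal order quantity:", round(float(x_best), 1))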

  4. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    ... to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic ...

  5. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
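
    A minimal sketch of trajectory averaging in its simplest setting, a Robbins-Monro recursion for a scalar root-finding problem (the step-size exponent and toy model are illustrative assumptions; the paper's SAMCMC setting is far more general):

        import numpy as np

        # Robbins-Monro recursion theta_{n+1} = theta_n + gamma_n * (X_n - theta_n),
        # with the trajectory averaging estimator: the mean of all iterates.
        rng = np.random.default_rng(1)
        mu = 2.5                                # unknown root of E[X] - theta = 0
        theta, trajectory = 0.0, []
        for n in range(1, 100001):
            x = mu + rng.standard_normal()      # noisy observation of mu
            theta += (x - theta) / n**0.7       # slowly decaying gain sequence
            trajectory.append(theta)

        theta_bar = float(np.mean(trajectory))  # trajectory averaging estimator
        print(theta, theta_bar)                 # the average is the more efficient estimate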

  6. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  7. Sampling and Low-Rank Tensor Approximation of the Response Surface

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann Georg; El-Moselhy, Tarek A.

    2013-01-01

    Most (quasi)-Monte Carlo procedures can be seen as computing some integral over an often high-dimensional domain. If the integrand is expensive to evaluate (we are thinking of a stochastic PDE (SPDE) where the coefficients are random fields and the integrand is some functional of the PDE solution), there is the desire to keep all the samples for possible later computations of similar integrals. This obviously means a lot of data. To keep the storage demands low, and to allow evaluation of the integrand at points which were not sampled, we construct a low-rank tensor approximation of the integrand over the whole integration domain. This can also be viewed as a representation in some problem-dependent basis which allows a sparse representation. What one obtains is sometimes called a "surrogate" or "proxy" model, or a "response surface". This representation is built step by step or sample by sample, and can already be used for each new sample. In case we are sampling a solution of an SPDE, this allows us to reduce the number of necessary samples, namely in case the solution is already well-represented by the low-rank tensor approximation. This can be easily checked by evaluating the residuum of the PDE with the approximate solution. The procedure will be demonstrated in the computation of a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. © Springer-Verlag Berlin Heidelberg 2013.

  8. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
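
    The distinction between "spot" and "boxcar" hourly values can be made concrete in a few lines of Python (synthetic 1-min data stand in for observatory records; the amplitudes are arbitrary):

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(1440)                       # one day of 1-min values
        x = 50.0 * np.sin(2 * np.pi * t / 1440.0) + rng.standard_normal(1440)

        hours = x.reshape(24, 60)                 # 24 hours x 60 one-minute values
        spot = hours[:, 0]                        # instantaneous "spot" hourly value
        boxcar = hours.mean(axis=1)               # simple 1-h "boxcar" average
        print(float(np.abs(spot - boxcar).max())) # spot values carry aliased variance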

  9. Calculations of the properties of superconducting alloys via the average T-matrix approximation

    International Nuclear Information System (INIS)

    Chatterjee, P.

    1980-01-01

    The theoretical formula of McMillan, modified via the multiple-scattering theory by Gomersall and Gyorffy, has been very successful in computing the electron-phonon coupling constant (lambda) and the transition temperature (T_c) of many superconducting elements and compounds. For disordered solids, such as substitutional alloys, however, this theory fails because of the breakdown of the translational symmetry used in the multiple-scattering theory. Under these conditions the problem can still be solved if the t-matrix is averaged in the random phase approximation (average T-matrix approximation). Gomersall and Gyorffy's expression is reformulated for lambda in the random phase approximation. This theory is applied to calculate lambda and T_c of the binary substitutional NbMo alloy system at different concentrations. The results appear to be in fair agreement with experiments. (author)

  10. The average number of critical rank-one approximations to a tensor

    NARCIS (Netherlands)

    Draisma, J.; Horobet, E.

    2014-01-01

    Motivated by the many potential applications of low-rank multi-way tensor approximations, we set out to count the rank-one tensors that are critical points of the distance function to a general tensor v. As this count depends on v, we average over v drawn from a Gaussian distribution, and find ...

  11. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  12. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  13. Approximations for transport parameters and self-averaging properties for point-like injections in heterogeneous media

    International Nuclear Information System (INIS)

    Eberhard, Jens

    2004-01-01

    We focus on transport parameters in heterogeneous media with a flow modelled by an ensemble of periodic and Gaussian random fields. The parameters are determined by ensemble averages. We study to what extent these averages represent the behaviour in a single realization. We calculate the centre-of-mass velocity and the dispersion coefficient using approximations based on a perturbative expansion for the transport equation, and on the iterative solution of the Langevin equation. Compared with simulations, the perturbation theory reproduces the numerical results only poorly, whereas the iterative solution yields good results. Using these approximations, we investigate the self-averaging properties. The ensemble average of the velocity characterizes the behaviour of a realization for large times in both ensembles. The dispersion coefficient is not self-averaging in the ensemble of periodic fields. For the Gaussian ensemble the asymptotic dispersion coefficient is self-averaging. For finite times, however, the fluctuations are so large that the average does not represent the behaviour in a single realization.

  14. Approximative Krieger-Nelkin orientation averaging and anisotropy of water molecules vibrations

    International Nuclear Information System (INIS)

    Markovic, M.I.

    1974-01-01

    A quantum-mechanical treatment of water-molecule dynamics should be taken into account for precise theoretical calculation of differential neutron scattering cross sections. Krieger and Nelkin have proposed an approximate method for averaging over molecular orientations with respect to the directions of the incoming and scattered neutron. This paper shows that their approach can be successfully applied for a general form of the anisotropy of water-molecule vibrations.

  15. Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients

    Directory of Open Access Journals (Sweden)

    Deming Yuan

    2014-01-01

    This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
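
    A minimal sketch of the underlying dual averaging iteration with noisy subgradients and an explicit projection, for a single agent and a toy quadratic objective (the step-size rule and problem data are illustrative assumptions, not the paper's multiagent saddle-point scheme):

        import numpy as np

        def project_ball(x, radius=1.0):
            # Euclidean projection onto a ball; stands in for the (possibly
            # approximate) projection step discussed in the paper.
            n = np.linalg.norm(x)
            return x if n <= radius else x * (radius / n)

        rng = np.random.default_rng(3)
        target = np.array([0.6, -0.3])       # minimizer of f(x) = 0.5*||x - target||^2
        z = np.zeros(2)                      # running sum of subgradients
        for t in range(1, 5001):
            x = project_ball(-z / np.sqrt(t))                 # dual averaging iterate
            g = (x - target) + 0.1 * rng.standard_normal(2)   # noisy subgradient
            z += g
        print(x)                             # close to the constrained minimizer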

  16. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    Science.gov (United States)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

    In this paper, convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of consecutive terms of the convergent remainder are independent of the number of network links for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression of the average degree to approximate its analytical solution. Finally, simulations are presented to verify our theoretical results.

  17. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
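
    The sampling idea can be reproduced in a few lines: rewrite the well function as an integral over (0, 1) and average the integrand over Latin Hypercube samples. A sketch (the sample size, and using scipy as the benchmark in place of Mathematica, are assumptions):

        import numpy as np
        from scipy.special import exp1        # reference value for comparison
        from scipy.stats import qmc

        def well_function_lhs(u, n=4096, seed=4):
            # Substituting t = u/v turns E1(u) = int_u^inf exp(-t)/t dt into
            # int_0^1 exp(-u/v)/v dv, estimated by averaging the integrand
            # over Latin Hypercube samples v in (0, 1).
            v = qmc.LatinHypercube(d=1, seed=seed).random(n).ravel()
            v = np.maximum(v, 1e-12)          # guard against a sample at exactly 0
            return float(np.mean(np.exp(-u / v) / v))

        u = 0.5
        print(well_function_lhs(u), float(exp1(u)))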

  18. Dielectronic recombination of P5+ and Cl7+ in configuration-average, LS-coupling, and intermediate-coupling approximations

    International Nuclear Information System (INIS)

    Badnell, N.R.; Pindzola, M.S.

    1989-01-01

    We have calculated dielectronic recombination cross sections and rate coefficients for the Ne-like ions P5+ and Cl7+ in configuration-average, LS-coupling, and intermediate-coupling approximations. Autoionization into excited states reduces the cross sections and rate coefficients by substantial amounts in all three methods. There is only rough agreement between the configuration-average cross-section results and the corresponding intermediate-coupling results. There is good agreement, however, between the LS-coupling cross-section results and the corresponding intermediate-coupling results. The LS-coupling and intermediate-coupling rate coefficients agree to better than 5%, while the configuration-average rate coefficients are about 30% higher than the other two coupling methods. External electric field effects, as calculated in the configuration-average approximation, are found to be relatively small for the cross sections and completely negligible for the rate coefficients. Finally, the general formula of Burgess was found to overestimate the rate coefficients by roughly a factor of 5, mainly due to the neglect of autoionization into excited states.

  19. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas

    2015-09-07

    The capacity of the intensity-modulation direct-detection (IM-DD) free-space optical channel with both average and peak intensity constraints is studied. A new capacity lower bound is derived by using a truncated-Gaussian input distribution. Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high-SNR asymptotic capacity of the channel under either a peak or an average constraint is small. This leads to a simple approximation of the high SNR capacity. Additionally, a new capacity upper bound is derived using sphere-packing arguments. This bound is tight at high SNR for a channel with a dominant peak constraint.

  20. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic ...

  1. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    Science.gov (United States)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
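
    The gamma-weighting step, integrating a per-column result against an assumed gamma distribution of optical depth, can be sketched as follows (plain Beer-law transmittance is used as a stand-in for the paper's two-stream solution, and all parameter values are illustrative):

        import numpy as np
        from scipy.stats import gamma

        # Domain-averaged transmittance under a gamma distribution of cloud
        # optical depth; mean_tau, nu, and mu0 are illustrative values.
        mean_tau, nu, mu0 = 10.0, 2.0, 0.5
        tau = np.linspace(1e-3, 200.0, 40000)
        dtau = tau[1] - tau[0]
        pdf = gamma.pdf(tau, a=nu, scale=mean_tau / nu)

        t_homogeneous = np.exp(-mean_tau / mu0)                   # uses mean tau only
        t_gamma = float(np.sum(pdf * np.exp(-tau / mu0)) * dtau)  # weighted by p(tau)
        print(t_homogeneous, t_gamma)   # inhomogeneity raises the domain average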

  2. Vibrations in force-and-mass disordered alloys in the average local-information transfer approximation. Application to Al-Ag

    International Nuclear Information System (INIS)

    Czachor, A.

    1979-01-01

    The configuration-averaged displacement-displacement Green's function, derived in the locator-based approximation accounting for average transfer of information on local coupling and mass, has been applied to study the force-and-mass-disorder induced modifications of phonon dispersion relations in substitutional alloys of cubic structures. In this approach the translational invariance condition is obeyed whereas damping is neglected. The force-disorder was found to lead to additional splitting of phonon curves besides that due to mass-disorder, even in the small impurity-concentration case; at larger concentrations the number of splits (frequency gaps) should be even greater. The use of a quasi-locator in the Green's function derivation allows one to partly reconcile the present results with those of the average t-matrix approximation. The experimentally observed splitting in the [100]T phonon dispersion curve for Al-Ag alloys has been interpreted in terms of the above theory and of a quasi-mass of heavy impurity atoms. (Author)

  3. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Vol. 48, No. 1 (2012), pp. 105-122, ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords: chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf

  4. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  5. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A method used previously for the determination of the average neutron flux within bulky samples has been applied to measure the hydrogen content of different samples. An analytical function is given describing the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and the thermal neutron reflection methods are compared.

  6. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  7. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  8. General theory for calculating disorder-averaged Green's function correlators within the coherent potential approximation

    Science.gov (United States)

    Zhou, Chenyi; Guo, Hong

    2017-01-01

    We report a diagrammatic method to solve the general problem of calculating configurationally averaged Green's function correlators that appear in quantum transport theory for nanostructures containing disorder. The theory treats both equilibrium and nonequilibrium quantum statistics on an equal footing. Since random impurity scattering is a problem that cannot be solved exactly in a perturbative approach, we combine our diagrammatic method with the coherent potential approximation (CPA) so that a reliable closed-form solution can be obtained. Our theory not only ensures the internal consistency of the diagrams derived at different levels of the correlators but also satisfies a set of Ward-like identities that corroborate the conserving consistency of transport calculations within the formalism. The theory is applied to calculate the quantum transport properties such as average ac conductance and transmission moments of a disordered tight-binding model, and results are numerically verified to high precision by comparing to the exact solutions obtained from enumerating all possible disorder configurations. Our formalism can be employed to predict transport properties of a wide variety of physical systems where disorder scattering is important.

  9. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)

  10. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Vol. 19, No. 30 (2012), pp. 153-169, ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords: Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  11. New perspectives on approximation and sampling theory Festschrift in honor of Paul Butzer's 85th birthday

    CERN Document Server

    Schmeisser, Gerhard

    2014-01-01

    Paul Butzer, who is considered the academic father and grandfather of many prominent mathematicians, has established one of the best schools in approximation and sampling theory in the world. He is one of the leading figures in approximation, sampling theory, and harmonic analysis. Although on April 15, 2013, Paul Butzer turned 85 years old, remarkably, he is still an active research mathematician. In celebration of Paul Butzer’s 85th birthday, New Perspectives on Approximation and Sampling Theory is a collection of invited chapters on approximation, sampling, and harmonic analysis written by students, friends, colleagues, and prominent active mathematicians. Topics covered include approximation methods using wavelets, multi-scale analysis, frames, and special functions. New Perspectives on Approximation and Sampling Theory requires basic knowledge of mathematical analysis, but efforts were made to keep the exposition clear and the chapters self-contained. This volume will appeal to researchers and graduate...

  12. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  13. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  14. Super-sample covariance approximations and partial sky coverage

    Science.gov (United States)

    Lacasa, Fabien; Lima, Marcos; Aguena, Michel

    2018-04-01

    Super-sample covariance (SSC) is the dominant source of statistical error on large scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat sky limit, finding that they are accurate at the 15% and 30-35% level, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC in an arbitrary survey mask geometry, such as large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full sky and flat sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-z covariance, i.e., redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and find that this does not have noticeable effects on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat sky approximation fails.

  15. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a ...

  16. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by ...

  17. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  18. A Bayesian Method for Weighted Sampling

    OpenAIRE

    Lo, Albert Y.

    1993-01-01

    Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...

  19. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.

  20. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
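
    A sketch of the diagonal-averaging step (without the maximum-entropy extrapolation or the subspace beamformer), showing how the Toeplitz constraint restores structure lost to snapshot starvation; the array geometry and noise levels below are illustrative assumptions:

        import numpy as np

        def toeplitz_average(S):
            # Average the sample covariance along each subdiagonal, enforcing
            # the Toeplitz structure expected for a stationary line array.
            n = S.shape[0]
            T = np.zeros_like(S)
            for k in range(n):
                d = np.mean(np.diagonal(S, offset=k))
                T += np.diag(np.full(n - k, d), k)
                if k > 0:
                    T += np.diag(np.full(n - k, np.conj(d)), -k)
            return T

        rng = np.random.default_rng(5)
        n, snapshots = 16, 8                  # snapshot-starved sample covariance
        steer = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))  # one far-field target
        X = (np.outer(steer, rng.standard_normal(snapshots))
             + 0.5 * (rng.standard_normal((n, snapshots))
                      + 1j * rng.standard_normal((n, snapshots))))
        S = X @ X.conj().T / snapshots
        print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(toeplitz_average(S)))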

  1. Approximate Receding Horizon Approach for Markov Decision Processes: Average Reward Case

    National Research Council Canada - National Science Library

    Chang, Hyeong S; Marcus, Steven I

    2002-01-01

    ...) with countable state space, finite action space, and bounded rewards that uses an approximate solution of a fixed finite-horizon sub-MDP of a given infinite-horizon MDP to create a stationary policy...

  2. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.

  3. A One-Electron Approximation to Domain Averaged Fermi hole Analysis

    Czech Academy of Sciences Publication Activity Database

    Cooper, D.L.; Ponec, Robert

    2008-01-01

    Vol. 10, No. 9 (2008), pp. 1319-1329, ISSN 1463-9076 R&D Projects: GA AV ČR(CZ) IAA4072403 Institutional research plan: CEZ:AV0Z40720504 Keywords: domain-averaged Fermi hole * comparisons Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 4.064, year: 2008

  4. Approximative Krieger-Nelkin orientation averaging and anisotropy of water molecules vibrations

    Energy Technology Data Exchange (ETDEWEB)

    Markovic, M I [Elektrothenicki fakultet, Belgrade (Yugoslavia)

    1974-07-01

    A quantum-mechanical treatment of water-molecule dynamics should be taken into account for precise theoretical calculation of differential neutron scattering cross sections. Krieger and Nelkin have proposed an approximate method for averaging over molecular orientations with respect to the directions of the incoming and scattered neutron. This paper shows that their approach can be successfully applied for a general form of the anisotropy of water-molecule vibrations.

  5. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    Science.gov (United States)

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  6. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of the Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)

  7. Direct measurement of fast transients by using boot-strapped waveform averaging

    Science.gov (United States)

    Olsson, Mattias; Edman, Fredrik; Karki, Khadga Jung

    2018-03-01

    An approximation to coherent sampling, also known as boot-strapped waveform averaging, is presented. The method uses digital cavities to determine the condition for coherent sampling. It can be used to increase the effective sampling rate of a repetitive signal and the signal-to-noise ratio simultaneously. The method is demonstrated by using it to directly measure the fluorescence lifetime from Rhodamine 6G by digitizing the signal from a fast avalanche photodiode. The obtained lifetime of 4.0 ns is in agreement with the known values.
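
    A sketch of the idea under simplifying assumptions (a software phase-folding stand-in for the paper's digital-cavity method; the waveform, noise level, and timing values are invented for illustration):

        import numpy as np

        # Equivalent-time ("boot-strapped") averaging of a repetitive signal:
        # sample with an interval incommensurate with the repetition period,
        # fold the timestamps modulo the period, and bin-average by phase.
        period, dt, n = 1.0e-6, 1.07e-7, 200000   # 1 MHz repetition, offset sampling
        t = np.arange(n) * dt
        clean = np.exp(-(t % period) / 2.0e-7)    # fast decay repeated each period
        noisy = clean + 0.2 * np.random.default_rng(6).standard_normal(n)

        bins = 500                                # effective resolution: period / 500
        idx = np.minimum((t % period) / period * bins, bins - 1).astype(int)
        counts = np.bincount(idx, minlength=bins)
        avg = np.bincount(idx, weights=noisy, minlength=bins) / counts
        # avg traces one period on a 2 ns grid, with noise reduced by averaging.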

  8. The importance of the sampling frequency in determining short-time-averaged irradiance and illuminance for rapidly changing cloud cover

    International Nuclear Information System (INIS)

    Delaunay, J.J.; Rommel, M.; Geisler, J.

    1994-01-01

    The sampling interval is an important parameter which must be chosen carefully if measurements of the direct, global, and diffuse irradiance or illuminance are carried out to determine their averages over a given period. Using measurements from a day with rapidly moving clouds, we investigated the influence of the sampling interval on the uncertainty of the calculated 15-min averages. We conclude, for this averaging period, that the sampling interval should not exceed 60 s and 10 s for measurement of the diffuse and global components respectively, to reduce the influence of the sampling interval below 2%. For the direct component, even a 5 s sampling interval is too long to reach this influence level for days with extremely quickly changing insolation conditions. (author)
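
    The effect is easy to reproduce: compute a 15-min average from 1-s synthetic data, then recompute it from subsampled data at increasing intervals (the synthetic irradiance trace below is an arbitrary stand-in for measured data):

        import numpy as np

        rng = np.random.default_rng(7)
        sec = np.arange(900)                   # one 15-min period of 1-s data
        true = (600.0 + 300.0 * np.sin(2 * np.pi * sec / 47.0)
                + 50.0 * rng.standard_normal(900))
        reference = true.mean()                # quasi-continuous 15-min average

        for step in (1, 10, 60, 300):          # sampling interval in seconds
            estimate = true[::step].mean()
            print(step, 100.0 * abs(estimate - reference) / reference)  # error in %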

  9. A novel condition for stable nonlinear sampled-data models using higher-order discretized approximations with zero dynamics.

    Science.gov (United States)

    Zeng, Cheng; Liang, Shan; Xiang, Shuwen

    2017-05-01

    Continuous-time systems are usually modelled by ordinary differential equations arising from physical laws. However, using these models in practice, and utilizing, analyzing, or transmitting the data from such systems, invariably requires discretization. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than previous similar results. Moreover, given the stability of the high-order approximate model with stable zero dynamics, the novel condition presented stabilizes the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is surprisingly associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends the existing methods which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood as replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  11. The effects of parameter estimation on minimizing the in-control average sample size for the double sampling X bar chart

    Directory of Open Access Journals (Sweden)

    Michael B.C. Khoo

    2013-11-01

    The double sampling (DS) X bar chart, one of the most widely-used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.

  12. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).

  13. Averaging principle for second-order approximation of heterogeneous models with homogeneous models

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569
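
    The O(ε²) equivalence is easy to verify numerically for a smooth, symmetric outcome function (g, the parameter value, and the heterogeneity distribution below are arbitrary illustrative choices):

        import numpy as np

        # A smooth outcome g evaluated on a heterogeneous population a + eps*z
        # (z zero-mean) versus the homogeneous value g(a): the gap shrinks
        # like eps^2, i.e. roughly 4x per halving of eps.
        rng = np.random.default_rng(8)
        g, a = np.tanh, 0.7
        z = rng.uniform(-1.0, 1.0, 1_000_000)   # zero-mean heterogeneity

        for eps in (0.2, 0.1, 0.05):
            print(eps, abs(float(np.mean(g(a + eps * z))) - g(a)))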

  14. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.

  15. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold; the article shows that the two common methods correspond to natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
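
    A sketch contrasting the barycenter approach with a Riemannian-style mean, using SciPy's rotation utilities (the noise level and sample count are illustrative; the paper's analysis is analytical, not numerical):

        import numpy as np
        from scipy.spatial.transform import Rotation

        rng = np.random.default_rng(9)
        base = Rotation.from_euler("xyz", [0.3, -0.2, 0.5])
        noisy = [base * Rotation.from_rotvec(0.05 * rng.standard_normal(3))
                 for _ in range(50)]

        # Barycenter approach: sign-align the quaternions, average, renormalize.
        q = np.array([r.as_quat() for r in noisy])
        q[q @ q[0] < 0] *= -1.0                  # resolve the q / -q ambiguity
        barycenter = Rotation.from_quat(q.mean(axis=0))  # from_quat renormalizes

        riemannian = Rotation.concatenate(noisy).mean()  # SciPy's rotation mean
        print((barycenter.inv() * riemannian).magnitude())  # tiny angular gap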

  16. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell.

  17. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  18. Canonical sampling of a lattice gas

    International Nuclear Information System (INIS)

    Mueller, W.F.

    1997-01-01

    It is shown that a sampling algorithm, recently proposed in conjunction with a lattice-gas model of nuclear fragmentation, samples the canonical ensemble only in an approximate fashion. A residual weight factor has to be taken into account to calculate correct thermodynamic averages. Then, however, the algorithm is numerically inefficient. copyright 1997 The American Physical Society

  19. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices, trained-to-familiar (Experiment 1) and famous (Experiment 2), are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by famous speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.

  20. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
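
    A minimal single-level sketch of the weighted least squares idea described here, assuming the arcsine (Chebyshev) sampling density on [-1, 1] and weights equal to the reciprocal density; the multilevel method of the record would combine such approximations across discretization levels, which this sketch does not attempt. The target function and sample count are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
f = lambda x: np.exp(x) * np.sin(3 * x)      # assumed target function on [-1, 1]
deg = 10                                     # polynomial subspace of dimension deg+1

# Sample from the arcsine (Chebyshev) density, a near-optimal choice for
# polynomial spaces on [-1, 1]; weight each sample by 1/density so the
# discrete sum mimics the continuous L2(dx) inner product.
m = 4 * (deg + 1)                            # sample count, linear in the dimension
x = np.cos(np.pi * rng.random(m))            # x = cos(pi*U) has the arcsine density
w = np.pi * np.sqrt(1.0 - x**2)              # w(x) = 1/rho(x)

# Weighted least squares in the Legendre basis:
# minimize sum_i w_i (f(x_i) - p(x_i))^2.
V = legendre.legvander(x, deg)
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

xt = np.linspace(-1, 1, 1000)
err = np.max(np.abs(f(xt) - legendre.legval(xt, coef)))
print(f"max error of the weighted LS projection: {err:.2e}")
```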

  1. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two common methods can be derived as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  2. The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.

    Science.gov (United States)

    Macpherson, David A.; Even, William E.

    The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…

  3. Calculation of the average radiological detriment of two samples from a breast screening programme

    International Nuclear Information System (INIS)

    Ramos, M.; Sanchez, A.M.; Verdu, G.; Villaescusa, J.I.; Salas, M.D.; Cuevas, M.D.

    2002-01-01

    The Breast Cancer Screening Programme started in the Comunidad Valenciana in 1992. The programme is oriented to asymptomatic women between 45 and 65 years old, with two mammograms of each breast the first time a woman participates and a single one in later interventions. Between November 2000 and March 2001, a first sample of 100 women's records was extracted from all units of the programme. The data extracted in each sample were the kV-voltage, the X-ray tube load, the breast thickness, and the age of the woman exposed, all used directly in the dose and detriment calculation. By means of the MCNP-4B code and according to the European Protocol for the quality control of the physical and technical aspects of mammography screening, the average total and glandular doses were calculated and later compared.

  4. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  5. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  6. X-ray microanalytical surveys of minor element concentrations in unsectioned biological samples

    Science.gov (United States)

    Schofield, R. M. S.; Lefevre, H. W.; Overley, J. C.; Macdonald, J. D.

    1988-03-01

    Approximate concentration maps of small unsectioned biological samples are made using the pixel by pixel ratio of PIXE images to areal density images. Areal density images are derived from scanning transmission ion microscopy (STIM) proton energy-loss images. Corrections for X-ray production cross section variations, X-ray attenuation, and depth averaging are approximated or ignored. Estimates of the magnitude of the resulting error are made. Approximate calcium concentrations within the head of a fruit fly are reported. Concentrations in the retinula cell region of the eye average about 1 mg/g dry weight. Concentrations of zinc in the mandible of several ant species average about 40 mg/g. Zinc concentrations in the stomachs of these ants are at least 1 mg/g.
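
    A schematic version of the ratio-map construction described above, with synthetic arrays standing in for the PIXE and STIM images and an assumed calibration factor; corrections for cross sections, attenuation, and depth averaging are ignored, as in the approximate maps of the record.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the two scanned images (the real inputs would be a
# PIXE X-ray map and a STIM energy-loss areal-density map of the same region).
pixe_counts = rng.poisson(lam=50.0, size=(64, 64)).astype(float)   # X-ray counts
areal_density = rng.normal(loc=2.0, scale=0.2, size=(64, 64))      # mg/cm^2

k = 0.4  # assumed calibration factor, counts per (mg/g * mg/cm^2)

# Approximate concentration map: pixel-by-pixel ratio of the PIXE image to the
# areal-density image, masking pixels where the density signal is too weak.
mask = areal_density > 0.5
conc = np.full_like(pixe_counts, np.nan)
conc[mask] = pixe_counts[mask] / (k * areal_density[mask])

print(f"average concentration over the masked region: {np.nanmean(conc):.1f} mg/g")
```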

  7. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-01-01

    We develop a non-linear approximation of the expensive Bayesian formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman update formula is a particular case of this update.

  8. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman update formula is a particular case of this update.

  9. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    Science.gov (United States)

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
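
    The abstract does not spell out the proposed algorithm, so the sketch below shows a classic constant-memory, error-bounded online PLA in the same spirit: a simplified swing-filter variant that restarts each segment at the previous raw sample. The error bound eps and the test signal are assumptions.

```python
import math

def swing_filter(samples, eps):
    """Online error-bounded PLA with O(1) state (simplified swing-filter sketch).

    samples: iterable of (t, y) with strictly increasing t; eps: max abs error.
    Yields segments as (t_start, y_start, t_end, y_end).
    """
    it = iter(samples)
    t0, y0 = next(it)
    lo, hi = float("-inf"), float("inf")          # feasible slope interval
    t_prev, y_prev = t0, y0
    for t, y in it:
        s_lo, s_hi = (y - eps - y0) / (t - t0), (y + eps - y0) / (t - t0)
        if max(lo, s_lo) <= min(hi, s_hi):        # point fits: shrink the interval
            lo, hi = max(lo, s_lo), min(hi, s_hi)
        else:                                     # close the segment at the previous point
            s = 0.5 * (lo + hi)
            yield (t0, y0, t_prev, y0 + s * (t_prev - t0))
            t0, y0 = t_prev, y_prev               # simplified restart at the raw sample
            lo = (y - eps - y0) / (t - t0)
            hi = (y + eps - y0) / (t - t0)
        t_prev, y_prev = t, y
    s = 0.5 * (lo + hi) if math.isfinite(lo) and math.isfinite(hi) else 0.0
    yield (t0, y0, t_prev, y0 + s * (t_prev - t0))

data = [(i, math.sin(0.05 * i)) for i in range(200)]
print(list(swing_filter(data, eps=0.05))[:3])
```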

  10. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  11. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  12. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  13. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  14. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by "exact" methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
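
    A hedged worked example of the quantity the record's method approximates: the spectrum-averaged beta kinetic energy obtained by direct numerical integration of the allowed statistical shape, with the Fermi function simply set to unity (the expensive ingredient the paper's method is designed to avoid). The endpoint energies are arbitrary test values.

```python
import numpy as np

def mean_beta_energy(q_mev, npts=20001):
    """Average beta kinetic energy for an allowed transition, Fermi function set to 1.

    Spectrum shape: N(W) proportional to p * W * (W0 - W)**2, with W the total
    electron energy in units of m_e c^2 and p = sqrt(W^2 - 1) the momentum.
    """
    mec2 = 0.511                               # MeV
    w0 = 1.0 + q_mev / mec2                    # endpoint total energy
    w = np.linspace(1.0, w0, npts)
    p = np.sqrt(np.clip(w**2 - 1.0, 0.0, None))
    shape = p * w * (w0 - w) ** 2
    avg_w = np.sum(w * shape) / np.sum(shape)  # uniform grid: dW cancels
    return (avg_w - 1.0) * mec2                # mean kinetic energy in MeV

for q in (0.5, 1.0, 3.0, 8.0):
    print(f"Q = {q:4.1f} MeV  ->  <E_beta> approx {mean_beta_energy(q):.3f} MeV")
```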

  15. Study of runaway electrons using the conditional average sampling method in the Damavand tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Pourshahab, B., E-mail: bpourshahab@gmail.com [University of Isfahan, Department of Nuclear Engineering, Faculty of Advance Sciences and Technologies (Iran, Islamic Republic of); Sadighzadeh, A. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of); Abdi, M. R., E-mail: r.abdi@phys.ui.ac.ir [University of Isfahan, Department of Physics, Faculty of Science (Iran, Islamic Republic of); Rasouli, C. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of)

    2017-03-15

    Some experiments for studying runaway electron (RE) effects have been performed using the poloidal magnetic probe system installed around the plasma column in the Damavand tokamak. In these experiments, so-called runaway-dominated discharges were considered, in which the main part of the plasma current is carried by REs. The induced magnetic effects on the poloidal pickup coil signals are observed simultaneously with the Parail–Pogutse instability moments for REs and hard X-ray bursts. The output signals of all diagnostic systems enter the data acquisition system with a 2 Msample/(s channel) sampling rate. The temporal evolution of the diagnostic signals is analyzed by the conditional average sampling (CAS) technique. The CASed profiles indicate RE collisions with the high-field-side plasma-facing components at the instability moments. The investigation has been carried out for two discharge modes, low-toroidal-field (LTF) and high-toroidal-field (HTF), related to the lower and upper limits of the toroidal magnetic field in the Damavand tokamak; their comparison has shown that RE confinement is better in HTF discharges.

  16. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.

  17. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family-wise error rate.

  18. Statistical trajectory of an approximate EM algorithm for probabilistic image processing

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Titterington, D M

    2007-01-01

    We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result in the present paper is to obtain the statistical average of the trajectory in the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with true values of hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from some analytical calculations

  19. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  20. Continuum orbital approximations in weak-coupling theories for inelastic electron scattering

    International Nuclear Information System (INIS)

    Peek, J.M.; Mann, J.B.

    1977-01-01

    Two approximations, motivated by heavy-particle scattering theory, are tested for weak-coupling electron-atom (ion) inelastic scattering theory. They consist of replacing the one-electron scattering orbitals by their Langer uniform approximations and the use of an average trajectory approximation which entirely avoids the necessity for generating continuum orbitals. Numerical tests for a dipole-allowed and a dipole-forbidden event, based on Coulomb-Born theory with exchange neglected, reveal the error trends. It is concluded that the uniform approximation gives a satisfactory prediction for traditional weak-coupling theories while the average approximation should be limited to collision energies exceeding at least twice the threshold energy. The accuracy for both approximations is higher for positive ions than for neutral targets. Partial-wave collision-strength data indicate that greater care should be exercised in using these approximations to predict quantities differential in the scattering angle. An application to the 2s 2 S-2p 2 P transition in Ne VIII is presented

  1. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  2. Polynomial approximation of functions in Sobolev spaces

    International Nuclear Information System (INIS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  3. How Many Conformations Need To Be Sampled To Obtain Converged QM/MM Energies? The Curse of Exponential Averaging.

    Science.gov (United States)

    Ryde, Ulf

    2017-11-14

    Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to studying enzymatic reactions. They are often based on a set of minimized structures obtained from snapshots of a molecular dynamics simulation to include some dynamics of the enzyme. It has been much discussed how the individual energies should be combined to obtain a final estimate of the energy, but the current consensus seems to be to use an exponential average. Then, the question is how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. Then, the outcome can be simulated based on a single parameter, σ, the standard deviation of the QM/MM energies from the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. It is shown that most convergence criteria work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ∼7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ serves as an excellent convergence criterion.
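
    A small simulation of the paper's setting, assuming Gaussian-distributed snapshot energies with an illustrative σ of 10 kJ/mol at kT of 2.5 kJ/mol: for a Gaussian the exact exponential average equals -σ²/(2kT), so the bias of the finite-sample estimate is directly visible, as is the appeal of the second-order cumulant approximation.

```python
import numpy as np

rng = np.random.default_rng(4)
kT = 2.5                 # roughly RT at 300 K, in kJ/mol
sigma = 10.0             # assumed spread of QM/MM energies between snapshots, kJ/mol

def exp_average(n):
    """Free-energy estimate from n snapshots via the exponential average."""
    e = rng.normal(0.0, sigma, size=n)
    return -kT * np.log(np.mean(np.exp(-e / kT)))

def cumulant2(n):
    """Second-order cumulant approximation, exact for Gaussian energies."""
    e = rng.normal(0.0, sigma, size=n)
    return e.mean() - e.var(ddof=1) / (2.0 * kT)

exact = -sigma**2 / (2.0 * kT)       # analytic value for Gaussian energies
for n in (10, 100, 1000, 100000):
    ea = np.mean([exp_average(n) for _ in range(50)])
    c2 = np.mean([cumulant2(n) for _ in range(50)])
    print(f"n = {n:6d}: exp. average {ea:7.2f}, cumulant {c2:7.2f}, exact {exact:.2f} kJ/mol")
```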

  4. High Resolution of the ECG Signal by Polynomial Approximation

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2006-04-01

    Averaging techniques such as temporal averaging and space averaging have been successfully used in many applications for attenuating interference [6], [7], [8], [9], [10]. In this paper we introduce interference removal for the ECG signal by polynomial approximation, with smoothing of discrete dependencies, to complement averaging methods. The method is suitable for low-level signals of the electrical activity of the heart, often less than 10 mV. Most low-level signals arise from the PR, ST and TP segments; these can eventually be detected and their physiologic meaning appreciated. Of special importance for the diagnosis of the electrical activity of the heart is the activity of the bundle of His between the P and R waveforms. We have added an artificial sine wave to the ECG signal between the P and R waves. The main focus is to verify the smoothing method by polynomial approximation when the SNR (signal-to-noise ratio) is negative (i.e., the signal is lower than the noise).

  5. A 12-bit spectroscopy analog-to-digital converter type SAA (Successive Approximation type with channel width Averaging) intended for multichannel pulse height analyzer SWAN-1 based on IBM PC/XT/AT

    International Nuclear Information System (INIS)

    Borsuk, S.; Kulka, Z.

    1989-12-01

    A 12-bit spectroscopy analog-to-digital converter (ADC) type SAA (Successive Approximation type with channel width Averaging) intended for multichannel pulse height analyzer SWAN-1 based on IBM PC/XT/AT has been described. Design principles, specifications and measurements of a fundamental SAA-2 converter version are reported. Finally, two next versions of the converter with introduced modifications are discussed. 6 refs., 7 figs. (author)

  6. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    International Nuclear Information System (INIS)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-01-01

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step

  7. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT.

    Science.gov (United States)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-02-01

    To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  8. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  9. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is graph filters, direct analogues of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...

  10. Sampling Polya-Gamma random variates: alternate and approximate techniques

    OpenAIRE

    Windle, Jesse; Polson, Nicholas G.; Scott, James G.

    2014-01-01

    Efficiently sampling from the Pólya-Gamma distribution, PG(b, z), is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the PG(1, z) distribution. We build two new samplers that offer improved performance when sampling from the PG(b, z) distribution and b is not unity.

  11. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data.

  12. Comparison of Spot and Time Weighted Averaging (TWA) Sampling with SPME-GC/MS Methods for Trihalomethane (THM) Analysis

    Directory of Open Access Journals (Sweden)

    Don-Roger Parkinson

    2016-02-01

    Water samples were collected and analyzed for conductivity, pH, temperature and trihalomethanes (THMs) during the fall of 2014 at two monitored municipal drinking water source ponds. Both spot (or grab) and time weighted average (TWA) sampling methods were assessed over the same two-day sampling period. For spot sampling, replicate samples were taken at each site and analyzed within 12 h of sampling by both headspace (HS-) and direct (DI-) solid phase microextraction (SPME) sampling/extraction methods followed by gas chromatography/mass spectrometry (GC/MS). For TWA, a two-day passive on-site TWA sampling was carried out at the same sampling points in the ponds. All SPME sampling methods used a 65-µm PDMS/DVB SPME fiber, which was found optimal for THM sampling. Sampling conditions were optimized in the laboratory using calibration standards of chloroform, bromoform, bromodichloromethane, dibromochloromethane, 1,2-dibromoethane and 1,2-dichloroethane, prepared in aqueous solutions from analytical grade samples. Calibration curves for all methods with R² values ranging from 0.985–0.998 (N = 5) over the quantitation linear range of 3–800 ppb were achieved. The different sampling methods were compared for quantification of the water samples, and results showed that the DI- and TWA-sampling methods gave better data and analytical metrics. Addition of 10% wt./vol. of (NH4)2SO4 salt to the sampling vial was found to aid extraction of THMs by increasing GC peak areas by about 10%, which resulted in lower detection limits for all techniques studied. However, for on-site TWA analysis of THMs in natural waters, the ionic strength conditions of the calibration standard(s) must be carefully matched to natural water conditions to properly quantitate THM concentrations. The data obtained from the TWA method may better reflect actual natural water conditions.

  13. Subquadratic medial-axis approximation in $\mathbb{R}^3$

    Directory of Open Access Journals (Sweden)

    Christian Scheffer

    2015-09-01

    We present an algorithm that approximates the medial axis of a smooth manifold in $\mathbb{R}^3$ which is given by a sufficiently dense point sample. The resulting, non-discrete approximation is shown to converge to the medial axis as the sampling density approaches infinity. While all previous algorithms guaranteeing convergence have a running time quadratic in the size $n$ of the point sample, we achieve a running time of at most $\mathcal{O}(n \log^3 n)$. While there is no subquadratic upper bound on the output complexity of previous algorithms for non-discrete medial axis approximation, the output of our algorithm is guaranteed to be of linear size.

  14. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations

  15. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
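
    A toy version of the Maxwell constraint count mentioned above, extended in VPG spirit by letting fluctuating constraints enter through their probabilities of being present; the network sizes and probabilities are made-up numbers, and this is only the mean-field lower bound, not the pebble game itself.

```python
def mcc_internal_dof(n_bodies, constraints):
    """Maxwell constraint counting for a body-bar network.

    Each rigid body carries 6 degrees of freedom; 6 global rigid-body motions
    are removed; each bar removes at most one DOF. `constraints` is a list of
    probabilities that each bar is present (1.0 for a quenched bar), so the
    same count doubles as a mean-field ensemble average over fluctuating
    constraints, in the spirit of replacing topologies by probabilities.
    """
    expected_bars = sum(constraints)
    return max(0.0, 6 * n_bodies - 6 - expected_bars)

# Toy network: 10 bodies, 40 quenched bars plus 20 hydrogen-bond-like bars
# present with probability 0.3 each (illustrative numbers only).
bars = [1.0] * 40 + [0.3] * 20
print(f"MCC lower bound on internal DOF: {mcc_internal_dof(10, bars):.1f}")
```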

  16. Approximate solutions of common fixed-point problems

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book presents results on the convergence behavior of algorithms which are known as vital tools for solving convex feasibility problems and common fixed point problems. The main goal in dealing with a known computational error is to find what approximate solution can be obtained and how many iterates one needs to find it. According to known results, these algorithms should converge to a solution. In this exposition, these algorithms are studied taking into account computational errors, which remain consistent in practice. In this case the convergence to a solution does not take place. We show that our algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Beginning with an introduction, this monograph moves on to study: · dynamic string-averaging methods for common fixed point problems in a Hilbert space · dynamic string methods for common fixed point problems in a metric space · dynamic string-averaging version of the proximal...

  17. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.

  18. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements that has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
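
    The stated minimum average depth can be checked as a small exact-arithmetic computation:

```python
from fractions import Fraction
from math import factorial

# Minimum average depth of a decision tree sorting 8 distinct elements,
# as stated in the abstract: 620160 / 8! comparisons on average.
avg_depth = Fraction(620160, factorial(8))
print(avg_depth, "=", float(avg_depth))   # 323/21 = 15.380952...
```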

  19. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM, and approximators based on it, approximate a relation in data by applying interpolation between so-called support vectors, a limited number of samples that have been selected during training.

  20. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
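
    A minimal sketch of the estimator analyzed in this record: exponential averaging of subsequent periodograms of a streaming signal. The test signal, window, and smoothing weight alpha are arbitrary choices; the statistics derived in the paper concern exactly this kind of PSD estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, nseg = 1000.0, 256            # sampling rate (Hz) and samples per periodogram
alpha = 0.05                      # smoothing weight; time constant ~ 1/alpha segments

def segment():
    """Test process: white noise plus a 100 Hz line (assumed for illustration)."""
    t = np.arange(nseg) / fs
    return np.sin(2 * np.pi * 100.0 * t) + rng.normal(size=nseg)

window = np.hanning(nseg)
norm = fs * np.sum(window**2)
psd = None
for _ in range(400):              # stream of subsequent periodograms
    x = segment() * window
    pgram = np.abs(np.fft.rfft(x))**2 / norm
    # exponential averaging: psd <- (1 - alpha)*psd + alpha*pgram
    psd = pgram if psd is None else (1.0 - alpha) * psd + alpha * pgram

freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
print(f"PSD peak near {freqs[np.argmax(psd)]:.1f} Hz (expected 100 Hz)")
```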

  2. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  3. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.

  4. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  5. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    The classical set Bad of 'badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  6. Calculation of thermodynamic properties using the random-phase approximation: alpha-N2

    NARCIS (Netherlands)

    Jansen, A.P.J.; Schoorl, R.

    1988-01-01

    The random-phase approximation (RPA) for molecular crystals is extended in order to calculate thermodynamic properties. A recursion formula for thermodynamic averages of products of mean-field excitation and deexcitation operators is derived. With this formula the thermodynamic average of any

  7. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  8. Elastic anisotropy of core samples from the Taiwan Chelungpu Fault Drilling Project (TCDP): direct 3-D measurements and weak anisotropy approximations

    Science.gov (United States)

    Louis, Laurent; David, Christian; Špaček, Petr; Wong, Teng-Fong; Fortin, Jérôme; Song, Sheng Rong

    2012-01-01

    The study of seismic anisotropy has become a powerful tool to decipher rock physics attributes in reservoirs or in complex tectonic settings. We compare direct 3-D measurements of P-wave velocity in 132 different directions on spherical rock samples to the prediction of the approximate model proposed by Louis et al. based on a tensorial approach. The data set includes measurements on dry spheres under confining pressure ranging from 5 to 200 MPa for three sandstones retrieved at a depth of 850, 1365 and 1394 metres in TCDP hole A (Taiwan Chelungpu Fault Drilling Project). As long as the P-wave velocity anisotropy is weak, we show that the predictions of the approximate model are in good agreement with the measurements. As the tensorial method is designed to work with cylindrical samples cored in three orthogonal directions, a significant gain both in the number of measurements involved and in sample preparation is achieved compared to measurements on spheres. We analysed the pressure dependence of the velocity field and show that as the confining pressure is raised the velocity increases, the anisotropy decreases but remains significant even at high pressure, and the shape of the ellipsoid representing the velocity (or elastic) fabric evolves from elongated to planar. These observations can be accounted for by considering the existence of both isotropic and anisotropic crack distributions and their evolution with applied pressure.

  9. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  10. Approximate determination of efficiency for activity measurements of cylindrical samples

    Energy Technology Data Exchange (ETDEWEB)

    Helbig, W [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany); Bothe, M [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany)

    1997-03-01

    Some calibration samples are necessary with the same geometrical parameters but of different materials, containing known activities A homogeneously distributed. Their densities are measured; their mass absorption coefficients may be unknown. These calibration samples are positioned in the counting geometry, for instance directly on the detector. The efficiency function ε(E) for each sample is obtained by measuring the gamma spectra and evaluating all usable gamma energy peaks. From these ε(E) the commonly valid ε_geom(E) is deduced. For this purpose the functions ε_μ(E) for these samples have to be established. (orig.)

  11. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among various processes in studies of plume dispersion
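
    As a quick numerical illustration of the Turner-style power-law adjustment discussed above, the sketch below rescales a 15-min average concentration to another averaging time. The exponent p = 0.2 is an assumed illustrative value, not taken from the abstract, which tests the formula's range of validity rather than fixing p.

```python
# A hedged sketch of a Turner-style power-law adjustment between averaging
# times, c(T) = c(15 min) * (15/T)**p, with an assumed exponent p.

def adjust_concentration(c_15min: float, t_minutes: float, p: float = 0.2) -> float:
    """Rescale a 15-minute average concentration to averaging time t_minutes."""
    return c_15min * (15.0 / t_minutes) ** p

# A 60-minute average is predicted to be lower than the 15-minute average:
print(adjust_concentration(100.0, 60.0))  # ~75.8 for p = 0.2
```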

  12. Average size of random polygons with fixed knot topology.

    Science.gov (United States)

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = 3_1, 3_1#4_1, and we have confirmed the scaling law R²(K) ≈ N^(2ν(K)) for the number N of polygonal nodes in a wide range, N = 100-2200. The best fit gives 2ν(K) ≈ 1.11-1.16 with good fitting curves in the whole range of N. The estimate of 2ν(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν(K) ≈ 1.01-1.07, which is close to the exponent of random polygons.
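
    The scaling-law fit itself is ordinary least squares in log-log coordinates. The sketch below recovers an assumed exponent (1.14, chosen inside the reported 1.11-1.16 band) from synthetic data; it illustrates the fitting step only, not the paper's simulation.

```python
import numpy as np

# Synthetic check of the scaling-law fit: generate R^2 ~ N**1.14 with noise
# (1.14 is an assumed exponent) and recover 2*nu as the slope of a
# log-log least-squares line.
rng = np.random.default_rng(0)
N = np.array([100, 200, 400, 800, 1600, 2200], dtype=float)
R2 = N ** 1.14 * np.exp(rng.normal(0.0, 0.02, size=N.size))

slope, intercept = np.polyfit(np.log(N), np.log(R2), 1)
print(f"estimated 2*nu = {slope:.3f}")  # close to the assumed 1.14
```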

  13. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  14. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
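
    A minimal sketch of the average-of-delta idea, with hypothetical numbers: form delta = current - previous for patients with repeat results, then track the running mean of the last n deltas; an added assay bias shifts this mean away from zero.

```python
import numpy as np

# Hypothetical data: paired previous/current results for patients with
# repeat testing; an assay bias of +3 units is switched on halfway through.
rng = np.random.default_rng(1)
n_pairs, window, onset = 400, 10, 200
previous = rng.normal(140.0, 4.0, n_pairs)            # between-subject spread
current = previous + rng.normal(0.0, 1.5, n_pairs)    # within-subject + analytical
current[onset:] += 3.0                                # simulated assay bias

# Average of delta: running mean of the last `window` delta values.
deltas = current - previous
avg_of_delta = np.convolve(deltas, np.ones(window) / window, mode="valid")
print(avg_of_delta[:3], avg_of_delta[-3:])  # near 0 before the bias, near 3 after
```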

  15. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  16. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    Science.gov (United States)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  17. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
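
    The sketch below is not the paper's closed-form approximation; it only illustrates numerically how negative-binomial quantiles widen relative to Poisson quantiles for a fiber count once extra (human-variation) dispersion is included. The mean and dispersion values are assumptions.

```python
from scipy.stats import nbinom, poisson

# Assumed values: mean fiber count and the extra squared CV from counter
# variability. nbinom(n, p) has mean n(1-p)/p and variance mean + mean**2/n.
mean, extra_cv2 = 25.0, 0.04
n = 1.0 / extra_cv2
p = n / (n + mean)

print("Poisson   2.5%/97.5%:", poisson.ppf([0.025, 0.975], mean))
print("Neg. bin. 2.5%/97.5%:", nbinom.ppf([0.025, 0.975], n, p))
```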

  18. Analytical calculation of the average scattering cross sections using fourier series

    Energy Technology Data Exchange (ETDEWEB)

    Palma, Daniel A.P. [Instituto Federal do Rio de Janeiro, Nilopolis, RJ (Brazil)], e-mail: dpalmaster@gmail.com; Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C. da [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear], e-mail: asilva@con.ufrj.br, e-mail: agoncalves@con.ufrj.br, e-mail: aquilino@lmp.ufrj.br, e-mail: fernando@con.ufrj.br

    2009-07-01

    The precise determination of the Doppler broadening functions is very important in different applications of reactor physics, mainly in the processing of nuclear data. Analytical approximations are obtained in this paper for the average scattering cross section using expansions in Fourier series, generating an approximation that is simple and precise. The results have been shown to be satisfactory from the point of view of accuracy and do not depend on the type of resonance considered. (author)

  19. Analytical calculation of the average scattering cross sections using fourier series

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C. da

    2009-01-01

    The precise determination of the Doppler broadening functions is very important in different applications of reactor physics, mainly in the processing of nuclear data. Analytical approximations are obtained in this paper for the average scattering cross section using expansions in Fourier series, generating an approximation that is simple and precise. The results have been shown to be satisfactory from the point of view of accuracy and do not depend on the type of resonance considered. (author)

  20. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  1. Self-consistent approximations beyond the CPA: Part II

    International Nuclear Information System (INIS)

    Kaplan, T.; Gray, L.J.

    1982-01-01

    This paper concentrates on a self-consistent approximation for random alloys developed by Kaplan, Leath, Gray, and Diehl. The construction of the augmented space formalism for a binary alloy is sketched, and the notation to be used is derived. Using the operator methods of the augmented space, the self-consistent approximation is derived for the average Green's function, and for evaluating the self-energy, taking into account the scattering by clusters of excitations. The particular cluster approximation desired is derived by treating the scattering by the excitations with S_T exactly. Fourier transforms on the disorder-space cluster-site labels solve the self-consistent set of equations. Expansion to short-range order in the alloy is also discussed. A method to reduce the problem to a computationally tractable form is described.

  2. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures,etc.) fuzzy functions (spectra and depth profiles) and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)

  3. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  4. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to the idea that approximate number and time share common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.

  5. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    Science.gov (United States)

    Löwe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.

  6. Legendre-tau approximation for functional differential equations. Part 2: The linear quadratic optimal control problem

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1984-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  7. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren

    2017-01-01

    , obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose

  8. On the construction of a time base and the elimination of averaging errors in proxy records

    Science.gov (United States)

    Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

    2009-04-01

    The measured averaged proxy signal is modeled by the following signal model:

        ȳ(n, θ) = (1/Δ) ∫_{n-Δ/2}^{n+Δ/2} y(m, θ) dm

    where m is the position, x(m) = Δm, θ are the unknown parameters, and y(m, θ) is the proxy signal we want to identify (the proxy signal as found in the natural archive), which we model as:

        y(m, θ) = A_0 + Σ_{k=1}^{H} [A_k sin(kω t(m)) + A_{k+H} cos(kω t(m))]

    with t(m) = m T_S + g(m) T_S. Here T_S = 1/f_S is the sampling period, f_S the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a spline approximation of the TBD is chosen:

        g(m) = Σ_l b_l φ_l(m)

    where b is the vector of unknown time base distortion parameters and the φ_l are a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method. The vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity, which is what we expected, and the correction for the averaging effect increased the amplitude by 11.18%.

  9. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  10. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  11. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacement (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
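
    A minimal simulation sketch of the quantity studied in the two records above: the time-averaged MSD of a single Brownian trajectory, compared against the expected 2Dt behaviour. D, the step size, and the lags are assumptions.

```python
import numpy as np

# Simulate one Brownian trajectory (D, dt assumed) and compute its TAMSD.
rng = np.random.default_rng(2)
D, dt, n = 1.0, 0.01, 10_000
x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt), n))

def tamsd(traj: np.ndarray, lag: int) -> float:
    """Time-averaged mean-square displacement at a given lag (in steps)."""
    disp = traj[lag:] - traj[:-lag]
    return float(np.mean(disp ** 2))

for lag in (10, 50, 100):
    print(lag, tamsd(x, lag), 2.0 * D * lag * dt)  # estimate vs expectation
```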

  12. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
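
    A toy Monte Carlo version of the comparison described above, for a small multilinear model P_top = p1·p2 + p3 with lognormal basic-event probabilities. The medians and the error factor are hypothetical.

```python
import numpy as np

# Hypothetical tree: top event = (event1 AND event2) OR event3, approximated
# by the rare-event multilinear form p1*p2 + p3. Basic-event probabilities
# are lognormal with assumed medians and a common error factor.
rng = np.random.default_rng(3)
medians = np.array([1e-3, 2e-3, 5e-4])
error_factor = 3.0                         # 95th percentile / median
sigma = np.log(error_factor) / 1.645

p = medians * np.exp(sigma * rng.normal(size=(100_000, 3)))
p_top = p[:, 0] * p[:, 1] + p[:, 2]
print(p_top.mean(), np.percentile(p_top, [5, 50, 95]))
```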

  13. Parallel magnetic resonance imaging as approximation in a reproducing kernel Hilbert space

    International Nuclear Information System (INIS)

    Athalye, Vivek; Lustig, Michael; Uecker, Martin

    2015-01-01

    In magnetic resonance imaging, data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. To understand and design k-space sampling patterns, a theoretical framework is needed to analyze how well arbitrary sampling patterns reconstruct unsampled k-space using receive coil information. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a reproducing kernel Hilbert space with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of sample selection beyond the traditional image-domain g-factor noise analysis to both noise amplification and approximation errors in k-space. This is demonstrated with numerical examples. (paper)
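
    A scalar toy analogue of the RKHS view described above: reconstructing unsampled "k-space" values by kernel interpolation from scattered samples. The Gaussian kernel stands in for the matrix-valued coil-sensitivity kernel of the paper, and all parameters are hypothetical.

```python
import numpy as np

# Hypothetical 1-D toy: interpolate scattered "k-space" samples of a known
# signal with a Gaussian kernel (kernel ridge with tiny regularization).
rng = np.random.default_rng(7)
kernel = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / 0.15) ** 2)

k_samp = np.sort(rng.uniform(-1.0, 1.0, 25))   # sampled k-space locations
y_samp = np.sinc(5.0 * k_samp)                 # measured sample values
k_grid = np.linspace(-1.0, 1.0, 201)           # reconstruction grid

alpha = np.linalg.solve(kernel(k_samp, k_samp) + 1e-8 * np.eye(25), y_samp)
recon = kernel(k_grid, k_samp) @ alpha         # RKHS interpolant on the grid
print(np.max(np.abs(recon - np.sinc(5.0 * k_grid))))  # approximation error
```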

  14. Nodal O(h4)-superconvergence in 3D by averaging piecewise linear, bilinear, and trilinear FE approximations

    Czech Academy of Sciences Publication Activity Database

    Hannukainen, A.; Korotov, S.; Křížek, Michal

    2010-01-01

    Roč. 28, č. 1 (2010), s. 1-10 ISSN 0254-9409 R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : higher order error estimates * tetrahedral and prismatic elements * superconvergence * averaging operators Subject RIV: BA - General Mathematics Impact factor: 0.760, year: 2010 http://www.jstor.org/stable/43693564

  15. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.

  16. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  17. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
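
    A discrete-time illustration of the equivalence discussed above: the long-run time average of f(X(t)) along a sample path matches the expectation of f under the path's empirical frequency distribution. The AR(1) realization and the choice f(x) = x² are assumptions.

```python
import numpy as np

# A stationary AR(1) sample path stands in for the fixed realization X(t).
rng = np.random.default_rng(4)
n = 100_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()

# Time average of f(X(t)) versus expectation under the empirical frequency
# distribution of the (rounded) path values; the two agree closely.
values, counts = np.unique(np.round(x, 2), return_counts=True)
freq = counts / counts.sum()
print(np.mean(x ** 2))                 # long-run time average of f(x) = x^2
print(np.sum(values ** 2 * freq))      # expectation w.r.t. frequencies
```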

  18. Implementation of an approximate zero-variance scheme in the TRIPOLI Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Dumonteil, E.; Petit, O.; Diop, C. [Commissariat a l' Energie Atomique CEA, Gif-sur-Yvette (France)

    2006-07-01

    In an accompanying paper it is shown that theoretically a zero-variance Monte Carlo scheme can be devised for criticality calculations if the space-, energy- and direction-dependent adjoint function is exactly known. This requires biasing of the transition and collision kernels with the appropriate adjoint function. In this paper it is discussed how an existing general purpose Monte Carlo code like TRIPOLI can be modified to approach the zero-variance scheme. This requires modifications for reading in the adjoint function obtained from a separate deterministic calculation for a number of space intervals, energy groups and discrete directions. Furthermore, a function has to be added to supply the direction-dependent and the averaged adjoint function at a specific position in the system by interpolation. The initial particle weights of a certain batch must be set inversely proportional to the averaged adjoint function and proper normalization of the initial weights must be secured. The sampling of the biased transition kernel requires cumulative integrals of the biased kernel along the flight path until a certain value, depending on a selected random number, is reached to determine a new collision site. The weight of the particle must be adapted accordingly. The sampling of the biased collision kernel (in a multigroup treatment) is much more like the normal sampling procedure. A numerical example is given for a 3-group calculation with a simplified transport model (two-direction model), demonstrating that the zero-variance scheme can be approximated quite well for this simplified case. (authors)

  19. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
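
    A schematic of the random-sampling idea described above: the infectious pressure on each susceptible is a sum of spatial-kernel terms over all infectives, and subsampling infectives with rescaling gives a cheap estimate of that sum. The kernel and coordinates are assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical epidemic snapshot: susceptible and infective coordinates,
# with an assumed inverse-square spatial kernel.
rng = np.random.default_rng(8)
sus = rng.uniform(0.0, 10.0, (500, 2))     # susceptible locations
inf_ = rng.uniform(0.0, 10.0, (2000, 2))   # infective locations
kernel = lambda d: d ** -2.0

def pressure(sample_frac: float) -> np.ndarray:
    """Infectious pressure per susceptible from a subsample of infectives."""
    m = int(len(inf_) * sample_frac)
    sub = inf_[rng.choice(len(inf_), size=m, replace=False)]
    d = np.linalg.norm(sus[:, None, :] - sub[None, :, :], axis=-1)
    return kernel(d).sum(axis=1) / sample_frac   # rescale to estimate full sum

full, approx = pressure(1.0), pressure(0.1)
print(np.corrcoef(full, approx)[0, 1])   # the cheap estimate tracks the full sum
```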

  20. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  1. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  2. Physics-preserving averaging scheme based on Grunwald-Letnikov formula for gas flow in fractured media

    KAUST Repository

    Amir, Sahar Z.

    2018-01-02

    The heterogeneous nature of rock fabrics, due to the existence of multi-scale fractures and geological formations, leads to deviations from unity in the fractional-exponent magnitudes of the flux equations. In this paper, the resulting non-Newtonian non-Darcy fractional-derivative flux equations are solved using physics-preserving averaging schemes that incorporate both the original and shifted Grunwald-Letnikov (GL) approximation formulas, preserving the physics by reducing the shifting effects while maintaining the stability of the system by keeping one shifted expansion. The proposed way of using the GL expansions also generates symmetrical coefficient matrices, which significantly reduces the discretization complexities that appear with the all-shifted cases from the literature and helps considerably in 2D and 3D systems. The derivations of the system equations and the discretization details are discussed. Then, the physics-preserving averaging scheme is explained and illustrated. Finally, results are presented and reviewed. Edge-based original GL expansions are unstable, as also illustrated in the literature. Shifted GL expansions are stable but add many additional weights to both discretization sides, affecting the physical accuracy. In comparison, the physics-preserving averaging scheme balances the physical accuracy and stability requirements, leading to a more physically conservative scheme that is more stable than the original GL approximation but might be slightly less stable than the shifted GL approximations. It is a locally conservative single-continuum averaging scheme that applies a finite-volume viewpoint.

  3. Physics-preserving averaging scheme based on Grunwald-Letnikov formula for gas flow in fractured media

    KAUST Repository

    Amir, Sahar Z.; Sun, Shuyu

    2018-01-01

    The heterogeneous nature of rock fabrics, due to the existence of multi-scale fractures and geological formations, leads to deviations from unity in the fractional-exponent magnitudes of the flux equations. In this paper, the resulting non-Newtonian non-Darcy fractional-derivative flux equations are solved using physics-preserving averaging schemes that incorporate both the original and shifted Grunwald-Letnikov (GL) approximation formulas, preserving the physics by reducing the shifting effects while maintaining the stability of the system by keeping one shifted expansion. The proposed way of using the GL expansions also generates symmetrical coefficient matrices, which significantly reduces the discretization complexities that appear with the all-shifted cases from the literature and helps considerably in 2D and 3D systems. The derivations of the system equations and the discretization details are discussed. Then, the physics-preserving averaging scheme is explained and illustrated. Finally, results are presented and reviewed. Edge-based original GL expansions are unstable, as also illustrated in the literature. Shifted GL expansions are stable but add many additional weights to both discretization sides, affecting the physical accuracy. In comparison, the physics-preserving averaging scheme balances the physical accuracy and stability requirements, leading to a more physically conservative scheme that is more stable than the original GL approximation but might be slightly less stable than the shifted GL approximations. It is a locally conservative single-continuum averaging scheme that applies a finite-volume viewpoint.
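
    Both records above rest on the Grunwald-Letnikov (GL) expansion. As a minimal, hedged sketch of that machinery (not the papers' averaging scheme itself), the GL weights follow a simple recurrence and yield a one-sided fractional-derivative approximation; alpha, the grid, and the test function are assumptions.

```python
import numpy as np

# GL weights w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k, and the resulting
# edge-based (unshifted) fractional-derivative approximation on a uniform grid.
def gl_weights(alpha: float, n: int) -> np.ndarray:
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f: np.ndarray, alpha: float, h: float) -> np.ndarray:
    # D^alpha f(x_i) ~ h**(-alpha) * sum_{k<=i} w_k f(x_{i-k})
    w = gl_weights(alpha, f.size)
    return np.array([np.dot(w[: i + 1], f[i::-1]) for i in range(f.size)]) / h ** alpha

x = np.linspace(0.0, 1.0, 101)
print(gl_derivative(x, 0.5, x[1] - x[0])[-1])  # exact value at x = 1 is ~1.128
```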

  4. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  5. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  6. The average inter-crossing number of equilateral random walks and polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Stasiak, A

    2005-01-01

    In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation results show that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) would still approach infinity if the length of the other random walk (polygon) approached infinity. The data provided by our simulations match our theoretical predictions very well.

  7. Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control

    Energy Technology Data Exchange (ETDEWEB)

    Gaitsgory, Vladimir, E-mail: vladimir.gaitsgory@mq.edu.au [Macquarie University, Department of Mathematics (Australia); Rossomakhine, Sergey, E-mail: serguei.rossomakhine@flinders.edu.au [Flinders University, Flinders Mathematical Sciences Laboratory, School of Computer Science, Engineering and Mathematics (Australia)

    2015-04-15

    The paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  8. Cosmogenic 22Na and 26Al in samples of lunar ground from a drill column of Moon-24

    International Nuclear Information System (INIS)

    Lavrukhina, A.K.; Povinets, P.; Ustinova, G.K.

    1984-01-01

    The method of low-background (β-γ-γ)-spectrometry without destruction of the sample has been used to measure 22Na and 26Al radioactivity in samples of lunar ground 24118.4-4, 24143.4-4 and 24184.4-4 from the 'Luna-24' drilling column. Equilibrium radioactivity of these cosmogenic isotopes is calculated by the analytic method. The analysis of theoretical and experimental data shows that at depths below approximately 40 cm from the lunar surface the drilling process did not bring about ground mixing in the drilling column. For the last million years the regolith surface layer at the 'Luna-24' landing site remained practically unchanged, i.e. it has not been subjected to the intensive effect of mechanical processes on the lunar surface. The average intensity of galactic cosmic rays with rigidity > 0.5 GV remained stable over the last million years to within approximately 20% and corresponded to their modern mean intensity of 0.24 particles·cm⁻²·s⁻¹·sr⁻¹. The average spectrum of galactic cosmic rays over a million years approximately corresponds to the average spectrum for 1962 or 1971.

  9. Calculating properties with the coherent-potential approximation

    International Nuclear Information System (INIS)

    Faulkner, J.S.; Stocks, G.M.

    1980-01-01

    It is demonstrated that the expression that has hitherto been used for calculating the Bloch spectral-density function A^B(E,k) in the Korringa-Kohn-Rostoker coherent-potential-approximation theory of alloys leads to manifestly unphysical results. No manipulation of the expression can eliminate this behavior. We develop an averaged Green's-function formulation and from it derive a new expression for A^B(E,k) which does not contain unphysical features. The earlier expression for A^B(E,k) was suggested as plausible on the basis that it is a spectral decomposition of the Lloyd formula. Expressions for many other properties of alloys have been obtained by manipulations of the Lloyd formula, and it is now clear that all such expressions must be considered suspect. It is shown by numerical and algebraic comparisons that some of the expressions obtained in this way are equivalent to the ones obtained from a Green's function, while others are not. In addition to studying these questions, the averaged Green's-function formulation developed in this paper is shown to furnish an interesting new way to approach many problems in alloy theory. The method is described in such a way that the aspects of the formulation that arise from the single-site approximation can be distinguished from those that depend on a specific choice for the effective scatterer.

  10. Reducing Approximation Error in the Fourier Flexible Functional Form

    Directory of Open Access Journals (Sweden)

    Tristan D. Skolrud

    2017-12-01

    The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
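
    The substitution described above, with the Box-Cox transform nesting the logarithm as λ → 0, can be seen directly with hypothetical data; λ is estimated by maximum likelihood via scipy.

```python
import numpy as np
from scipy.stats import boxcox

# Hypothetical positive data; boxcox returns the transform and the MLE of
# lambda. For (near-)lognormal data the estimated lambda is near 0, i.e.
# close to a pure log transform.
rng = np.random.default_rng(5)
x = rng.lognormal(mean=1.0, sigma=0.4, size=500)

x_bc, lam = boxcox(x)                   # (x**lam - 1) / lam
print(f"estimated lambda = {lam:.3f}")

# The Box-Cox transform nests the logarithm in the limit lambda -> 0:
eps = 1e-9
print(np.allclose(np.log(x), (x ** eps - 1.0) / eps, atol=1e-5))
```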

  11. ⟨φ²⟩ for a scalar field in 2D black holes: A new uniform approximation

    International Nuclear Information System (INIS)

    Frolov, V.; Sushkov, S.V.; Zelnikov, A.

    2003-01-01

    We study nonconformal quantum scalar fields and averages of their local observables (such as ⟨φ²⟩_ren and ⟨T_μν⟩_ren) in the spacetime of a two-dimensional black hole. In order to get an analytical approximation for these expressions the WKB approximation is often used. We demonstrate that at the horizon the WKB approximation is violated for a nonconformal field, that is, when the field mass and/or the parameter of nonminimal coupling does not vanish. We propose a new 'uniform approximation' which solves this problem. We use this approximation to obtain an improved analytical approximation for ⟨φ²⟩_ren in the two-dimensional black hole geometry. We compare the results obtained with numerical calculations.

  12. Calculation of the MSD two-step process with the sudden approximation

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Shiro [Tohoku Univ., Sendai (Japan). Dept. of Physics; Kawano, Toshihiko [Kyushu Univ., Advanced Energy Engineering Science, Kasuga, Fukuoka (Japan)

    2000-03-01

    A calculation of the two-step process with the sudden approximation is described. The Green's function which connects the one-step matrix element to the two-step one is represented in γ-space to avoid the on-energy-shell approximation. Microscopically calculated two-step cross sections are averaged together with an appropriate level density to give a two-step cross section. The calculated cross sections are compared with the experimental data, although the calculation still contains several simplifications at this stage. (author)

  13. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    Science.gov (United States)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    Weak atmospheric turbulence conditions in optical wireless communication (OWC) are captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
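
    The quadrature baseline mentioned above (against which the closed form is compared) can be sketched as follows: the average BER E[Q(√SNR · I)] under log-normal fading with E[I] = 1 is evaluated by Gauss-Hermite quadrature. The modulation details are simplified and the parameter values are assumptions.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import erfc

Q = lambda z: 0.5 * erfc(z / np.sqrt(2.0))  # Gaussian Q-function

def avg_ber_gh(snr: float, sigma_l: float, order: int = 40) -> float:
    """E[Q(sqrt(snr) * I)] with ln(I) ~ N(-sigma_l^2/2, sigma_l^2), E[I] = 1."""
    x, w = hermgauss(order)
    intensity = np.exp(np.sqrt(2.0) * sigma_l * x - 0.5 * sigma_l ** 2)
    return float(np.sum(w * Q(np.sqrt(snr) * intensity)) / np.sqrt(np.pi))

print(avg_ber_gh(snr=25.0, sigma_l=0.3))  # assumed SNR and log-intensity sigma
```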

  14. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model streamflow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
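
    A compact EM sketch for BMA training in the Gaussian case (weights w_k and a common variance), the baseline against which DREAM is compared above. The ensemble forecasts and observations are synthetic assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

# Synthetic ensemble: K member forecasts F with assumed biases/spreads,
# verifying observations y. BMA mixture: p(y) = sum_k w_k N(y; F_k, var).
rng = np.random.default_rng(6)
T, K = 500, 3
truth = rng.normal(10.0, 2.0, T)
F = truth[:, None] + rng.normal([0.5, -0.3, 0.0], [1.0, 1.5, 0.7], (T, K))
y = truth + rng.normal(0.0, 0.5, T)

w, var = np.full(K, 1.0 / K), 1.0
for _ in range(200):                                     # EM iterations
    dens = w * norm.pdf(y[:, None], F, np.sqrt(var))     # T x K densities
    z = dens / dens.sum(axis=1, keepdims=True)           # responsibilities
    w = z.mean(axis=0)                                   # weight update
    var = np.sum(z * (y[:, None] - F) ** 2) / T          # common variance
print("weights:", np.round(w, 3), " sigma:", round(float(np.sqrt(var)), 3))
```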

  15. Real-space calculations of nonspherically averaged charge densities for substitutionally disordered alloys

    International Nuclear Information System (INIS)

    Singh, P.P.; Gonis, A.

    1993-01-01

    Based on screening transformations of muffin-tin orbitals introduced by Andersen and Jepsen [Phys. Rev. Lett. 53, 2571 (1984)], we have developed a formalism for calculating the nonspherically averaged charge densities of substitutionally disordered alloys using the Korringa-Kohn-Rostoker coherent-potential-approximation (KKR CPA) method in the atomic-sphere approximation (ASA). We have validated our method by calculating charge densities for ordered structures, where we find that our approach yields charge densities that are essentially indistinguishable from the results of full-potential methods. Calculations and comparisons are reported for Si, Al, and Li. For substitutionally disordered alloys, where full-potential methods have not been implemented so far, our approach can be used to calculate reliable nonspherically averaged charge densities from spherically symmetric one-electron potentials obtained from the KKR-ASA CPA. We report on our study of differences in charge density between ordered AlLi in the L1₀ phase and substitutionally disordered Al0.5Li0.5 on a face-centered-cubic lattice.

  16. The single-collision thermalization approximation for application to cold neutron moderation problems

    International Nuclear Information System (INIS)

    Ritenour, R.L.

    1989-01-01

    The single collision thermalization (SCT) approximation models the thermalization process by assuming that neutrons attain a thermalized distribution after only a single collision within the moderating material, independent of the neutron's incident energy. The physical intuition on which this approximation is based is that the salient properties of neutron thermalization are accounted for in the first collision, and the effects of subsequent collisions tend to average out statistically. The independence of the neutron incident and outscattering energies leads to variable separability in the scattering kernel and, thus, significant simplification of the neutron thermalization problem. The approximation also addresses detailed balance and neutron conservation concerns. All of the tests performed on the SCT approximation yielded excellent results. The significance of the SCT approximation is that it greatly simplifies thermalization calculations for cold neutron source (CNS) design. Preliminary investigations of cases involving strong absorbers also indicate that this approximation may have broader applicability, as in the upgrading of thermalization codes.

  17. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul

    2017-01-01

    A multilevel Monte Carlo method for approximate Bayesian computation (ABC) is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  18. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute-dependence spectrum, it cannot identify changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California, Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  19. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement, expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Unlike exact Bayesian estimates, which involve either (analytical or numerical) integration or Markov chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential difference in a Zener voltage standard.

  20. ON THE AVERAGE DENSITY PROFILE OF DARK-MATTER HALOS IN THE INNER REGIONS OF MASSIVE EARLY-TYPE GALAXIES

    International Nuclear Information System (INIS)

    Grillo, C.

    2012-01-01

    We study a sample of 39 massive early-type lens galaxies at redshift z ≲ 0.3 to determine the slope of the average dark-matter density profile in the innermost regions. We keep the strong-lensing and stellar population synthesis modeling as simple as possible to measure the galaxy total and luminous masses. By rescaling the values of the Einstein radius and dark-matter projected mass with the values of the luminous effective radius and mass, we combine all the data of the galaxies in the sample. We find that between 0.3 and 0.9 times the value of the effective radius, the average logarithmic slope of the dark-matter projected density profile is –1.0 ± 0.2 (i.e., approximately isothermal) or –0.7 ± 0.5 (i.e., shallower than isothermal), if, respectively, a constant Chabrier or heavier, Salpeter-like stellar initial mass function is adopted. These results provide positive evidence of the influence of the baryonic component on the contraction of the galaxy dark-matter halos, compared to the predictions of dark-matter-only cosmological simulations, and open a new way to test models of structure formation and evolution within the standard ΛCDM cosmological scenario.

  1. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time-varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high-density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large- or small-area phase-unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large-area defect in the average is usually sufficient to spoil the entire result. Small-area phase-unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large- and small-area phase defects. It identifies and rejects phase maps containing large-area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time parameters for tuning the rejection criteria for bad data; however, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
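
    The full algorithm is not reproduced in the abstract, but the core idea of pruning unreliable pixels before averaging can be sketched as follows. This is a simple one-pass sigma-clip over a stack of phase maps with NaN-marked voids, not the authors' complete procedure (which also rejects whole maps and removes alignment drift).

        import numpy as np

        def robust_phase_average(stack, k=3.0):
            """Pixelwise average and std of a stack of phase maps (N, H, W),
            with voids stored as NaN, after one pass of sigma-clipping to
            suppress unwrapping outliers."""
            mean = np.nanmean(stack, axis=0)
            std = np.nanstd(stack, axis=0)
            bad = np.abs(stack - mean) > k * std   # flag outlier pixels per map
            clipped = np.where(bad, np.nan, stack)
            return np.nanmean(clipped, axis=0), np.nanstd(clipped, axis=0)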

  2. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled...... with stochastic partial differential approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show...

  3. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
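
    A minimal sketch of the idea: if each experiment's error is (approximately) a known function of the true value, e.g. purely relative, then re-deriving the per-experiment errors from the current average instead of from each reported value removes the bias of the naive weighted mean. The functional form and the iteration below are illustrative assumptions, not the paper's exact prescription.

        import numpy as np

        def average_sliding_errors(x, rel_err, n_iter=20):
            """Weighted average when sigma_i = rel_err_i * (true value).
            Weighting by the *reported* errors rel_err_i * x_i overweights
            low measurements and biases the mean; evaluating the errors at
            the running average removes that bias."""
            m = np.average(x, weights=1.0 / (rel_err * x) ** 2)  # naive, biased start
            for _ in range(n_iter):
                sigma = rel_err * m            # errors evaluated at the average
                m = np.average(x, weights=1.0 / sigma ** 2)
            return m

    For purely relative errors one pass already fixes the weights (they become proportional to 1/rel_err_i^2); the loop matters when the error's dependence on the central value is more complicated.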

  4. Distribution and evolution of Zn, Cd, and Pb in Apollo 16 regolith samples and the average U-Pb ages of the parent rocks

    Science.gov (United States)

    Cirlin, E. H.; Housley, R. M.

    1982-01-01

    The concentrations of surface (low-temperature site) and interior (high-temperature site) Cd, Zn, and Pb in 13 Apollo 16 highland fines samples, pristine rock 65325, and mare fines sample 75081 were determined directly from thermal release profiles obtained by the flameless atomic absorption (FLAA) technique. Cd and Zn in pristine ferroan anorthosite 65325, in anorthositic grains of the most mature fines 65701, and in basaltic rock fragments of mare fines 75081 were almost entirely surface Cd and Zn, indicating that most volatiles were deposited on the surfaces of vugs, vesicles, and microcracks during the initial cooling process. A considerable amount of interior Cd and Zn was observed in agglutinates. This result suggests that high-temperature-site interior volatiles originate from entrapment during lunar maturation processes. Interior Cd found in the most mature fines sample 65701 was only about 15% of the total Cd in the sample, whereas interior Pb in the Apollo 16 fines samples amounted to as much as 60% of the total. From our Cd studies we can assume that this interior Pb in highland fines samples is largely due to radiogenic decay that occurred after the redistribution of the volatiles took place. We obtained an average age of 4.0 b.y. for the parent rocks of the Apollo 16 highland regolith from our interior Pb analyses.

  5. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample-size allocation program using the hypergeometric distribution, implemented with object-oriented methods. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of the hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. Because the objective of an IAEA inspection is the timely detection of the diversion of significant quantities of nuclear material, game theory is applied to its sampling plan, and it is necessary to use the hypergeometric distribution directly, or a good approximation to it, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and a correctly applied binomial approximation are both closer to the hypergeometric distribution in sample-size calculations than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) sample approximate-allocation with the correctly applied standard binomial approximation, (2) sample approximate-allocation with the improved binomial approximation, and (3) sample allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
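
    The with/without-replacement distinction the abstract draws is easy to make concrete: the probability of catching at least one defect in n draws differs between the hypergeometric and binomial models, so the smallest sample size meeting a detection goal can differ too. A small sketch with made-up population numbers:

        from scipy.stats import binom, hypergeom

        def min_sample_size(N, d, beta=0.05):
            """Smallest n with P(at least one defect found) >= 1 - beta when
            sampling n of N items without replacement, d of them defective
            (hypergeometric); the binomial value shows the with-replacement
            shortcut for comparison."""
            for n in range(1, N + 1):
                p_hyper = 1.0 - hypergeom.pmf(0, N, d, n)   # args: k, M, n, N
                if p_hyper >= 1.0 - beta:
                    p_binom = 1.0 - binom.pmf(0, n, d / N)
                    return n, p_hyper, p_binom
            return N, 1.0, 1.0

        # Made-up population: 500 items, 20 defective, 95% detection probability
        print(min_sample_size(N=500, d=20, beta=0.05))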

  6. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one sample per 2.5 μs and a memory capacity of 256 x 12-bit words. The number of sweeps is selectable through a front-panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
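
    A 'stable averaging' scheme has the property that the stored array is the calibrated running mean after every sweep, so it can be displayed at any time. A floating-point sketch of the recurrence follows; the instrument itself presumably works in 12-bit fixed point with power-of-two sweep counts, which this sketch does not model.

        import numpy as np

        def stable_average(sweeps):
            """Running mean of equal-length sweeps: after sweep n the buffer
            holds A_n = A_{n-1} + (x_n - A_{n-1}) / n, i.e. the calibrated
            average of all sweeps seen so far."""
            avg = np.zeros_like(sweeps[0], dtype=float)
            for n, sweep in enumerate(sweeps, start=1):
                avg += (sweep - avg) / n
            return avg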

  7. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL]; Perfect, Edmund [University of Tennessee, Knoxville (UTK)]; Kang, Misun [ORNL]; Voisin, Sophie [ORNL]; Bilheux, Hassina Z [ORNL]; Horita, Juske [Texas Tech University (TTU)]; Hussey, Dan [NIST Center for Neutron Research (NCNR), Gaithersburg, MD]

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially saturated porous media and modeling flow and transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.

  8. Function approximation using combined unsupervised and supervised learning.

    Science.gov (United States)

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.

  9. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform the selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient of approximation), in relation to the popular simple average, weighted average, and Bagging methods.

  10. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods have become popular. However, these methods, such as Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
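
    The stochastic-approximation idea can be sketched generically as a Kiefer-Wolfowitz-type finite-difference ascent on a noisy log-likelihood evaluator. In the sketch below, sim_loglik is a hypothetical user-supplied function (e.g., a kernel density estimate of simulated summary statistics around the observed ones), and the gain sequences are standard textbook choices, not the paper's tuned ones.

        import numpy as np

        def kw_ascent(theta0, sim_loglik, n_iter=500, a=0.5, c=0.5, seed=0):
            """Kiefer-Wolfowitz stochastic gradient ascent: at each step,
            estimate the gradient of a *noisy* objective by central finite
            differences and move along it with decreasing gains."""
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            for k in range(1, n_iter + 1):
                a_k, c_k = a / k, c / k**0.25      # decreasing gain sequences
                grad = np.empty_like(theta)
                for i in range(theta.size):
                    e = np.zeros_like(theta)
                    e[i] = c_k
                    grad[i] = (sim_loglik(theta + e, rng)
                               - sim_loglik(theta - e, rng)) / (2 * c_k)
                theta += a_k * grad
            return theta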

  11. Estimating the approximation error when fixing unessential factors in global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sobol', I.M. [Institute for Mathematical Modelling of the Russian Academy of Sciences, Moscow (Russian Federation)]; Tarantola, S. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: debora.gatelli@jrc.it; Kucherenko, S.S. [Imperial College London (United Kingdom)]; Mauntz, W. [Department of Biochemical and Chemical Engineering, Dortmund University (Germany)]

    2007-07-15

    One of the major settings of global sensitivity analysis is that of fixing non-influential factors, in order to reduce the dimensionality of a model. However, this is often done without knowing the magnitude of the approximation error being produced. This paper presents a new theorem for the estimation of the average approximation error generated when fixing a group of non-influential factors. A simple function where analytical solutions are available is used to illustrate the theorem. The numerical estimation of small sensitivity indices is discussed.

  12. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  13. Green-Ampt approximations: A comprehensive analysis

    Science.gov (United States)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
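
    For reference, the implicit GA relation that the nine explicit models approximate is K*t = F - ψΔθ*ln(1 + F/(ψΔθ)) for cumulative infiltration F(t). A minimal fixed-point solver is sketched below (the iteration is a contraction, since the update's derivative ψΔθ/(ψΔθ + F) is below one); the loam-like parameter values in the example are assumed, not taken from the paper.

        import numpy as np

        def green_ampt_F(t, K, psi, dtheta, tol=1e-10, max_iter=200):
            """Cumulative infiltration F(t) from the implicit Green-Ampt
            relation K*t = F - a*ln(1 + F/a), a = psi*dtheta, solved by
            fixed-point iteration. Explicit models (LI, ST, SE, PA, BA,
            SW, AL, AE, VA) replace this loop with closed-form formulas."""
            a = psi * dtheta
            F = max(K * t, 1e-9)                 # starting guess
            for _ in range(max_iter):
                F_new = K * t + a * np.log(1.0 + F / a)
                if abs(F_new - F) < tol:
                    break
                F = F_new
            return F

        # Assumed loam-like values: K = 1.04 cm/h, psi = 8.89 cm, dtheta = 0.35
        print(green_ampt_F(t=2.0, K=1.04, psi=8.89, dtheta=0.35))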

  14. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for sequentially approximating an unknown function on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with a highly irregular distribution of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm delivers satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.

  15. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    Science.gov (United States)

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.

  16. Determination of the complex refractive index segments of turbid sample with multispectral spatially modulated structured light and models approximation

    Science.gov (United States)

    Meitav, Omri; Shaul, Oren; Abookasis, David

    2017-09-01

    Spectral data enabling the derivation of a biological tissue sample's complex refractive index (CRI) can provide a range of valuable information in clinical and research contexts. Specifically, changes in the CRI reflect alterations in tissue morphology and chemical composition, enabling its use as an optical marker during diagnosis and treatment. In the present work, we report a method for estimating the real and imaginary parts of the CRI of a biological sample using Kramers-Kronig (KK) relations in the spatial frequency domain. In this method, phase-shifted sinusoidal patterns at a single high spatial frequency are serially projected onto the sample surface at different near-infrared wavelengths while a camera mounted normal to the sample surface acquires the reflected diffuse light. In the offline analysis pipeline, the recorded images at each wavelength are converted to spatial phase maps using KK analysis and are then calibrated against phase models derived from the diffusion approximation. The amplitude of the reflected light, together with the phase data, is then introduced into the Fresnel equations to resolve both the real and imaginary segments of the CRI at each wavelength. The technique was validated on tissue-mimicking phantoms with known optical parameters and in mouse models of ischemic injury and heat stress. The experimental data indicate variations in the CRI of brain tissue suffering from injury, and CRI fluctuations correlated with alterations in the scattering and absorption coefficients of the injured tissue are demonstrated. This technique for deriving dynamic changes in the CRI of tissue may be further developed as a clinical diagnostic tool and for biomedical research applications. To the best of our knowledge, this is the first report of the estimation of the spectral CRI of a mouse head following injury obtained in the spatial frequency domain.
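
    The Kramers-Kronig step can be sketched compactly: for a minimum-phase response sampled on a uniform frequency grid, the phase is (minus) the Hilbert transform of the log-amplitude. This shows only the KK piece, under a minimum-phase assumption; the paper's calibration against diffusion-approximation phase models and the Fresnel inversion are not reproduced.

        import numpy as np
        from scipy.signal import hilbert

        def kk_phase(ln_amplitude):
            """Minimum-phase Kramers-Kronig estimate: phi = -H[ln|R|].
            scipy's hilbert() returns the analytic signal x + i*H[x],
            so H[x] = imag(hilbert(x))."""
            return -np.imag(hilbert(ln_amplitude))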

  17. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
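
    The abstract does not spell out the two techniques, but the most common such approximation is first-order (delta-method) propagation, which can be checked against Monte Carlo for a simple OR gate. The event probabilities and uncertainties below are made up for illustration.

        import numpy as np

        def var_first_order(p_mean, p_var):
            """First-order variance of P = 1 - prod(1 - p_i) for an OR gate:
            Var(P) ~ sum_i (dP/dp_i)^2 Var(p_i), dP/dp_i = prod_{j!=i}(1 - p_j)."""
            grads = np.array([np.prod(np.delete(1.0 - p_mean, i))
                              for i in range(p_mean.size)])
            return np.sum(grads**2 * p_var)

        p_mean = np.array([1e-3, 5e-3, 2e-2])
        p_var = (0.5 * p_mean) ** 2              # 50% relative uncertainty (assumed)
        rng = np.random.default_rng(1)
        p = rng.normal(p_mean, np.sqrt(p_var), size=(200_000, 3)).clip(0.0, 1.0)
        mc_var = (1.0 - np.prod(1.0 - p, axis=1)).var()
        print(var_first_order(p_mean, p_var), mc_var)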

  18. Approximate solution to the Kolmogorov equation for a fission chain-reacting system

    International Nuclear Information System (INIS)

    Ruby, L.; McSwine, T.L.

    1986-01-01

    An approximate solution has been obtained for the Kolmogorov equation describing a fission chain-reacting system. The method considers the population of neutrons, delayed-neutron precursors, and detector counts. The effect of the detector is separated from the statistics of the chain reaction by a weak coupling assumption that predicts that the detector responds to the average rather than to the instantaneous neutron population. An approximate solution to the remaining equation, involving the populations of neutrons and precursors, predicts a negative-binomial behaviour for the neutron probability distribution

  19. Sample summary report for ARG 1 pressure tube sample

    International Nuclear Information System (INIS)

    Belinco, C.

    2006-01-01

    The ARG 1 sample is made from an un-irradiated Zr-2.5% Nb pressure tube. The sample has a 103.4 mm ID, a 112 mm OD, and approximately 500 mm length. A punch mark was made very close to one end of the sample; it indicates the 12 o'clock position and also identifies the face of the tube on which all measurements are made. The ARG 1 sample contains flaws on the ID and OD surfaces; there were no intentional flaws within the wall of the pressure tube. Once the flaws had been machined, the pressure tube was covered from the outside to hide the OD flaws. Approximately 50 mm of tube length was left open at both ends to facilitate holding the sample in the fixtures for inspection; no flaws were machined in these 50 mm end zones. A total of 20 flaws were machined in the ARG 1 sample: 16 on the OD surface and the remaining 4 on the ID surface. The flaws were characterized into various groups, such as axial flaws, circumferential flaws, etc.

  20. Properties of bright solitons in averaged and unaveraged models for SDG fibres

    Science.gov (United States)

    Kumar, Ajit; Kumar, Atul

    1996-04-01

    Using the slowly varying envelope approximation and averaging over the fibre cross-section, the evolution equation for optical pulses in semiconductor-doped glass (SDG) fibres is derived from the nonlinear wave equation. Bright soliton solutions of this equation are obtained numerically, and their properties are studied and compared with those of bright solitons in the unaveraged model.

  1. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kₙ-dependent with kₙ growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  2. Approximate number word knowledge before the cardinal principle.

    Science.gov (United States)

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge-understanding the relation between the count words and the approximate magnitudes of sets-is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge-before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Polarized constituent quarks in NLO approximation

    International Nuclear Information System (INIS)

    Khorramian, Ali N.; Tehrani, S. Atashbar; Mirjalili, A.

    2006-01-01

    The valon representation provides a bridge between hadrons and quarks, in terms of which the bound-state and scattering properties of hadrons can be united and described. We studied polarized valon distributions, which play an important role in describing the spin dependence of parton distributions, in the leading and next-to-leading order approximations. The convolution integral in the framework of the valon model was used as a tool in the polarized case: to obtain the polarized parton distributions in a proton, we need the polarized valon distributions in the proton and the polarized parton distributions inside a valon. We employed Bernstein polynomial averages to obtain the unknown parameters of the polarized valon distributions by fitting to the available experimental data.

  4. Respiratory Motion Correction for Compressively Sampled Free Breathing Cardiac MRI Using Smooth l1-Norm Approximation

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2018-01-01

    Transformed-domain sparsity of Magnetic Resonance Imaging (MRI) has recently been used to reduce acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during an MR scan results in strong blurring and ghosting artifacts in the recovered MR images. To improve the quality of the recovered images, the motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free-breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem with a gradient descent algorithm; the l1-norm-based regularizer used in the optimization problem is approximated by a hyperbolic tangent function. In the second step, a block-matching algorithm known as Adaptive Rood Pattern Search (ARPS) is exploited to estimate and correct respiratory motion among the recovered images. The framework is tested on free-breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE) for different acceleration factors with the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
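
    The first-step regularizer can be sketched in a toy real-valued setting: smooth the non-differentiable l1 term so that its gradient is tanh(beta*x), then run plain gradient descent. The measurement model below is a generic dense matrix rather than undersampled k-space, and the parameter values are illustrative assumptions.

        import numpy as np

        def cs_recover_tanh(A, y, lam=0.05, beta=50.0, n_iter=2000):
            """Gradient descent for  min_x ||Ax - y||^2 + lam * smooth_l1(x),
            with d(smooth_l1)/dx ~ tanh(beta * x) as the l1 subgradient
            surrogate; the step size comes from a Lipschitz bound."""
            step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2 + lam * beta)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = 2.0 * A.T @ (A @ x - y) + lam * np.tanh(beta * x)
                x -= step * grad
            return x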

  5. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    Science.gov (United States)

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966

  6. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences with a traffic flow performance meter, this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates, for example, time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling, are evaluated with respect to accuracy and precision. Packet delay traces have been ...

  7. Method for sampling and analysis of volatile biomarkers in process gas from aerobic digestion of poultry carcasses using time-weighted average SPME and GC-MS.

    Science.gov (United States)

    Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J

    2017-10-01

    A passive sampling method using retracted solid-phase microextraction (SPME) with gas chromatography-mass spectrometry and time-weighted averaging was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, requires less equipment, and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
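
    The Fick's-law prediction model for retracted-SPME samplers has a simple closed form: the time-weighted-average concentration is C = n*Z/(D*A*t), where n is the mass collected on the fiber, Z the retraction depth (diffusion path length), D the analyte's gas-phase diffusion coefficient, A the needle-opening cross-section, and t the sampling time. All numbers in the example are assumed for illustration, not taken from the paper.

        def twa_concentration(n_mass, Z, D, A, t):
            """Time-weighted-average concentration from retracted SPME via
            Fick's first law: C = n * Z / (D * A * t).
            Units here: ng, cm, cm^2/s, cm^2, s -> ng/cm^3."""
            return n_mass * Z / (D * A * t)

        # e.g. 5 ng collected in 30 min with Z = 0.5 cm, D = 0.08 cm^2/s,
        # A = 0.00086 cm^2 (all assumed values)
        print(twa_concentration(5.0, 0.5, 0.08, 0.00086, 30 * 60))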

  8. Cosmogenic ²²Na and ²⁶Al in samples of lunar ground from the Luna-24 drill column

    Energy Technology Data Exchange (ETDEWEB)

    Lavrukhina, A.K.; Povinets, P.; Ustinova, G.K.

    1984-01-01

    The method of low-background (β-γ-γ) spectrometry without destruction of the sample was used to measure the ²²Na and ²⁶Al radioactivity in samples of lunar ground 24118.4-4, 24143.4-4 and 24184.4-4 from the Luna-24 drill column. The equilibrium radioactivity of these cosmogenic isotopes was calculated by an analytic method. The analysis of theoretical and experimental data shows that at depths below approximately 40 cm from the lunar surface the drilling process did not mix the ground in the drill column. For the last million years the regolith surface layer at the Luna-24 landing site remained practically unchanged, i.e., it was not subjected to intense mechanical processes on the lunar surface. The average intensity of galactic cosmic rays with rigidity > 0.5 GV over the last million years remained stable to within approximately 20% and corresponded to their modern mean intensity of 0.24 particles·cm⁻²·s⁻¹·sr⁻¹. The average spectrum of galactic cosmic rays over a million years approximately corresponds to the average spectrum for 1962 or 1971.

  9. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...

  10. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method is given for estimating the average level spacing from a set of resolved resonance parameters using a Bayesian approach. Using the information contained in the distributions of both the level spacings and the neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. Calculations for s-wave resonances have been carried out and compared with other work.

  11. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  12. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  13. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the self-similar exponential approximants and self-similar root approximants obtained earlier. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Pade approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Pade approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.

  14. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. They are generally calculated by solar scientists and engineers each time they are needed, often using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations of these parameters have been made for all latitudes from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers.
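
    The Spencer expressions the tables rely on are short enough to sketch; a minimal implementation for a single day of year follows (monthly averages are then obtained by averaging over the days of each month). The coefficients are Spencer's published Fourier series; the solar-constant value is an assumption.

        import numpy as np

        def spencer_decl_ecc(day):
            """Spencer's Fourier series: solar declination (rad) and
            eccentricity correction factor E0 for day-of-year 1..365."""
            g = 2.0 * np.pi * (day - 1) / 365.0
            decl = (0.006918 - 0.399912 * np.cos(g) + 0.070257 * np.sin(g)
                    - 0.006758 * np.cos(2 * g) + 0.000907 * np.sin(2 * g)
                    - 0.002697 * np.cos(3 * g) + 0.001480 * np.sin(3 * g))
            e0 = (1.000110 + 0.034221 * np.cos(g) + 0.001280 * np.sin(g)
                  + 0.000719 * np.cos(2 * g) + 0.000077 * np.sin(2 * g))
            return decl, e0

        def h0_and_daylength(lat_deg, day, isc=1367.0):
            """Daily extraterrestrial irradiation H0 (Wh/m^2, with Isc in
            W/m^2 assumed) on a horizontal plane, and the maximum possible
            sunshine duration N (hours), from the sunset hour angle ws."""
            phi = np.radians(lat_deg)
            decl, e0 = spencer_decl_ecc(day)
            ws = np.arccos(np.clip(-np.tan(phi) * np.tan(decl), -1.0, 1.0))
            h0 = (24.0 / np.pi) * isc * e0 * (np.cos(phi) * np.cos(decl) * np.sin(ws)
                                              + ws * np.sin(phi) * np.sin(decl))
            return h0, 24.0 * ws / np.pi

        print(h0_and_daylength(45.0, 172))   # near midsummer at 45 deg N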

  15. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (the modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)

  16. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  17. A revised radiation package of G-packed McICA and two-stream approximation: Performance evaluation in a global weather forecasting model

    Science.gov (United States)

    Baek, Sunghye

    2017-07-01

    For more efficient and accurate computation of radiative flux, improvements have been achieved in two aspects: integration of the radiative transfer equation over space and over angle. First, the treatment of the Monte Carlo independent column approximation (McICA) is modified, focusing on efficiency, using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA takes 20% of the radiation CPU time in the Global/Regional Integrated Model system (GRIMs); the CPU time consumption of McICA is reduced by 70% without compromising accuracy. Second, the parameterizations of the shortwave two-stream approximations are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is almost unanimously used in global circulation models (GCMs) but contains systematic errors that overestimate forward peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations for each scattering element (aerosol, liquid, ice, and snow cloud particles). The parameterizations are determined with 20,129 atmospheric columns of GRIMs data and tested with 13,422 independent data columns. The results show that the root-mean-square error (RMSE) over all atmospheric layers is decreased by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is mounted on GRIMs for mid-term numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature at lower levels of the atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) with the corrected atmospheric absorption.

  18. Comparative study of dense plasma state equations obtained from different models of average-atom

    International Nuclear Information System (INIS)

    Fromy, Patrice

    1991-01-01

    This research thesis addresses the influence of temperature and density effects on magnitudes such as pressure, energy, ionisation, and energy levels of a body described according to the approximation of an electrically neutral isolated atomic sphere. Starting from the general density-functional formalism, with some approximations, the author derives the Thomas-Fermi, Thomas-Fermi-Dirac, and Thomas-Fermi-Dirac-Weizsaecker models, and an approximate quantum average-atom model. For each of these models, the author presents an explicit method of resolution, as well as the determination of the different magnitudes considered in this study. For each of the studied magnitudes, the author highlights the effects of temperature and density, as well as the variations between the different models [fr

  19. Configuring Airspace Sectors with Approximate Dynamic Programming

    Science.gov (United States)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
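
    The rollout idea can be sketched in a few lines: estimate each candidate configuration's cost as its immediate cost plus the cost of following the myopic heuristic thereafter, and pick the best. The Python below is a generic sketch with assumed function names, not the paper's implementation.

```python
def rollout_step(state, t, configs, step_cost, transition, heuristic_cost_to_go):
    """One-step lookahead with a heuristic rollout (all names are assumptions).
    step_cost covers workload and reconfiguration costs; heuristic_cost_to_go
    returns the cost of following the myopic heuristic from a given state."""
    best_config, best_cost = None, float("inf")
    for c in configs:
        # Immediate cost plus heuristic estimate of the remaining cost.
        cost = step_cost(state, c) + heuristic_cost_to_go(transition(state, c), t + 1)
        if cost < best_cost:
            best_config, best_cost = c, cost
    return best_config
```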

  20. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples, which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to > 46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  1. Development of quick-response area-averaged void fraction meter

    International Nuclear Information System (INIS)

    Watanabe, Hironori; Iguchi, Tadashi; Kimura, Mamoru; Anoda, Yoshinari

    2000-11-01

    The authors are performing experiments to investigate BWR thermal-hydraulic instability under coupled neutronics and thermal-hydraulics. These experiments require instantaneous measurement of the area-averaged void fraction in a rod bundle under high-temperature/high-pressure gas-liquid two-phase flow conditions. Since no existing void fraction meters suited these requirements, we newly developed a practical void fraction meter. The principle of the meter is that the electrical conductance changes with the void fraction in gas-liquid two-phase flow. In this meter, the metal flow channel wall is used as one electrode and an L-shaped line electrode installed at the center of the flow channel is used as the other. This electrode arrangement makes instantaneous measurement of the area-averaged void fraction possible even within the metal flow channel. We performed experiments with air/water two-phase flow to clarify the meter's performance. Experimental results indicated that the void fraction is approximated by α = 1 − I/I₀, where α and I are the void fraction and the current (I₀ is the current at α = 0). This relation holds over a wide void fraction range, 0-70%. The difference between α and 1 − I/I₀ was approximately 10% at maximum. The major sources of this difference are the void distribution over the measurement area and electrical insulation of the center electrode by bubbles. The principle and structure of this void fraction meter are very basic and simple. Therefore, the meter can be applied in various fields of gas-liquid two-phase flow studies. (author)
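
    The reported calibration is simple enough to state as code; this short Python sketch just applies α = 1 − I/I₀ (the readings below are made-up values, not from the experiment).

```python
def void_fraction(current, current_at_zero_void):
    """Area-averaged void fraction from the measured current, using the
    relation alpha = 1 - I/I0 reported above (accurate to roughly 10%
    over the 0-70% void fraction range)."""
    return 1.0 - current / current_at_zero_void

# Hypothetical readings: I0 = 8.0 mA at alpha = 0, and a reading of 3.2 mA.
print(void_fraction(3.2, 8.0))  # -> 0.6
```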

  2. Hot accreting white dwarfs in the quasi-static approximation

    International Nuclear Information System (INIS)

    Iben, I. Jr.

    1982-01-01

    Properties of white dwarfs which are accreting hydrogen-rich matter at rates in the range 1.5 × 10⁻⁹ to 2.5 × 10⁻⁷ M☉ yr⁻¹ are investigated in several approximations. Steady-burning models, in which matter is processed through nuclear-burning shells as rapidly as it is accreted, provide a framework for understanding the properties of models in which thermal pulses induced by hydrogen burning and helium burning are allowed to occur. In these latter models, the underlying carbon-oxygen core is chosen to be in a cycle-averaged steady state with regard to compressional heating and neutrino losses. Several of these models are evolved in the quasi-static approximation. Combining results obtained in the steady-burning approximation with those obtained in the quasi-static approximation, expressions are obtained for estimating, as functions of accretion rate and white dwarf mass, the thermal pulse recurrence period and the duration of hydrogen-burning phases. The time spent by an accreting model burning hydrogen as a large star of giant dimensions versus the time spent burning hydrogen as a hot dwarf is also estimated as a function of model mass and accretion rate. Finally, suggestions for detecting observational counterparts of the theoretical models and suggestions for further theoretical investigations are offered. Subject headings: stars: accretion; stars: interiors; stars: novae; stars: symbiotic; stars: white dwarfs

  3. Approximation of the Monte Carlo Sampling Method for Reliability Analysis of Structures

    Directory of Open Access Journals (Sweden)

    Mahdi Shadab Far

    2016-01-01

    Full Text Available Structural load types, on the one hand, and structural capacity to withstand these loads, on the other hand, are of a probabilistic nature, as they cannot be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches towards the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved to be very convenient and efficient. However, it does suffer from certain disadvantages, the biggest one being the requirement of a very large number of samples to handle small probabilities, leading to a high computational cost. In this paper, a simple algorithm is proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. This revised approach is then presented in a step-by-step flowchart, for the purpose of easy programming and implementation.
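
    For reference, the baseline the paper improves on is crude Monte Carlo, where the failure probability is estimated as the fraction of samples whose limit-state function is negative. A minimal Python sketch with a toy limit state (not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_failure_probability(limit_state, sampler, n=100_000):
    """Crude Monte Carlo: P_f ~ fraction of samples with g(x) < 0.
    Illustrates why small P_f needs many samples, the cost the
    paper's revised algorithm targets."""
    x = sampler(n)
    return (limit_state(x) < 0.0).mean()

# Toy case: capacity R ~ N(10, 1), load S ~ N(7, 1), limit state g = R - S.
g = lambda x: x[:, 0] - x[:, 1]
sampler = lambda n: rng.normal([10.0, 7.0], 1.0, size=(n, 2))
print(mc_failure_probability(g, sampler))  # about 0.017 analytically
```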

  4. Concentration fluctuations and averaging time in vapor clouds

    CERN Document Server

    Wilson, David J

    2010-01-01

    This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; and identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind, where dispersion is dominated by atmospheric turbulence.

  5. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with both the local and the global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed approximating GEBS model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations in terms of mean-square error relative to an approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach thus incorporates the GEBS approximating model with anisotropic landmark errors and rotation information.

  6. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

    For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian, and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem
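
    A Mexican hat unit is easy to write down, and a least-squares fit over a grid of translated units gives a flavor of the function-approximation task (the target function, width, and grid below are arbitrary toy choices, not the paper's experiments):

```python
import numpy as np

def mexican_hat(x):
    """Mexican hat (Ricker) wavelet used as the transfer function."""
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

x = np.linspace(-3, 3, 200)
centers = np.linspace(-3, 3, 15)          # translated wavelet units
design = mexican_hat((x[:, None] - centers[None, :]) / 0.5)
target = np.sin(2 * x)                    # toy task to approximate
weights, *_ = np.linalg.lstsq(design, target, rcond=None)
residual = np.max(np.abs(design @ weights - target))  # small on this task
```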

  7. Reconstruction of 3D structures of MET antibodies from electron microscopy 2D class averages.

    Directory of Open Access Journals (Sweden)

    Qi Chen

    Full Text Available The dynamics of three MET antibody constructs (IgG1, IgG2, and IgG4) and of the IgG4-MET antigen complex was investigated by creating their atomic models with an integrative experimental and computational approach. In particular, we used two-dimensional (2D) Electron Microscopy (EM) images, image class averaging, homology modeling, Rapidly exploring Random Tree (RRT) structure sampling, and fitting of models to images to find the relative orientations of antibody domains that are consistent with the EM images. We revealed that the conformational preferences of the constructs depend on the extent of hinge flexibility. We also quantified how the MET antigen impacts the conformational dynamics of IgG4. These observations allow the creation of testable hypotheses to investigate MET biology. Our protocol may also help describe the structural diversity of other antigen systems at approximately 5 Å precision, as quantified by the Root-Mean-Square Deviation (RMSD) among good-scoring models.

  8. Forecasting with Universal Approximators and a Learning Algorithm

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2011-01-01

    Although forecast combination has a long history in econometrics, focus has not been on proving loss bounds for the combination rules applied. We apply the Weighted Average Algorithm (WAA) of Kivinen & Warmuth (1999) for which such loss bounds exist. Specifically, one can bound the worst case performance of the WAA compared to the performance of the best single model in the set of models combined from. The use of universal approximators along with a combination scheme for which explicit loss bounds exist should give a solid theoretical foundation to the way the forecasts are performed. The practical performance will be investigated...

  9. Forecasting with Universal Approximators and a Learning Algorithm

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    Although forecast combination has a long history in econometrics, focus has not been on proving loss bounds for the combination rules applied. We apply the Weighted Average Algorithm (WAA) of Kivinen and Warmuth (1999) for which such loss bounds exist. Specifically, one can bound the worst case performance of the WAA compared to the performance of the best single model in the set of models combined from. The use of universal approximators along with a combination scheme for which explicit loss bounds exist should give a solid theoretical foundation to the way the forecasts are performed. The practical performance will be investigated by considering various monthly postwar macroeconomic data sets for the G...

  10. Strong convergence and convergence rates of approximating solutions for algebraic Riccati equations in Hilbert spaces

    Science.gov (United States)

    Ito, Kazufumi

    1987-01-01

    The linear quadratic optimal control problem on an infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution Π of the associated algebraic Riccati equation (ARE). A Ritz-type approximation is used to obtain a sequence Π^N of finite-dimensional approximations of the solution to the ARE. A sufficient condition under which Π^N converges strongly to Π is obtained. Under this condition, a formula is derived which can be used to obtain the rate of convergence of Π^N to Π. The results are demonstrated for the Galerkin approximation applied to parabolic systems and for the averaging approximation applied to hereditary differential systems.
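
    In the finite-dimensional setting, each approximation Π^N is just the solution of a matrix ARE, which standard libraries solve directly. A small Python/SciPy sketch with an arbitrary toy system (not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # toy system matrices
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # state and control weights
R = np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)       # finite-dimensional Riccati solution
K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain, u = -K x
print(P, K)
```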

  11. An approximate method to calculate ionization of LTE and non-LTE plasma

    International Nuclear Information System (INIS)

    Zhang Jun; Gu Peijun

    1987-01-01

    When matter, especially a high-Z element, is heated to high temperature, it is ionized many times. The degree of ionization has a strong effect on many plasma properties, so an approximate method to calculate the mean ionization degree is needed for solving many practical problems. An analytical expression convenient for approximate numerical calculation is given by fitting it to the scaling law and numerical results for the ionization potential of the Thomas-Fermi statistical model. In the LTE case, the ionization degree of Au calculated using the approximate method agrees with that of the average-ion model. Extending the approximate method to the non-LTE case, the ionization degree of Au is similarly calculated according to the Corona model and the Collision-Radiation (C-R) model. The results of the Corona model agree with published data quite well, while the C-R results approach those of the Corona model as the density is reduced and approach those of LTE as the density is increased. Finally, all approximately calculated results for the ionization degree of Au, and comparisons among them, are given in figures and tables

  12. Fourier analysis of spherically averaged momentum densities for some gaseous molecules

    International Nuclear Information System (INIS)

    Tossel, J.A.; Moore, J.H.

    1981-01-01

    The spherically averaged autocorrelation function, B(r), of the position-space wavefunction, ψ(r̄), is calculated by numerical Fourier transformation from spherically averaged momentum densities, ρ(p), obtained from either theoretical wavefunctions or (e,2e) electron-impact ionization experiments. Inspection of B(r) for the π molecular orbitals of C₄H₆ established that autocorrelation function differences, ΔB(r), can be qualitatively related to bond lengths and numbers of bonding interactions. Differences between B(r) functions obtained from different approximate wavefunctions for a given orbital can be qualitatively understood in terms of wavefunction difference maps, Δψ(r̄), for these orbitals. Comparison of the B(r) function for the 1a_u orbital of C₄H₆ obtained from (e,2e) momentum densities with that obtained from an ab initio SCF MO wavefunction shows differences consistent with expected correlation effects. Thus, B(r) appears to be a useful quantity for relating spherically averaged momentum distributions to position-space wavefunction differences. (orig.)

  13. Approximation scheme for strongly coupled plasmas: Dynamical theory

    International Nuclear Information System (INIS)

    Golden, K.I.; Kalman, G.

    1979-01-01

    The authors present a self-consistent approximation scheme for the calculation of the dynamical polarizability α(k, ω) at long wavelengths in strongly coupled one-component plasmas. Development of the scheme is carried out in two stages. The first stage follows the earlier Golden-Kalman-Silevitch (GKS) velocity-average approximation approach, but goes much further in its application of the nonlinear fluctuation-dissipation theorem to dynamical calculations. The result is the simple GKS expression for α(k, ω), α_GKS(k, ω), which satisfies the fourth-moment sum rule. In the second stage, the above dynamical expression is made self-consistent at long wavelengths by postulating that a decomposition of the quadratic polarizabilities in terms of linear ones, which prevails in the k → 0 limit for weak coupling, can be relied upon as a paradigm for arbitrary coupling. The result is a relatively simple quadratic integral equation for α. Its evaluation in the weak-coupling limit and its comparison with known exact results in that limit reveal that almost all important correlational and long-time effects are reproduced by our theory with very good numerical accuracy over the entire frequency range; the only significant defect of the approximation seems to be the absence of the "dominant" γ ln γ⁻¹ (γ is the plasma parameter) contribution to Im α

  14. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By using multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  15. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norms. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations, and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

  16. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods, for example the convergence behavior and accuracy of truncated perturbation series, are not yet fully understood. Furthermore, the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact, general, and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of constructing these equations and analyzing them. There exist many publications related to these problems, oriented to different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method for finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity and flow velocity. We present the general form of the exactly averaged non-local equations for the following cases: 1. steady-state flow with sources in porous media with random conductivity; 2. transient flow with sources in compressible media with random conductivity and porosity; 3. non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversely isotropic, orthotropic), and we analyze the hypothesized structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  17. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have recently been given strong convex optimization characterizations, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  18. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have recently been given strong convex optimization characterizations, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  19. SACALCCYL, Calculates the average solid angle subtended by a volume; SACALC2B, Calculates the average solid angle for source-detector geometries

    International Nuclear Information System (INIS)

    Whitcher, Ralph

    2007-01-01

    1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including where the source and detector planes are not parallel. SACALCCYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate average solid angles for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial or parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALCCYL, to avoid rounding errors, differences less than 1×10⁻¹² are assumed to be zero

  20. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  1. Detection of cracks in shafts with the Approximated Entropy algorithm

    Science.gov (United States)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    Approximate Entropy is a statistical measure used primarily in the fields of Medicine, Biology, and Telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by Fracture Mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth onwards, irrespective of the unbalance of the rotating system and the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (the number of data points in a sample vector) and f (the fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by choosing their values appropriately, according to the sampling rate of the signal.
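
    For readers unfamiliar with the statistic, a compact Python implementation of ApEn(m, r) follows; here m corresponds to the paper's parameter p (sample-vector length) and r = f × std(u) to its tolerance parameter f.

```python
import numpy as np

def approximate_entropy(u, m=2, f=0.2):
    """ApEn(m, r) of a 1-D signal with tolerance r = f * std(u)."""
    u = np.asarray(u, dtype=float)
    r = f * u.std()
    def phi(m):
        n = len(u) - m + 1
        vecs = np.array([u[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of length-m templates.
        dist = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)   # match fractions, self-matches included
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

print(approximate_entropy(np.sin(np.linspace(0, 20, 400))))  # low: regular signal
```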

  2. Estimating average shock pressures recorded by impactite samples based on universal stage investigations of planar deformation features in quartz - Sources of error and recommendations

    Science.gov (United States)

    Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.

    2018-01-01

    Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating the average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal stage measurements and crystallographic indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets per grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains to types (A-E) based on the PDF orientation pattern, and then calculating a mean shock pressure for each sample.

  3. A tutorial on bridge sampling

    NARCIS (Netherlands)

    Gronau, Q.F.; Sarafoglou, A.; Matzke, D.; Ly, A.; Boehm, U.; Marsman, M.; Leslie, D.S.; Forster, J.J.; Wagenmakers, E.-M.; Steingroever, H.

    2017-01-01

    The marginal likelihood plays an important role in many areas of Bayesian statistics such as parameter estimation, model comparison, and model averaging. In most applications, however, the marginal likelihood is not analytically tractable and must be approximated using numerical methods. Here we provide a tutorial on bridge sampling, a method for approximating the marginal likelihood.

  4. Rigid muffin-tin approximation for the electron-phonon interaction in transition metals

    International Nuclear Information System (INIS)

    Butler, W.H.

    1980-01-01

    Progress in calculating the electron-phonon parameters of transition metals has been based on either the rigid muffin-tin approximation (RMTA) or the fitted modified tight-binding approximation (FMTBA). The RMTA has been shown to be remarkably accurate for average electron-phonon properties, but there are indications that RMTA matrix elements may be too small at low momentum transfer. An attempt is made to demonstrate these assertions concerning the accuracy of RMTA and the numerous electron-phonon calculations are placed in a broader perspective by a demonstration of how they can be used to explain the trends in the strength of the electron-phonon coupling among the transition metals and the A-15 compounds

  5. Rigid muffin-tin approximation for the electron-phonon interaction in transition metals

    Energy Technology Data Exchange (ETDEWEB)

    Butler, W.H.

    1980-01-01

    Progress in calculating the electron-phonon parameters of transition metals has been based on either the rigid muffin-tin approximation (RMTA) or the fitted modified tight-binding approximation (FMTBA). The RMTA has been shown to be remarkably accurate for average electron-phonon properties, but there are indications that RMTA matrix elements may be too small at low momentum transfer. An attempt is made to demonstrate these assertions concerning the accuracy of RMTA and the numerous electron-phonon calculations are placed in a broader perspective by a demonstration of how they can be used to explain the trends in the strength of the electron-phonon coupling among the transition metals and the A-15 compounds. (GHT)

  6. Average rainwater pH, concepts of atmospheric acidity, and buffering in open systems

    Energy Technology Data Exchange (ETDEWEB)

    Liljestrand, H.M.

    1985-01-01

    The system of water equilibrated with a constant partial pressure of CO₂, as a reference point for pH acidity-alkalinity relationships, has nonvolatile acidity and alkalinity components as conservative quantities, but not [H⁺]. Simple algorithms are presented for the determination of the average pH for combinations of samples both above and below pH 5.6. Averaging the nonconservative quantity [H⁺] yields erroneously low mean pH values. To extend the open CO₂ system to include other volatile atmospheric acids and bases distributed among the gas, liquid and particulate matter phases, a theoretical framework for atmospheric acidity is presented. Within certain oxidation-reduction limitations, the total atmospheric acidity (but not free acidity) is a conservative quantity. The concept of atmospheric acidity is applied to air-water systems approximating aerosols, fogwater, cloudwater and rainwater. The buffer intensity in hydrometeors is described as a function of net strong acidity, partial pressures of acid and base gases, and the water-to-air ratio. For high liquid-to-air volume ratios, the equilibrium partial pressures of trace acid and base gases are set by the pH or net acidity controlled by the nonvolatile acid and base concentrations. For low water-to-air volume ratios, as well as stationary-state systems such as precipitation scavenging with continuous emissions, the partial pressures of trace gases (NH₃, HCl, HNO₃, SO₂, and CH₃COOH) appear to be of greater or equal importance than carbonate species as buffers in the aqueous phase.

  7. Average rainwater pH, concepts of atmospheric acidity, and buffering in open systems

    Science.gov (United States)

    Liljestrand, Howard M.

    The system of water equilibrated with a constant partial pressure of CO₂, as a reference point for pH acidity-alkalinity relationships, has nonvolatile acidity and alkalinity components as conservative quantities, but not [H⁺]. Simple algorithms are presented for the determination of the average pH for combinations of samples both above and below pH 5.6. Averaging the nonconservative quantity [H⁺] yields erroneously low mean pH values. To extend the open CO₂ system to include other volatile atmospheric acids and bases distributed among the gas, liquid and particulate matter phases, a theoretical framework for atmospheric acidity is presented. Within certain oxidation-reduction limitations, the total atmospheric acidity (but not free acidity) is a conservative quantity. The concept of atmospheric acidity is applied to air-water systems approximating aerosols, fogwater, cloudwater and rainwater. The buffer intensity in hydrometeors is described as a function of net strong acidity, partial pressures of acid and base gases, and the water-to-air ratio. For high liquid-to-air volume ratios, the equilibrium partial pressures of trace acid and base gases are set by the pH or net acidity controlled by the nonvolatile acid and base concentrations. For low water-to-air volume ratios, as well as stationary-state systems such as precipitation scavenging with continuous emissions, the partial pressures of trace gases (NH₃, HCl, HNO₃, SO₂ and CH₃COOH) appear to be of greater or equal importance than carbonate species as buffers in the aqueous phase.
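
    The pitfall of averaging the nonconservative quantity [H⁺] is easy to demonstrate numerically; this Python snippet uses two made-up samples at pH 4 and pH 7:

```python
import numpy as np

ph = np.array([4.0, 7.0])            # two hypothetical rain samples
h = 10.0 ** (-ph)                    # free hydrogen-ion concentrations
print(-np.log10(h.mean()))           # ~4.3: dominated by the acidic sample
print(ph.mean())                     # 5.5, for comparison
# The algorithms described above instead average the conservative
# nonvolatile acidity/alkalinity (relative to CO2 equilibrium) and
# convert back to pH, avoiding this low bias.
```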

  8. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  9. Fluxes by eddy correlation over heterogeneous landscape: How shall we apply the Reynolds average?

    Science.gov (United States)

    Dobosy, R.

    2007-12-01

    Top-down estimates of carbon exchange across the earth's surface are implicitly an integral scheme, deriving bulk exchanges over large areas. Bottom-up estimates explicitly integrate the individual components of exchange to derive a bulk value. If these approaches are to be properly compared, their estimates should represent the same quantity. Over heterogeneous landscape, eddy-covariance flux computations from towers or aircraft intended for comparison with the top-down approach face a question of the proper definition of the mean or base state, the departures from which yield the fluxes by Reynolds averaging. (1) Use a global base state derived over a representative sample of the surface, insensitive to land use. The departure quantities then fail to sum to zero over any subsample representing an individual surface type, violating Reynolds criteria. Yet fluxes derived from such subsamples can be directly composed into a bulk flux, globally satisfying Reynolds criteria. (2) Use a different base state for each surface type, satisfying Reynolds criteria individually. Then some of the flux may get missed if a surface's characteristics significantly bias its base state. Base state (2) is natural for tower samples. Base state (1) is natural for airborne samples over heterogeneous landscape, especially in patches smaller than an appropriate averaging length. It appears that (1) incorporates a more realistic sample of the flux, though desirably there would be no practical difference between the two schemes. The schemes are related by the expression ⟨w*a*⟩_C − ⟨w′a′⟩_C = ⟨w′ã⟩_C + ⟨w̃a′⟩_C + ⟨w̃ã⟩_C, where w is vertical motion and a is some scalar, such as CO2. The star denotes departure from the global base state (1), the prime denotes departure from the base state (2) defined only over surface class C, and the tilde denotes the difference between the two base states. The angle brackets ⟨·⟩_C denote an average over samples drawn from class C, determined by a footprint model. Thus ⟨a′⟩_C = 0 but ⟨a*⟩_C ≠ 0 in general.

  10. Effect of random edge failure on the average path length

    Energy Technology Data Exchange (ETDEWEB)

    Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)

    2011-10-14

    We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. The formula is then confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
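
    The approximation can be checked against direct simulation; a short sketch using the networkx library (graph size and failure probability below are arbitrary choices):

```python
import random
import networkx as nx

def apl_after_edge_failure(g, fail_prob, seed=0):
    """Average path length of the giant component after each edge
    fails independently with probability fail_prob."""
    rng = random.Random(seed)
    h = g.copy()
    h.remove_edges_from([e for e in g.edges if rng.random() < fail_prob])
    giant = h.subgraph(max(nx.connected_components(h), key=len))
    return nx.average_shortest_path_length(giant)

g = nx.erdos_renyi_graph(500, 0.02, seed=1)   # classical ER random graph
print(apl_after_edge_failure(g, 0.3))
```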

  11. Leveraging Gaussian process approximations for rapid image overlay production

    CSIR Research Space (South Africa)

    Burke, Michael

    2017-10-01

    Full Text Available ... the next sample location is chosen at the point of maximum posterior variance, x_s = argmax_{x*} [K(x*, x*) − K(x*, x) K(x, x)⁻¹ K(x, x*)] (10). Figure 2 illustrates this sampling strategy more clearly. This selection process can be slow, but could be bootstrapped using Latin hypercube sampling [16]. 3 RESULTS: Empirically, a 240-sample Gaussian process approximation takes roughly the same amount of time to compute as the full blanked overlay. [Figure: boxplot of storyboard ratings for GP approximations with 50 to 400 samples, the full overlay, and the Itti-Koch method.]
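
    The quoted selection rule is straightforward to implement; the sketch below assumes an RBF kernel with unit signal variance (the paper's kernel and hyperparameters are not specified here):

```python
import numpy as np

def rbf_kernel(a, b, ell=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def next_sample(x_train, x_cand):
    """Candidate with maximum GP posterior variance,
    K(x*,x*) - K(x*,x) K(x,x)^-1 K(x,x*)."""
    K = rbf_kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))
    Ks = rbf_kernel(x_cand, x_train)
    var = 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)
    return x_cand[np.argmax(var)]

x_train = np.random.rand(20, 2)     # already-sampled image locations (toy)
x_cand = np.random.rand(500, 2)     # candidate pool, e.g. Latin hypercube
print(next_sample(x_train, x_cand))
```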

  12. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    Energy Technology Data Exchange (ETDEWEB)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area of research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing the effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the "fast" and "slow" variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate and, in the case of systems with only monomolecular reactions, are exact. We demonstrate this with some numerical examples, and also use the effective generators to sample paths of the slow variables conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.

  13. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed according to the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially serve as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
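
    The core idea, a smoothing window whose length adapts to migration time, can be sketched in a few lines of Python; the window-length rule below is an illustrative assumption, not the published parameterization:

```python
import numpy as np

def adaptive_moving_average(signal, times, window_for_time):
    """Moving average whose window length depends on migration time:
    late (low-mobility, low-frequency) peaks get wider windows."""
    out = np.empty(len(signal))
    for i, t in enumerate(times):
        half = max(window_for_time(t) // 2, 1)
        lo, hi = max(i - half, 0), min(i + half + 1, len(signal))
        out[i] = signal[lo:hi].mean()
    return out

t = np.linspace(0, 120, 1200)                 # toy 10 Hz electropherogram
sig = np.exp(-0.5 * ((t - 60) / 2) ** 2) + 0.05 * np.random.randn(t.size)
smoothed = adaptive_moving_average(sig, t, lambda t: int(5 + t / 10))
```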

  14. Nodal approximations of varying order by energy group for solving the diffusion equation

    International Nuclear Information System (INIS)

    Broda, J.T.

    1992-02-01

    The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the solution of a simplified version of this equation, when automated, is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this method may not be the most efficient. The one-dimensional, two-energy-group diffusion equation was solved, for the node-average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set combines a second-order approximation in energy group one with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets shows that the use of a different order spatial flux shape approximation results in considerable loss of accuracy for the pressurized water reactor modeled. However, the loss of accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of the use of the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined

  15. Average blood flow and oxygen uptake in the human brain during resting wakefulness

    DEFF Research Database (Denmark)

    Madsen, P L; Holm, S; Herning, M

    1993-01-01

    ... tracer between the brain and its venous blood is not reached. As a consequence, normal values for CBF and CMRO2 of 54 ml 100 g⁻¹ min⁻¹ and 3.5 ml 100 g⁻¹ min⁻¹ obtained with the Kety-Schmidt technique are an overestimation of the true values. Using the Kety-Schmidt technique we have performed 57 ... the measured data, we find that the true average values for CBF and CMRO2 in the healthy young adult are approximately 46 ml 100 g⁻¹ min⁻¹ and approximately 3.0 ml 100 g⁻¹ min⁻¹. Previous studies have suggested that some of the variation in CMRO2 values could be ascribed to differences in cerebral venous...

  16. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.

  17. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  18. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.

  19. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    Energy Technology Data Exchange (ETDEWEB)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2011-07-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  20. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    International Nuclear Information System (INIS)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G.; Silva, Ademir X.

    2011-01-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  1. Study on characteristics of the aperture-averaging factor of atmospheric scintillation in terrestrial optical wireless communication

    Science.gov (United States)

    Shen, Hong; Liu, Wen-xing; Zhou, Xue-yun; Zhou, Li-ling; Yu, Long-Kun

    2018-02-01

    In order to thoroughly understand the characteristics of the aperture-averaging effect of atmospheric scintillation in terrestrial optical wireless communication and to provide references for the engineering design and performance evaluation of optical systems employed in the atmosphere, we have theoretically deduced the general analytic expression of the aperture-averaging factor of atmospheric scintillation and numerically investigated its characteristics under different propagation conditions. The limitations of the commonly used approximate formula for the aperture-averaging factor are discussed, and the results show that it is not applicable for small receiving apertures under a non-uniform turbulence link. Numerical calculation shows that the aperture-averaging factor of atmospheric scintillation follows an exponential decline model for small receiving apertures under a non-uniform turbulent link, and the general expression of the model is given. This model offers guidance for evaluating the aperture-averaging effect in terrestrial optical wireless communication.

  2. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • A finite Fourier-series model is introduced for evaluating the monthly movement of annual average solar insolation. • A forecast method is presented for predicting its movement, based on the Fourier-series model extended in the least-squares sense. • The movement is shown to be well described by a small number of harmonics, approximately a 6-term Fourier series. • The movement is predicted best with fewer than six Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science, and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. Conventional Fourier analysis, the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporated with the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimal Fourier coefficients in the least-squares sense from previous monthly movements. The proposed method is applied to measured data and yields satisfactory results for different cities (states). The results indicate that the monthly movement of annual average solar insolation is well described by a small number of harmonics, approximately a 6-term Fourier series, and that the extended Fourier forecasting model predicts the movement best with fewer than six Fourier terms.
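    As a rough illustration of the record's approach, the sketch below fits a truncated Fourier series to a synthetic monthly series by ordinary least squares and extrapolates it; the data, the 12-month period, and the 6-term truncation are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

def fourier_design(t, n_terms, period=12.0):
    """Design matrix [1, cos(k w t), sin(k w t)] for k = 1..n_terms."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.arange(48.0)                           # four years of monthly data
y = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(t.size)

A = fourier_design(t, n_terms=6)              # ~6-term series, as in the study
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # optimal coefficients in the LS sense

t_future = np.arange(48.0, 60.0)              # extrapolate the next 12 months
y_hat = fourier_design(t_future, 6) @ coef
print(np.round(y_hat, 2))
```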

  3. Energy-averaged neutron cross sections of fast-reactor structural materials

    International Nuclear Information System (INIS)

    Smith, A.; McKnight, R.; Smith, D.

    1978-02-01

    The status of energy-averaged cross sections of fast-reactor structural materials is outlined, with emphasis on U.S. data programs in the neutron-energy range 1-10 MeV. Areas of outstanding accomplishment and of significant uncertainty are noted, with recommendations for future efforts. Attention is primarily given to the main constituents of stainless steel (e.g., Fe, Ni, and Cr) and, secondarily, to alternate structural materials (e.g., V, Ti, Nb, Mo, Zr). Generally, the mass regions of interest are A ≈ 50-60 and A ≈ 90-100. Neutron total and elastic-scattering cross sections are discussed, with their implications for the nonelastic cross sections. Cross sections governing discrete-inelastic-neutron-energy transfers are examined in detail. Cross sections for the reactions (n;p), (n;n',p), (n;α), (n;n',α) and (n;2n') are reviewed in the context of fast-reactor performance and/or diagnostics. The primary orientation of the discussion is experimental, with some additional attention to the application of theory, the problems of evaluation, and the data sensitivity of representative fast-reactor systems.

  4. F-centers in alkaline-earth fluorides. Inadequacy of the muffin-tin approximation

    International Nuclear Information System (INIS)

    Oliveira, L.E.; Oliveira, P.M.; Maffeo, B.

    1977-01-01

    The SCF-MSXα (self-consistent-field multiple-scattering Xα) method has been applied to study the electronic structure of F centers in CaF2, SrF2 and BaF2. The predicted optical transition energies are in disagreement with the experimental data. An explanation for the discrepancy is provided, showing the inadequacy of the spherical averaging of the potential within the muffin-tin approximation.

  5. Samples in applied psychology: over a decade of research in review.

    Science.gov (United States)

    Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S

    2011-09-01

    This study examines sample characteristics of articles published in the Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analysis (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors, despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than those for working-adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of the study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  6. Approximal sealings on lesions in neighbouring teeth requiring operative treatment: an in vitro study.

    Science.gov (United States)

    Cartagena, Alvaro; Bakhshandeh, Azam; Ekstrand, Kim Rud

    2018-02-07

    With this in vitro study we aimed to assess the possibility of precise application of sealant on accessible artificial white spot lesions (WSL) on approximal surfaces next to a tooth surface under operative treatment. A secondary aim was to evaluate whether the use of magnifying glasses improved the application precision. Fifty-six extracted premolars were selected; approximal WSL were created with 15% HCl gel, and standardized photographs were taken. The premolars were mounted in plaster models in contact with a neighbouring molar with a Class II/I-II restoration (Sample 1) or an approximal, cavitated dentin lesion (Sample 2). The restoration or the lesion was removed, and Clinpro Sealant was placed over the WSL. Magnifying glasses were used when sealing half the study material. The sealed premolar was removed from the plaster model and photographed. Adobe Photoshop was used to measure the size of the WSL and sealed areas, and the degree of match between the areas was determined in Photoshop. Interclass agreement for WSL, sealed, and matched areas was found to be excellent (κ = 0.98-0.99). The sealant covered 48-100% of the WSL area (median = 93%) in Sample 1 and 68-100% of the WSL area (median = 95%) in Sample 2. No statistical differences were observed concerning uncovered proportions of the WSL area between groups with and without magnifying glasses (p values ≥ .19). However, overextended sealed areas were more pronounced when magnification was used (p = .01). The precision did not differ between the samples (p = .31). It was possible to seal accessible approximal lesions with high precision; the use of magnifying glasses did not improve the precision.

  7. A continuous tensor field approximation of discrete DT-MRI data for extracting microstructural and architectural features of tissue.

    Science.gov (United States)

    Pajevic, Sinisa; Aldroubi, Akram; Basser, Peter J

    2002-01-01

    The effective diffusion tensor of water, D, measured by diffusion tensor MRI (DT-MRI), is inherently a discrete, noisy, voxel-averaged sample of an underlying macroscopic effective diffusion tensor field, D(x). Within fibrous tissues this field is presumed to be continuous and smooth at a gross anatomical length scale. Here a new, general mathematical framework is proposed that uses measured DT-MRI data to produce a continuous approximation to D(x). One essential finding is that the continuous tensor field representation can be constructed by repeatedly performing one-dimensional B-spline transforms of the DT-MRI data. The fidelity and noise-immunity of this approximation are tested using a set of synthetically generated tensor fields to which background noise is added via Monte Carlo methods. Generally, these tensor field templates are reproduced faithfully except at boundaries where diffusion properties change discontinuously or where the tensor field is not microscopically homogeneous. Away from such regions, the tensor field approximation does not introduce bias in useful DT-MRI parameters, such as Trace(D(x)). It also facilitates the calculation of several new parameters, particularly differential quantities obtained from the tensor of spatial gradients of D(x). As an example, we show that they can identify tissue boundaries across which diffusion properties change rapidly using in vivo human brain data. One important application of this methodology is to improve the reliability and robustness of DT-MRI fiber tractography.
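    The construction by repeated one-dimensional B-spline transforms can be sketched with standard tools; the following minimal illustration (synthetic scalar component and made-up sampling positions, not the authors' implementation) computes cubic B-spline coefficients axis by axis and then evaluates the resulting continuous representation at non-voxel positions.

```python
import numpy as np
from scipy import ndimage

# Synthetic "voxel-averaged" scalar component of a tensor field (e.g., Dxx)
rng = np.random.default_rng(1)
dxx = np.sin(np.linspace(0, np.pi, 32))[:, None, None] * np.ones((32, 32, 16))
dxx += 0.01 * rng.standard_normal(dxx.shape)       # measurement noise

# Repeated 1-D cubic B-spline transforms along each axis -> spline coefficients
coeffs = dxx.copy()
for axis in range(coeffs.ndim):
    coeffs = ndimage.spline_filter1d(coeffs, order=3, axis=axis)

# Continuous evaluation at arbitrary (non-voxel) positions, in voxel units
pts = np.array([[10.25, 3.5, 7.75],
                [20.00, 16.2, 2.3]]).T             # shape (3, npoints)
vals = ndimage.map_coordinates(coeffs, pts, order=3, prefilter=False)
print(vals)
```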

  8. Thermal probe design for Europa sample acquisition

    Science.gov (United States)

    Horne, Mera F.

    2018-01-01

    The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to a depth of 10 cm and to collect five ice samples of approximately 7 cm3 each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and to melt 7 cm3 of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized that would require approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operation and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.

  9. A Poisson process approximation for generalized K-5 confidence regions

    Science.gov (United States)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as the sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault-tolerant systems.

  10. Theory of inelastic electron tunneling from a localized spin in the impulsive approximation.

    Science.gov (United States)

    Persson, Mats

    2009-07-31

    A simple expression for the conductance steps in inelastic electron tunneling from spin excitations in a single magnetic atom adsorbed on a nonmagnetic metal surface is derived. The inelastic coupling between the tunneling electron and the spin is via the exchange coupling and is treated in an impulsive approximation using the Tersoff-Hamann approximation for the tunneling between the tip and the sample.

  11. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  12. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  13. Quantitative microwave impedance microscopy with effective medium approximations

    Directory of Open Access Journals (Sweden)

    T. S. Jones

    2017-02-01

    Microwave impedance microscopy (MIM) is a scanning probe technique to measure local changes in tip-sample admittance. The imaginary part of the reported change is calibrated with finite element simulations and physical measurements of a standard capacitive sample, and thereafter the output ΔY is given a reference value in siemens. Simulations also provide a means of extracting sample conductivity and permittivity from admittance, a procedure verified by comparing the estimated permittivity of polytetrafluoroethylene (PTFE) to the accepted value. Simulations published by others have investigated the tip-sample system for permittivity at a given conductivity, or conversely for conductivity at a given permittivity; here we supply the full behavior for multiple values of both parameters. Finally, the well-known effective medium approximation of Bruggeman is considered as a means of estimating the volume fractions of the constituents in inhomogeneous two-phase systems. Specifically, we consider the estimation of porosity in carbide-derived carbon, a nanostructured material known for its use in energy storage devices.
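    For reference, the symmetric Bruggeman effective medium approximation for a two-phase mixture can be sketched as below; real-valued permittivities and the spherical-inclusion form (depolarization factor 1/3) are assumed, and the volume-fraction inversion exploits the fact that the relation is linear in the fraction.

```python
from scipy.optimize import brentq

def bruggeman_eps(f1, eps1, eps2):
    """Effective permittivity of a mixture with volume fraction f1 of phase 1."""
    g = lambda e: (f1 * (eps1 - e) / (eps1 + 2 * e)
                   + (1 - f1) * (eps2 - e) / (eps2 + 2 * e))
    lo, hi = min(eps1, eps2), max(eps1, eps2)
    return brentq(g, lo * (1 + 1e-9), hi * (1 - 1e-9))  # root is bracketed

def volume_fraction(eps_eff, eps1, eps2):
    """Invert the (fraction-linear) Bruggeman relation for the phase-1 fraction."""
    a = (eps1 - eps_eff) / (eps1 + 2 * eps_eff)
    b = (eps2 - eps_eff) / (eps2 + 2 * eps_eff)
    return b / (b - a)

e = bruggeman_eps(0.3, 1.0, 10.0)  # e.g., 30% pores (eps=1) in a matrix (eps=10)
print(e, volume_fraction(e, 1.0, 10.0))  # the fraction recovers ~0.3
```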

  14. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure-component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure-component spectra to determine which vibrations exhibit nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%.
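    A minimal numeric sketch of this calibrate-then-predict scheme (classical least squares with a nonzero intercept) is given below; the synthetic spectra, component count, and noise level are illustrative assumptions, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_std, n_comp, n_wavel = 12, 2, 50
C = rng.uniform(0.1, 1.0, (n_std, n_comp))         # known standard concentrations
K_true = rng.uniform(0.0, 2.0, (n_comp, n_wavel))  # "pure component" spectra
base = 0.05 * np.ones(n_wavel)                     # nonzero baseline
A = C @ K_true + base + 0.002 * rng.standard_normal((n_std, n_wavel))

# Calibration: regress absorbance on [C, 1] to estimate spectra and intercepts
X = np.column_stack([C, np.ones(n_std)])
K_hat = np.linalg.lstsq(X, A, rcond=None)[0]       # rows: comp1, comp2, baseline

# Prediction: for an unknown spectrum, solve for concentrations
c_true = np.array([0.4, 0.7])
a_unknown = c_true @ K_true + base
c_hat = np.linalg.lstsq(K_hat[:-1].T, a_unknown - K_hat[-1], rcond=None)[0]
print(np.round(c_hat, 3))                          # ~ [0.4, 0.7]
```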

  15. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well-known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  16. Functional approximations to posterior densities: a neural network approach to efficient sampling

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2002-01-01

    textabstractThe performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate

  17. Position-Dependent Dynamics Explain Pore-Averaged Diffusion in Strongly Attractive Adsorptive Systems.

    Science.gov (United States)

    Krekelberg, William P; Siderius, Daniel W; Shen, Vincent K; Truskett, Thomas M; Errington, Jeffrey R

    2017-12-12

    Using molecular simulations, we investigate the relationship between the pore-averaged and position-dependent self-diffusivity of a fluid adsorbed in a strongly attractive pore as a function of loading. Previous work (Krekelberg, W. P.; Siderius, D. W.; Shen, V. K.; Truskett, T. M.; Errington, J. R. Connection between thermodynamics and dynamics of simple fluids in highly attractive pores. Langmuir 2013, 29, 14527-14535, doi: 10.1021/la4037327) established that pore-averaged self-diffusivity in the multilayer adsorption regime, where the fluid exhibits a dense film at the pore surface and a lower-density interior pore region, is nearly constant as a function of loading. Here we show that this puzzling behavior can be understood in terms of how loading affects the fraction of particles that reside in the film and interior pore regions as well as their distinct dynamics. Specifically, the insensitivity of pore-averaged diffusivity to loading arises from the approximate cancellation of two factors: an increase in the fraction of particles in the higher-diffusivity interior pore region with loading and a corresponding decrease in the particle diffusivity in that region. We also find that the position-dependent self-diffusivities scale with the position-dependent density. We present a model for predicting the pore-averaged self-diffusivity based on the position-dependent self-diffusivity, which captures the unusual characteristics of pore-averaged self-diffusivity in strongly attractive pores over several orders of magnitude.

  18. Image averaging of flexible fibrous macromolecules: the clathrin triskelion has an elastic proximal segment.

    Science.gov (United States)

    Kocsis, E; Trus, B L; Steer, C J; Bisher, M E; Steven, A C

    1991-08-01

    We have developed computational techniques that allow image averaging to be applied to electron micrographs of filamentous molecules that exhibit tight and variable curvature. These techniques, which involve straightening by cubic-spline interpolation, image classification, and statistical analysis of the molecules' curvature properties, have been applied to purified brain clathrin. This trimeric filamentous protein polymerizes, both in vivo and in vitro, into a wide range of polyhedral structures. Contrasted by low-angle rotary shadowing, dissociated clathrin molecules appear as distinctive three-legged structures, called "triskelions" (E. Ungewickell and D. Branton (1981) Nature 289, 420). We find triskelion legs to vary from 35 to 62 nm in total length, according to an approximately bell-shaped distribution (μ = 51.6 nm). Peaks in averaged curvature profiles mark hinges or sites of enhanced flexibility. Such profiles, calculated for each length class, show that triskelion legs are flexible over their entire lengths. However, three curvature peaks are observed in every case: their locations define a proximal segment of systematically increasing length (14.0-19.0 nm), a mid-segment of fixed length (approximately 12 nm), and a rather variable end-segment (11.6-19.5 nm), terminating in a hinge just before the globular terminal domain (approximately 7.3 nm in diameter). Thus, two major factors contribute to the overall variability in leg length: (1) stretching of the proximal segment and (2) stretching of the end-segment and/or scrolling of the terminal domain. The observed elasticity of the proximal segment may reflect phosphorylation of the clathrin light chains.

  19. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those who want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  20. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method for approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
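    The deterministic-selection-plus-random-permutation idea behind DS can be sketched in a few lines; the standard-normal input and sample size here are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def descriptive_sample(n, rng):
    # Deterministic selection: values at equiprobable quantile midpoints...
    u = (np.arange(n) + 0.5) / n
    x = norm.ppf(u)
    # ...followed by a random permutation of their order.
    return rng.permutation(x)

rng = np.random.default_rng(3)
n = 100
ds = descriptive_sample(n, rng)
cmcs = rng.standard_normal(n)
# DS typically reproduces the target moments far more closely than CMCS at equal n.
print("DS   mean/std:", ds.mean().round(4), ds.std().round(4))
print("CMCS mean/std:", cmcs.mean().round(4), cmcs.std().round(4))
```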

  1. A Divergence Median-based Geometric Detector with A Weighted Averaging Filter

    Science.gov (United States)

    Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang

    2018-01-01

    To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution and the energy spread of the Doppler filter banks that result from the FFT. Moreover, a weighted averaging filter, inspired by the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. As the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of the proposed method.

  2. Approximation of rejective sampling inclusion probabilities and application to high order correlations

    NARCIS (Netherlands)

    Boistard, H.; Lopuhää, H.P.; Ruiz-Gazen, A.

    2012-01-01

    This paper is devoted to rejective sampling. We provide an expansion of joint inclusion probabilities of any order in terms of the inclusion probabilities of order one, extending previous results by Hájek (1964) and Hájek (1981) and making the remainder term more precise. Following Hájek (1981), the

  3. Free-Space Optical Communications: Capacity Bounds, Approximations, and a New Sphere-Packing Perspective

    KAUST Repository

    Chaaban, Anas

    2016-02-03

    The capacity of the free-space optical channel is studied. A new recursive approach for bounding the capacity of the channel based on sphere-packing is proposed. This approach leads to new capacity upper bounds for a channel with a peak intensity constraint or an average intensity constraint. Under an average constraint only, the derived bound is tighter than an existing sphere-packing bound derived earlier by Farid and Hranilovic. The achievable rate of a truncated-Gaussian input distribution is also derived. It is shown that under both average and peak constraints, this achievable rate and the sphere-packing bounds are within a small gap at high SNR, leading to a simple high-SNR capacity approximation. Simple fitting functions that capture the best known achievable rate for the channel are provided. These functions can be of practical importance especially for the study of systems operating under atmospheric turbulence and misalignment conditions.

  4. Free-Space Optical Communications: Capacity Bounds, Approximations, and a New Sphere-Packing Perspective

    KAUST Repository

    Chaaban, Anas; Morvan, Jean-Marie; Alouini, Mohamed-Slim

    2016-01-01

    The capacity of the free-space optical channel is studied. A new recursive approach for bounding the capacity of the channel based on sphere-packing is proposed. This approach leads to new capacity upper bounds for a channel with a peak intensity constraint or an average intensity constraint. Under an average constraint only, the derived bound is tighter than an existing sphere-packing bound derived earlier by Farid and Hranilovic. The achievable rate of a truncated-Gaussian input distribution is also derived. It is shown that under both average and peak constraints, this achievable rate and the sphere-packing bounds are within a small gap at high SNR, leading to a simple high-SNR capacity approximation. Simple fitting functions that capture the best known achievable rate for the channel are provided. These functions can be of practical importance especially for the study of systems operating under atmospheric turbulence and misalignment conditions.

  5. Approximate analytical solution of diffusion equation with fractional time derivative using optimal homotopy analysis method

    Directory of Open Access Journals (Sweden)

    S. Das

    2013-12-01

    In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence-control parameters, which yield faster convergence of the solution. The effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.

  6. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, and dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviations (expectation values) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times when considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were proven to be possible at constant time complexity.

  7. Sampling procedures and tables

    International Nuclear Information System (INIS)

    Franzkowski, R.

    1980-01-01

    Characteristics, defects, defectives - Sampling by attributes and by variables - Sample versus population - Frequency distributions for the number of defectives or the number of defects in the sample - Operating characteristic curve, producer's risk, consumer's risk - Acceptable quality level AQL - Average outgoing quality AOQ - Standard ISO 2859 - Fundamentals of sampling by variables for fraction defective. (RW)
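    Several of the listed quantities follow directly from the binomial model of sampling by attributes; a minimal sketch (with hypothetical plan parameters n and c) is given below.

```python
from scipy.stats import binom

def oc_curve(p, n, c):
    """Operating characteristic: probability of accepting a lot with true
    fraction defective p under a single sampling plan (n, c)."""
    return binom.cdf(c, n, p)

def aoq(p, n, c):
    """Average outgoing quality, assuming rejected lots are fully screened."""
    return p * oc_curve(p, n, c)

n, c = 80, 2                     # sample 80 items, accept if <= 2 defectives
# Producer's risk at an AQL of 1% is 1 - Pa(0.01); consumer's risk at 10% is Pa(0.10).
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p={p:.2f}  Pa={oc_curve(p, n, c):.3f}  AOQ={aoq(p, n, c):.4f}")
```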

  8. Inverse bremsstrahlung heating beyond the first Born approximation for dense plasmas in laser fields

    International Nuclear Information System (INIS)

    Moll, M; Schlanges, M; Bornath, Th; Krainov, V P

    2012-01-01

    Inverse bremsstrahlung (IB) heating, an important process in the laser-matter interaction, involves two different kinds of interaction: the interaction of the electrons with the external laser field and the electron-ion interaction. This makes analytical approaches very difficult. In a quantum perturbative approach to the IB heating rate in strong laser fields, usually the first Born approximation with respect to the electron-ion potential is considered, whereas the influence of the electric field is taken into account exactly via Volkov wave functions. In this paper, a perturbative treatment is presented adopting a screened electron-ion interaction potential. As a new result, we derive the momentum-dependent, angle-averaged heating rate in the first Born approximation. Numerical results are discussed for a broad range of field strengths, and the conditions for the applicability of a linear approximation for the heating rate are analyzed in detail. Going a step further in the perturbation series, we consider the transition amplitude in the second Born approximation, which enables us to calculate the heating rate up to third order in the interaction strength. (paper)

  9. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

    It is well known that the change-point problem is an important part of statistical model analysis, and most existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A test based on a single quantile is proposed using the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability of every location in the sequence being a change-point. The location of a change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work well on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in this paper is of independent interest to readers in this research area.

  10. keV-Scale sterile neutrino sensitivity estimation with time-of-flight spectroscopy in KATRIN using self-consistent approximate Monte Carlo

    Science.gov (United States)

    Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian

    2018-03-01

    We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin^2 θ ≲ 10^{-6}, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin^2 θ ~ 5 × 10^{-9} at one σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated; it, however, comes at the price of a reduced average signal rate.

  11. Reliable Approximation of Long Relaxation Timescales in Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2017-07-01

    Many interesting rare events in molecular systems, like ligand association, protein folding or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based free energy estimation have attracted huge attention recently. In this article we analyze the reliability of such approaches. How precise is an estimate of long relaxation timescales of molecular systems resulting from various forms of rare event approximation methods? Our results give a theoretical answer to this question by relating it with the transfer operator approach to molecular dynamics. By doing so we also allow for understanding deep connections between the different approaches.
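    In the transfer-operator picture referenced here, long relaxation timescales follow from the dominant eigenvalues of a discretized transfer operator, for example a Markov state model transition matrix; a minimal sketch with a toy three-state matrix:

```python
import numpy as np

tau = 1.0                                  # lag time of the model (arbitrary units)
T = np.array([[0.98, 0.02, 0.00],          # toy 3-state transition matrix
              [0.01, 0.98, 0.01],
              [0.00, 0.02, 0.98]])

evals = np.sort(np.linalg.eigvals(T).real)[::-1]
# The leading eigenvalue is 1 (stationarity); the remaining eigenvalues give the
# implied timescales t_i = -tau / ln(lambda_i), the standard MSM estimate.
timescales = -tau / np.log(evals[1:])
print(timescales)
```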

  12. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  13. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error that is associated with LQAS analysis.
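    A minimal simulation sketch of LQAS classification under cluster sampling is given below; the beta-binomial cluster model, the intracluster correlation, and the decision threshold d are illustrative assumptions, not the study's protocol.

```python
import numpy as np

def classify(n_clusters, m, d, p, icc, rng):
    """Return True if the lot is 'accepted' (prevalence judged acceptable)."""
    # Beta-binomial clusters induce intracluster correlation icc for the outcome.
    a = p * (1 - icc) / icc
    b = (1 - p) * (1 - icc) / icc
    cases = rng.binomial(m, rng.beta(a, b, n_clusters)).sum()
    return cases <= d

rng = np.random.default_rng(4)
d = 25                                  # hypothetical decision rule for 67x3 = 201
for p in (0.05, 0.10, 0.15):            # true GAM prevalence
    acc = np.mean([classify(67, 3, d, p, 0.05, rng) for _ in range(2000)])
    print(f"p={p:.2f}  P(accept)={acc:.3f}")
```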

  14. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≈ 15 meV/monomer for the liquid and the

  15. Testing a groundwater sampling tool: Are the samples representative?

    International Nuclear Information System (INIS)

    Kaback, D.S.; Bergren, C.L.; Carlson, C.A.; Carlson, C.L.

    1989-01-01

    A ground water sampling tool, the HydroPunch™, was tested at the Department of Energy's Savannah River Site in South Carolina to determine if representative ground water samples could be obtained without installing monitoring wells. Chemical analyses of ground water samples collected with the HydroPunch™ from various depths within a borehole were compared with chemical analyses of ground water from nearby monitoring wells. The site selected for the test was in the vicinity of a large coal storage pile and a coal pile runoff basin that was constructed to collect the runoff from the coal storage pile. Existing monitoring wells in the area indicate the presence of a ground water contaminant plume that: (1) contains elevated concentrations of trace metals; (2) has an extremely low pH; and (3) contains elevated concentrations of major cations and anions. Ground water samples collected with the HydroPunch™ provide an excellent estimate of ground water quality at discrete depths. Ground water chemical data collected from various depths using the HydroPunch™ can be averaged to simulate what a screened zone in a monitoring well would sample. The averaged depth-discrete data compared favorably with the data obtained from the nearby monitoring wells.

  16. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's ... linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev ... Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  17. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the

  18. A new mathematical process for the calculation of average forms of teeth.

    Science.gov (United States)

    Mehl, A; Blanz, V; Hickel, R

    2005-12-01

    Qualitative visual inspections and linear metric measurements have been the predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms from both computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between one reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than ±30 μm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.

  19. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    Science.gov (United States)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
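    The minimum-length solution of Am = d can be sketched with the pseudoinverse; the averaging matrix below is a made-up moving-average kernel standing in for the amelogenesis and sampling model, and the rcond cutoff is an assumed regularization choice.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
m_true = np.sin(np.linspace(0, 3 * np.pi, n))   # "input signal" (e.g., diet isotopes)

# Each measurement averages a window of the input (rows of A sum to one)
A = np.zeros((n, n))
for i in range(n):
    w = np.arange(max(0, i - 5), min(n, i + 6))
    A[i, w] = 1.0 / w.size

d = A @ m_true + 0.01 * rng.standard_normal(n)  # time-averaged, noisy profile

# Minimum-norm least-squares solution; rcond truncates small singular values
m_hat = np.linalg.pinv(A, rcond=1e-3) @ d
print(np.abs(m_hat - m_true).max())
```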

  20. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  1. Benefits of Dominance over Additive Models for the Estimation of Average Effects in the Presence of Dominance

    Directory of Open Access Journals (Sweden)

    Pascal Duenk

    2017-10-01

    In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed. Our objective was to investigate the accuracy of an estimated average effect (α̂) in the presence of dominance, using either a single-locus A-model or AD-model. Estimation was based on a finite sample from a large population in Hardy-Weinberg equilibrium (HWE), and the root mean squared error of α̂ was calculated for several broad-sense heritabilities, sample sizes, and sizes of the dominance effect. Results show that with the A-model, both sampling deviations of genotype frequencies from HWE frequencies and sampling deviations of allele frequencies contributed to the error. With the AD-model, only sampling deviations of allele frequencies contributed to the error, provided that all three genotype classes were sampled. In the presence of dominance, the root mean squared error of α̂ with the AD-model was always smaller than with the A-model, even when the heritability was less than one. Remarkably, in the absence of dominance, there was no disadvantage to fitting dominance. In conclusion, the AD-model yields more accurate estimates of average effects from a finite sample, because it is more robust against sampling deviations from HWE frequencies than the A-model. Genetic models that include dominance therefore yield higher accuracies of estimated average effects than purely additive models when dominance is present.
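    A minimal sketch (synthetic single-locus data under HWE, with illustrative effect sizes) of the two estimators is given below; the A-model slope estimates the average effect directly, while the AD-model combines separate additive and dominance estimates with the sample allele frequency.

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 0.3, 200                           # frequency of the counted allele; sample size
a_eff, d_eff = 1.0, 0.5                   # additive and dominance effects
alpha_true = a_eff + d_eff * (1 - 2 * p)  # population average effect under HWE

g = rng.binomial(2, p, n)                 # genotype = count of the reference allele
y = np.where(g == 1, d_eff, (g - 1.0) * a_eff) + 0.5 * rng.standard_normal(n)

ones = np.ones(n)
# A-model: the slope of y on g directly estimates the average effect
alpha_A = np.linalg.lstsq(np.column_stack([ones, g]), y, rcond=None)[0][1]
# AD-model: estimate a and d separately, then alpha = a + d(1 - 2*p_hat)
b0, a_hat, d_hat = np.linalg.lstsq(
    np.column_stack([ones, g, (g == 1).astype(float)]), y, rcond=None)[0]
p_hat = g.mean() / 2
alpha_AD = a_hat + d_hat * (1 - 2 * p_hat)

print("true:", alpha_true, " A:", round(alpha_A, 3), " AD:", round(alpha_AD, 3))
```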

  2. Neutron flux calculations for criticality safety analysis using the narrow resonance approximations. Vol. 2

    Energy Technology Data Exchange (ETDEWEB)

    Hathout, A M [National Center for Nuclear Safety and Radiation Control, NC-NSRC, Atomic Energy Authority, Cairo (Egypt)

    1996-03-01

    The narrow resonance approximation is applicable for all low-energy resonances and the heaviest nuclides. It is of great importance in neutron calculations, since fertile isotopes do not undergo fission at resonance energies. The effect of overestimating the self-shielded group-averaged cross-section data for a given resonance nuclide can be fairly serious. In the present work, a detailed study and derivation of the self-shielding problem are carried out using the Hansen-Roach library, which is used for criticality safety analysis. The intermediate neutron flux spectrum is analyzed using the narrow resonance approximation, and the resonance self-shielded values of various cross-sections are determined. 4 figs., 3 tabs.

  3. Approximate Solution of Dam-break Flow of Low Viscosity Bingham Fluid

    Science.gov (United States)

    Puay, How Tion; Hosoda, Takashi

    In this study, we investigate the characteristics of dam-break flow of a low-viscosity Bingham fluid by deriving an approximate solution for the time development of the front position and the depth at the origin of the flow. The asymptotic solutions representing the characteristics of a Bingham fluid in the limit of low plastic viscosity are verified with a depth-averaged numerical model. Numerical simulations showed that, with decreasing plastic viscosity, the time development of the front position and the depth at the origin approaches the theoretical asymptotic solution.

  4. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
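    The product-integration step can be sketched directly: approximate P(0, t) by multiplying (I + dA) factors over a fine grid. The illness-death model and constant hazards below are illustrative assumptions.

```python
import numpy as np

def transition_matrix(haz_increments):
    """haz_increments: sequence of k x k matrices dA(t_j) of hazard increments
    (off-diagonal >= 0, rows summing to 0). Returns P(0, t) = prod (I + dA)."""
    k = haz_increments[0].shape[0]
    P = np.eye(k)
    for dA in haz_increments:
        P = P @ (np.eye(k) + dA)
    return P

# Toy illness-death model with constant hazards on a fine grid
h01, h02, h12 = 0.10, 0.02, 0.20
dt, steps = 0.01, 500                    # evaluates P(0, 5)
dA = np.array([[-(h01 + h02), h01, h02],
               [0.0, -h12, h12],
               [0.0, 0.0, 0.0]]) * dt
P = transition_matrix([dA] * steps)
print(P[0])                              # occupation probabilities starting in state 0
# Average state occupation times up to t follow by summing P(0, t_j)[0] * dt.
```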

  5. Analysis of average radiation widths of neutron resonances

    International Nuclear Information System (INIS)

    Malezki, H.; Popov, A.B.; Trzeciak, K.

    1982-01-01

    On the basis of the available data on neutron resonance parameters, average values of radiation widths (Γγ) are calculated for a wide range of nuclei in the atomic weight range 50 to 250. Experimental values are compared with different variants of theoretical estimates of Γγ, which are reduced to a dependence of Γγ upon atomic weight A, excitation energy U and level density parameter a of the form Γγ = C·A^α·U^β·a^γ. Besides, empirical values of C, α, β, γ are selected that best satisfy the experimental data. It is determined that use of the a = kA hypothesis leads to considerably better agreement between all theoretical estimates of Γγ and the experimental values. It turned out that the estimates by Weisskopf, by Bondarenko-Urin, or with empirically chosen parameters give an approximately similar correspondence of the calculated values of Γγ to the experimental data (author) [ru

  6. Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    The simultaneous perturbation stochastic approximation (SPSA) algorithm uses a simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
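
    For readers unfamiliar with the method, the following sketch shows the generic textbook form of SPSA, with the commonly recommended Bernoulli ±1 perturbations; the gain sequences and test function are assumptions, not the paper's setup. It illustrates how the gradient approximation is formed from only two loss measurements per iteration.

    ```python
    # Minimal SPSA sketch (assumed gains and test function, not the paper's setup).
    import numpy as np

    rng = np.random.default_rng(0)
    loss = lambda x: np.sum(x ** 2) + rng.normal(0, 0.01)  # noisy loss measurements

    x = np.array([2.0, -1.5])
    for k in range(1, 1001):
        a_k = 0.1 / k ** 0.602   # standard gain-sequence exponents
        c_k = 0.1 / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=x.size)   # Bernoulli +/-1 perturbation
        # Two loss evaluations approximate the full gradient vector
        g_hat = (loss(x + c_k * delta) - loss(x - c_k * delta)) / (2 * c_k * delta)
        x = x - a_k * g_hat

    print("approximate minimizer:", x)   # should be near the origin
    ```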

  7. Thermodynamic Integration Methods, Infinite Swapping and the Calculation of Generalized Averages

    OpenAIRE

    Doll, J. D.; Dupuis, P.; Nyquist, P.

    2016-01-01

    In the present paper we examine the risk-sensitive and sampling issues associated with the problem of calculating generalized averages. By combining thermodynamic integration and Stationary Phase Monte Carlo techniques, we develop an approach for such problems and explore its utility for a prototypical class of applications.

  8. The association between estimated average glucose levels and fasting plasma glucose levels

    Directory of Open Access Journals (Sweden)

    Giray Bozkaya

    2010-01-01

    Full Text Available OBJECTIVE: The level of hemoglobin A1c (HbA1c), also known as glycated hemoglobin, determines how well a patient's blood glucose level has been controlled over the previous 8-12 weeks. HbA1c levels help patients and doctors understand whether a particular diabetes treatment is working and whether adjustments need to be made to the treatment. Because the HbA1c level is a marker of blood glucose for the previous 120 days, average blood glucose levels can be estimated using HbA1c levels. Our aim in the present study was to investigate the relationship between estimated average glucose levels, as calculated by HbA1c levels, and fasting plasma glucose levels. METHODS: The fasting plasma glucose levels of 3891 diabetic patient samples (1497 male, 2394 female) were obtained from the laboratory information system used for HbA1c testing by the Department of Internal Medicine at the Izmir Bozyaka Training and Research Hospital in Turkey. These samples were selected from patient samples that had hemoglobin levels between 12 and 16 g/dL. The estimated average glucose levels were calculated using the following formula: 28.7 x HbA1c - 46.7. Glucose and HbA1c levels were determined using hexokinase and high performance liquid chromatography (HPLC) methods, respectively. RESULTS: A strong positive correlation between fasting plasma glucose levels and estimated average blood glucose levels (r=0.757, p<0.05) was observed. The correlation was statistically significant. CONCLUSION: Reporting the estimated average glucose level together with the HbA1c level is believed to assist patients and doctors in determining the effectiveness of blood glucose control measures.
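
    As a worked example of the formula quoted in the abstract: an HbA1c of 7% gives an estimated average glucose of 28.7 x 7 - 46.7 = 154.2 mg/dL. A one-line implementation:

    ```python
    # Estimated average glucose (mg/dL) from HbA1c (%), per the formula in the abstract.
    def estimated_average_glucose(hba1c_percent: float) -> float:
        return 28.7 * hba1c_percent - 46.7

    print(estimated_average_glucose(7.0))  # 154.2 mg/dL
    ```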

  9. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
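
    The Monte Carlo design described here is easy to reproduce in outline. The sketch below is a simplified stand-in, not the authors' code: it simulates two-sample beta-distributed responses with assumed parameter values and evaluates only the difference-in-means estimator (equivalent to OLS on a group indicator); the beta and fractional logit fits would slot into the same replication loop.

    ```python
    # Simplified two-sample Monte Carlo sketch (assumed parameters): beta-distributed
    # responses with true mean difference mu1 - mu0, estimated by difference in means.
    import numpy as np

    rng = np.random.default_rng(42)
    mu0, mu1, phi = 0.4, 0.5, 20.0   # group means and common dispersion (assumed)
    n0 = n1 = 25
    true_diff = mu1 - mu0

    est = []
    for _ in range(5000):
        y0 = rng.beta(mu0 * phi, (1 - mu0) * phi, n0)  # mean/dispersion parameterization
        y1 = rng.beta(mu1 * phi, (1 - mu1) * phi, n1)
        est.append(y1.mean() - y0.mean())

    est = np.array(est)
    print("bias    :", est.mean() - true_diff)
    print("variance:", est.var(ddof=1))
    print("Monte Carlo s.e. of the bias estimate:", est.std(ddof=1) / np.sqrt(len(est)))
    ```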

  10. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass' surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications

  11. Random vs. systematic sampling from administrative databases involving human subjects.

    Science.gov (United States)

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes of n (50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t-tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
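
    The two sampling schemes compared in this record are simple to state in code. The sketch below is illustrative only, drawing from a synthetic frame rather than the membership database: it takes a simple random sample and a systematic sample of the same size from an ordered list and compares a point estimate such as average age.

    ```python
    # Sketch: simple random sampling (SRS) vs. systematic sampling (SS) from a
    # sampling frame. The frame here is synthetic (assumed), ordered as if by surname.
    import numpy as np

    rng = np.random.default_rng(7)
    N, n = 5000, 250
    ages = rng.normal(45, 10, N)   # stand-in for a member attribute

    # SRS: n indices without replacement
    srs = ages[rng.choice(N, size=n, replace=False)]

    # SS: every k-th record after a random start
    k = N // n
    start = rng.integers(k)
    ss = ages[start::k][:n]

    print("population mean:", ages.mean())
    print("SRS estimate   :", srs.mean())
    print("SS estimate    :", ss.mean())
    ```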

  12. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  13. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    OpenAIRE

    Wutthiphong Tara; Chairoj Rattanakawin

    2012-01-01

    The purpose of this research was to preliminarily study the Mae Moh lignite grindability tests, emphasizing Hardgrove grindability and approximate work index determination respectively. Firstly, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using the A...

  14. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. Optimal 1-Mean: approximation by the centroid of a random sample (Inaba et al.). S = random sample of size O(1/ε); the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
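
    The claim attributed to Inaba et al., that the centroid of a small random sample is a (1+ε)-approximate 1-mean with constant probability, is easy to check empirically. A small sketch on synthetic data, with an assumed ε and an assumed constant of 1 in the O(1/ε) sample size:

    ```python
    # Sketch: centroid of a random sample of size O(1/eps) as an approximate 1-mean.
    import numpy as np

    rng = np.random.default_rng(3)
    P = rng.normal(size=(10000, 2))          # point set (synthetic)
    cost = lambda c: np.sum((P - c) ** 2)    # 1-mean (sum of squared distances) cost

    eps = 0.1
    m = int(np.ceil(1 / eps))                # sample size O(1/eps), constant 1 assumed
    opt = cost(P.mean(axis=0))               # the optimal 1-mean is the centroid of P

    ratios = [cost(P[rng.choice(len(P), m, replace=False)].mean(axis=0)) / opt
              for _ in range(1000)]
    print("fraction of trials within (1 + eps) of optimal:",
          np.mean(np.array(ratios) <= 1 + eps))
    ```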

  15. Approximate cohomology in Banach algebras | Pourabbas ...

    African Journals Online (AJOL)

    We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...

  16. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
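
    The central identity of this record is easy to verify numerically: for weighting functions w1 and w2 with ratio R = w2/w1, the difference between the two weighted averages equals Cov(x, R)/E[R], with the covariance and mean taken under the w1 weighting. A quick check with made-up numbers:

    ```python
    # Numerical check (made-up data): avg_w2(x) - avg_w1(x) = Cov_w1(x, R) / E_w1[R],
    # where R = w2 / w1 is the ratio of the weighting functions.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    w1 = rng.uniform(0.5, 1.5, size=1000)
    w2 = rng.uniform(0.5, 1.5, size=1000)

    avg = lambda v, w: np.sum(w * v) / np.sum(w)
    R = w2 / w1
    lhs = avg(x, w2) - avg(x, w1)
    cov_w1 = avg(x * R, w1) - avg(x, w1) * avg(R, w1)   # weighted covariance
    rhs = cov_w1 / avg(R, w1)
    print(lhs, rhs)   # the two numbers agree up to floating-point error
    ```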

  17. Hot sample archiving. Revision 3

    International Nuclear Information System (INIS)

    McVey, C.B.

    1995-01-01

    This Engineering Study revision evaluated the alternatives for providing tank waste characterization analytical samples for a time period as recommended by the Tank Waste Remediation Systems Program. The recommendation is to store 40 ml segment samples for a period of approximately 18 months (6 months past the approval date of the Tank Characterization Report) and then composite the core segment material in 125 ml containers for a period of five years. The study considers storage at the 222-S facility. It was determined that the critical storage problem is in the hot cell area. The 40 ml sample container holds enough material for approximately 3 times the amount required for a complete laboratory re-analysis. The final result is that 222-S can meet the sample archive storage requirements. At a 100% capture rate the capacity of the hot cell area is exceeded, but quick, inexpensive options are available to meet the requirements

  18. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  19. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  20. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
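
    The multi-level construction described in these two records can be sketched generically. The example below is a toy: an Euler-Maruyama discretization of a scalar SDE stands in for the tau-leap simulation of a reaction network, and the step ratio, sample allocation and the SDE itself are assumptions. Each level estimates a correction term E[P_l - P_{l-1}] from coupled fine and coarse paths sharing the same noise, and the telescoping sum reproduces the fine-level estimate at reduced cost.

    ```python
    # Generic multi-level Monte Carlo sketch (assumed toy problem: E[X_T] for
    # dX = -X dt + 0.5 dW via Euler-Maruyama; the records' setting is tau-leaping).
    import numpy as np

    rng = np.random.default_rng(1)
    T, X0 = 1.0, 1.0

    def level_estimator(l, n_samples, M=2):
        """Mean of P_l - P_{l-1} over coupled paths sharing Brownian increments."""
        nf = M ** l                  # fine steps at level l
        dt_f = T / nf
        total = 0.0
        for _ in range(n_samples):
            dW = rng.normal(0, np.sqrt(dt_f), nf)
            xf = X0
            for k in range(nf):      # fine path
                xf += -xf * dt_f + 0.5 * dW[k]
            if l == 0:
                total += xf
            else:
                dt_c = M * dt_f
                xc = X0
                for k in range(nf // M):   # coarse path, summed fine increments
                    xc += -xc * dt_c + 0.5 * dW[M * k:M * (k + 1)].sum()
                total += xf - xc
        return total / n_samples

    L, N = 5, [20000, 10000, 5000, 2500, 1250, 625]   # assumed sample allocation
    mlmc_estimate = sum(level_estimator(l, N[l]) for l in range(L + 1))
    print("MLMC estimate of E[X_T]:", mlmc_estimate)
    print("value for the continuous SDE:", X0 * np.exp(-T))
    ```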

  1. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV,  held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  2. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  3. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  4. Extracting gravitational waves induced by plasma turbulence in the early Universe through an averaging process

    International Nuclear Information System (INIS)

    Garrison, David; Ramirez, Christopher

    2017-01-01

    This work is a follow-up to the paper, ‘Numerical relativity as a tool for studying the early Universe’. In this article, we determine if cosmological gravitational waves can be accurately extracted from a dynamical spacetime using an averaging process as opposed to conventional methods of gravitational wave extraction using a complex Weyl scalar. We calculate the normalized energy density, strain and degree of polarization of gravitational waves produced by a simulated turbulent plasma similar to what was believed to have existed shortly after the electroweak scale. This calculation is completed using two numerical codes, one which utilizes full general relativity calculations based on modified BSSN equations while the other utilizes a linearized approximation of general relativity. Our results show that the spectrum of gravitational waves calculated from the nonlinear code using an averaging process is nearly indistinguishable from those calculated from the linear code. This result validates the use of the averaging process for gravitational wave extraction of cosmological systems. (paper)

  5. Structure of two-phase air-water flows. Study of average void fraction and flow patterns

    International Nuclear Information System (INIS)

    Roumy, R.

    1969-01-01

    This report deals with experimental work on a two-phase air-water mixture in vertical tubes of different diameters. The average void fraction was measured in a 2 metre long test section by means of quick-closing valves. Using resistive probes and photographic techniques, we have determined the flow patterns and developed diagrams to indicate the boundaries between the various patterns: independent bubbles, agglomerated bubbles, slugs, semi-annular, annular. In the case of bubble flow and slug flow, it is shown that the relationship between the average void fraction ᾱ and the superficial velocities of the phases is given by: Vsg = f(ᾱ) · g(Vsl). The function g(Vsl) for the case of independent bubbles has been found to be: g(Vsl) = Vsl + 20. For semi-annular and annular flow conditions, it appears that the average void fraction depends, to a first approximation, only on the ratio Vsg/Vsl. (author) [fr

  6. Use of gamma ray spectroscopy measurements for assessment of the average effective dose from the analysis of 226Ra, 232Th and 40K in soil samples

    International Nuclear Information System (INIS)

    Mehra, Rohit; Singh, Surinder

    2008-01-01

    The activity concentrations of soil samples collected from different locations of Ludhiana and Patiala districts of Punjab were determined by using an HPGe detector based high-resolution gamma spectrometry system. The activity concentrations of 226Ra, 232Th and 40K in the soil from the studied areas vary from 23.32 Bq kg⁻¹ to 43.64 Bq kg⁻¹, 104.23 Bq kg⁻¹ to 148.21 Bq kg⁻¹ and 289.83 Bq kg⁻¹ to 394.41 Bq kg⁻¹, with overall mean values of 32 Bq kg⁻¹, 126 Bq kg⁻¹ and 348 Bq kg⁻¹ respectively. The absorbed dose rates calculated from the activity concentrations of 226Ra, 232Th and 40K range between 10.75 and 20.12, 64.93 and 92.33, and 11.99 and 16.32 nGy h⁻¹, respectively. The total absorbed dose in the study area ranges from 91.35 nGy h⁻¹ to 119.76 nGy h⁻¹ with an average value of 107.97 nGy h⁻¹. The calculated values of the external hazard index (Hex) for the soil samples of the study area range from 0.55 to 0.72. Since these values are lower than unity, therefore, according to the Radiation Protection 112 (European Commission, 1999) report, soil from these regions is safe and can be used as a construction material without posing any significant radiological threat to the population. The concentrations of 232Th in soil samples of the Malwa region of Punjab are higher than the world figures reported in UNSCEAR (2000). However, the concentrations of 226Ra are very much comparable and the concentrations of 40K are lower than the world figures. The results obtained have shown that the indoor and outdoor effective doses due to natural radioactivity of the soil samples are lower than the average national and world recommended value of 1.0 mSv y⁻¹. The values reported for radium content in soils of the study area are generally low as compared to the values reported for radium concentration in soils of Himachal Pradesh. (author)
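
    The external hazard index reported here is conventionally computed as Hex = C_Ra/370 + C_Th/259 + C_K/4810, with activity concentrations in Bq kg⁻¹; this is the standard expression in the radiological-protection literature, and it is an assumption that the paper used exactly this form. With the mean values quoted above:

    ```python
    # External hazard index H_ex (standard formula, assumed to match the paper's):
    # H_ex = C_Ra/370 + C_Th/259 + C_K/4810, concentrations in Bq/kg; safe if <= 1.
    def external_hazard_index(c_ra: float, c_th: float, c_k: float) -> float:
        return c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0

    # Mean activity concentrations quoted in the abstract
    print(external_hazard_index(32.0, 126.0, 348.0))  # ~0.65, inside the 0.55-0.72 range
    ```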

  7. New device for time-averaged measurement of volatile organic compounds (VOCs)

    Energy Technology Data Exchange (ETDEWEB)

    Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio, E-mail: julio.llorca@aqualogy.net

    2014-07-01

    Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern since these compounds are harmful to ecosystems and even to human health. Many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse focal points such as industrial discharges, urban water and accidental spills, as these compounds may be present in many products and processes (i.e., paints, fuels, petroleum products, raw materials, solvents, etc.), making their control difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive regarding the emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive sets guidelines to control compounds such as benzene, chloroform, and carbon tetrachloride at a very low level of concentration and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, it is important to monitor continuously, which requires the development of new devices that obtain average concentrations over time. As of today, due to technical limitations, there are no devices on the market that allow efficient continuous sampling of these compounds with detection limits low enough to meet the legal requirements and capable of detecting very sporadic, short-duration discharges. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation by pre-programming. A constant flow passes

  8. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

    Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions.
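
    A minimal sketch of the kind of fuzzy system discussed, using Gaussian membership functions in the spirit of Wang's construction rather than the paper's exponential variant; the rule centers, widths and target function are assumptions. With product inference and centroid defuzzification, the system reduces to a normalized weighted sum of rule outputs.

    ```python
    # Sketch: fuzzy system with Gaussian membership functions, product inference and
    # centroid defuzzification, approximating a continuous function on [0, 1].
    import numpy as np

    centers = np.linspace(0.0, 1.0, 15)   # rule centers (assumed)
    sigma = 0.05                          # membership width (assumed)
    target = lambda x: np.sin(2 * np.pi * x)
    y_rules = target(centers)             # each rule's output centered at f(c_i)

    def fuzzy_system(x):
        x = np.atleast_1d(x)[:, None]
        mu = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))  # memberships
        return (mu * y_rules).sum(axis=1) / mu.sum(axis=1)     # centroid defuzzification

    xs = np.linspace(0, 1, 101)
    print("max abs error:", np.max(np.abs(fuzzy_system(xs) - target(xs))))
    ```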

  9. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  10. Multigroup Approximation of Radiation Transfer in SF6 Arc Plasmas

    Directory of Open Access Journals (Sweden)

    Milada Bartlova

    2013-01-01

    Full Text Available The first order of the method of spherical harmonics (P1 approximation) has been used to evaluate the radiation properties of arc plasmas of various mixtures of SF6 and PTFE ((C2F4)n, polytetrafluoroethylene) in the temperature range (1000 ÷ 35 000) K and pressures from 0.5 to 5 MPa. Calculations have been performed for an isothermal cylindrical plasma of various radii (0.01 ÷ 10) cm. The frequency dependence of the absorption coefficients has been handled using the Planck and Rosseland averaging methods for several frequency intervals. Results obtained using various means calculated for different choices of frequency intervals are discussed.

  11. Whole genome transcript profiling from fingerstick blood samples: a comparison and feasibility study

    Directory of Open Access Journals (Sweden)

    Williams Adam R

    2009-12-01

    Full Text Available Abstract Background Whole genome gene expression profiling has revolutionized research in the past decade especially with the advent of microarrays. Recently, there have been significant improvements in whole blood RNA isolation techniques which, through stabilization of RNA at the time of sample collection, avoid bias and artifacts introduced during sample handling. Despite these improvements, current human whole blood RNA stabilization/isolation kits are limited by the requirement of a venous blood sample of at least 2.5 mL. While fingerstick blood collection has been used for many different assays, there has yet to be a kit developed to isolate high quality RNA for use in gene expression studies from such small human samples. The clinical and field testing advantages of obtaining reliable and reproducible gene expression data from a fingerstick are many; it is less invasive, time saving, more mobile, and eliminates the need for a trained phlebotomist. Furthermore, this method could also be employed in small animal studies, i.e. mice, where larger sample collections often require sacrificing the animal. In this study, we offer a rapid and simple method to extract sufficient amounts of high quality total RNA from approximately 70 µL of whole blood collected via a fingerstick using a modified protocol of the commercially available Qiagen PAXgene RNA Blood Kit. Results From two sets of fingerstick collections, about 70 µL whole blood collected via finger lancet and capillary tube, we recovered an average of 252.6 ng total RNA with an average RIN of 9.3. The post-amplification yields for 50 ng of total RNA averaged at 7.0 µg cDNA. The cDNA hybridized to Affymetrix HG-U133 Plus 2.0 GeneChips had an average % Present call of 52.5%. Both fingerstick collections were highly correlated with r² values ranging from 0.94 to 0.97. Similarly both fingerstick collections were highly correlated to the venous collection with r² values ranging from 0.88 to 0

  12. Cosmological applications of Padé approximant

    International Nuclear Information System (INIS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function could be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant could be a useful tool in cosmology, and it deserves further investigation.
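
    For readers who want to experiment, SciPy exposes Padé approximants directly from Taylor coefficients. A small sketch, using exp(x) as a stand-in for the luminosity-distance expansion treated in the paper:

    ```python
    # Sketch: a [2/2] Pade approximant of exp(x) from its Taylor coefficients,
    # compared with the truncated Taylor series itself.
    import numpy as np
    from scipy.interpolate import pade

    taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]  # exp(x) coefficients up to x^4
    p, q = pade(taylor, 2)   # numerator/denominator polynomials of the [2/2] form

    x = 1.0
    print("exp(x)        :", np.exp(x))
    print("Pade [2/2]    :", p(x) / q(x))                   # closer to exp(x)...
    print("Taylor (deg 4):", np.polyval(taylor[::-1], x))   # ...than the plain series
    ```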

  13. Cosmological applications of Padé approximant

    Science.gov (United States)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function could be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant could be a useful tool in cosmology, and it deserves further investigation.

  14. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and Hermite expansion and Laguerre–Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method

  15. A modified stochastic averaging method on single-degree-of-freedom strongly nonlinear stochastic vibrations

    International Nuclear Information System (INIS)

    Ge, Gen; Li, ZePeng

    2016-01-01

    A modified stochastic averaging method for single-degree-of-freedom (SDOF) oscillators with strong nonlinearity under white noise excitations is proposed. Considering that the existing approach for strongly nonlinear SDOF systems derived by Zhu and Huang [14, 15] is quite time-consuming in calculating the drift and diffusion coefficients, and that the resulting expressions are considerably long, the so-called He's energy balance method was applied to overcome this minor defect of the Zhu and Huang method. The modified method offers more concise approximate expressions for the drift and diffusion coefficients, without much weakening the accuracy of the predicted responses, by specifying an averaged frequency beforehand. Three examples, an oscillator with coexisting cubic and quadratic nonlinearities, a quadratic nonlinear oscillator under external white noise excitation, and an externally excited Duffing–Rayleigh oscillator, are given to illustrate the proposed approach. The three examples were excited by Gaussian white noise and Gaussian colored noise separately. The stationary probability densities of amplitude and energy, together with the joint probability density of displacement and velocity, are studied to verify the presented approach. The reliability of the systems was also investigated to offer further support. Digital simulations were carried out, and their output coincides well with the theoretical approximations.

  16. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  17. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  18. Determination of environmental levels of 239,240Pu, 241Am, 137Cs, and 90Sr in large volume sea water samples

    International Nuclear Information System (INIS)

    Sutton, D.C.; Calderon, G.; Rosa, W.

    1976-06-01

    A method is reported for the determination of environmental levels of 239,240Pu and 241Am in approximately 60-liter samples of seawater. 137Cs and 90Sr were also separated and determined from the same samples. The samples were collected at the sea surface and at various depths in the oceans through the facilities of the Woods Hole Oceanographic Institution. Plutonium and americium were separated from the seawater by iron hydroxide scavenging, then treated with a mixture of nitric, hydrochloric, and perchloric acids. A series of anion exchange separations were used to remove interferences and purify plutonium and americium; then each was electroplated on platinum disks and measured by solid state alpha particle spectrometry. The overall chemical yields averaged 62 ± 9 and 69 ± 14 percent for the 236Pu and 243Am tracers, respectively. Following the iron hydroxide scavenge of the transuranics, cesium was removed from the acidified seawater matrix by adsorption onto ammonium phosphomolybdate. Cesium carrier and 137Cs isolation was effected by ion exchange, and precipitations were made using chloroplatinic acid. The samples were weighed to determine overall chemical yield, then beta counted. Cesium recoveries averaged 75 ± 5 percent. After cesium was removed from the seawater matrix, the samples were neutralized with sodium hydroxide and ammonium carbonate was added to precipitate the 85Sr tracer and the mixed alkaline earth carbonates. Strontium was separated as the nitrate and scavenged by chromate and hydroxide precipitations. Yttrium-90 was allowed to build up for two weeks, then milked and precipitated as the oxalate, weighed, and beta counted. The overall chemical yields of the 85Sr tracer averaged 84 ± 16 percent. The recovery of the yttrium oxalate precipitates averaged 96 ± 3 percent

  19. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  20. Determination of average molecular parameters of vacuum residues and asphalt by elemental analysis and 1H NMR and comparison with 13C NMR results

    International Nuclear Information System (INIS)

    Teixeira, Marco Antonio; Marques, Rosana Garrido

    1995-01-01

    This work proposes a new approach for determining average molecular parameters in petroleum fractions, based on approximations concerning the average composition of heavy petroleum fractions. A comparative evaluation between the proposed method and the traditional one has been carried out, showing a saving of 60 hours in the time spent on analysis. The results are presented and discussed

  1. A surprise in the first Born approximation for electron scattering

    International Nuclear Information System (INIS)

    Treacy, M.M.J.; Van Dyck, D.

    2012-01-01

    A standard textbook derivation for the scattering of electrons by a weak potential under the first Born approximation suggests that the far-field scattered wave should be in phase with the incident wave. However, it is well known that waves scattered from a weak phase object should be phase-shifted by π/2 relative to the incident wave. A disturbing consequence of this missing phase is that, according to the Optical Theorem, the total scattering cross section would be zero in the first Born approximation. We resolve this mystery pedagogically by showing that the first Born approximation fails to conserve electrons even to first order. Modifying the derivation to conserve electrons introduces the correct phase without changing the scattering amplitude. We also show that the far-field expansion for the scattered waves used in many texts is inappropriate for computing an exit wave from a sample, and that the near-field expansion also gives the appropriately phase-shifted result. -- Highlights: ► The first Born approximation is usually invoked as the theoretical physical basis for kinematical electron scattering theory. ► Although it predicts the correct scattering amplitude, it predicts the wrong phase; the scattered wave is missing a prefactor of i. ► We show that this arises because the standard textbook version of the first Born approximation does not conserve electrons. ► We show how this can be fixed.

  2. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.

  3. MADNIX a code to calculate prompt fission neutron spectra and average prompt neutron multiplicities

    International Nuclear Information System (INIS)

    Merchant, A.C.

    1986-03-01

    A code has been written and tested on the CDC Cyber-170 to calculate the prompt fission neutron spectrum, N(E), as a function of both the fissioning nucleus and its excitation energy. In this note a brief description of the underlying physical principles involved and a detailed explanation of the required input data (together with a sample output for the fission of 235U induced by 14 MeV neutrons) are presented. Weisskopf's standard nuclear evaporation theory provides the basis for the calculation. Two important refinements are that the distribution of fission-fragment residual nuclear temperature and the cooling of the fragments as neutrons are emitted are approximately taken into account, and that the energy dependence of the cross section for the inverse process of compound nucleus formation is included. This approach is then used to calculate the average number of prompt neutrons emitted per fission, ν̄p. At high excitation energies, where fission is still possible after neutron emission, the consequences of the competition between first, second and third chance fission on N(E) and ν̄p are calculated. Excellent agreement with all the examples given in the original work of Madland and Nix is obtained. (author) [pt
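
    At the core of such calculations is Weisskopf's evaporation spectrum. The crude sketch below uses a single fixed nuclear temperature, with no temperature distribution and no energy dependence of the inverse cross section, so it shows only the simplest limit of what a code like MADNIX computes; the temperature value and grid are assumptions.

    ```python
    # Crude sketch: Weisskopf evaporation spectrum phi(E) ~ (E/T^2) exp(-E/T) at a
    # single nuclear temperature T, and its mean energy (analytically equal to 2T).
    import numpy as np

    T = 1.0                            # nuclear temperature in MeV (assumed)
    E = np.linspace(0.0, 20.0, 20001)  # outgoing neutron energy grid (MeV)
    phi = (E / T**2) * np.exp(-E / T)  # unnormalized Weisskopf spectrum

    dE = E[1] - E[0]
    norm = phi.sum() * dE
    mean_E = (E * phi).sum() * dE / norm
    print("mean neutron energy:", mean_E, "MeV (theory: 2T =", 2 * T, "MeV)")
    ```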

  4. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an
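
    The zero-variance idea can be illustrated on a toy rare-event problem; the standalone sketch below is not the reliability models of the paper. It estimates P(X > a) for a standard normal by sampling from a mean-shifted proposal that pushes samples toward the rare set, then reweighting by the likelihood ratio.

    ```python
    # Sketch: importance sampling for the rare event P(X > a), X ~ N(0, 1),
    # using a mean-shifted proposal N(a, 1) (a standard near-zero-variance choice).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    a, n = 4.0, 100000

    # Crude Monte Carlo: almost no samples hit the rare set
    x = rng.normal(0, 1, n)
    print("crude MC   :", np.mean(x > a))

    # Importance sampling: draw from N(a, 1), reweight by the likelihood ratio
    y = rng.normal(a, 1, n)
    w = norm.pdf(y) / norm.pdf(y, loc=a)
    print("IS estimate:", np.mean((y > a) * w))
    print("exact      :", norm.sf(a))   # ~3.17e-5
    ```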

  5. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

    This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq and a summary of each of the three papers. The introduction introduces the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion on Mahler's problem when considered...

  6. Photoelectron spectroscopy and the dipole approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  7. Finite-temperature random-phase approximation for spectroscopic properties of neon plasmas

    International Nuclear Information System (INIS)

    Colgan, J.; Collins, L. A.; Fontes, C. J.; Csanak, G.

    2007-01-01

    A finite-temperature random-phase approximation (FTRPA) is applied to calculate oscillator strengths for excitations in hot and dense plasmas. Application of the FTRPA provides a convenient, self-consistent method with which to explore coupled-channel effects of excited electrons in a dense plasma. We present FTRPA calculations that include coupled-channel effects. The inclusion of these effects is shown to cause significant differences in the oscillator strength for a prototypical case of 1P excitation in neon when compared with single-channel and with average-atom calculations. Trends as a function of temperature and density are also discussed

  8. Bounded-Degree Approximations of Stochastic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.

  9. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.

  10. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  11. Whole-genome gene expression profiling of formalin-fixed, paraffin-embedded tissue samples.

    Directory of Open Access Journals (Sweden)

    Craig April

    2009-12-01

    Full Text Available We have developed a gene expression assay (Whole-Genome DASL), capable of generating whole-genome gene expression profiles from degraded samples such as formalin-fixed, paraffin-embedded (FFPE) specimens. We demonstrated a similar level of sensitivity in gene detection between matched fresh-frozen (FF) and FFPE samples, with the number and overlap of probes detected in the FFPE samples being approximately 88% and 95% of that in the corresponding FF samples, respectively; 74% of the differentially expressed probes overlapped between the FF and FFPE pairs. The WG-DASL assay is also able to detect 1.3-1.5 and 1.5-2-fold changes in intact and FFPE samples, respectively. The dynamic range for the assay is approximately 3 logs. Comparing the WG-DASL assay with an in vitro transcription-based labeling method yielded fold-change correlations of R² ≈ 0.83, while fold-change comparisons with quantitative RT-PCR assays yielded R² ≈ 0.86 and R² ≈ 0.55 for intact and FFPE samples, respectively. Additionally, the WG-DASL assay yielded high self-correlations (R² > 0.98) with low intact RNA inputs ranging from 1 ng to 100 ng; reproducible expression profiles were also obtained with 250 pg total RNA (R² ≈ 0.92), with approximately 71% of the probes detected in 100 ng total RNA also detected at the 250 pg level. When FFPE samples were assayed, 1 ng total RNA yielded self-correlations of R² ≈ 0.80, while still maintaining a correlation of R² ≈ 0.75 with standard FFPE inputs (200 ng). Taken together, these results show that the WG-DASL assay provides a reliable platform for genome-wide expression profiling in archived materials. It also possesses utility within clinical settings where only limited quantities of samples may be available (e.g., microdissected material) or when minimally invasive procedures are performed (e.g., biopsied specimens).

  12. Investigation of the energy-averaged double transition density of isoscalar monopole excitations in medium-heavy mass spherical nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Gorelik, M.L.; Shlomo, S. [National Research Nuclear University “MEPhI”, Moscow 115409 (Russian Federation); Cyclotron Institute, Texas A&M University, College Station, TX 77843 (United States); Tulupov, B.A. [National Research Nuclear University “MEPhI”, Moscow 115409 (Russian Federation); Institute for Nuclear Research, RAS, Moscow 117312 (Russian Federation); Urin, M.H., E-mail: urin@theor.mephi.ru [National Research Nuclear University “MEPhI”, Moscow 115409 (Russian Federation)

    2016-11-15

    The particle–hole dispersive optical model, developed recently, is applied to study properties of high-energy isoscalar monopole excitations in medium-heavy mass spherical nuclei. The energy-averaged strength functions of the isoscalar giant monopole resonance and its overtone in ²⁰⁸Pb are analyzed. In particular, we analyze the energy-averaged isoscalar monopole double transition density, the key quantity in the description of hadron–nucleus inelastic scattering, and study the validity of the factorization approximation, using semiclassical and microscopic one-body transition densities, respectively, in calculating the cross sections for the excitation of isoscalar giant resonances by inelastic alpha scattering.

  13. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  14. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets can be realized for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets.

  15. Exact fluctuations of nonequilibrium steady states from approximate auxiliary dynamics

    OpenAIRE

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2017-01-01

    We describe a framework to significantly reduce the computational effort to evaluate large deviation functions of time integrated observables within nonequilibrium steady states. We do this by incorporating an auxiliary dynamics into trajectory based Monte Carlo calculations, through a transformation of the system's propagator using an approximate guiding function. This procedure importance samples the trajectories that most contribute to the large deviation function, mitigating the exponenti...

  16. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

    The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction term) as a sum of squared non-overlapping pre-averaged returns. The non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. We use empirical work to illustrate its use in practice.

  17. Radon and radon daughters indoors, problems in the determination of the annual average

    International Nuclear Information System (INIS)

    Swedjemark, G.A.

    1984-01-01

    The annual average of the concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and when comparing with limits. For practical reasons, measurements are often carried out during a period shorter than a year. Methods are presented for estimating the uncertainties, due to temporal variations, in an annual average calculated from measurements carried out over sampling periods of various lengths. These methods have been applied to the results from long-term measurements of radon-222 in a few houses. The possibility of using correction factors in order to obtain a more adequate annual average has also been studied and some examples are given. (orig.)

  18. The Effect of Cumulus Cloud Field Anisotropy on Domain-Averaged Solar Fluxes and Atmospheric Heating Rates

    Science.gov (United States)

    Hinkelman, Laura M.; Evans, K. Franklin; Clothiaux, Eugene E.; Ackerman, Thomas P.; Stackhouse, Paul W., Jr.

    2006-01-01

    Cumulus clouds can become tilted or elongated in the presence of wind shear. Nevertheless, most studies of the interaction of cumulus clouds and radiation have assumed these clouds to be isotropic. This paper describes an investigation of the effect of fair-weather cumulus cloud field anisotropy on domain-averaged solar fluxes and atmospheric heating rate profiles. A stochastic field generation algorithm was used to produce twenty three-dimensional liquid water content fields based on the statistical properties of cloud scenes from a large eddy simulation. Progressively greater degrees of x-z plane tilting and horizontal stretching were imposed on each of these scenes, so that an ensemble of scenes was produced for each level of distortion. The resulting scenes were used as input to a three-dimensional Monte Carlo radiative transfer model. Domain-average transmission, reflection, and absorption of broadband solar radiation were computed for each scene along with the average heating rate profile. Both tilt and horizontal stretching were found to significantly affect calculated fluxes, with the amount and sign of flux differences depending strongly on sun position relative to cloud distortion geometry. The mechanisms by which anisotropy interacts with solar fluxes were investigated by comparisons to independent pixel approximation and tilted independent pixel approximation computations for the same scenes. Cumulus anisotropy was found to most strongly impact solar radiative transfer by changing the effective cloud fraction, i.e., the cloud fraction when the field is projected on a surface perpendicular to the direction of the incident solar beam.

  19. Life Science's Average Publishable Unit (APU) Has Increased over the Past Two Decades.

    Directory of Open Access Journals (Sweden)

    Radames J B Cordero

    Full Text Available Quantitative analysis of the scientific literature is important for evaluating the evolution and state of science. To study how the density of biological literature has changed over the past two decades we visually inspected 1464 research articles related only to the biological sciences from ten scholarly journals (with average Impact Factors, IF, ranging from 3.8 to 32.1). By scoring the number of data items (tables and figures), density of composite figures (labeled panels per figure or PPF), as well as the number of authors, pages and references per research publication we calculated an Average Publishable Unit or APU for 1993, 2003, and 2013. The data show an overall increase in the average ± SD number of data items from 1993 to 2013 of approximately 7±3 to 14±11 and PPF ratio of 2±1 to 4±2 per article, suggesting that the APU has doubled in size over the past two decades. As expected, the increase in data items per article is mainly in the form of supplemental material, constituting 0 to 80% of the data items per publication in 2013, depending on the journal. The changes in the average number of pages (approx. 8±3 to 10±3), references (approx. 44±18 to 56±24) and authors (approx. 5±3 to 8±9) per article are also presented and discussed. The average number of data items, figure density and authors per publication are correlated with the journal's average IF. The increasing APU size over time is important when considering the value of research articles for life scientists and publishers, as well as the implications of these increasing trends in the mechanisms and economics of scientific communication.

  20. Life Science's Average Publishable Unit (APU) Has Increased over the Past Two Decades.

    Science.gov (United States)

    Cordero, Radames J B; de León-Rodriguez, Carlos M; Alvarado-Torres, John K; Rodriguez, Ana R; Casadevall, Arturo

    2016-01-01

    Quantitative analysis of the scientific literature is important for evaluating the evolution and state of science. To study how the density of biological literature has changed over the past two decades we visually inspected 1464 research articles related only to the biological sciences from ten scholarly journals (with average Impact Factors, IF, ranging from 3.8 to 32.1). By scoring the number of data items (tables and figures), density of composite figures (labeled panels per figure or PPF), as well as the number of authors, pages and references per research publication we calculated an Average Publishable Unit or APU for 1993, 2003, and 2013. The data show an overall increase in the average ± SD number of data items from 1993 to 2013 of approximately 7±3 to 14±11 and PPF ratio of 2±1 to 4±2 per article, suggesting that the APU has doubled in size over the past two decades. As expected, the increase in data items per article is mainly in the form of supplemental material, constituting 0 to 80% of the data items per publication in 2013, depending on the journal. The changes in the average number of pages (approx. 8±3 to 10±3), references (approx. 44±18 to 56±24) and authors (approx. 5±3 to 8±9) per article are also presented and discussed. The average number of data items, figure density and authors per publication are correlated with the journal's average IF. The increasing APU size over time is important when considering the value of research articles for life scientists and publishers, as well as, the implications of these increasing trends in the mechanisms and economics of scientific communication.

  1. Measurement of the average polarization of b baryons in hadronic $Z^0$ decays

    CERN Document Server

    Abbiendi, G.; Alexander, G.; Allison, John; Altekamp, N.; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Bartoldus, R.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Bentvelsen, S.; Bethke, S.; Betts, S.; Biebel, O.; Biguzzi, A.; Bird, S.D.; Blobel, V.; Bloodworth, I.J.; Bobinski, M.; Bock, P.; Bohme, J.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Burgard, C.; Burgin, R.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Davis, R.; De Jong, S.; del Pozo, L.A.; De Roeck, A.; Desch, K.; Dienes, B.; Dixit, M.S.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Eatough, D.; Estabrooks, P.G.; Etzion, E.; Evans, H.G.; Fabbri, F.; Fanti, M.; Faust, A.A.; Fiedler, F.; Fierro, M.; Fleck, I.; Folman, R.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gascon, J.; Gascon-Shotkin, S.M.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Gibson, V.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Hargrove, C.K.; Hartmann, C.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herndon, M.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hillier, S.J.; Hobson, P.R.; Hocker, James Andrew; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Koetke, D.S.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kyberd, P.; Lafferty, G.D.; Lanske, D.; Lauber, J.; Lautenschlager, S.R.; Lawson, I.; Layter, J.G.; Lazic, D.; Lee, A.M.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Ludwig, J.; Lui, D.; Macchiolo, A.; Macpherson, A.; Mader, W.; Mannelli, M.; Marcellini, S.; Markopoulos, C.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menke, S.; Merritt, F.S.; Mes, H.; Meyer, J.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mir, R.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nellen, B.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oreglia, M.J.; Orito, S.; Palinkas, J.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Polok, J.; Przybycien, M.; Rembser, C.; Rick, H.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharf, F.; Scharff-Hansen, P.; Schieck, J.; Schmitt, B.; Schmitt, S.; Schoning, A.; 
Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Sittler, A.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Sproston, M.; Stahl, A.; Stephens, K.; Steuerer, J.; Stoll, K.; Strom, David M.; Strohmer, R.; Surrow, B.; Talbot, S.D.; Tanaka, S.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomson, M.A.; von Torne, E.; Torrence, E.; Towers, S.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turcot, A.S.; Turner-Watson, M.F.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Voss, H.; Wackerle, F.; Wagner, A.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Yekutieli, G.; Zacek, V.; Zer-Zion, D.

    1998-01-01

    In the Standard Model, b quarks produced in e^+e^- annihilation at the Z^0 peak have a large average longitudinal polarization of -0.94. Some fraction of this polarization is expected to be transferred to b-flavored baryons during hadronization. The average longitudinal polarization of weakly decaying b baryons, ⟨P_L⟩, is measured in approximately 4.3 million hadronic Z^0 decays collected with the OPAL detector between 1990 and 1995 at LEP. Those b baryons that decay semileptonically and produce a Λ baryon are identified through the correlation of the baryon number of the Λ and the electric charge of the lepton. In this semileptonic decay, the ratio of the neutrino energy to the lepton energy is a sensitive polarization observable. The neutrino energy is estimated using missing energy measurements. From a fit to the distribution of this ratio, the value ⟨P_L⟩ = -0.56 +0.20/-0.13 ± 0.09 is obtained, where the first error is statistical and the second systematic.

  2. Teachers' Self-Reported Pedagogical Practices toward Socially Inhibited, Hyperactive, and Average Children

    Science.gov (United States)

    Thijs, Jochem T.; Koomen, Helma M. Y.; Van Der Leij, Aryan

    2006-01-01

    This study examined teachers' self-reported pedagogical practices toward socially inhibited, hyperactive, and average kindergartners. A self-report instrument was developed and examined in three samples of kindergartners and their teachers. Principal components analyses were conducted in four datasets pertaining to 1 child per teacher. Two…

  3. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  4. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  5. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

    A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Pade approximation, is suggested. It is shown that the specific behavior of the Pade approximation in polar zones is an extremely favourable analytical property, essentially extending the convergence range and increasing its rate as compared with polynomial approximation. The Pade approximation is a particularly natural instrument for resonance curve processing, as the resonances conform to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction of the stored numerical information as compared with point-by-point tabulation at the same accuracy.

  6. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  7. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  8. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    Science.gov (United States)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to the fact that lidars do not sample at a single, distinct point but along their entire beam length. Especially in regions with large velocity gradients, such as the rotor wake, this can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
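
    To make the volume-averaging effect concrete, the sketch below (our construction, not the paper's algorithm; the Lorentzian weight, focus distance and width are assumed purely for illustration) averages a line-of-sight velocity profile with a continuous-wave-style weighting function along the beam:

```python
import numpy as np

# A lidar range gate reports a weighted average of the line-of-sight velocity
# along the beam: u_lidar = sum(w * u) / sum(w). A Lorentzian weight centered
# on the focus distance f is a common model for continuous-wave lidars.
def lidar_volume_average(u, f=60.0, width=10.0, n=25):
    s = np.linspace(f - 3 * width, f + 3 * width, n)  # points along the beam
    w = 1.0 / (1.0 + ((s - f) / width) ** 2)          # Lorentzian weighting
    return np.sum(w * u(s)) / np.sum(w)

# A sharp wake deficit: the lidar smears the gradient a point probe resolves.
u = lambda s: 8.0 - 3.0 * np.exp(-(((s - 60.0) / 5.0) ** 2))
print(u(60.0), lidar_volume_average(u))  # point value vs volume-averaged value
```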

  9. Concentration of mercury in wheat samples stored with mercury tablets as preservative

    International Nuclear Information System (INIS)

    Lalit, B.Y.; Ramachandran, T.V.

    1977-01-01

    Tablets consisting of mercury in the form of a dull grey powder, made by triturating mercury with chalk and sugar, are used in Indian households for storing food grains. The contamination of wheat samples by mercury, when stored with mercury tablets for periods of up to four years, has been assessed using non-destructive neutron activation analysis. The details of the analytical procedure used are also briefly described. The concentration of mercury in wheat increases with storage period. To a first approximation, the loss of weight of a mercury tablet is proportional to the storage period. In the present experiment, the average weight loss at the end of the first year was 0.009716 g, corresponding to 6 ppm in wheat. (T.G.)

  10. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to varying extents. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
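
    For reference, conventional TDA amounts to synchronous averaging over an integer number of known periods; the sketch below (our illustration, the baseline operation that the FTDA generalizes; the FTDA's frequency-domain sampling and CZT steps are not reproduced here) shows how periodic components add coherently while noise cancels:

```python
import numpy as np

# Conventional time domain averaging (TDA): cut the signal into segments of
# one known period each and average them, improving SNR roughly as sqrt(m).
def time_domain_average(x, samples_per_period):
    m = len(x) // samples_per_period
    segments = x[: m * samples_per_period].reshape(m, samples_per_period)
    return segments.mean(axis=0)

rng = np.random.default_rng(0)
t = np.arange(100_000)
x = np.sin(2 * np.pi * t / 200) + 0.5 * rng.standard_normal(t.size)
avg = time_domain_average(x, 200)
# Residual noise after averaging 500 periods is ~0.5/sqrt(500) ~= 0.02.
print(np.std(avg - np.sin(2 * np.pi * np.arange(200) / 200)))
```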

  11. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

    Hartree-Fock and Tamm-Dancoff approximations are tested for angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to ²⁰Ne.

  12. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  13. Monte carlo sampling of fission multiplicity.

    Energy Technology Data Exchange (ETDEWEB)

    Hendricks, J. S. (John S.)

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
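
    A minimal sketch of the negative-tail bias and of the first (offset) correction follows; it uses a continuous Gaussian rather than integer-rounded samples for simplicity, and all names are ours rather than MCNP's:

```python
import numpy as np
from scipy.stats import norm

def clipped_mean(m, sigma):
    # Mean of max(X, 0) for X ~ N(m, sigma): m*Phi(m/sigma) + sigma*phi(m/sigma).
    z = m / sigma
    return m * norm.cdf(z) + sigma * norm.pdf(z)

def offset_corrected_mean(nubar, sigma, iters=60):
    # Bisect for the shifted mean m so that clipping negatives preserves nubar.
    lo, hi = nubar - 5 * sigma, nubar
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if clipped_mean(mid, sigma) > nubar else (mid, hi)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
nubar, sigma = 1.0, 1.0   # low multiplicity: width approaches the mean
naive = np.clip(rng.normal(nubar, sigma, 1_000_000), 0.0, None)
m = offset_corrected_mean(nubar, sigma)
fixed = np.clip(rng.normal(m, sigma, 1_000_000), 0.0, None)
print(naive.mean(), fixed.mean())  # ~1.08 (8% bias) vs ~1.00 (corrected)
```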

  14. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2018-02-19

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized for a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a single-loop Monte Carlo method that uses the Laplace approximation of the return value of the inner loop. The first demonstration example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
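
    The classical double-loop Monte Carlo estimator that the paper improves on can be sketched in a few lines; the toy linear-Gaussian model below is our own choice, and the Laplace-based importance sampling itself (recentering the inner samples on the posterior mode) is only indicated in the comments:

```python
import numpy as np
from scipy.special import logsumexp

# Toy experiment (ours): y = theta + noise, prior theta ~ N(0, 1), noise ~ N(0, s2).
rng = np.random.default_rng(1)
s2 = 0.1

def log_like(y, theta):
    return -0.5 * np.log(2 * np.pi * s2) - 0.5 * (y - theta) ** 2 / s2

def eig_dlmc(N, M):
    # Outer loop: data drawn from the prior predictive.
    theta = rng.standard_normal(N)
    y = theta + np.sqrt(s2) * rng.standard_normal(N)
    # Inner loop: evidence p(y) from M fresh prior samples; logsumexp avoids
    # the underflow mentioned above. Laplace-based importance sampling would
    # instead draw these inner samples near the posterior mode for each y.
    theta_in = rng.standard_normal((N, M))
    log_evidence = logsumexp(log_like(y[:, None], theta_in), axis=1) - np.log(M)
    return np.mean(log_like(y, theta) - log_evidence)

print(eig_dlmc(2000, 2000))  # analytic value: 0.5*log(1 + 1/s2) ~= 1.199
```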

  15. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    Science.gov (United States)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.

  16. The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence.

    Science.gov (United States)

    Niu, Yuanling; Wang, Yue; Zhou, Da

    2015-12-07

    The phenotypic equilibrium, i.e. a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has received much attention in cancer biology very recently. In the previous literature, theoretical models were used to predict the experimental phenomena of the phenotypic equilibrium, which were often explained by different concepts of stability of the models. Here we present a stochastic multi-phenotype branching model by integrating the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, and it thereby harmonizes the different kinds of average-level stability proposed in these models; and (ii) path-wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path; the average-level stability simply follows from it by averaging stochastic samples.

  17. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  18. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
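
    The core bias mechanism is Jensen's inequality: exponentiating the mean of the logs yields the geometric mean, which lies below the arithmetic mean. A toy illustration (ours, with invented numbers standing in for retrieved abundances):

```python
import numpy as np

rng = np.random.default_rng(2)
# Highly variable "true" abundances, e.g. a trace gas in ppb.
abundance = rng.lognormal(mean=np.log(300.0), sigma=0.8, size=10_000)

linear_mean = abundance.mean()               # averaging the abundances
log_mean = np.exp(np.log(abundance).mean())  # averaging logarithmic retrievals
print(linear_mean, log_mean)  # the log-based average is biased low here
```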

  19. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_TRMM-PFM-VIRS_Edition2B)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  20. Quantitative metagenomic analyses based on average genome size normalization

    DEFF Research Database (Denmark)

    Frank, Jeremy Alexander; Sørensen, Søren Johannes

    2011-01-01

    … provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data … marine sources using both conventional small-subunit (SSU) rRNA gene analyses and our quantitative method to calculate the proportion of genomes in each sample that are capable of a particular metabolic trait. With both environments, to determine what proportion of each community they make up and how … These analyses demonstrate how genome proportionality compares to SSU rRNA gene relative abundance and how factors such as average genome size and SSU rRNA gene copy number affect sampling probability and therefore both types of community analysis.

  1. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
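
    The SVD's role can be seen in a bare-bones collocation version of approximate implicitization (a sketch in the spirit of this family of algorithms, not code from the paper): evaluate a polynomial basis at samples of the parametric curve and take the right singular vector belonging to the smallest singular value as the implicit coefficients.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 200)
x, y = np.cos(t), np.sin(t)              # parametric unit circle

# Monomial basis 1, x, y, x^2, xy, y^2 evaluated along the curve; the implicit
# polynomial should vanish on the curve, so we seek the near-nullspace of M.
M = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
_, s, vt = np.linalg.svd(M, full_matrices=False)
c = vt[-1]                               # smallest singular value's vector

print(np.round(c / np.abs(c).max(), 6))  # proportional to x^2 + y^2 - 1 (up to sign)
print(s[-1])                             # near zero: the curve is exactly degree 2
```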

  2. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
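
    As a concrete illustration of one of the averaging techniques compared above, Granger-Ramanathan Averaging can be sketched as unconstrained least-squares weights for the ensemble members against observations (the toy data are ours; the study applies this to simulated streamflows):

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 1.5, 500)                    # observed flows (toy data)
# Three synthetic "model" members with different error levels.
members = np.column_stack([obs + rng.normal(0, s, 500) for s in (0.5, 1.0, 2.0)])

w, *_ = np.linalg.lstsq(members, obs, rcond=None) # GRA: OLS combination weights
gra = members @ w
sam = members.mean(axis=1)                        # simple arithmetic mean (SAM)
print(np.mean((gra - obs) ** 2), np.mean((sam - obs) ** 2))  # GRA has lower MSE
```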

  3. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most recent elements.
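
    For background, the classic single-site way to maintain a uniform sample from one stream is reservoir sampling; the distributed, communication-efficient protocols of the article generalize this setting (the sketch below is the textbook algorithm, not the article's protocol):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Algorithm R: maintain a uniform sample of k items from a single stream."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)   # item survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(10_000), 5))
```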

  4. A Stokes drift approximation based on the Phillips spectrum

    Science.gov (United States)

    Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2016-04-01

    A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.
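
    For comparison, the monochromatic profile referred to above is fixed by the same two parameters, the surface Stokes drift u0 and the Stokes transport V; a small sketch with illustrative values of our own choosing:

```python
import numpy as np

# Monochromatic Stokes drift profile: u(z) = u0 * exp(2*k*z) for z <= 0, with
# k chosen so the depth-integrated transport equals V, i.e. k = u0 / (2*V).
u0, V = 0.10, 0.40            # m/s and m^2/s (illustrative values)
k = u0 / (2.0 * V)            # effective wavenumber, here 0.125 1/m
z = np.array([0.0, -2.0, -5.0, -10.0])
print(u0 * np.exp(2.0 * k * z))  # drift decays with depth on the 1/(2k) scale
```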

  5. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, as well as tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be subject of future investigations on decision making under risk.

  6. ABCtoolbox: a versatile toolkit for approximate Bayesian computations

    Directory of Open Access Journals (Sweden)

    Neuenschwander Samuel

    2010-03-01

    Full Text Available Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms allowing one to obtain parameter posterior distributions based on simulations not requiring likelihood computations. Results: Here we present ABCtoolbox, a series of open source programs to perform Approximate Bayesian Computations (ABC). It implements various ABC algorithms including rejection sampling, MCMC without likelihood, a particle-based sampler and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can also interact with most simulation and summary statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, data simulations, computation of summary statistics, estimation of posterior distributions, model choice, validation of the estimation procedure, and visualization of the results.
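
    The simplest algorithm in this family, ABC rejection sampling, fits in a few lines (a generic toy model of ours, not ABCtoolbox itself): simulate from the prior, keep parameters whose simulated summary statistic lands close to the observed one.

```python
import numpy as np

rng = np.random.default_rng(4)
s_obs = rng.normal(3.0, 1.0, 50).mean()          # observed summary statistic

theta = rng.uniform(-10.0, 10.0, 200_000)        # draws from a flat prior
# Simulate each dataset's summary directly: the sample mean of 50 unit-variance
# Gaussians is N(theta, 1/50), so we can draw it in one vectorized step.
s_sim = theta + rng.normal(0.0, 1.0 / np.sqrt(50), theta.size)

posterior = theta[np.abs(s_sim - s_obs) < 0.05]  # accept close simulations
print(posterior.mean(), posterior.std())         # ~3.0 and ~1/sqrt(50)
```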

  7. Modeling shock waves in an ideal gas: combining the Burnett approximation and Holian's conjecture.

    Science.gov (United States)

    He, Yi-Guang; Tang, Xiu-Zhang; Pu, Yi-Kang

    2008-07-01

    We model a shock wave in an ideal gas by combining the Burnett approximation and Holian's conjecture. We use the temperature in the direction of shock propagation rather than the average temperature in the Burnett transport coefficients. The shock wave profiles and shock thickness are compared with other theories. The results are found to agree better with the nonequilibrium molecular dynamics (NEMD) and direct simulation Monte Carlo (DSMC) data than the Burnett equations and the modified Navier-Stokes theory.

  8. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  9. Approximating a DSM-5 Diagnosis of PTSD Using DSM-IV Criteria

    Science.gov (United States)

    Rosellini, Anthony J.; Stein, Murray B.; Colpe, Lisa J.; Heeringa, Steven G.; Petukhova, Maria V.; Sampson, Nancy A.; Schoenbaum, Michael; Ursano, Robert J.; Kessler, Ronald C.

    2015-01-01

    Background Diagnostic criteria for DSM-5 posttraumatic stress disorder (PTSD) are in many ways similar to DSM-IV criteria, raising the possibility that it might be possible to closely approximate DSM-5 diagnoses using DSM-IV symptoms. If so, the resulting transformation rules could be used to pool research data based on the two criteria sets. Methods The Pre-Post Deployment Study (PPDS) of the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) administered a blended 30-day DSM-IV and DSM-5 PTSD symptom assessment based on the civilian PTSD Checklist for DSM-IV (PCL-C) and the PTSD Checklist for DSM-5 (PCL-5). This assessment was completed by 9,193 soldiers from three US Army Brigade Combat Teams approximately three months after returning from Afghanistan. PCL-C items were used to operationalize conservative and broad approximations of DSM-5 PTSD diagnoses. The operating characteristics of these approximations were examined compared to diagnoses based on actual DSM-5 criteria. Results The estimated 30-day prevalence of DSM-5 PTSD based on conservative (4.3%) and broad (4.7%) approximations of DSM-5 criteria using DSM-IV symptom assessments were similar to estimates based on actual DSM-5 criteria (4.6%). Both approximations had excellent sensitivity (92.6-95.5%), specificity (99.6-99.9%), total classification accuracy (99.4-99.6%), and area under the receiver operating characteristic curve (0.96-0.98). Conclusions DSM-IV symptoms can be used to approximate DSM-5 diagnoses of PTSD among recently-deployed soldiers, making it possible to recode symptom-level data from earlier DSM-IV studies to draw inferences about DSM-5 PTSD. However, replication is needed in broader trauma-exposed samples to evaluate the external validity of this finding. PMID:25845710

  10. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    Science.gov (United States)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  11. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    International Nuclear Information System (INIS)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  12. Approximate models for neutral particle transport calculations in ducts

    International Nuclear Information System (INIS)

    Ono, Shizuca

    2000-01-01

    The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)

  13. Table for monthly average daily extraterrestrial irradiation on horizontal surface and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-01-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H0) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, using approximate short-cut methods. Computations of these values have been made once and for all for latitudes from 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repeated approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)
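
    Such a table can be regenerated from standard solar-geometry relations. The sketch below assumes the common textbook formulas (Cooper's declination formula and the daily extraterrestrial irradiation integral), which may differ in detail from the author's exact procedure:

    ```python
    import numpy as np

    G_SC = 1367.0  # solar constant, W/m^2

    def declination(day_of_year):
        """Solar declination (radians), Cooper's formula."""
        return np.radians(23.45 * np.sin(np.radians(360.0 * (284 + day_of_year) / 365.0)))

    def sunset_hour_angle(lat_deg, day_of_year):
        """Sunset hour angle (radians), clipped for polar day/night."""
        phi, delta = np.radians(lat_deg), declination(day_of_year)
        return np.arccos(np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0))

    def daily_extraterrestrial_irradiation(lat_deg, day_of_year):
        """H0 on a horizontal surface, MJ/m^2 per day."""
        phi, delta = np.radians(lat_deg), declination(day_of_year)
        ws = sunset_hour_angle(lat_deg, day_of_year)
        e0 = 1.0 + 0.033 * np.cos(np.radians(360.0 * day_of_year / 365.0))
        h0 = (24 * 3600 / np.pi) * G_SC * e0 * (
            np.cos(phi) * np.cos(delta) * np.sin(ws)
            + ws * np.sin(phi) * np.sin(delta))
        return h0 / 1e6  # J -> MJ

    def max_sunshine_hours(lat_deg, day_of_year):
        """Maximum possible sunshine duration N = (2/15) * ws (degrees)."""
        return (2.0 / 15.0) * np.degrees(sunset_hour_angle(lat_deg, day_of_year))

    # Mid-June (day 166) at 30 deg N:
    print(daily_extraterrestrial_irradiation(30.0, 166),
          max_sunshine_hours(30.0, 166))
    ```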

  14. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery by exploring the advertising effectiveness of average-size female fashion models and their impact on the body image of both women and men. A sample of 171 women and 120 men was assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  15. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...

  16. Sedimentological time-averaging and 14C dating of marine shells

    International Nuclear Information System (INIS)

    Fujiwara, Osamu; Kamataki, Takanobu; Masuda, Fujio

    2004-01-01

    The radiocarbon dating of sediments using marine shells involves uncertainties due to the mixed ages of the shells, mainly attributed to depositional processes and known as 'sedimentological time-averaging'. This stratigraphic disorder can be removed by selecting well-preserved indigenous shells based on ecological and taphonomic criteria. These sample-selection criteria are recommended for accurate estimation of the depositional age of geologic strata from 14C dating of marine shells.

  17. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
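
    As a small illustration of splines constructed from truncated polynomials, the sketch below (our own example with hypothetical data and knots, not from the paper) fits a cubic spline to noisy curve samples by least squares using the truncated power basis:

    ```python
    import numpy as np

    def truncated_power_basis(x, knots, degree=3):
        """Design matrix [1, x, ..., x^p, (x-k1)_+^p, ..., (x-kK)_+^p]."""
        cols = [x**j for j in range(degree + 1)]
        cols += [np.clip(x - k, 0.0, None)**degree for k in knots]
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0.0, 10.0, 200))        # hypothetical abscissas
    y = np.sin(x) + 0.05 * rng.normal(size=x.size)  # noisy 2D-curve samples

    knots = np.linspace(1.0, 9.0, 7)                # fixed interior knots
    A = truncated_power_basis(x, knots)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares spline fit

    y_hat = A @ coef
    print("RMS residual:", np.sqrt(np.mean((y - y_hat)**2)))
    ```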

  18. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-04-08

    In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.

  19. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    Science.gov (United States)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally
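
    ABC-SubSim itself layers Subset Simulation and a component-wise Metropolis sampler on top of the basic ABC idea; only that basic idea is sketched below, as a plain ABC rejection sampler with a hypothetical one-parameter model (this is not the ABC-SubSim algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(theta, n=50):
        """Hypothetical forward model: noisy observations with mean theta."""
        return theta + rng.normal(size=n)

    y_obs = simulate(1.5)

    def abc_rejection(y_obs, n_draws=20000, tol=0.1):
        accepted = []
        for _ in range(n_draws):
            theta = rng.uniform(-5.0, 5.0)  # draw from a uniform prior
            y_sim = simulate(theta)
            # Accept if the simulated summary falls in the data-approximating
            # region (here: a tolerance around the observed sample mean).
            if abs(y_sim.mean() - y_obs.mean()) < tol:
                accepted.append(theta)
        return np.array(accepted)

    post = abc_rejection(y_obs)
    print(post.mean(), post.std())  # approximate posterior mean and spread
    ```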

  20. Averaged multivalued solutions and time discretization for conservation laws

    International Nuclear Information System (INIS)

    Brenier, Y.

    1985-01-01

    It is noted that the correct shock solutions can be approximated by averaging in some sense the multivalued solution given by the method of characteristics for the nonlinear scalar conservation law (NSCL). A time discretization for the NSCL equation based on this principle is considered. An equivalent analytical formulation is shown to lead quite easily to a convergence result, and a third formulation is introduced which can be generalized for the systems of conservation laws. Various numerical schemes are constructed from the proposed time discretization. The first family of schemes is obtained by using a spatial grid and projecting the results of the time discretization. Many known schemes are then recognized (mainly schemes by Osher, Roe, and LeVeque). A second way to discretize leads to a particle scheme without space grid, which is very efficient (at least in the scalar case). Finally, a close relationship between the proposed method and the Boltzmann type schemes is established. 14 references

  1. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  2. METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE

    Directory of Open Access Journals (Sweden)

    L. M. Aliomarov

    2015-01-01

    To machine threaded holes in hard materials used in marine machinery operating at high temperatures, under heavy loads, and in aggressive environments, the authors developed a combined core drill-tap tool with a special cutting scheme and an asymmetric thread profile on the tap section. To control the average diameter of the thread on the tap section of the combined tool, the three-wire method was used, which allows continuous measurement of the average diameter along the entire profile. Deviation of the average diameter from the reference sample is registered by an inductive sensor and recorded by a recorder. Control schemes for the average diameter of threads with symmetrical and asymmetrical profiles are developed and presented. On the basis of these schemes, formulas are derived for calculating the theoretical positions at which the wires sit in the thread profile when measuring the average diameter. Comprehensive research and the introduction of the combined core drill-tap tool into the production of marine engineering, shipbuilding, and ship-repair power plant products made of hard materials demonstrated the high efficiency of the proposed technology for machining high-quality small-diameter threaded holes that meet modern requirements.

  3. Bridging the gap between the Jaynes–Cummings and Rabi models using an intermediate rotating wave approximation

    International Nuclear Information System (INIS)

    Wang, Yimin; Haw, Jing Yan

    2015-01-01

    Highlights: • The intermediate rotating wave approximation (IRWA) is introduced. • The co-rotating and counter-rotating terms in Rabi model are expressed separately. • IRWA is applied to the near resonance case and the large detuning case. • The continuity between the Jaynes–Cummings model and the Rabi model is established. - Abstract: We present a novel approach called the intermediate rotating wave approximation (IRWA), which employs a time-averaging method to encapsulate the dynamics of light-matter interaction from strong to ultrastrong coupling regime. In contrast to the ordinary rotating wave approximation, this method addresses the co-rotating and counter-rotating terms separately to trace their physical consequences individually, and thus establishes the continuity between the Jaynes–Cummings model and the quantum Rabi model. We investigate IRWA in near resonance and large detuning cases. Our IRWA not only agrees well with both models in their respective coupling strengths, but also offers a good explanation of their differences
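
    For reference, the term-by-term splitting that IRWA treats can be written explicitly. In standard notation (our transcription, not quoted from the paper), the quantum Rabi Hamiltonian separates into co-rotating terms, which constitute the Jaynes–Cummings Hamiltonian, plus counter-rotating terms:

    ```latex
    H_{\mathrm{Rabi}}
      = \omega_c\,a^{\dagger}a + \tfrac{\omega_q}{2}\,\sigma_z
        + g\,(a + a^{\dagger})(\sigma_+ + \sigma_-)
      = \underbrace{\omega_c\,a^{\dagger}a + \tfrac{\omega_q}{2}\,\sigma_z
        + g\,(a\,\sigma_+ + a^{\dagger}\sigma_-)}_{H_{\mathrm{JC}}\ \text{(co-rotating)}}
        + \underbrace{g\,(a\,\sigma_- + a^{\dagger}\sigma_+)}_{\text{counter-rotating}}
    ```

    The ordinary rotating wave approximation drops the counter-rotating term outright; IRWA instead time-averages the two terms separately, which is how it interpolates between the two models.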

  4. The Average IQ of Sub-Saharan Africans: Comments on Wicherts, Dolan, and van der Maas

    Science.gov (United States)

    Lynn, Richard; Meisenberg, Gerhard

    2010-01-01

    Wicherts, Dolan, and van der Maas (2009) contend that the average IQ of sub-Saharan Africans is about 80. A critical evaluation of the studies presented by WDM shows that many of these are based on unrepresentative elite samples. We show that studies of 29 acceptably representative samples on tests other than the Progressive Matrices give a…

  5. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...

  6. Insight into structural phase transitions from the decoupled anharmonic mode approximation.

    Science.gov (United States)

    Adams, Donat J; Passerone, Daniele

    2016-08-03

    We develop a formalism (decoupled anharmonic mode approximation, DAMA) that allows calculation of the vibrational free energy using density functional theory even for materials which exhibit negative curvature of the potential energy surface with respect to atomic displacements. We investigate vibrational modes beyond the harmonic approximation and approximate the potential energy surface with the superposition of the accurate potential along each normal mode. We show that the free energy can stabilize crystal structures at finite temperatures which appear dynamically unstable at T = 0. The DAMA formalism is computationally fast because it avoids statistical sampling through molecular dynamics calculations, and is in principle completely ab initio. It is free of statistical uncertainties and independent of model parameters, but can give insight into the mechanism of a structural phase transition. We apply the formalism to the perovskite cryolite and investigate the temperature-driven phase transition from the P21/n to the Immm space group. We calculate a phase transition temperature between 710 and 950 K, in fair agreement with the experimental value of 885 K; the discrepancy can be related to the underestimation of the interaction of the vibrational states. We also calculate the main axes of the thermal ellipsoid and can explain the experimentally observed increase in its volume for fluorine by 200-300% throughout the phase transition. Our calculations suggest the appearance of tunneling states in the high-temperature phase. The convergence of the vibrational DOS and of the critical temperature with respect to reciprocal-space sampling is investigated using the polarizable-ion model.

  7. Concentration of mercury in wheat samples stored with mercury tablets as preservative. [Neutrons]

    Energy Technology Data Exchange (ETDEWEB)

    Lalit, B Y; Ramachandran, T V [Bhabha Atomic Research Centre, Bombay (India). Air Monitoring Section

    1977-01-01

    Tablets consisting of mercury in the form of a dull grey powder, made by triturating mercury with chalk and sugar, are used in Indian households for storing food grains. The contamination of wheat samples by mercury, when stored with mercury tablets for periods of up to four years, has been assessed by using non-destructive neutron activation analysis. The details of the analytical procedure used are also briefly described. The concentration of mercury in wheat increases with storage period. The loss of weight of a mercury tablet is, to a first approximation, proportional to the storage period. In the present experiment, the average weight loss at the end of the first year was 0.009716 g, corresponding to 6 ppm in wheat.

  8. High-order above-threshold ionization beyond the electric dipole approximation

    Science.gov (United States)

    Brennecke, Simon; Lein, Manfred

    2018-05-01

    Photoelectron momentum distributions from strong-field ionization are calculated by numerical solution of the one-electron time-dependent Schrödinger equation for a model atom including effects beyond the electric dipole approximation. We focus on the high-energy electrons from rescattering and analyze their momentum component along the field propagation direction. We show that the boundary of the calculated momentum distribution is deformed in accordance with the classical three-step model including the beyond-dipole Lorentz force. In addition, the momentum distribution exhibits an asymmetry in the signal strengths of electrons emitted in the forward/backward directions. Taken together, the two non-dipole effects give rise to a considerable average forward momentum component of the order of 0.1 a.u. for realistic laser parameters.

  9. Tractable approximations for probabilistic models: The adaptive Thouless-Anderson-Palmer mean field approach

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

    We develop an advanced mean-field method for approximating averages in probabilistic data models that is based on the Thouless-Anderson-Palmer (TAP) approach of disorder physics. In contrast to conventional TAP, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete couplings. We demonstrate the validity of our approach, which is so far restricted to models with nonglassy behavior, by replica calculations for a wide class of models as well as by simulations for a real data set.

  10. Average cost per person victimized by an intimate partner of the opposite gender: a comparison of men and women.

    Science.gov (United States)

    Arias, Ileana; Corso, Phaedra

    2005-08-01

    Differences in prevalence, injury, and utilization of services between female and male victims of intimate partner violence (IPV) have been noted. However, there are no studies indicating approximate costs of men's IPV victimization. This study explored gender differences in service utilization for physical IPV injuries and average cost per person victimized by an intimate partner of the opposite gender. Significantly more women than men reported physical IPV victimization and related injuries. A greater proportion of women than men reported seeking mental health services and reported more visits on average in response to physical IPV victimization. Women were more likely than men to report using emergency department, inpatient hospital, and physician services, and were more likely than men to take time off from work and from childcare or household duties because of their injuries. The total average per person cost for women experiencing at least one physical IPV victimization was more than twice the average per person cost for men.

  11. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: the choosing-seed-node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions are reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased toward the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
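
    The no-retracing idea is simple to sketch. The toy implementation below (ours, not the authors' code; it assumes the seed node is not isolated) walks an adjacency-list graph while avoiding an immediate step back along the edge just traversed, then reports the sampled subnet's average degree:

    ```python
    import random

    def nr_random_walk(adj, seed_node, steps, rng):
        """No-retracing (NR) walk: avoid returning along the previous edge
        whenever the current node has any other neighbour."""
        walk = [seed_node]
        prev = None
        for _ in range(steps):
            cur = walk[-1]
            choices = [v for v in adj[cur] if v != prev] or adj[cur]
            prev = cur
            walk.append(rng.choice(choices))
        return walk

    # Toy Erdos-Renyi-style graph as an adjacency list.
    rng = random.Random(3)
    n, p = 200, 0.05
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)

    sampled = set(nr_random_walk(adj, seed_node=0, steps=500, rng=rng))
    avg_deg = sum(len(adj[v]) for v in sampled) / len(sampled)
    print(len(sampled), avg_deg)  # subnet size and its average degree
    ```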

  12. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
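
    The interplay the analysis describes — accuracy of least-squares polynomial fits versus the number of random samples — can be probed in one dimension with a few lines. A sketch (our illustration with a hypothetical target function, using the uniform measure on [-1, 1]):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def ls_poly_fit(f, degree, n_samples):
        """Least-squares fit in span{1, x, ..., x^degree} from random
        evaluations drawn from the uniform measure on [-1, 1]."""
        x = rng.uniform(-1.0, 1.0, n_samples)
        V = np.vander(x, degree + 1, increasing=True)
        coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)
        return coef

    f = lambda x: np.exp(x) * np.cos(3 * x)  # hypothetical smooth target
    x_test = np.linspace(-1, 1, 1000)

    for m in (10, 20):                          # polynomial space dim = m + 1
        for n in (2 * (m + 1), 10 * (m + 1)):   # two oversampling factors
            c = ls_poly_fit(f, m, n)
            y = np.polynomial.polynomial.polyval(x_test, c)
            err = np.max(np.abs(y - f(x_test)))
            print(f"degree {m}, samples {n}: sup error {err:.2e}")
    ```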

  13. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    Science.gov (United States)

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  14. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_Terra-FM1-MODIS_Edition2D)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2004-05-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  15. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_Terra-FM1-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2003-02-28] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  16. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_Terra-FM2-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2003-02-28] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  17. Approximation of Corrected Calcium Concentrations in Advanced Chronic Kidney Disease Patients with or without Dialysis Therapy

    Directory of Open Access Journals (Sweden)

    Yoshio Kaku

    2015-08-01

    Background: The following calcium (Ca) correction formula (Payne) is conventionally used for serum Ca estimation: corrected total Ca (TCa) (mg/dl) = TCa (mg/dl) + [4 - albumin (g/dl)]; however, it is inapplicable to advanced chronic kidney disease (CKD) patients. Methods: 1,922 samples from CKD G4 + G5 patients and 341 samples from CKD G5D patients were collected. Levels of TCa (mg/dl), ionized Ca2+ (iCa2+) (mmol/l) and other clinical parameters were measured. We assumed the corrected TCa to be equal to eight times the iCa2+ value (measured corrected TCa). We subsequently performed stepwise multiple linear regression analysis using the clinical parameters. Results: The following formulas were devised from the multiple linear regression analysis. For CKD G4 + G5 patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - P) + 0.22. For CKD G5D patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 0.1 × (6 - P) + 0.05 × (24 - HCO3-) + 0.35. Receiver operating characteristic analysis showed high values of the area under the curve of approximated corrected TCa for the detection of measured corrected TCa ≥8.4 mg/dl and ≤10.4 mg/dl for each CKD sample. Both intraclass correlation coefficients for each CKD sample demonstrated superior agreement using the new formula compared to the previously reported formulas. Conclusion: Compared to other formulas, the approximated corrected TCa values calculated from the new formula for patients with CKD G4 + G5 and CKD G5D demonstrate superior agreement with the measured corrected TCa.
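
    The two regression formulas transcribe directly into code. A sketch (formulas exactly as given in the abstract; the unit conventions — mg/dl for TCa and P, g/dl for albumin, mmol/l for HCO3- — are our reading of the record):

    ```python
    def corrected_tca_g4_g5(tca, albumin, ph, p):
        """Approximated corrected TCa (mg/dl) for CKD G4 + G5 patients."""
        return tca + 0.25 * (4 - albumin) + 4 * (7.4 - ph) + 0.1 * (6 - p) + 0.22

    def corrected_tca_g5d(tca, albumin, p, hco3):
        """Approximated corrected TCa (mg/dl) for dialysis (CKD G5D) patients."""
        return tca + 0.25 * (4 - albumin) + 0.1 * (6 - p) + 0.05 * (24 - hco3) + 0.35

    # Hypothetical patient values:
    print(corrected_tca_g4_g5(tca=8.6, albumin=3.2, ph=7.32, p=5.1))
    print(corrected_tca_g5d(tca=8.6, albumin=3.2, p=5.1, hco3=20.0))
    ```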

  18. Total number albedo and average cosine of the polar angle of low-energy photons reflected from water

    Directory of Open Access Journals (Sweden)

    Marković Srpko

    2007-01-01

    The total number albedo and the average cosine of the polar angle for water and initial photon energies ranging from 20 keV to 100 keV are presented in this paper. A water shield in the form of a thick, homogeneous plate and perpendicular incidence of the monoenergetic photon beam are assumed. The results were obtained through Monte Carlo simulations of photon reflection by means of the MCNP computer code. Calculated values of the total number albedo were compared with previously published data and good agreement was confirmed. The dependence of the average cosine of the polar angle on energy is studied in detail. It has been found that the total average cosine of the polar angle takes values in the narrow interval 0.66-0.67, corresponding approximately to a reflection angle of 48°, and that it does not depend on the initial photon energy.

  19. Integrated sampling vs ion chromatography: Mathematical considerations

    International Nuclear Information System (INIS)

    Sundberg, L.L.

    1992-01-01

    This paper presents some general-purpose considerations that can be utilized when comparisons are made between the results of integrated sampling over several hours or days and ion chromatography, where sample collection times are measured in minutes. The discussion is geared toward the measurement of soluble transition metal ions in BWR feedwater. Under steady-state conditions, the concentrations reported by both techniques should be in reasonable agreement. Transient operations affect both types of measurements. A simplistic model, applicable to both sampling techniques, is presented that demonstrates the effect of transients which occur during the acquisition of a steady-state sample. For a common set of conditions, the integrated concentration is proportional to the concentration and duration of the transient, and inversely proportional to the sample collection time. Adjusting the collection period during a known transient allows an estimation of the peak transient concentration. Though the probability of sampling a random transient with the integrated sampling technique is very high, the magnitude is severely diluted with long integration times. Transient concentrations are magnified with ion chromatography, but the probability of sampling a transient is significantly lower using normal ion chromatography operations. Various data-averaging techniques are discussed for integrated sampling and IC determinations. The use of time-weighted averages appears to offer more advantages over arithmetic and geometric means for integrated sampling when the collection period is variable. For replicate steady-state ion chromatography determinations which bracket a transient sample, it may be advantageous to ignore the calculation of averages and report the data as trending information only.
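
    The time-weighted average recommended for variable collection periods is simply a duration-weighted mean of the individual sample concentrations. A minimal sketch (hypothetical data):

    ```python
    def time_weighted_average(concentrations, durations):
        """Duration-weighted mean: sum(c_i * dt_i) / sum(dt_i)."""
        total_time = sum(durations)
        return sum(c * dt for c, dt in zip(concentrations, durations)) / total_time

    # Three integrated samples with unequal collection periods (ppb, hours):
    print(time_weighted_average([0.12, 0.45, 0.10], [24.0, 6.0, 48.0]))
    ```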

  20. Determination of the average lifetime of b-baryons

    International Nuclear Information System (INIS)

    Abreu, P.; Adam, W.

    1996-01-01

    The average lifetime of b-baryons has been studied using 3×10⁶ hadronic Z⁰ decays collected by the DELPHI detector at LEP. Three methods have been used, based on the measurement of different observables: the proper decay time distribution of 206 vertices reconstructed with a Λ, a lepton and an oppositely charged pion; the impact parameter distribution of 441 muons with high transverse momentum accompanied by a Λ in the same jet; and the proper decay time distribution of 125 Λc-lepton decay vertices with the Λc exclusively reconstructed through its pKπ, pK⁰ and Λ3π decay modes. The combined result is: τ(b-baryon) = (1.254 +0.121/−0.109 (stat) ± 0.04 (syst) +0.03/−0.05 (syst)) ps, where the first systematic error is due to experimental uncertainties and the second to the uncertainties in the modelling of the b-baryon production and semi-leptonic decay. Including the measurement recently published by DELPHI based on a sample of proton-muon vertices, the average b-baryon lifetime is: τ(b-baryon) = (1.255 +0.115/−0.102 (stat) ± 0.05) ps. (orig.)

  1. A maintenance policy for a system with multi-state components: an approximate solution

    International Nuclear Information System (INIS)

    Guerler, Uelkue; Kaya, Alev

    2002-01-01

    For maintenance and quality assessment purposes, various performance levels are identified for both systems and components, usually as a function of deterioration. In this study, we consider a multicomponent system where the lifetime of each component is described by several stages, (0,...,S), which are further classified as good, doubtful, preventive-maintenance due (PM due) and down. A control policy is suggested whereby the system is replaced when a component enters a PM-due or a down state and the number of components in the doubtful states (K,...,S-2) is at least N. All maintenance activities are assumed to take negligible time. The exact description of the underlying stochastic model under this policy is very complicated. We therefore propose some approximations that allow an explicit expression for the long-run average cost function, which is then minimized with respect to (K,N) by numerical methods. The sensitivity of the model to system parameters and the performance of the approximation are investigated through several examples.

  2. Effect of flux discontinuity on spatial approximations for discrete ordinates methods

    International Nuclear Information System (INIS)

    Duo, J.I.; Azmy, Y.Y.

    2005-01-01

    This work presents advances in the error analysis of the spatial approximation of the discrete ordinates method for solving the neutron transport equation. Error norms for different non-collided flux problems over a two-dimensional pure absorber medium are evaluated using three numerical methods. The problems are characterized by the incoming flux boundary conditions, chosen to obtain solutions with different levels of differentiability. The three methods considered are the Diamond Difference (DD) method and the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The last two methods are employed in constant, linear and quadratic orders of spatial approximation. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, and then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that the level of differentiability of the exact solution profoundly affects the rate of convergence of the numerical methods' solutions. Furthermore, in the case of a discontinuous exact flux, the methods fail to converge in the maximum error norm, or in the pointwise sense, in accordance with previous local error analysis. (authors)

  3. Fast and Analytical EAP Approximation from a 4th-Order Tensor.

    Science.gov (United States)

    Ghosh, Aurobrata; Deriche, Rachid

    2012-01-01

    Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.

  4. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  5. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L_p extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  6. Bent approximations to synchrotron radiation optics

    International Nuclear Information System (INIS)

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors

  7. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    Science.gov (United States)

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ from each other when reciprocal calculations are compared. We compared 63,690 pairs of genome sequences and found that the differences in reciprocal ANI values are significantly high, exceeding 1% in some cases. To resolve this lack of symmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology, whereby both genome sequences are fragmented and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), and the former showed approximately 0.1% higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
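
    The orthologous-fragment idea can be sketched without BLASTn: fragment both genomes, score all fragment pairs with some identity measure, keep reciprocal best pairs, and average their identities. The toy below is our stand-in (difflib's ratio replaces BLASTn identity and the fragment size is shrunk, so its numbers are not comparable to real OrthoANI values):

    ```python
    import random
    from difflib import SequenceMatcher

    def fragments(genome, size):
        """OrthoANI-style fragmentation; trailing partial fragments dropped."""
        return [genome[i:i + size] for i in range(0, len(genome) - size + 1, size)]

    def orthologous_identity(genome_a, genome_b, size=30):
        fa, fb = fragments(genome_a, size), fragments(genome_b, size)
        # Identity matrix between all fragment pairs (toy metric, not BLASTn).
        ident = [[SequenceMatcher(None, x, y).ratio() for y in fb] for x in fa]
        best_ab = {i: max(range(len(fb)), key=lambda j: ident[i][j])
                   for i in range(len(fa))}
        best_ba = {j: max(range(len(fa)), key=lambda i: ident[i][j])
                   for j in range(len(fb))}
        # Keep only reciprocal best hits, i.e. the "orthologous" pairs.
        pairs = [(i, j) for i, j in best_ab.items() if best_ba[j] == i]
        return sum(ident[i][j] for i, j in pairs) / len(pairs)

    rng = random.Random(7)
    genome_a = "".join(rng.choice("ACGT") for _ in range(600))
    sub = "A" if genome_a[300] != "A" else "C"          # one substitution
    genome_b = genome_a[:300] + sub + genome_a[301:]
    print(orthologous_identity(genome_a, genome_b))
    ```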

  8. The impact of intermediate structure on the average fission cross sections

    International Nuclear Information System (INIS)

    Bouland, O.; Lynn, J.E.; Talou, P.

    2014-01-01

    This paper discusses two common approximations used to calculate average fission cross sections over the compound energy range: the disregard of the W_II factor and the Porter-Thomas hypothesis made on the double-barrier fission width distribution. By reference to a Monte Carlo-type calculation of formal R-matrix fission widths, this work estimates an overall error ranging from 12% to 20% on the fission cross section in the case of the 239Pu fissile isotope in the energy domain from 1 to 100 keV, with very significant impact on the competing capture cross section. This work is part of a recent and very comprehensive formal R-matrix study over the Pu isotope series and is able to give some hints for significant accuracy improvements in the treatment of the fission channel. (authors)

  9. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.

  10. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.

    2012-01-01

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
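
    The Monte Carlo–Euler construction (though not the infinite-dimensional HJM setting of these papers) can be illustrated on a scalar SDE with a known weak solution: discretize by Euler–Maruyama, average over paths, and compare against the exact expectation. A sketch under those simplifying assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def euler_weak_mean(mu, sigma, x0, T, n_steps, n_paths):
        """Monte Carlo-Euler estimate of E[X_T] for dX = mu X dt + sigma X dW."""
        dt = T / n_steps
        x = np.full(n_paths, x0)
        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), n_paths)
            x = x + mu * x * dt + sigma * x * dw  # Euler-Maruyama step
        return x.mean()

    mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
    exact = x0 * np.exp(mu * T)  # E[X_T] for geometric Brownian motion

    for n_steps in (4, 16, 64):
        est = euler_weak_mean(mu, sigma, x0, T, n_steps, n_paths=200_000)
        # Observed error = O(dt) time-discretization bias + statistical error.
        print(n_steps, est, abs(est - exact))
    ```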

  11. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  12. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  13. Radioactivity measurements and risk assessments in soil samples at south and middle of Qatar

    International Nuclear Information System (INIS)

    Al-Kinani, A.; Al Dosari, M.; Amr, M.A.; Al-Saad, K.A.; Helal, A.I.

    2012-01-01

    Health risks associated with exposure to the natural radioactivity present in soil materials are of great concern all over the world. Soil samples were therefore collected from an urban area in the south and middle of Qatar in order to measure the natural radionuclides 40K, 226Ra and 232Th and the artificial 137Cs using the gamma-ray spectrometry method. The soil activity concentrations range from 25.01-40.31 Bq/kg for 226Ra, 4.99-12.37 Bq/kg for 232Th and 133.8-250.1 Bq/kg for 40K, with mean values of 57, 87 and 207 Bq/kg, respectively. The concentrations of these radionuclides are compared with the available data from other countries. The average and range of the activity concentration of 226Ra in Qatari soils are very much comparable to the world figures. However, the concentration of 232Th is comparable to that of other Gulf areas and lower than the figures for Egypt and the world. The concentration of 40K is lower than the figures for Egypt, the world and Kuwait, but comparable to the figures for Oman. The radium equivalent activity (Ra_eq) in these soil samples ranges from 41.21 to 74.45 Bq/kg, with a mean value of 57.4 Bq/kg, which is far below the permissible limit (370 Bq/kg). The calculated values of the external hazard index (H_ex) for the soil samples range from 0.102 to 0.21, with an average of 0.164, which is lower than other reported values. Since these values are lower than unity, the soil from these regions is safe and can be used as a construction material without posing any significant radiological threat to the population. The absorbed dose rate calculated from the activity concentrations of 226Ra, 232Th and 40K ranges between 11.529-21.446, 2.383-11.744 and 5.304-10.357 nGy/h, respectively, and the total average absorbed dose rate of 28.915 nGy/h is lower than the worldwide average of 51 nGy/h. The total absorbed dose in the study area ranges from 20.146 to 40.389 nGy/h, with an average value of 28.915 nGy/h. The

  14. Relationship research between meteorological disasters and stock markets based on a multifractal detrending moving average algorithm

    Science.gov (United States)

    Li, Qingchen; Cao, Guangxi; Xu, Wei

    2018-01-01

    Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in the detection of auto-correlation at different sample lengths and to simulate some artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters are derived from the normal evolution of the financial system itself or not. We also propose several reasonable recommendations.
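
    As a minimal illustration of the moving-average detrending at the core of MFDMA, the sketch below (our simplified q = 2 version, not the authors' implementation) computes the DMA fluctuation function and estimates the scaling exponent for uncorrelated noise:

    ```python
    import numpy as np

    def dma_fluctuation(x, window):
        """RMS fluctuation of the profile around its centred moving average."""
        y = np.cumsum(x - np.mean(x))                # profile of the series
        kernel = np.ones(window) / window
        y_ma = np.convolve(y, kernel, mode="valid")  # centred moving average
        half = (window - 1) // 2
        resid = y[half:half + y_ma.size] - y_ma      # detrended residuals
        return np.sqrt(np.mean(resid**2))

    rng = np.random.default_rng(6)
    x = rng.normal(size=4096)  # uncorrelated noise: expect exponent ~ 0.5

    windows = np.array([8, 16, 32, 64, 128, 256])
    F = np.array([dma_fluctuation(x, int(n)) for n in windows])
    h = np.polyfit(np.log(windows), np.log(F), 1)[0]
    print(h)  # slope of log F vs. log n: Hurst-like scaling exponent
    ```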

  15. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    Science.gov (United States)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) → (−x, −y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r−1)^3 at the nonzero equilibria, and the mean of x y^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.

  16. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting
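
    For orientation, the order-$1/c^2$ two-particle interaction that any consistent Darwin treatment must reproduce is the standard Darwin Lagrangian (Gaussian units; quoted from textbook electrodynamics, not derived in this record):

    $$ L_{\mathrm{int}} = -\frac{q_1 q_2}{r}\left[ 1 - \frac{1}{2c^2}\Bigl( \mathbf{v}_1\cdot\mathbf{v}_2 + (\mathbf{v}_1\cdot\hat{\mathbf{r}})(\mathbf{v}_2\cdot\hat{\mathbf{r}}) \Bigr) \right], $$

    in which retardation has been eliminated to this order; radiation effects enter only beyond $1/c^2$.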

  17. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
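
    The computational payoff of circulant structure is easy to demonstrate. Below is a one-level sketch (ordinary rather than multilevel circulant matrices, with a Gaussian kernel as our illustrative choice): multiplication and regularized solves with an n-by-n circulant matrix cost O(n log n) through the FFT.

    ```python
    import numpy as np

    def circulant_matvec(c, x):
        """Multiply the circulant matrix with first column c by x,
        using the diagonalization of circulants by the DFT."""
        return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

    def circulant_solve(c, b, lam=1e-3):
        """Solve (C + lam*I) x = b for circulant C in O(n log n)."""
        eig = np.fft.fft(c) + lam          # eigenvalues of C + lam*I
        return np.real(np.fft.ifft(np.fft.fft(b) / eig))

    # Circulant surrogate of a stationary kernel on a regular 1-D grid:
    n = 256
    d = np.minimum(np.arange(n), n - np.arange(n))   # wrap-around distance
    c = np.exp(-(d / 16.0) ** 2)                     # kernel first column
    x = np.random.default_rng(0).standard_normal(n)
    b = circulant_matvec(c, x) + 1e-3 * x            # b = (C + lam*I) x
    print(np.allclose(circulant_solve(c, b), x, atol=1e-8))  # True
    ```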

  18. Stochastic sampling of the RNA structural alignment space.

    Science.gov (United States)

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H

    2009-07-01

    A novel method is presented for predicting the common secondary structures and alignment of two homologous RNA sequences by sampling the 'structural alignment' space, i.e. the joint space of their alignments and common secondary structures. The structural alignment space is sampled according to a pseudo-Boltzmann distribution based on a pseudo-free energy change that combines base pairing probabilities from a thermodynamic model and alignment probabilities from a hidden Markov model. By virtue of the implicit comparative analysis between the two sequences, the method offers an improvement over single sequence sampling of the Boltzmann ensemble. A cluster analysis shows that the samples obtained from joint sampling of the structural alignment space cluster more closely than samples generated by the single sequence method. On average, the representative (centroid) structure and alignment of the most populated cluster in the sample of structures and alignments generated by joint sampling are more accurate than single sequence sampling and alignment based on sequence alone, respectively. The 'best' centroid structure that is closest to the known structure among all the centroids is, on average, more accurate than structure predictions of other methods. Additionally, cluster analysis identifies, on average, a few clusters, whose centroids can be presented as alternative candidates. The source code for the proposed method can be downloaded at http://rna.urmc.rochester.edu.
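
    The sampling step itself is conceptually simple; here is a toy sketch (illustrative only: real structural-alignment sampling works recursively over dynamic-programming tables, not over an enumerated candidate list):

    ```python
    import numpy as np

    def pseudo_boltzmann_sample(pseudo_energies, rt=0.6, size=1000, seed=0):
        """Draw candidate indices with probability proportional to
        exp(-E/RT), the pseudo-Boltzmann weighting described above."""
        e = np.asarray(pseudo_energies, dtype=float)
        w = np.exp(-(e - e.min()) / rt)        # shift for numerical safety
        p = w / w.sum()
        rng = np.random.default_rng(seed)
        return rng.choice(len(e), size=size, p=p)

    # Lower pseudo-free energy means sampled more often:
    counts = np.bincount(pseudo_boltzmann_sample([-2.0, -1.0, 0.0]))
    print(counts)    # heavily favors the first candidate
    ```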

  19. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
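
    A stripped-down sketch of the idea (our own steepest-descent simplification of the constrained conjugate-gradient setting, with an exact line search on the sampled subproblem; all names are illustrative):

    ```python
    import numpy as np

    def subset_error_step(A, b, x, rng, frac=0.1):
        """One descent step for min ||Ax - b||^2 in which the error
        (residual) is evaluated on a random subset of the rays, i.e.
        a random subset of the rows of A."""
        m = A.shape[0]
        idx = rng.choice(m, size=max(1, int(frac * m)), replace=False)
        As, bs = A[idx], b[idx]
        r = As @ x - bs                      # approximate error
        g = As.T @ r                         # approximate gradient
        d = -g                               # descent direction
        Ad = As @ d
        alpha = (g @ g) / (Ad @ Ad + 1e-12)  # minimize along d (subset)
        return x + alpha * d

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true
    x = np.zeros(20)
    for _ in range(300):
        x = subset_error_step(A, b, x, rng)
    print(np.linalg.norm(x - x_true))        # decreases noisily toward 0
    ```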

  20. New realisation of Preisach model using adaptive polynomial approximation

    Science.gov (United States)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasingly stringent accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model of hysteresis and can be represented by infinitely many but countable first-order reversal curves (FORCs). The usage of look-up tables is one way to implement the CPM in actual practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by least-squares approximation or by an adaptive identification algorithm, which opens the possibility of accurately tracking the hysteresis model parameters.
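
    A minimal sketch of replacing a FORC look-up table by a fitted polynomial (synthetic branch data; the degree and the data are illustrative, not from the article):

    ```python
    import numpy as np

    # One first-order reversal curve (FORC): output vs. input for a
    # single reversal branch (synthetic stand-in for measured data).
    u = np.linspace(-1.0, 1.0, 50)            # input, e.g. applied field
    y = 0.8 * np.tanh(2.0 * u) + 0.05 * u     # branch output

    coeffs = np.polyfit(u, y, deg=5)          # least-squares fit
    branch = np.poly1d(coeffs)

    print(branch(0.3))    # evaluate the branch anywhere, no table needed
    # Storage per branch drops from len(u) samples to deg+1 coefficients,
    # and the coefficients can be updated online by adaptive identification.
    ```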

  1. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  2. Description of surface quadrupole oscillations of heated spherical nuclei in the Brownian movement approximation

    International Nuclear Information System (INIS)

    Svin'in, I.R.

    1982-01-01

    Description of collective phenomena in heated nuclei within the framework of the Brownian approximation may be conditionally divided into two parts: 1) solution of the problem for some realization of a random force, 2) averaging over the set of all possible realizations. The present work addresses the first part of the problem in the case of surface quadrupole oscillations of heated spherical nuclei. Quadrupole surface oscillations of heated spherical nuclei are considered in the Brownian motion approximation. The integrals of motion are constructed taking into account energy and angular momentum conservation for the nucleus during relaxation of the collective excitations. Wave functions are obtained for states having definite values of the integrals of motion in the phonon representation. It is noted that the description scheme developed is easily applied to oscillations of other multipolarities

  3. Human-experienced temperature changes exceed global average climate changes for all income groups

    Science.gov (United States)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

    Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC's 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.

  4.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  5. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, James A.; Heinemann, Klaus [New Mexico Univ., Albuquerque, NM (United States). Dept. of Mathematics and Statistics; Vogt, Mathias [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Gooden, Matthew [North Carolina State Univ., Raleigh, NC (United States). Dept. of Physics

    2013-03-15

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wavelength λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in

  6. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    International Nuclear Information System (INIS)

    Ellison, James A.; Heinemann, Klaus; Gooden, Matthew

    2013-03-01

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wavelength λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in the
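
    For orientation, the on-resonance normal form referred to in both records reduces to the classical pendulum (a hedged sketch in generic notation; the paper's scalings and error bounds are not reproduced here):

    $$ \frac{d\psi}{d\tau} = \eta, \qquad \frac{d\eta}{d\tau} = -\Omega^2 \sin\psi , $$

    with $\psi$ the ponderomotive phase, $\eta$ the scaled energy deviation, and $\Omega$ an effective frequency set by the field and undulator parameters. The averaging theorems quantify how far solutions of the full 2D system can drift from this normal form over long times.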

  7. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae for calculating power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method, which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size needed to achieve a desired average power while controlling FDR. Simulation results demonstrate that, with sample sizes calculated by our method, the actual power of several popularly applied tests for differential expression is close to the desired power. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package, ssizeRNA, that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
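
    A stylized sketch of the average-power calculation (not the voom-based procedure of the paper: we use a normal approximation for a two-sample comparison and a common approximation linking the FDR level to a per-test significance level; all names and numbers are illustrative):

    ```python
    import numpy as np
    from scipy import stats

    def average_power(n, effects, m0, fdr=0.05, iters=50):
        """Average power across DE genes with n replicates per group,
        using FDR ~ m0*alpha / (m0*alpha + sum of powers) to convert
        the FDR level into a per-test alpha by fixed-point iteration."""
        effects = np.asarray(effects, dtype=float)
        alpha = fdr / (m0 + len(effects))            # crude start
        for _ in range(iters):
            z_a = stats.norm.ppf(1.0 - alpha / 2.0)
            power = stats.norm.sf(z_a - effects * np.sqrt(n / 2.0))
            alpha = fdr * power.sum() / (m0 * (1.0 - fdr))
        return power.mean()

    def sample_size(effects, m0, target=0.8, fdr=0.05):
        """Smallest per-group n achieving the target average power."""
        n = 2
        while average_power(n, effects, m0, fdr) < target:
            n += 1
        return n

    # 200 DE genes with standardized effect 1.0 among 10,000 genes:
    print(sample_size(np.ones(200), m0=9800))
    ```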

  8. Orientation-averaged optical properties of natural aerosol aggregates

    International Nuclear Information System (INIS)

    Zhang Xiaolin; Huang Yinbo; Rao Ruizhong

    2012-01-01

    Orientation-averaged optical properties of natural aerosol aggregates were analyzed using the discrete dipole approximation (DDA) for effective radii in the range of 0.01 to 2 μm, with corresponding size parameters from 0.1 to 23 at a wavelength of 0.55 μm. Effects of composition and morphology on the optical properties were also investigated. The composition shows little influence on the extinction-efficiency factor in the Mie scattering region or on the scattering- and backscattering-efficiency factors. The extinction-efficiency factor for size parameters from 9 to 23 and the asymmetry factor for size parameters below 2.3 are almost independent of the natural aerosol composition. The extinction-, absorption-, scattering-, and backscattering-efficiency factors for size parameters below 0.7 are independent of the aggregate morphology. The intrinsic symmetry and the discontinuity of the normal direction of the particle surface have obvious effects on the scattering properties for size parameters above 4.6. Furthermore, the scattering phase functions of natural aerosol aggregates are enhanced in the backscattering direction (opposition effect) for large size parameters in the Mie scattering range. (authors)

  9. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in the stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without the burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of the RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS), which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of the RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) allow one to avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time.
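
    In the spirit of the RT-estimator, here is a bare-bones tour/ratio sketch (our own minimal version: a plain random walk on an undirected graph with 1/degree reweighting, where revisits to the seed node delimit regeneration tours, so no burn-in is discarded):

    ```python
    import random

    def tour_ratio_estimate(neighbors, f, seed_node, steps=200000, seed=0):
        """Estimate the average of f over all nodes from one random walk.
        The walk's stationary probability of node v is deg(v)/2|E|, so
        weighting every visit by 1/deg(v) makes the ratio below converge
        to (sum of f over nodes) / n without global knowledge of the graph."""
        rng = random.Random(seed)
        v = seed_node
        num = den = 0.0
        for _ in range(steps):
            d = len(neighbors[v])
            num += f(v) / d
            den += 1.0 / d
            v = rng.choice(neighbors[v])   # visits to seed_node split the
                                           # walk into i.i.d. tours
        return num / den

    # Path graph 0-1-2-3 with f = node id; the true average is 1.5:
    g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(tour_ratio_estimate(g, lambda v: v, seed_node=0))
    ```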

  10. Exact and approximate multiple diffraction calculations

    International Nuclear Information System (INIS)

    Alexander, Y.; Wallace, S.J.; Sparrow, D.A.

    1976-08-01

    A three-body potential scattering problem is solved in the fixed scatterer model exactly and approximately to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to Glauber approximation, eikonal-expansion results and a noneikonal approximation

  11. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Full Text Available Let (U'; C') be a subspace of a covering approximation space (U; C) and let X ⊂ U'. In this paper, we show that B'(X) ⊂ B(X) ∩ U'; a further identity holds iff (U; C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U; C) and outer (resp. inner) definable subsets in (U'; C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.

  12. Examination of statistical noise in SPECT image and sampling pitch

    International Nuclear Information System (INIS)

    Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Watanabe, Hiroyuki; Murakami, Tomonori; Kawakami, Kazunori; Teraoka, Satomi; Kojima, Akihiro; Matsumoto, Masanori

    2008-01-01

    Statistical noise in single photon emission computed tomography (SPECT) images was examined for its relation to total count and to sampling pitch, by simulation and by a phantom experiment in which projection data were obtained under defined conditions. The SPECT simulation assumed a virtual, homogeneous water column (20 cm diameter) as an absorbing mass. The phantom experiment used a 3D-Hoffman brain phantom (Data Spectrum Corp.) filled with 370 MBq of 99m Tc-pertechnetate solution and a facing 2-detector SPECT machine with a low-energy/high-resolution collimator, E-CAM (Siemens). Projection data from the two methods were reconstructed through filtered back projection to produce transaxial images. The noise was evaluated visually, by the root mean square uncertainty calculated from the average count and standard deviation (SD) in a region of interest (ROI) defined in the reconstructed images, and by the normalized mean square difference between a reference image obtained with the common sampling pitch and all of the obtained slices of the simulation and phantom. In conclusion, it is recommended that the pitch be set in the machine so as to approximate the value given by the sampling theorem, even though the projection counts per angular direction are then smaller for the same total data acquisition time. (R.T.)

  13. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes of equivalent fractions of the longest-surviving patients in the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large-sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
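
    A bare-bones sketch of the comparison described above (names are illustrative; the paper additionally supplies bias expressions, sensitivity analyses and bootstrap inference, none of which are reproduced here):

    ```python
    import numpy as np

    def balanced_sace_estimate(surv_t, y_t, surv_c, y_c, frac=0.5):
        """Difference in mean longitudinal outcome between the equal
        fractions of longest-surviving patients in the treatment and
        control arms. y values are np.nan for patients who died before
        the scheduled measurement."""
        def top_mean(surv, y, frac):
            k = max(1, int(frac * len(surv)))
            idx = np.argsort(surv)[-k:]       # the k longest survivors
            return np.nanmean(np.asarray(y, dtype=float)[idx])
        return top_mean(surv_t, y_t, frac) - top_mean(surv_c, y_c, frac)
    ```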

  14. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  15. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing in levels of other parities. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and of level positions is discussed extensively with an example based on 168 Er data. 19 figures, 2 tables

  16. Global sensitivity analysis using low-rank tensor approximations

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
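
    For context, the kind of large-sample Monte Carlo reference against which the LRA-based indices are validated can be sketched as follows (a standard Saltelli-style first-order estimator; the test function and sample size are illustrative):

    ```python
    import numpy as np

    def first_order_sobol(f, dim, n=100000, seed=0):
        """Monte Carlo estimate of first-order Sobol' indices of f on
        [0,1]^dim using the pick-and-freeze (Saltelli) construction."""
        rng = np.random.default_rng(seed)
        A = rng.random((n, dim))
        B = rng.random((n, dim))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        s = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]              # freeze input i only
            s[i] = np.mean(fB * (f(ABi) - fA)) / var
        return s

    # Toy model f(x) = x1 + x2^2: indices near (0.48, 0.52)
    print(first_order_sobol(lambda X: X[:, 0] + X[:, 1] ** 2, dim=2))
    ```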

  17. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  18. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    Science.gov (United States)

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  19. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  20. Comparison of the Born series and rational approximants in potential scattering. [Pade approximants, Yukawa and exponential potential]

    Energy Technology Data Exchange (ETDEWEB)

    Garibotti, C R; Grinstein, F F [Rosario Univ. Nacional (Argentina). Facultad de Ciencias Exactas e Ingenieria

    1976-05-08

    The real utility of the Born series for the calculation of atomic collision processes in the Born approximation is discussed. It is suggested to make use of Padé approximants, and it is shown that this approach provides very rapidly convergent sequences over the whole energy range studied. Yukawa and exponential potentials are explicitly considered and the results are compared with high-order Born approximations.
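
    The practical gain from Padé resummation is easy to reproduce on a toy series (a sketch, not the potentials of the paper; scipy's pade helper is used, under the assumption that the series coefficients are known):

    ```python
    import numpy as np
    from scipy.interpolate import pade

    # Taylor coefficients of f(x) = 1/(1+x): 1 - x + x^2 - ...
    # The series diverges for |x| >= 1, like a Born series at low energy.
    coeffs = [(-1.0) ** k for k in range(6)]

    p, q = pade(coeffs, 2)          # [3/2] Pade approximant p(x)/q(x)

    x = 3.0                         # far outside the convergence radius
    print(p(x) / q(x))              # 0.25 = exact value of 1/(1+3)
    print(sum(c * x**k for k, c in enumerate(coeffs)))  # partial sum: -182
    ```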

  1. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    Full Text Available In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere, and study the best approximation of functions in L^p spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces by the modulus of smoothness of the functions.

  2. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  3. Integrated sampling and analysis plan for samples measuring >10 mrem/hour

    International Nuclear Information System (INIS)

    Haller, C.S.

    1992-03-01

    This integrated sampling and analysis plan was prepared to assist in planning and scheduling of Hanford Site sampling and analytical activities for all waste characterization samples that measure greater than 10 mrem/hour. This report also satisfies the requirements of the renegotiated Interim Milestone M-10-05 of the Hanford Federal Facility Agreement and Consent Order (the Tri-Party Agreement). For purposes of comparing the various analytical needs with the Hanford Site laboratory capabilities, the analytical requirements of the various programs were normalized by converting required laboratory effort for each type of sample to a common unit of work, the standard analytical equivalency unit (AEU). The AEU approximates the amount of laboratory resources required to perform an extensive suite of analyses on five core segments individually plus one additional suite of analyses on a composite sample derived from a mixture of the five core segments and prepare a validated RCRA-type data package

  4. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions, with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.
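
    For reference, the quantities being corrected have the standard eikonal form (textbook notation, not specific to this paper):

    $$ f(q) = -ik \int_0^{\infty} b\,db\, J_0(qb)\left( e^{i\chi(b)} - 1 \right), \qquad \chi(b) = -\frac{1}{\hbar v} \int_{-\infty}^{\infty} V\!\left(\sqrt{b^2 + z^2}\right) dz . $$

    The semiclassical correction discussed above amounts to evaluating the phase at a complex distance of closest approach $b_c(b)$, obtained from the projectile-target optical potential, instead of at the impact parameter $b$ itself.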

  5. New device for time-averaged measurement of volatile organic compounds (VOCs).

    Science.gov (United States)

    Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio

    2014-07-01

    Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern, since these compounds are harmful to ecosystems and even to human health; many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse, such as industrial discharges, urban water and accidental spills, and these compounds may be present in many products and processes (i.e., paints, fuels, petroleum products, raw materials, solvents, etc.), making their control difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive regarding the emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) designates some VOCs as priority substances. This binding directive sets guidelines requiring compounds such as benzene, chloroform and carbon tetrachloride to be controlled at very low concentration levels and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous, since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, continuous monitoring is important, which requires the development of new devices that obtain average concentrations over time. As of today, due to technical limitations, there are no devices on the market that allow efficient continuous sampling of these compounds with detection limits low enough to meet the legal requirements and that are capable of detecting very sporadic, short-duration discharges. LABAQUA has developed a device consisting of a small peristaltic pump controlled by a pre-programmed electronic board. A constant flow passes

  6. Validation of single-sample doubly labeled water method

    International Nuclear Information System (INIS)

    Webster, M.D.; Weathers, W.W.

    1989-01-01

    We have experimentally validated a single-sample variant of the doubly labeled water method for measuring metabolic rate and water turnover in a very small passerine bird, the verdin (Auriparus flaviceps). We measured CO 2 production using the Haldane gravimetric technique and compared these values with estimates derived from isotopic data. Doubly labeled water results based on the one-sample calculations differed from Haldane values by less than 0.5% on average (range -8.3 to 11.2%, n = 9). Water flux computed by the single-sample method differed by -1.5% on average from results for the same birds based on the standard, two-sample technique (range -13.7 to 2.0%, n = 9)

  7. Factors That Predict Marijuana Use and Grade Point Average among Undergraduate College Students

    Science.gov (United States)

    Coco, Marlena B.

    2017-01-01

    The purpose of this study was to analyze factors that predict marijuana use and grade point average among undergraduate college students using the Core Institute national database. The Core Alcohol and Drug Survey was used to collect data on students' attitudes, beliefs, and experiences related to substance use in college. The sample used in this…

  8. The consequences of time averaging for measuring temporal species turnover in the fossil record

    Science.gov (United States)

    Tomašových, Adam; Kidwell, Susan

    2010-05-01

    Modeling time averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling-up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with a reduction of species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause the community parameters of local fossil assemblages to converge to those of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of the neutral model (i.e., that species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and

  9. Demographic and Psychological Predictors of Grade Point Average (GPA) in North-Norway: A Particular Analysis of Cognitive/School-Related and Literacy Problems

    Science.gov (United States)

    Saele, Rannveig Grøm; Sørlie, Tore; Nergård-Nilssen, Trude; Ottosen, Karl-Ottar; Goll, Charlotte Bjørnskov; Friborg, Oddgeir

    2016-01-01

    Approximately 30% of students drop out from Norwegian upper secondary schools. Academic achievement, as indexed by grade point average (GPA), is one of the strongest predictors of dropout. The present study aimed to examine the role of cognitive, school-related and affective/psychological predictors of GPA. In addition, we examined the…

  10. Introduction to Methods of Approximation in Physics and Astronomy

    Science.gov (United States)

    van Putten, Maurice H. P. M.

    2017-04-01

    secular behavior. For instance, secular evolution of orbital parameters may derive from averaging over essentially periodic behavior on relatively short, orbital periods. When the original number of degrees of freedom is large, averaging over dynamical time scales may lead to a formulation in terms of a system in approximately thermodynamic equilibrium subject to evolution on a secular time scale by a regular or singular perturbation. In modern astrophysics and cosmology, gravitation is being probed across an increasingly broad range of scales and more accurately than ever before. These observations probe weak gravitational interactions below what is encountered in our solar system by many orders of magnitude, and hereby probe (curved) spacetime at low energy scales that may reveal novel properties hitherto unanticipated in the classical vacuum of Newtonian mechanics and Minkowski spacetime. Dark energy and dark matter encountered on the scales of galaxies and beyond, therefore, may be, in part, revealing our ignorance of the vacuum at the lowest energy scales encountered in cosmology. In this context, our application of Newtonian mechanics to globular clusters, galaxies and cosmology is an approximation assuming a classical vacuum, ignoring the potential for hidden low energy scales emerging on cosmological scales. Given our ignorance of the latter, this poses a challenge in the potential for unknown systematic deviations. If of quantum mechanical origin, such deviations are often referred to as anomalies. While they are small in traditional, macroscopic Newtonian experiments in the laboratory, the same is not a given in the limit of arbitrarily weak gravitational interactions. We hope this selection of introductory material is useful and kindles the reader's interest in becoming a creative member of modern astrophysics and cosmology.

  11. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averages are considered the most reliable for simulating both present-day and future climates, and they have been a primary reference for conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage, in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  12. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  13. The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.

    Science.gov (United States)

    Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J

    2017-09-01

    There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V˙O2peak). Different intervals can lead to altered V˙O2peak values, and whether such differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V˙O2peak differed because of the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V˙O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg⁻¹·min⁻¹) completed the Bruce and the self-paced V˙O2max protocols. The second subject sample (21.9 ± 2.7 yr; V˙O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg⁻¹·min⁻¹) completed the Bruce and the modified Astrand protocols. V˙O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V˙O2peak between intervals were determined using repeated-measures ANOVAs, and the influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V˙O2peak for each protocol in both subject samples. Differences between sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V˙O2peak. The effect of manipulating gas sampling intervals on V˙O2peak thus appears to be protocol- and sample-independent. These findings underpin our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented; standardized reporting would assist in the comparison of V˙O2peak values.
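
    The sampling schemes being compared are simple to state in code (a sketch of two of the five intervals; variable names are ours and breath-by-breath input is assumed):

    ```python
    import numpy as np

    def vo2peak_by_interval(t_breath, vo2_breath):
        """VO2peak under a 15-breath moving average and under 30-s block
        averages. t_breath: breath timestamps in seconds; vo2_breath:
        breath-by-breath VO2 values."""
        t = np.asarray(t_breath, dtype=float)
        v = np.asarray(vo2_breath, dtype=float)
        # 15-breath moving average: mean over every 15-breath window
        k = 15
        moving = np.convolve(v, np.ones(k) / k, mode="valid")
        peak_moving = moving.max()
        # 30-s block averages anchored at t = 0
        edges = np.arange(0.0, t.max() + 30.0, 30.0)
        blocks = [v[(t >= a) & (t < b)] for a, b in zip(edges, edges[1:])]
        peak_block = max(blk.mean() for blk in blocks if len(blk) > 0)
        return peak_moving, peak_block
    ```

    Short moving windows chase transient peaks, which is consistent with the 15-breath moving average producing the highest V˙O2peak above.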

  14. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter resulting two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large-size protein data set. The suggested methodology is fairly general...
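
    A generic sketch of the likelihood-free MCMC idea (a Marjoram-style accept/reject on a summary-statistic distance; this is not the authors' SDE-specific algorithm, and all names are illustrative):

    ```python
    import numpy as np

    def abc_mcmc(s_obs, simulate, prior_logpdf, theta0,
                 eps=0.5, n_iter=5000, step=0.1, seed=0):
        """ABC-MCMC: accept a proposal when the simulated summaries fall
        within eps of the observed ones and the prior ratio allows it
        (symmetric Gaussian random-walk proposals)."""
        rng = np.random.default_rng(seed)
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        chain = []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.shape)
            d = np.linalg.norm(s_obs - simulate(prop, rng))
            if d < eps and np.log(rng.random()) < (
                    prior_logpdf(prop) - prior_logpdf(theta)):
                theta = prop
            chain.append(theta.copy())
        return np.array(chain)
    ```

    The tolerance eps trades accuracy for speed, which is how ABC buys the two-orders-of-magnitude advantage over exact inference reported above.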

  15. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    Directory of Open Access Journals (Sweden)

    Wutthiphong Tara

    2012-02-01

    Full Text Available The purpose of this research was to preliminarily study the grindability of Mae Moh lignite, emphasizing Hardgrove grindability and approximate work index determination, respectively. Firstly, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using Aplan's formula. The approximate work indices were determined by running a batch dry-grinding test using a laboratory ball mill. Finally, the work indices obtained from both methods were compared. It was found that all samples could be ranked as lignite B, using the heating value as the criterion, if the content of mineral matter is neglected. Similarly, all samples can be classified as lignite, with Hardgrove grindability indices ranging from about 40 to 50. However, there is a significant difference between the work indices derived from the Hardgrove and simplified Bond grindability tests. This may be due to differences in the variability of lignite properties and in the test procedures. To obtain more accurate values of the lignite work index, the time-consuming Bond procedure should be performed with a number of corrections for different milling conditions. With the Hardgrove grindability indices and the work indices calculated from Aplan's formula, the capacity of the roller-race pulverizer and the grindability of the Mae Moh lignite should be investigated in further detail.

  16. Ancilla-approximable quantum state transformations

    International Nuclear Information System (INIS)

    Blass, Andreas; Gurevich, Yuri

    2015-01-01

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation

  17. Ancilla-approximable quantum state transformations

    Energy Technology Data Exchange (ETDEWEB)

    Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.

  18. Resolution of identity approximation for the Coulomb term in molecular and periodic systems

    Science.gov (United States)

    Burow, Asbjörn M.; Sierka, Marek; Mohamed, Fawzi

    2009-12-01

    A new formulation of resolution of identity approximation for the Coulomb term is presented, which uses atom-centered basis and auxiliary basis functions and treats molecular and periodic systems of any dimensionality on an equal footing. It relies on the decomposition of an auxiliary charge density into charged and chargeless components. Applying the Coulomb metric under periodic boundary conditions constrains the explicit form of the charged part. The chargeless component is determined variationally and converged Coulomb lattice sums needed for its determination are obtained using chargeless linear combinations of auxiliary basis functions. The lattice sums are partitioned into near- and far-field portions which are treated through an analytical integration scheme employing two- and three-center electron repulsion integrals and multipole expansions, respectively, operating exclusively in real space. Our preliminary implementation within the TURBOMOLE program package demonstrates consistent accuracy of the method across molecular and periodic systems. Using common auxiliary basis sets the errors of the approximation are small, on average about 20 μhartree per atom, for both molecular and periodic systems.

  19. Design of respiration averaged CT for attenuation correction of the PET data from PET/CT

    International Nuclear Information System (INIS)

    Chi, Pai-Chun Melinda; Mawlawi, Osama; Nehmeh, Sadek A.; Erdi, Yusuf E.; Balter, Peter A.; Luo, Dershan; Mohan, Radhe; Pan Tinsu

    2007-01-01

    Our previous patient studies have shown that the use of respiration averaged computed tomography (ACT) for attenuation correction of the positron emission tomography (PET) data from PET/CT reduces the potential misalignment in the thorax region by matching the temporal resolution of the CT to that of the PET. In the present work, we investigated other approaches of acquiring ACT in order to reduce the CT dose and to improve the ease of clinical implementation. Four-dimensional CT (4DCT) data sets for ten patients (17 lung/esophageal tumors) were acquired in the thoracic region immediately after the routine PET/CT scan. For each patient, multiple sets of ACTs were generated based on both phase image averaging (phase approach) and fixed cine duration image averaging (cine approach). In the phase approach, the ACTs were calculated from CT images corresponding to the significant phases of the respiratory cycle: ACT_0,50phs from end-inspiration (0%) and end-expiration (50%), ACT_20,70phs from mid-inspiration (20%) and mid-expiration (70%), ACT_4phs from 0%, 20%, 50% and 70%, and ACT_10phs from all ten phases, which was the original approach. In the cine approach, which does not require 4DCT, the ACTs were calculated based on the cine images from cine durations of 1 to 6 s at 1 s increments. PET emission data for each patient were attenuation corrected with each of the above mentioned ACTs, and the tumor maximum standard uptake value (SUV_max), average SUV (SUV_avg), and tumor volume measurements were compared. Percent differences were calculated between PET data corrected with the various ACTs and that corrected with ACT_10phs. In the phase approach, ACT_10phs can be approximated by ACT_4phs to within a mean percent difference of 2% in SUV and tumor volume measurements. In the cine approach, ACT_10phs can be approximated to within a mean percent difference of 3% by ACTs computed from cine durations ≥3 s. Acquiring CT images only at the four significant phases for the

  20. Fast and Analytical EAP Approximation from a 4th-Order Tensor

    Directory of Open Access Journals (Sweden)

    Aurobrata Ghosh

    2012-01-01

    Full Text Available Generalized diffusion tensor imaging (GDTI) was developed to model the complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recover the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.

  1. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  2. Multilevel Approximations of Markovian Jump Processes with Applications in Communication Networks

    KAUST Repository

    Vilanova, Pedro

    2015-05-04

    This thesis focuses on the development and analysis of efficient simulation and inference techniques for Markovian pure jump processes with a view towards applications in dense communication networks. These techniques are especially relevant for modeling networks of smart devices —tiny, abundant microprocessors with integrated sensors and wireless communication abilities— that form highly complex and diverse communication networks. During 2010, the number of devices connected to the Internet exceeded the number of people on Earth: over 12.5 billion devices. By 2015, Cisco’s Internet Business Solutions Group predicts that this number will exceed 25 billion. The first part of this work proposes novel numerical methods to estimate, in an efficient and accurate way, observables from realizations of Markovian jump processes. In particular, hybrid Monte Carlo type methods are developed that combine the exact and approximate simulation algorithms to exploit their respective advantages. These methods are tailored to keep a global computational error below a prescribed global error tolerance and within a given statistical confidence level. Indeed, the computational work of these methods is similar to that of an exact method, but with a smaller constant. Finally, the methods are extended to systems with a disparity of time scales. The second part develops novel inference methods to estimate the parameters of Markovian pure jump processes. First, an indirect inference approach is presented, which is based on upscaled representations and does not require sampling. This method is simpler than dealing directly with the likelihood of the process, which, in general, cannot be expressed in closed form and whose maximization requires computationally intensive sampling techniques. Second, a forward-reverse Monte Carlo Expectation-Maximization algorithm is provided to approximate a local maximum or saddle point of the likelihood function of the parameters given a set of

  3. Development of nodal interface conditions for a P_N approximation nodal model

    International Nuclear Information System (INIS)

    Feiz, M.

    1993-01-01

    A relation was developed for approximating higher-order odd moments from lower-order odd moments at the nodal interfaces of a Legendre polynomial nodal model. Two sample problems were tested using different-order P_N expansions in adjacent nodes. The developed relation proved to be adequate and matched the nodal interface flux accurately. The development allows the use of different-order expansions in adjacent nodes, and will be used in a hybrid diffusion-transport nodal model. (author)

  4. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera Noctuidae).

    Science.gov (United States)

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for the upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine each parameter's influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
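
    The decision boundaries of such a binomial sequential plan follow directly from Wald's SPRT. Below is a minimal sketch, not the authors' software, that computes the stop lines for the presence/absence parameters reported above (lower boundary p0 = 0.05, upper boundary p1 = 0.15, alpha = beta = 0.1); the function name and the printed table are illustrative.

```python
import math

def sprt_boundaries(p0, p1, alpha, beta):
    """Wald SPRT stop lines d = h + s*n for the cumulative infested-plant count d
    after inspecting n plants (binomial, presence/absence sampling)."""
    denom = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    s = math.log((1 - p0) / (1 - p1)) / denom    # common slope of both lines
    h0 = -math.log((1 - alpha) / beta) / denom   # "do not treat" intercept
    h1 = math.log((1 - beta) / alpha) / denom    # "treat" intercept
    return s, h0, h1

s, h0, h1 = sprt_boundaries(p0=0.05, p1=0.15, alpha=0.1, beta=0.1)
for n in range(10, 60, 10):
    print(f"n={n:3d}: no-treat if d <= {h0 + s*n:5.2f}, treat if d >= {h1 + s*n:5.2f}")
```

    Sampling continues as long as the cumulative count stays strictly between the two lines.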

  5. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...

  6. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  7. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
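
    To make the recovery step in the two records above concrete: PC coefficients are identified by solving an ℓ1-minimization problem from a small number of random samples. The sketch below is an illustration rather than a reproduction of the papers' solver: it draws Gaussian samples (the natural distribution for Hermite polynomials), builds an orthonormal Hermite design matrix, and uses an ℓ1-regularized least-squares fit (scikit-learn's Lasso) as a stand-in for basis pursuit. The sizes, sparse coefficient vector, and noise level are invented for the demo.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
order, n_samples = 20, 15                     # 21 basis functions, only 15 samples

c_true = np.zeros(order + 1)                  # sparse "true" PC coefficients
c_true[[1, 4, 9]] = [1.0, -0.5, 0.25]

x = rng.standard_normal(n_samples)            # natural sampling for Hermite PC
# Orthonormal probabilists' Hermite basis: He_k(x) / sqrt(k!)
Psi = np.column_stack([hermeval(x, np.eye(order + 1)[k]) / np.sqrt(factorial(k))
                       for k in range(order + 1)])
y = Psi @ c_true + 0.01 * rng.standard_normal(n_samples)

# l1-regularized fit standing in for the papers' l1-minimization step
c_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(Psi, y).coef_
print(np.flatnonzero(np.abs(c_hat) > 0.05))   # recovered support, ideally [1 4 9]
```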

  8. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia, and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractional integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effects of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  9. An approximation for kanban controlled assembly systems

    NARCIS (Netherlands)

    Topan, E.; Avsar, Z.M.

    2011-01-01

    An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated

  10. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  11. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  12. Tank 241-C-111 headspace gas and vapor sample results - August 1993 samples

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1994-01-01

    Tank 241-C-111 is on the ferrocyanide Watch List. Gas and vapor samples were collected to assure safe conditions before planned intrusive work was performed. Sample analyses showed that hydrogen is about ten times higher in the tank headspace than in ambient air. Nitrous oxide is about sixty times higher than ambient levels. The hydrogen cyanide concentration was below 0.04 ppbv, and the average NO_x concentration was 8.6 ppmv.

  13. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames, as well as a reference for the state-of-the-art of this research field.

  14. Exploring JLA supernova data with improved flux-averaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn [School of Physics and Astronomy, Sun Yat-Sen University, University Road (No. 2), Zhuhai (China)

    2017-03-01

    In this work, we explore the cosmological consequences of the 'Joint Light-curve Analysis' (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and the redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to a specific DE parameterization. (2) Flux-averaging JLA samples at z_cut ≥ 0.4 will yield tighter DE constraints than the case without using FA. (3) Using FA can significantly reduce the redshift evolution of β. (4) The best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.

  15. Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations

    Science.gov (United States)

    Merckelbach, Lucas

    2016-12-01

    Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow, tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s⁻¹ in near-real-time mode and improve to better than 6 cm s⁻¹ in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.

  16. On the way to 130 g CO2/km-Estimating the future characteristics of the average European passenger car

    International Nuclear Information System (INIS)

    Fontaras, Georgios; Samaras, Zissis

    2010-01-01

    A new average CO2 emissions limit for passenger cars was introduced in the EU in 2009, imposing a gradual reduction of average CO2 emissions to 130 g/km until 2015. This paper attempts to study possible changes in vehicle characteristics for meeting this limit, taking into account the average European passenger car of 2007-2008. For this purpose, first the most important factors affecting vehicle fuel consumption over the reference cycle (NEDC) are identified. At a second step, the CO2 benefit from the optimisation of these factors is quantified through simulations of 6 different passenger cars commonly found in the European fleet. For the simulations Advisor 2002 was employed and validated against published type approval data. The analysis indicated that substantial reductions in vehicle weight, tyre rolling resistance and engine efficiency are necessary to reach even the 2008 target. A 10% reduction in average vehicle weight combined with 10% better aerodynamic characteristics, 20% reduced tyre rolling resistance and a 7.5% increase in average powertrain efficiency can lead to CO2 reductions of approximately 13% (about 138 g/km based on 2007-2008 fleet-wide performance). Complying with the 130 g/km within the next six-year timeframe will be a rather difficult task and additional technical measures appear to be necessary.

  17. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    Science.gov (United States)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k_0; (b) the second phase computes the probability of satisfying the k_0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
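
    Phase (b) of such a method reduces to estimating a bounded-until probability by sampling paths of the Markov chain. The sketch below illustrates only that sampling step, on a toy biased random walk; the choice of the bound k_0 (phase (a)) is not shown, and the model, predicates, and run count are invented for the demo.

```python
import numpy as np

def estimate_bounded_until(step, holds_phi, holds_psi, s0, k, n_runs=20000, seed=5):
    """Monte Carlo estimate of P(phi U<=k psi) from state s0 of a DTMC whose
    one-step transition is given by step(state, rng)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_runs):
        s = s0
        for _ in range(k + 1):            # check states up to and including step k
            if holds_psi(s):
                hits += 1
                break
            if not holds_phi(s):
                break                     # phi violated before psi was reached
            s = step(s, rng)
    return hits / n_runs

# Toy DTMC: biased random walk; psi = "reach 3", phi = "stay above -3"
p = estimate_bounded_until(step=lambda s, rng: s + rng.choice([-1, 1], p=[0.4, 0.6]),
                           holds_phi=lambda s: s > -3, holds_psi=lambda s: s >= 3,
                           s0=0, k=50)
print(round(p, 3))
```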

  18. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)

  19. ANALYTICAL RESULTS OF MOX COLEMANITE CONCRETE SAMPLES POURED AUGUST 29, 2012

    Energy Technology Data Exchange (ETDEWEB)

    Best, D.; Cozzi, A.; Reigel, M.

    2012-12-20

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite-bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. Samples poured 8/29/12 were received on 9/20/2012 and analyzed. The average total density of each of the samples measured by ASTM method C 642 was within the lower bound of 1.88 g/cm³. The average partial hydrogen density of samples 8.6.1, 8.7.1, and 8.5.3, as measured using method ASTM E 1311, met the lower bound of 6.04E-02 g/cm³. The average measured partial boron density of each sample met the lower bound of 1.65E-01 g/cm³, measured by the ASTM C 1301 method. The average partial hydrogen density of samples 8.5.1, 8.6.3, and 8.7.3 did not meet the lower bound. The samples, as received, were not wrapped in a moist towel as previous samples were and appeared to be somewhat drier. This may explain the lower hydrogen partial density with respect to previous samples.

  20. Improved Dutch Roll Approximation for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Liang-Liang Yin

    2014-06-01

    Full Text Available An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which results in large errors for the practical approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show that the approximations work well and the error is below 10%.

  1. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for the refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction [2], which
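
    The core of the method is an ordinary Metropolis Monte Carlo loop with a pseudo-energy that pulls coordinates toward the average while penalizing unphysical local geometry. The following is a minimal sketch of that idea for a Cα trace, not the authors' implementation; the force constants, step size, and the ideal 3.8 Å Cα-Cα bond length are illustrative choices.

```python
import numpy as np

def refine_toward_average(start, avg, bond_len=3.8, k_avg=1.0, k_bond=10.0,
                          n_steps=20000, step=0.05, seed=0):
    """Drive a C-alpha trace toward averaged coordinates with a harmonic
    pseudo-energy while penalizing unrealistic bond lengths."""
    rng = np.random.default_rng(seed)
    x = start.copy()

    def energy(coords):
        e_avg = k_avg * np.sum((coords - avg) ** 2)        # pull toward average
        bonds = np.linalg.norm(np.diff(coords, axis=0), axis=1)
        e_bond = k_bond * np.sum((bonds - bond_len) ** 2)  # keep CA-CA near 3.8 A
        return e_avg + e_bond

    e = energy(x)
    for _ in range(n_steps):
        trial = x.copy()
        trial[rng.integers(len(x))] += step * rng.standard_normal(3)  # move one atom
        e_trial = energy(trial)
        if e_trial < e or rng.random() < np.exp(e - e_trial):         # Metropolis, kT=1
            x, e = trial, e_trial
    return x

# Toy demo: pull a noisy 10-residue trace toward its "averaged" coordinates
avg = np.cumsum(np.tile([3.8, 0.0, 0.0], (10, 1)), axis=0)
start = avg + np.random.default_rng(1).normal(0.0, 1.0, avg.shape)
refined = refine_toward_average(start, avg)
print(np.linalg.norm(np.diff(refined, axis=0), axis=1).round(2))  # bonds near 3.8
```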

  2. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based

  3. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.

  4. On Nash-Equilibria of Approximation-Stable Games

    Science.gov (United States)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We show furthermore that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show that all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  5. Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories

    International Nuclear Information System (INIS)

    Vallisneri, Michele; Galley, Chad R

    2012-01-01

    The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term 'sensitivity' is used loosely to refer to the detector's noise spectral density, the two quantities are not the same: the sensitivity includes also the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the 'classic LISA' configuration. We confirm that the (standard) inverse-rms average sensitivity

  6. Adaptive and self-averaging Thouless-Anderson-Palmer mean-field theory for probabilistic modeling

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

    We develop a generalization of the Thouless-Anderson-Palmer (TAP) mean-field approach of disorder physics, which makes the method applicable to the computation of approximate averages in probabilistic models for real data. In contrast to the conventional TAP approach, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete set of couplings. We show the significance of the approach in two ways: Our approach reproduces replica symmetric results for a wide class of toy models (assuming a nonglassy phase) with given disorder distributions in the thermodynamic limit. On the other hand, simulations on a real data model demonstrate that the method achieves more accurate predictions as compared to conventional TAP approaches.

  7. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.

  8. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  9. Local density approximations for relativistic exchange energies

    International Nuclear Information System (INIS)

    MacDonald, A.H.

    1986-01-01

    The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed, and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented.

  10. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
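
    One simple way to realize this idea is to wrap the expensive forward operator in a Gaussian Process surrogate that answers cheaply when it is confident and otherwise calls the exact solver and retrains on the new evaluation. The sketch below illustrates that refinement loop under invented assumptions (a one-parameter toy forward model, an RBF kernel, a fixed uncertainty tolerance); it is not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_forward(m):
    """Stand-in for an expensive forward solver (e.g. a synthetic seismogram)."""
    return float(np.sin(3.0 * m) + 0.5 * m ** 2)

class RefiningSurrogate:
    """GP approximation of the forward operator that falls back to, and learns
    from, the exact solver whenever its own predictive uncertainty is too large."""
    def __init__(self, design_points, tol=0.05):
        self.X = [float(m) for m in design_points]
        self.y = [expensive_forward(m) for m in self.X]
        self.tol = tol
        self._refit()

    def _refit(self):
        self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
        self.gp.fit(np.array(self.X).reshape(-1, 1), np.array(self.y))

    def __call__(self, m):
        mu, sd = self.gp.predict(np.array([[m]]), return_std=True)
        if sd[0] <= self.tol:
            return float(mu[0])           # cheap GP prediction is trusted
        f = expensive_forward(m)          # otherwise pay for an exact solve...
        self.X.append(float(m))
        self.y.append(f)
        self._refit()                     # ...and refine the surrogate with it
        return f

surrogate = RefiningSurrogate(np.linspace(-2, 2, 8))
print(surrogate(0.30), surrogate(0.31))   # the second call likely hits the GP
```

    Inside an MCMC sampler, `surrogate(m)` would simply replace the call to `expensive_forward(m)` in the likelihood evaluation.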

  11. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. A useful estimate relating the entropy and approximation numbers of noncompact maps is also given.

  12. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...

  13. J_z-conserving coupled states approximation: Magnetic transitions and angular distributions in rotating and fixed frames

    International Nuclear Information System (INIS)

    Kouri, D.J.; Shimoni, Y.

    1977-01-01

    Recently Shimoni and Kouri have pointed out that a careful treatment of the j_z-conserving coupled states (CS) approximation results in a body-frame T-matrix T^J(jλ|j₀λ₀) which is not diagonal in λ, λ₀. In addition they have shown that previous investigations of the CS did not optimally identify the body-frame T-matrix. In this paper, we explore the consequences of these observations. The exact T-matrix is obtained in the R- and P-helicity frames, as well as in an uncoupled space-frame (USF) representation. The resulting exact expressions for these T-matrices are in terms of certain integrals, I^J_l(jλ|j₀λ₀), introduced earlier by Shimoni and Kouri. By obtaining CS approximations to these integrals, we are able to derive the preferred CS approximation in the R- and P-helicity and USF representations. We then employ the resulting CS T-matrices to derive the differential scattering amplitude and cross section in the various possible reference frames. The result is a unified treatment of these quantities. We are then able to demonstrate the equivalence of the CS approximations to the R- and P-helicity amplitudes. In addition, we show explicitly that the CS approximate degeneracy-averaged differential cross section is frame independent. The CS approximation to the USF equation provides a rigorous basis for the original derivation of the CS method as given by McGuire and Kouri. In particular, our treatment shows that when the L² operator is approximated by an eigenvalue form l(l+1)ℏ² (as was suggested first by McGuire and Kouri), there is no longer any difference between the BF and USF in the dynamical equations (for the wavefunction or amplitude density). Any differences are strictly kinematic in origin, and are the source of the λ transitions which occur in the BF CS approximation. In the USF, there are no magnetic transitions in the CS approximation

  14. Approximating centrality in evolving graphs: toward sublinearity

    Science.gov (United States)

    Priest, Benjamin W.; Cybenko, George

    2017-05-01

    The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
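
    As a concrete instance of the sketching approach mentioned for degree centrality, the following is a small CountSketch over an edge stream. It is a generic textbook-style sketch under invented parameters (table size, salts, toy stream), not the specific algorithm of the paper.

```python
import numpy as np

class CountSketch:
    """Sublinear-memory sketch of a frequency vector; used here to track
    vertex degrees over a stream of edges."""
    def __init__(self, rows=5, cols=2048, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((rows, cols))
        # independent salts per row for the bucket hash and the sign hash
        self.salts = [tuple(int(v) for v in rng.integers(1, 2**31, size=2))
                      for _ in range(rows)]

    def _bucket_sign(self, key, r):
        b_salt, s_salt = self.salts[r]
        bucket = hash((key, b_salt)) % self.table.shape[1]
        sign = 1 if hash((key, s_salt)) % 2 == 0 else -1
        return bucket, sign

    def update(self, key, delta=1):
        for r in range(len(self.table)):
            bucket, sign = self._bucket_sign(key, r)
            self.table[r, bucket] += sign * delta

    def estimate(self, key):
        vals = []
        for r in range(len(self.table)):
            bucket, sign = self._bucket_sign(key, r)
            vals.append(sign * self.table[r, bucket])
        return float(np.median(vals))     # median across rows fights collisions

sketch = CountSketch()
for u, v in [(0, 1), (0, 2), (1, 2), (0, 3)]:   # edge stream
    sketch.update(u)                      # each endpoint gains one unit of degree
    sketch.update(v)
print(sketch.estimate(0))                 # approximate degree of vertex 0 (~3.0)
```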

  15. Improved approximate inspirals of test bodies into Kerr black holes

    International Nuclear Information System (INIS)

    Gair, Jonathan R; Glampedakis, Kostas

    2006-01-01

    We present an improved version of the approximate scheme for generating inspirals of test bodies into a Kerr black hole recently developed by Glampedakis, Hughes and Kennefick. Their original 'hybrid' scheme was based on combining exact relativistic expressions for the evolution of the orbital elements (the semilatus rectum p and eccentricity e) with an approximate, weak-field, formula for the energy and angular momentum fluxes, amended by the assumption of constant inclination angle ι during the inspiral. Despite the fact that the resulting inspirals were overall well behaved, certain pathologies remained for orbits in the strong-field regime and for orbits which are nearly circular and/or nearly polar. In this paper we eliminate these problems by incorporating an array of improvements in the approximate fluxes. First, we add certain corrections which ensure the correct behavior of the fluxes in the limit of vanishing eccentricity and/or 90 deg. inclination. Second, we use higher order post-Newtonian formulas, adapted for generic orbits. Third, we drop the assumption of constant inclination. Instead, we first evolve the Carter constant by means of an approximate post-Newtonian expression and subsequently extract the evolution of ι. Finally, we improve the evolution of circular orbits by using fits to the angular momentum and inclination evolution determined by Teukolsky-based calculations. As an application of our improved scheme, we provide a sample of generic Kerr inspirals which we expect to be the most accurate to date, and for the specific case of nearly circular orbits we locate the critical radius where orbits begin to decircularize under radiation reaction. These easy-to-generate inspirals should become a useful tool for exploring LISA data analysis issues and may ultimately play a role in the detection of inspiral signals in the LISA data.

  16. The approximation function of bridge deck vibration derived from the measured eigenmodes

    Directory of Open Access Journals (Sweden)

    Sokol Milan

    2017-12-01

    Full Text Available This article deals with a method for acquiring approximate displacement vibration functions. The input values are discrete, experimentally obtained mode shapes. A new improved approximation method based on the modal vibrations of the deck is derived using the least-squares method. An alternative approach employed in this paper is to approximate the displacement vibration function by a sum of sine functions whose periodicity is determined by spectral analysis adapted for non-uniformly sampled data, and where the parameters of scale and phase are estimated as usual by the least-squares method. Moreover, this periodic component is supplemented by a cubic regression spline (fitted on its residuals) that captures the individual displacements between piers. The statistical evaluation of the stiffness parameter is performed using more vertical modes obtained from experimental results. The previous method (Sokol and Flesch, 2005), which was derived for the areas near the piers, has been enhanced to cover the whole length of the bridge. The experimental data describing the mode shapes are not appropriate for direct use. In particular, the higher derivatives calculated from these data are very sensitive to the precision of the data.
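
    Once the dominant frequencies have been identified by spectral analysis, estimating the scale and phase of each sine term is a linear least-squares problem, since a sin(ωx + φ) expands into sine and cosine columns. Below is a minimal sketch of that fitting step on synthetic, non-uniformly sampled data; the frequencies and sample layout are invented, and the residual spline stage is omitted.

```python
import numpy as np

def fit_sum_of_sines(x, y, freqs):
    """Least-squares fit of y(x) ~ sum_k [a_k sin(2*pi*f_k*x) + b_k cos(2*pi*f_k*x)],
    i.e. amplitude and phase per frequency, for non-uniformly sampled data."""
    def design(t):
        return np.column_stack([np.sin(2 * np.pi * f * t) for f in freqs]
                               + [np.cos(2 * np.pi * f * t) for f in freqs])
    coef, *_ = np.linalg.lstsq(design(x), y, rcond=None)
    return lambda t: design(t) @ coef

# Synthetic non-uniform sampling along the deck; in practice the frequencies
# would come from spectral analysis adapted to non-uniform sampling
x = np.sort(np.random.default_rng(2).uniform(0.0, 100.0, 40))
y = np.sin(2 * np.pi * 0.03 * x) + 0.3 * np.sin(2 * np.pi * 0.07 * x)
model = fit_sum_of_sines(x, y, freqs=[0.03, 0.07])
print(float(np.max(np.abs(model(x) - y))))   # near-zero residual on clean data
```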

  17. Neutrinoless double-β decay of Se82 in the shell model: Beyond the closure approximation

    Science.gov (United States)

    Sen'kov, R. A.; Horoi, M.; Brown, B. A.

    2014-05-01

    We recently proposed a method [R. A. Senkov and M. Horoi, Phys. Rev. C 88, 064312 (2013), 10.1103/PhysRevC.88.064312] to calculate the standard nuclear matrix elements for neutrinoless double-β decay (0νββ) of Ca48 going beyond the closure approximation. Here we extend this analysis to the important case of Se82, which was chosen as the base isotope for the upcoming SuperNEMO experiment. We demonstrate that by using a mixed method that considers information from closure and nonclosure approaches, one can get excellent convergence properties for the nuclear matrix elements, which allows one to avoid unmanageable computational costs. We show that in contrast with the closure approximation the mixed approach has a very weak dependence on the average closure energy. The matrix elements for the heavy neutrino-exchange mechanism that could contribute to the 0νββ decay of Se82 are also presented.

  18. Identification of Large-Scale Structure Fluctuations in IC Engines using POD-Based Conditional Averaging

    Directory of Open Access Journals (Sweden)

    Buhl Stefan

    2016-01-01

    Full Text Available Cycle-to-Cycle Variations (CCV) in IC engines are a well-known phenomenon, and their definition and quantification are well established for global quantities such as the mean pressure. On the other hand, the definition of CCV for local quantities, e.g. the velocity or the mixture distribution, is less straightforward. This paper proposes a new method to identify and calculate cyclic variations of the flow field in IC engines, emphasizing the different contributions from large-scale energetic (coherent) structures, identified by a combination of Proper Orthogonal Decomposition (POD) and conditional averaging, and small-scale fluctuations. Suitable subsets required for the conditional averaging are derived from combinations of the POD coefficients of the second and third modes. Within each subset, the velocity is averaged, and these averages are compared to the ensemble-averaged velocity field, which is based on all cycles. The resulting difference between the subset average and the global average is identified as a cyclic fluctuation of the coherent structures. Then, within each subset, the remaining fluctuations are obtained from the difference between the instantaneous fields and the corresponding subset average. The proposed methodology is tested for two data sets obtained from scale-resolving engine simulations. For the first test case, the numerical database consists of 208 independent samples of a simplified engine geometry. For the second case, 120 cycles for the well-established Transparent Combustion Chamber (TCC) benchmark engine are considered. For both applications, the suitability of the method to identify the two contributions to CCV is discussed and the results are directly linked to the observed flow field structures.
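
    In snapshot form, the decomposition can be written in a few lines of linear algebra: POD modes come from an SVD of the mean-subtracted snapshot matrix, cycles are binned by the signs of the coefficients of modes 2 and 3 (a simple stand-in for the subset rule, which the record does not spell out), and each bin yields a conditional average. A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal((120, 500))   # snapshots: (n_cycles, n_grid_points)

u_mean = u.mean(axis=0)                          # ensemble average over all cycles
U, S, Vt = np.linalg.svd(u - u_mean, full_matrices=False)
coeffs = U * S                                   # POD coefficients, one row per cycle

# Bin cycles by the quadrant spanned by the coefficients of modes 2 and 3
subset_id = 2 * (coeffs[:, 1] > 0) + (coeffs[:, 2] > 0)
for q in range(4):
    members = u[subset_id == q]
    cond_avg = members.mean(axis=0)              # conditional (subset) average
    coherent_ccv = cond_avg - u_mean             # large-scale coherent fluctuation
    small_scale = members - cond_avg             # remaining small-scale fluctuations
    print(q, len(members), float(np.linalg.norm(coherent_ccv)))
```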

  19. Three-dimensional topography of the gingival line of young adult maxillary teeth: curve averaging using reverse-engineering methods.

    Science.gov (United States)

    Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo

    2011-01-01

    This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models, to be utilized as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. The marginal gingival lines were defined by transforming the boundary points to a NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to express it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.
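
    The index-wise averaging step is straightforward once every curve has been aligned (assumed already done here, e.g. by ICP) and resampled to the same number of points. A minimal sketch with synthetic curves standing in for the scanned gingival lines:

```python
import numpy as np

def resample_curve(points, n=200):
    """Tessellate a 3D polyline into n points at uniform arc-length intervals."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, points[:, k]) for k in range(3)])

# Synthetic stand-ins for the 100 aligned gingival curves
rng = np.random.default_rng(4)
base = np.column_stack([np.linspace(0, 10, 50),
                        np.sin(np.linspace(0, np.pi, 50)),
                        np.zeros(50)])
samples = [base + 0.1 * rng.standard_normal(base.shape) for _ in range(100)]

curves = np.stack([resample_curve(c) for c in samples])   # (100, 200, 3)
average_curve = curves.mean(axis=0)                       # index-wise 3D average
print(average_curve.shape)
```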

  20. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background and what as superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or a decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
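
    The basic filtering step, and the artifact the record warns about, can be reproduced with a least-squares cubic spline on equidistant knots: beyond some knot density the fitted "background" starts to follow the superimposed wave instead of improving. A small sketch under invented test data (the repeating-spline refinement itself is not specified in the record and is not reproduced here):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

t = np.linspace(0.0, 10.0, 500)
background = 0.5 * t + 2.0                          # slowly varying "background"
y = background + 0.4 * np.sin(2 * np.pi * t / 0.8)  # plus superimposed fluctuation

for n_knots in (3, 9, 27):                          # increasing spline sampling density
    knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]   # equidistant interior knots
    spline = LSQUnivariateSpline(t, y, knots, k=3)
    err = float(np.max(np.abs(spline(t) - background)))   # departure from the truth
    print(n_knots, round(err, 3))                   # more knots need not mean better
```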