WorldWideScience

Sample records for sample closely approximated

  1. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have long been used in exact inference for contingency tables; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS combines adaptive Markov chain Monte Carlo and importance sampling, employing the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has several advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform existing importance sampling and Markov chain Monte Carlo methods: it produces much more accurate estimates in much shorter CPU time, especially for tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.
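    The self-normalized importance sampling estimator that methods like SAMCIS build on can be sketched in a few lines. This is a generic Python illustration, not the SAMCIS algorithm itself; the toy target and proposal densities are chosen purely for demonstration.

```python
import math
import random

random.seed(0)

def is_estimate(f, target_logpdf, proposal_draw, proposal_logpdf, n=100_000):
    """Self-normalized importance sampling: E_p[f] is estimated by
    sum(w_i * f(x_i)) / sum(w_i) with weights w_i = p(x_i)/q(x_i),
    where p need only be known up to a normalizing constant."""
    num = den = 0.0
    for _ in range(n):
        x = proposal_draw()
        w = math.exp(target_logpdf(x) - proposal_logpdf(x))
        num += w * f(x)
        den += w
    return num / den

# Toy check: the mean of N(0, 1), sampled through a wider N(0, 4) proposal.
target_logpdf = lambda x: -0.5 * x * x              # log N(0, 1), up to a constant
proposal_logpdf = lambda x: -0.5 * (x / 2.0) ** 2   # log N(0, 4), up to a constant
mean = is_estimate(lambda x: x, target_logpdf,
                   lambda: random.gauss(0.0, 2.0), proposal_logpdf)
```

    The estimate should be close to the true mean, 0; a wider-than-target proposal keeps the weights bounded.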

  3. Pade approximants for the ground-state energy of closed-shell quantum dots

    International Nuclear Information System (INIS)

    Gonzalez, A.; Partoens, B.; Peeters, F.M.

    1997-08-01

    Analytic approximations to the ground-state energy of closed-shell quantum dots (number of electrons from 2 to 210) are presented in the form of two-point Pade approximants. These Pade approximants are constructed from the small- and large-density limits of the energy. We estimate that the maximum error, reached at intermediate densities, is less than 3%. Within the present approximation the ground state is found to be unpolarized. (author). 21 refs, 3 figs, 2 tabs
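    A two-point Padé approximant of the lowest nontrivial order can be built by hand. The sketch below (Python, illustrative only; it is not the dot-energy approximant of the paper) matches a [1/1] rational function to a small-argument expansion f(x) ≈ a0 + a1·x and a large-argument limit L:

```python
import math

def two_point_pade_11(a0, a1, L):
    """[1/1] two-point Pade approximant R(x) = (p0 + p1*x) / (1 + q1*x)
    matching f(x) ~ a0 + a1*x as x -> 0 and f(x) -> L as x -> inf.
    Requires L != a0."""
    q1 = a1 / (L - a0)
    p0, p1 = a0, L * q1
    return lambda x: (p0 + p1 * x) / (1.0 + q1 * x)

# Example: tanh(x) ~ x near 0 and -> 1 at infinity gives R(x) = x / (1 + x),
# a crude but globally bounded interpolation between the two limits.
R = two_point_pade_11(0.0, 1.0, 1.0)
```

    Higher-order two-point approximants follow the same idea with more coefficients matched on each side.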

  4. The approximation gap for the metric facility location problem is not yet closed

    NARCIS (Netherlands)

    Byrka, J.; Aardal, K.I.

    2007-01-01

    We consider the 1.52-approximation algorithm of Mahdian et al. for the metric uncapacitated facility location problem. We show that their algorithm does not close the gap with the lower bound on approximability, 1.463, by providing a construction of instances for which its approximation ratio is not

  5. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), were used to approximate the function over a wide range of argument values. The results of the sampling methods were compared with results obtained with Mathematica software, which was used as a benchmark. All three sampling methods converge to the Mathematica result, at different rates. The orthogonal array (OA) method was found to have the fastest convergence rate, with a root mean square error (RMSE) on the order of 1E-08. This method can be used with any argument value, and can be applied to other integrals in hydrogeology, such as the leaky aquifer integral.
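    The core idea, approximating E1(u) = ∫_u^∞ e^(-t)/t dt by stratified sampling, can be sketched as follows. This is a one-dimensional stratified (LHS-like) scheme written for illustration; it is not the authors' code.

```python
import math
import random

random.seed(1)

def exp1_stratified(u, n=20_000):
    """Estimate E1(u) = int_u^inf exp(-t)/t dt.  Substituting t = u + s with
    s ~ Exp(1) gives E1(u) = exp(-u) * E[1/(u + s)], averaged here with one
    stratified uniform draw per subinterval of (0, 1), as in LHS."""
    total = 0.0
    for i in range(n):
        x = (i + random.random()) / n      # one draw per stratum
        s = -math.log(1.0 - x)             # inverse-CDF map to Exp(1)
        total += 1.0 / (u + s)
    return math.exp(-u) * total / n

est = exp1_stratified(1.0)                 # E1(1) = 0.2193839... (reference value)
```

    Stratification removes most of the variance of plain Monte Carlo for this smooth integrand.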

  6. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    Science.gov (United States)

    Allphin, Devin

    benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design-space filling across four (4) independent design-variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, and the aerodynamic forces from the CFD models were then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by applying the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.

  7. Gravitational recoil from binary black hole mergers: The close-limit approximation

    International Nuclear Information System (INIS)

    Sopuerta, Carlos F.; Yunes, Nicolas; Laguna, Pablo

    2006-01-01

    The coalescence of a binary black hole system is one of the main sources of gravitational waves that present and future detectors will study. Apart from the energy and angular momentum that these waves carry, for unequal-mass binaries there is also a net flux of linear momentum that implies a recoil velocity of the resulting final black hole in the opposite direction. Due to the relevance of this phenomenon in astrophysics, in particular for galaxy merger scenarios, there have been several attempts to estimate the magnitude of this velocity. Since the main contribution to the recoil comes from the last orbit and plunge, an approximation valid at the last stage of coalescence is well motivated for this type of calculation. In this paper, we present a computation of the recoil velocity based on the close-limit approximation scheme, which gives excellent results for head-on and grazing collisions of black holes when compared to full numerical relativistic calculations. We obtain a maximum recoil velocity of ∼57 km/s for a symmetric mass ratio η = M₁M₂/(M₁+M₂)² ∼ 0.19 and an initial proper separation of 4M, where M is the total Arnowitt-Deser-Misner (ADM) mass of the system. This separation is the maximum at which the close-limit approximation is expected to provide accurate results. Therefore, it cannot account for the contributions due to inspiral and initial merger. If we supplement this estimate with post-Newtonian (PN) calculations up to the innermost stable circular orbit, we obtain a lower bound for the recoil velocity, with a maximum around 80 km/s. This is a lower bound because it neglects the initial merger phase. We can however obtain a rough estimate by using PN methods or the close-limit approximation. Since both methods are known to overestimate the amount of radiation, we obtain in this way an upper bound for the recoil with maxima in the range of 214-240 km/s. We also provide nonlinear fits to these estimated upper and lower bounds. These
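    The symmetric mass ratio quoted above is a simple function of the component masses; a quick Python check (illustrative only) confirms it peaks at 1/4 for equal masses and passes through ≈0.19 at a mass ratio near 2.9:

```python
def sym_mass_ratio(m1, m2):
    """Symmetric mass ratio eta = M1*M2 / (M1 + M2)^2, maximal (1/4) for M1 = M2."""
    return m1 * m2 / (m1 + m2) ** 2

eta_equal = sym_mass_ratio(1.0, 1.0)    # 0.25, the equal-mass maximum
eta_kick = sym_mass_ratio(1.0, 2.92)    # close to the 0.19 quoted for maximum recoil
```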

  8. Accuracy of the adiabatic-impulse approximation for closed and open quantum systems

    Science.gov (United States)

    Tomka, Michael; Campos Venuti, Lorenzo; Zanardi, Paolo

    2018-03-01

    We study the adiabatic-impulse approximation (AIA) as a tool to approximate the time evolution of quantum states when driven through a region of small gap. Such small-gap regions are a common situation in adiabatic quantum computing and having reliable approximations is important in this context. The AIA originates from the Kibble-Zurek theory applied to continuous quantum phase transitions. The Kibble-Zurek mechanism was developed to predict the power-law scaling of the defect density across a continuous quantum phase transition. Here, by contrast, we quantify the accuracy of the AIA via the trace-norm distance with respect to the exactly evolved state. As expected, we find that for short times or fast protocols, the AIA outperforms the simple adiabatic approximation. However, for large times or slow protocols, the situation is actually reversed and the AIA provides a worse approximation. Nevertheless, we find a variation of the AIA that can perform better than the adiabatic approximation. This counterintuitive modification consists in crossing the region of small gap twice. Our findings are illustrated by several examples of driven closed and open quantum systems.

  9. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  10. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  11. Sampling and Low-Rank Tensor Approximation of the Response Surface

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann Georg; El-Moselhy, Tarek A.

    2013-01-01

    Most (quasi-)Monte Carlo procedures can be seen as computing some integral over an often high-dimensional domain. If the integrand is expensive to evaluate (we are thinking of a stochastic PDE (SPDE) where the coefficients are random fields and the integrand is some functional of the PDE solution), there is the desire to keep all the samples for possible later computations of similar integrals. This obviously means a lot of data. To keep the storage demands low, and to allow evaluation of the integrand at points which were not sampled, we construct a low-rank tensor approximation of the integrand over the whole integration domain. This can also be viewed as a representation in some problem-dependent basis which allows a sparse representation. What one obtains is sometimes called a "surrogate" or "proxy" model, or a "response surface". This representation is built step by step or sample by sample, and can already be used for each new sample. In case we are sampling a solution of an SPDE, this allows us to reduce the number of necessary samples, namely in case the solution is already well represented by the low-rank tensor approximation. This can be easily checked by evaluating the residuum of the PDE with the approximate solution. The procedure is demonstrated in the computation of a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. © Springer-Verlag Berlin Heidelberg 2013.
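    The compression step, replacing a table of samples by a low-rank surrogate, can be illustrated with a plain truncated SVD, the simplest instance of the low-rank formats discussed above. The response function below is a smooth stand-in chosen for the example:

```python
import numpy as np

# Samples of a smooth "response surface" f(x, y) on a tensor grid ...
x = np.linspace(0.0, 1.0, 80)
y = np.linspace(0.0, 1.0, 60)
F = np.exp(-np.outer(x, y))                # stand-in for expensive solver output

# ... compressed by a truncated SVD: keep the r dominant singular triplets.
U, s, Vt = np.linalg.svd(F, full_matrices=False)
r = 5
F_r = (U[:, :r] * s[:r]) @ Vt[:r, :]       # rank-r surrogate of the full table

# Smoothness makes the singular values decay fast, so the rank-5 surrogate
# reproduces all 4800 samples to high relative accuracy.
rel_err = np.linalg.norm(F - F_r) / np.linalg.norm(F)
```

    The surrogate stores 5·(80+60) numbers instead of 4800, and new "samples" can be read off it without rerunning the solver.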

  12. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Vol. 48, No. 1 (2012), pp. 105-122 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords: chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf

  13. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    Science.gov (United States)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    The weak atmospheric turbulence condition in optical wireless communication (OWC) is captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained with the proposed expression compare favorably with those obtained using Gauss-Hermite quadrature approximation and Monte Carlo simulations.
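    The averaging the paper refers to, a Gaussian Q-function integrated against a log-normal density, is easy to reproduce numerically. The sketch below uses illustrative parameter values (not the paper's) and compares Gauss-Hermite quadrature with a Monte Carlo reference:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))    # Gaussian Q-function

mu, sigma = 1.0, 0.25    # illustrative log-normal parameters (weak turbulence)

# Gauss-Hermite quadrature for E[Q(exp(mu + sigma*Z))], Z ~ N(0, 1):
t, w = np.polynomial.hermite.hermgauss(20)
gh = sum(wi * Q(math.exp(mu + sigma * math.sqrt(2.0) * ti))
         for wi, ti in zip(w, t)) / math.sqrt(math.pi)

# Monte Carlo reference for the same average:
z = rng.standard_normal(100_000)
mc = float(np.mean([Q(math.exp(mu + sigma * zi)) for zi in z]))
```

    Twenty Hermite nodes already agree with the Monte Carlo reference to well within its sampling error, which is the sense in which the paper's closed form is benchmarked.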

  14. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Vol. 19, No. 30 (2012), pp. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords: Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  15. New perspectives on approximation and sampling theory Festschrift in honor of Paul Butzer's 85th birthday

    CERN Document Server

    Schmeisser, Gerhard

    2014-01-01

    Paul Butzer, who is considered the academic father and grandfather of many prominent mathematicians, has established one of the best schools in approximation and sampling theory in the world. He is one of the leading figures in approximation, sampling theory, and harmonic analysis. Although Paul Butzer turned 85 on April 15, 2013, he remarkably remains an active research mathematician. In celebration of his 85th birthday, New Perspectives on Approximation and Sampling Theory is a collection of invited chapters on approximation, sampling, and harmonic analysis written by students, friends, colleagues, and prominent active mathematicians. Topics covered include approximation methods using wavelets, multi-scale analysis, frames, and special functions. New Perspectives on Approximation and Sampling Theory requires basic knowledge of mathematical analysis, but efforts were made to keep the exposition clear and the chapters self-contained. This volume will appeal to researchers and graduate...

  16. A Closed-Form Approximation Solution for an Inventory Model with Supply Disruptions and Non-ZIO Reorder Policy

    Directory of Open Access Journals (Sweden)

    David Heimann

    2007-08-01

    In supply chains, domestic and global, a producer must decide on an optimal quantity of items to order from suppliers and at what inventory level to place this order (the EOQ problem). We discuss how to modify the EOQ in the face of failures and recoveries by the supplier. This is the EOQ with disruption problem (EOQD). The supplier makes transitions between being capable and not being capable of filling an order in a Markov failure-and-recovery process. The producer adjusts the reorder point and the inventories to provide a margin of safety. Numerical solutions to the EOQD problem have been developed. In addition, a closed-form approximate solution has previously been developed for the zero-inventory option (ZIO), where the inventory level on reordering is set to zero. This paper develops a closed-form approximate solution for the EOQD problem when the reorder point can be non-zero, obtaining for that situation an optimal reorder quantity and optimal reorder point that improve on the optimal ZIO solution. The paper also supplies numerical examples demonstrating the cost savings against the ZIO situation, as well as the accuracy of the approximation technique.
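    For orientation, the disruption-free baseline that the EOQD model generalizes is the classic EOQ formula Q* = sqrt(2DK/h). A minimal sketch with hypothetical parameter values:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity Q* = sqrt(2*D*K/h): the no-disruption
    baseline that the EOQD closed forms reduce to when the supplier never fails."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def avg_cost(q, demand_rate, order_cost, holding_cost):
    """Average cost per unit time: ordering K*D/Q plus holding h*Q/2."""
    return order_cost * demand_rate / q + holding_cost * q / 2.0

# Hypothetical data: D = 1000 units/yr, K = $50/order, h = $2/unit/yr.
q_star = eoq(1000.0, 50.0, 2.0)
```

    The EOQD analysis adds supplier failure/recovery rates to this picture and shifts both the order quantity and the reorder point.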

  17. Gas liquid sampling for closed canisters in KW Basin - test plan

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1995-01-01

    Test procedures for the gas/liquid sampler. Characterization of the spent nuclear fuel (SNF) sealed in canisters at the KW Basin is needed to determine the condition of SNF stored wet. Samples of the liquid and the gas in the closed canisters will be taken to obtain characterization information. Sampling equipment has been designed to retrieve gas and liquid from the closed canisters in the KW Basin. This plan outlines the test requirements for this developmental sampling equipment.

  18. Closed orbit feedback with digital signal processing

    International Nuclear Information System (INIS)

    Chung, Y.; Kirchman, J.; Lenkszus, F.

    1994-01-01

    The closed orbit feedback experiment conducted on the SPEAR using the singular value decomposition (SVD) technique and digital signal processing (DSP) is presented. The beam response matrix, defined as beam motion at beam position monitor (BPM) locations per unit kick by corrector magnets, was measured and then analyzed using SVD. Ten BPMs, sixteen correctors, and the eight largest SVD eigenvalues were used for closed orbit correction. The maximum sampling frequency for the closed loop feedback was measured at 37 Hz. Using the proportional and integral (PI) control algorithm with the gains K_P = 3 and K_I = 0.05 and the open-loop bandwidth corresponding to 1% of the sampling frequency, a correction bandwidth (-3 dB) of approximately 0.8 Hz was achieved. Time domain measurements showed that the response time of the closed loop feedback system for 1/e decay was approximately 0.25 second. This result implies ∼100 Hz correction bandwidth for the planned beam position feedback system for the Advanced Photon Source storage ring with the projected 4-kHz sampling frequency
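    The SVD step of such a correction, inverting the response matrix while discarding weak singular values, can be sketched as follows. The data here are random stand-ins, not SPEAR measurements; only the 10/16/8 dimensions mirror the experiment above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy orbit correction: R maps corrector kicks to BPM readings.
n_bpm, n_corr, n_keep = 10, 16, 8
R = rng.standard_normal((n_bpm, n_corr))        # stand-in response matrix
orbit = rng.standard_normal(n_bpm)              # distorted closed orbit at BPMs

U, s, Vt = np.linalg.svd(R, full_matrices=False)
# Truncated pseudo-inverse: keep only the n_keep largest singular values,
# discarding poorly resolved directions (the 8-eigenvalue choice above).
s_inv = np.where(np.arange(s.size) < n_keep, 1.0 / s, 0.0)
kicks = -Vt.T @ (s_inv * (U.T @ orbit))         # corrector settings

residual = orbit + R @ kicks                    # orbit after applying the kicks
```

    Truncation leaves a residual in the discarded directions but keeps the corrector strengths well conditioned.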

  19. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample which is often very complex in a finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
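    The criterion being approximated is the standard BIC; as a reminder of the quantity involved (a generic formula, not the paper's design-based variant):

```python
import math

def bic(max_log_likelihood, k, n):
    """Bayesian Information Criterion: BIC = k*ln(n) - 2*ln(L-hat),
    for a model with k parameters fit to n observations.  Lower is better;
    each extra parameter costs ln(n)."""
    return k * math.log(n) - 2.0 * max_log_likelihood

# With equal fit, a model with one extra parameter is penalized by ln(n):
penalty = bic(-50.0, 3, 100) - bic(-50.0, 2, 100)
```

    The design-based approximation of the paper replaces the often intractable exact sample likelihood in this formula with a design-weighted surrogate.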

  20. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    Science.gov (United States)

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
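    The order-statistics method mentioned above fixes its sample size from a simple inequality: with n runs, the sample maximum bounds the p-th percentile with confidence 1 - p^n. A sketch of the first-order, one-sided computation:

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n with 1 - coverage**n >= confidence, i.e. the number of runs
    needed so the sample maximum bounds the `coverage` percentile with the
    stated confidence (first-order, one-sided Wilks criterion)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n95 = wilks_sample_size()   # the familiar 95/95 run count
```

    This is why the Wilks bound is cheap compared with full Monte Carlo: a few dozen runs suffice for a 95/95 statement, regardless of the model's complexity.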

  1. Eigenvalue sensitivity of sampled time systems operating in closed loop

    Science.gov (United States)

    Bernal, Dionisio

    2018-05-01

    The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; a consequence is significant error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete time implementation of the controller is not considered.
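    The complex-perturbation idea for differentiating the matrix exponential can be sketched compactly: for real A and B, Im[exp(A + ihB)]/h approximates the directional derivative without subtractive cancellation, so h can be taken tiny. The sketch below uses illustrative matrices and a simple scaling-and-squaring exponential; it is not the paper's implementation:

```python
import numpy as np

def expm(A, terms=20):
    """Matrix exponential via scaling-and-squaring with a truncated Taylor
    series (adequate for the small, well-scaled matrices used here)."""
    A = np.asarray(A, dtype=complex)
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1e-16)))) + 1)
    X = A / 2 ** s
    E = np.eye(A.shape[0], dtype=complex)
    T = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        T = T @ X / k                   # accumulate X^k / k!
        E = E + T
    for _ in range(s):                  # undo the scaling by repeated squaring
        E = E @ E
    return E

def dexpm(A, B, h=1e-20):
    """Directional derivative d/de expm(A + e*B) at e = 0 via a complex
    perturbation: Im(expm(A + i*h*B)) / h, free of subtractive cancellation."""
    return np.imag(expm(A + 1j * h * B)) / h

# Illustrative system matrix A and perturbation direction B.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
D = dexpm(A, B)
```

    Unlike a finite difference, the step h is not limited by cancellation, which is what makes the complex-perturbation evaluation efficient and accurate.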

  2. Super-sample covariance approximations and partial sky coverage

    Science.gov (United States)

    Lacasa, Fabien; Lima, Marcos; Aguena, Michel

    2018-04-01

    Super-sample covariance (SSC) is the dominant source of statistical error on large scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat sky limit, finding that they are accurate at the 15% and 30-35% levels, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC in an arbitrary survey mask geometry, such as large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full sky and flat sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-z covariance, i.e., redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and find that this does not have noticeable effects on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat sky approximation fails.

  3. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  4. Integral transport multiregion geometrical shadowing factor for the approximate collision probability matrix calculation of infinite closely packed lattices

    International Nuclear Information System (INIS)

    Jowzani-Moghaddam, A.

    1981-01-01

    An integral transport method of calculating the geometrical shadowing factor in multiregion annular cells for infinite closely packed lattices in cylindrical geometry is developed. This analytical method has been programmed in the TPGS code. The method is based upon the properties of the integral transport method for a nonuniform body, which together with Bonalumi's approximations allows the determination of the approximate multiregion collision probability matrix for infinite closely packed lattices with sufficient accuracy. The multiregion geometrical shadowing factors have been calculated for variations in fuel pin annular segment rings in a geometry of annular cells. These shadowing factors can then be used in the calculation of neutron transport from one annulus to another in an infinite lattice. The results of this new geometrical shadowing and collision probability matrix are compared with the Dancoff-Ginsburg correction and with the probability matrix using constant shadowing on Yankee fuel elements in an infinite lattice. In these cases the Dancoff-Ginsburg correction factor and the collision probability matrix using constant shadowing differ by at most 6.2% and 6%, respectively

  5. A CLOSED-FORM EXPRESSION APPROXIMATING THE MIE SOLUTION FOR THE REAL-IN-LINE TRANSMISSION OF CERAMICS WITH SPHERICAL INCLUSIONS OR PORES

    Directory of Open Access Journals (Sweden)

    Pabst W.

    2013-06-01

    A new closed-form expression is presented for estimating the real-in-line transmission of ceramics consisting of non-absorbing phases as a function of the inclusion or pore size. The classic approximations to the exact Mie solution of the scattering problem for spheres are recalled (Rayleigh, Fraunhofer, Rayleigh-Gans-Debye/RGD, van de Hulst), and it is recalled that the large-size variant of the RGD approximation is the basis of the Apetz-van-Bruggen approach. All approximations and our closed-form expression are compared mutually and vis-a-vis the exact Mie solution. A parametric study is performed for monochromatic light in the visible range (600 nm) for two model systems corresponding to composites of yttrium aluminum garnet (YAG, refractive index 1.832) with spherical alumina inclusions (refractive index 1.767), and to porous YAG ceramics with spherical pores (refractive index 1). It is shown that for the YAG-alumina composites to achieve maximum transmission with inclusion volume fractions of 1% (and slab thickness 1 mm), inclusion sizes of up to 100 nm can be tolerated, while pore sizes of 100 nm will be completely detrimental for porosities as low as 0.1%. While the van-de-Hulst approximation is excellent for small phase contrast and low concentration of inclusions, it fails for principal reasons for small inclusion or pore sizes. Our closed-form expression, while less precise in the aforementioned special case, is always the safer choice and performs better in most cases of practical interest, including high phase contrasts and high concentrations of inclusions or pores.

  6. SU-F-T-144: Analytical Closed Form Approximation for Carbon Ion Bragg Curves in Water

    Energy Technology Data Exchange (ETDEWEB)

    Tuomanen, S; Moskvin, V; Farr, J [St. Jude Children’s Research Hospital, Memphis, TN (United States)

    2016-06-15

    Purpose: Semi-empirical modeling is a powerful computational method in radiation dosimetry. A set of approximations exists for the proton depth dose distribution (DDD) in water; however, the modeling is more complicated for carbon ions due to fragmentation. This study addresses this by providing and evaluating a new methodology for DDD modeling of carbon ions in water. Methods: The FLUKA Monte Carlo (MC) general-purpose transport code was used to simulate carbon DDDs for energies of 100–400 MeV in water as reference data for model benchmarking. Starting from Thomas Bortfeld’s closed-form equation approximating proton Bragg curves, we derived the critical constants for a beam of carbon ions by applying the radiation transport models of Lee et al. and Geiger to our simulated carbon curves. We hypothesized that including a new exponential (κ) residual-distance parameter in Bortfeld’s fluence reduction relation would improve DDD modeling for carbon ions. We are introducing an additional term, to be added to Bortfeld’s equation, to describe the fragmentation tail. This term accounts for the pre-peak dose from nuclear fragments (NF). In the post-peak region, the NF transport will be treated as new beams utilizing the Glauber model for interaction cross sections and the Abrasion-Ablation fragmentation model. Results: The carbon-beam-specific constants in the developed model were determined to be p = 1.75, β = 0.008 cm^-1, γ = 0.6, α = 0.0007 cm MeV, σ_mono = 0.08, and the new exponential parameter κ = 0.55. This produced a close match for the plateau part of the curve (maximum deviation 6.37%). Conclusion: The derived semi-empirical model provides an accurate approximation of the MC-simulated clinical carbon DDDs. This is the first direct semi-empirical simulation for the dosimetry of therapeutic carbon ions. The accurate modeling of the NF tail in the carbon DDD will provide key insight into the formation of the distal-edge dose deposition.

  7. Volume Modulated Arc Therapy (VMAT) for pulmonary Stereotactic Body Radiotherapy (SBRT) in patients with lesions in close approximation to the chest wall

    Directory of Open Access Journals (Sweden)

    Thomas J. FitzGerald

    2013-02-01

    Full Text Available Chest wall pain and discomfort have been recognized as a significant late effect of radiation therapy in historical and modern treatment models. Stereotactic Body Radiotherapy (SBRT) is becoming an important treatment tool in oncology care for patients with intrathoracic lesions. For lesions in close approximation to the chest wall, including lesions requiring motion management, SBRT techniques can deliver high dose to the chest wall. As an unintended target of consequence, there is the possibility of generating significant chest wall pain and discomfort as a late effect of therapy. The purpose of this paper is to evaluate the potential role of Volume Modulated Arc Therapy (VMAT) technologies in decreasing chest wall dose in SBRT treatment of pulmonary lesions in close approximation to the chest wall. Ten patients with pulmonary lesions of various sizes and topography in close approximation to the chest wall were selected for retrospective review. All volumes, including target, chest wall, ribs, and lung, were contoured with maximal intensity projection maps and four-dimensional computed tomography planning. Radiation therapy planning compared static techniques, including Intensity Modulated Radiation Therapy, with VMAT therapy to a dose of 60 Gy in 12 Gy fractions. Dose-volume histograms for rib, chest wall, and lung were compared between plans with statistical analysis. In all patients, dose and volume were improved to ribs and chest wall using VMAT technologies compared to static field techniques. On average, the chest wall volume receiving 30 Gy was improved by 72% and the rib volume by 60%. In only one patient did the VMAT treatment technique increase the pulmonary volume receiving 20 Gy (V20). VMAT technology has the potential to limit radiation dose to sensitive chest wall regions in patients with lesions in close approximation to this structure. This would also have potential value for lesions treated with SBRT in other body regions where targets abut critical

  8. Gas and liquid sampling for closed canisters in KW Basin - Work Plan

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1995-01-01

    Work plan for the design and fabrication of a gas/liquid sampler for closed-canister sampling in the KW Basin. This document defines the tasks associated with the design, fabrication, assembly, and acceptance testing of equipment necessary for gas and liquid sampling of the Mark I and Mark II canisters in the K-West basin. Sampling of the gas space and the remaining liquid inside the closed canisters will be used to help understand any changes to the fuel elements and the canisters. Specifically, this work plan will define the scope of work and required task structure, list the technical requirements, describe design configuration control and verification methodologies, detail quality assurance requirements, and present a baseline estimate and schedule

  9. ON SOME APPROXIMATIONS TO THE CLOSED SET OF NONTRIVIAL SOLUTIONS OF THE GINZBURG-LANDAU EQUATIONS

    Directory of Open Access Journals (Sweden)

    A. A. Fonarev

    2014-01-01

    Full Text Available The possibility of using a projective iterative method to find approximations to the closed set of nontrivial generalised solutions of a boundary value problem for the Ginzburg-Landau equations of the phenomenological theory of superconductivity is investigated. The projective iterative method combines a projective method with an iterative process. The generalised solutions of the boundary value problem for the Ginzburg-Landau equations are critical points of the free-energy functional of a superconductor.

  10. Functional approximations to posterior densities: a neural network approach to efficient sampling

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2002-01-01

    The performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate
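As the record notes, the candidate density must be "close" to the target. A minimal self-normalized importance sampling sketch (standard library only; the target/candidate choices and all names are illustrative, not from the paper):

```python
import math
import random

random.seed(0)

def target_pdf(x):
    """Unnormalized standard normal target density."""
    return math.exp(-0.5 * x * x)

def candidate_pdf(x, scale=2.0):
    """Normal(0, scale^2) candidate; wider than the target so the
    importance weights target/candidate stay bounded."""
    return math.exp(-0.5 * (x / scale) ** 2) / (scale * math.sqrt(2 * math.pi))

def importance_sample_mean(h, n=50_000, scale=2.0):
    """Self-normalized importance sampling estimate of E[h(X)] under the target."""
    num = den = 0.0
    for _ in range(n):
        x = random.gauss(0.0, scale)      # draw from the candidate
        w = target_pdf(x) / candidate_pdf(x, scale)
        num += w * h(x)
        den += w
    return num / den

# Under the standard normal target, E[X] = 0 and E[X^2] = 1.
mean_est = importance_sample_mean(lambda x: x)
var_est = importance_sample_mean(lambda x: x * x)
```

A candidate with lighter tails than the target would instead produce unbounded weights and a high-variance estimator, which is exactly the "closeness" issue the abstract refers to.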

  11. Tracking control of a closed-chain five-bar robot with two degrees of freedom by integration of an approximation-based approach and mechanical design.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J

    2012-10-01

    The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the stable and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate for the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.

  12. Combination closed-cycle refrigerator/liquid-He4 cryostat for e- damage of bulk samples

    International Nuclear Information System (INIS)

    Johnson, E.C.

    1987-01-01

    A closed-cycle refrigerator/cryostat system for use in ultrasonic studies of electron-irradiation-damaged bulk specimens is described. The closed-cycle refrigerator provides a convenient means for long-term (several days) sample irradiation at low temperatures. A neon-filled "thermal diode" is employed to permit efficient cooling of the sample, via liquid helium, below the base temperature of the refrigerator

  13. Gas and liquid sampling for closed canisters in K-West basins - functional design criteria

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1994-01-01

    The purpose of this document is to provide functions and requirements for the design and fabrication of equipment for sampling closed canisters in the K-West basin. The samples will be used to help determine the state of the fuel elements in closed canisters. The characterization information obtained will support evaluation and development of processes required for safe storage and disposition of Spent Nuclear Fuel (SNF) materials

  14. A novel condition for stable nonlinear sampled-data models using higher-order discretized approximations with zero dynamics.

    Science.gov (United States)

    Zeng, Cheng; Liang, Shan; Xiang, Shuwen

    2017-05-01

    Continuous-time systems are usually modelled in the form of ordinary differential equations arising from physical laws. However, before these models can be used in practice, or their data utilized, analyzed, or transmitted, the systems must first be discretized. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than similar previously presented results. Moreover, under the presented condition, stability of the higher-order approximate model with stable zero dynamics implies stability of the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is, surprisingly, associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends existing methods, which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
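The contrast between the Euler model and a higher-order approximate discretized model can be illustrated on a scalar nonlinear system whose exact flow is known in closed form. This toy example is my construction, not the paper's; it only shows that the second-order Taylor model tracks the exact sampled-data map more closely over one sampling period.

```python
# Nonlinear system dx/dt = f(x) = -x^3, with exact flow
# x(T) = x0 / sqrt(1 + 2*x0^2*T).

def f(x):
    return -x ** 3

def euler_step(x, T):
    """First-order (Euler) approximate sampled-data model."""
    return x + T * f(x)

def taylor2_step(x, T):
    """Second-order Taylor approximate sampled-data model:
    x + T*f(x) + (T^2/2)*f'(x)*f(x), with f'(x) = -3x^2."""
    return x + T * f(x) + 0.5 * T ** 2 * (-3 * x ** 2) * f(x)

def exact_step(x, T):
    """Exact sampled-data model obtained from the closed-form flow."""
    return x / (1 + 2 * x * x * T) ** 0.5

x0, T = 1.0, 0.1
err_euler = abs(euler_step(x0, T) - exact_step(x0, T))
err_taylor2 = abs(taylor2_step(x0, T) - exact_step(x0, T))
```

Shrinking the sampling period T reduces both errors, but the higher-order model's error shrinks faster, which is the motivation for going beyond the Euler approximation.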

  15. Polygonal approximation and scale-space analysis of closed digital curves

    CERN Document Server

    Ray, Kumar S

    2013-01-01

    This book covers the most important topics in the area of pattern recognition, object recognition, computer vision, robot vision, medical computing, computational geometry, and bioinformatics systems. Students and researchers will find a comprehensive treatment of polygonal approximation and its real life applications. The book not only explains the theoretical aspects but also presents applications with detailed design parameters. The systematic development of the concept of polygonal approximation of digital curves and its scale-space analysis are useful and attractive to scholars in many fi

  16. A fast direct sampling algorithm for equilateral closed polygons

    International Nuclear Information System (INIS)

    Cantarella, Jason; Duplantier, Bertrand; Shonkwiler, Clayton; Uehara, Erica

    2016-01-01

    Sampling equilateral closed polygons is of interest in the statistical study of ring polymers. Over the past 30 years, previous authors have proposed a variety of simple Markov chain algorithms (but have not been able to show that they converge to the correct probability distribution) and complicated direct samplers (which require extended-precision arithmetic to evaluate numerically unstable polynomials). We present a simple direct sampler which is fast and numerically stable, and analyze its runtime using a new formula for the volume of equilateral polygon space as a Dirichlet-type integral. (paper)

  17. Kirchhoff approximation and closed-form expressions for atom-surface scattering

    International Nuclear Information System (INIS)

    Marvin, A.M.

    1980-01-01

    In this paper an approximate solution for atom-surface scattering is presented beyond the physical optics approximation. The potential is well represented by a hard corrugated surface but includes an attractive tail in front. The calculation is carried out analytically by two different methods, and the limit of validity of our formulas is well established in the text. In contrast with other workers, I find those expressions to be exact in both limits of small (Rayleigh region) and large momenta (classical region), with the correct behavior at the threshold. The result is attained through a particular use of the extinction theorem in writing the scattered amplitudes, hitherto not employed, and not for particular boundary values of the field. An explicit evaluation of the field on the surface shows in fact the present formulas to be simply related to the well known Kirchhoff approximation (KA) or more generally to an ''extended'' KA fit to the potential model above. A possible application of the theory to treat strong resonance-overlapping effects is suggested in the last part of the work

  18. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  19. Sample summary report for ARG 1 pressure tube sample

    International Nuclear Information System (INIS)

    Belinco, C.

    2006-01-01

    The ARG 1 sample is made from an unirradiated Zr-2.5% Nb pressure tube. The sample has a 103.4 mm ID, a 112 mm OD, and approximately 500 mm length. A punch mark was made very close to one end of the sample. The punch mark indicates the 12 o'clock position and also identifies the face of the tube for making all the measurements. The ARG 1 sample contains flaws on the ID and OD surfaces. There was no intentional flaw within the wall of the pressure tube sample. Once the flaws were machined, the pressure tube sample was covered from the outside to hide the OD flaws. Approximately 50 mm of pressure tube length was left open at both ends to facilitate holding the sample in the fixtures for inspection. No flaw was machined in this 50 mm zone at either end of the pressure tube sample. A total of 20 flaws were machined in the ARG 1 sample. Of these, 16 flaws were on the OD surface and the remaining 4 on the ID surface of the pressure tube. The flaws were characterized into various groups such as axial flaws, circumferential flaws, etc

  20. Approximate Models for Closed-Loop Trajectory Tracking in Underactuated Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — Control of robotic systems, as a field, spans both traditional closed-loop feedback techniques and modern machine learning strategies, which are primarily open-loop....

  1. Approximating centrality in evolving graphs: toward sublinearity

    Science.gov (United States)

    Priest, Benjamin W.; Cybenko, George

    2017-05-01

    The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
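As a concrete illustration of the sketching primitive mentioned above, here is a minimal CountSketch applied to an edge stream to estimate degree centrality. This is a toy version of the idea, not the authors' semi-streaming algorithm; all class names, hash choices, and parameters are illustrative.

```python
import random
import statistics

class CountSketch:
    """Minimal CountSketch frequency estimator (depth x width counters)."""
    P = 2_147_483_647  # Mersenne prime for the pairwise-independent hash family

    def __init__(self, depth=5, width=101, seed=7):
        rng = random.Random(seed)
        self.width = width
        self.rows = [[0] * width for _ in range(depth)]
        # Per-row coefficients for the bucket hash and the sign hash.
        self.coef = [(rng.randrange(1, self.P), rng.randrange(self.P),
                      rng.randrange(1, self.P), rng.randrange(self.P))
                     for _ in range(depth)]

    def _bucket(self, i, x):
        a, b, _, _ = self.coef[i]
        return (a * x + b) % self.P % self.width

    def _sign(self, i, x):
        _, _, c, d = self.coef[i]
        return 1 if (c * x + d) % self.P % 2 else -1

    def add(self, x):
        for i, row in enumerate(self.rows):
            row[self._bucket(i, x)] += self._sign(i, x)

    def estimate(self, x):
        # Median across rows damps collision noise from other items.
        return statistics.median(self._sign(i, x) * row[self._bucket(i, x)]
                                 for i, row in enumerate(self.rows))

# Edge stream: node 0 is a hub connected to nodes 1..50.
sketch = CountSketch()
for u, v in [(0, v) for v in range(1, 51)]:
    sketch.add(u)   # each endpoint's degree count is incremented once per edge
    sketch.add(v)

hub_degree = sketch.estimate(0)   # true degree is 50
```

Memory is depth × width counters regardless of how many nodes appear in the stream, which is the sublinearity the paper is after for degree centrality.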

  2. Approximate number word knowledge before the cardinal principle.

    Science.gov (United States)

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge-understanding the relation between the count words and the approximate magnitudes of sets-is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge-before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
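The single-level building block, weighted least squares over random sample locations, can be sketched as follows. This is pure Python via the normal equations; the weighting scheme and function names are illustrative assumptions, not the paper's optimal sampling distribution or its multilevel construction.

```python
import random

def polynomial_wls(xs, ys, ws, degree=2):
    """Weighted least squares fit in the monomial basis:
    solve (A^T W A) c = A^T W y for the coefficient vector c."""
    m = degree + 1
    # Assemble the normal equations directly.
    ata = [[sum(w * x ** (i + j) for x, w in zip(xs, ws)) for j in range(m)]
           for i in range(m)]
    aty = [sum(w * y * x ** i for x, y, w in zip(xs, ys, ws)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            fac = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= fac * ata[col][c]
            aty[r] -= fac * aty[col]
    # Back substitution.
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = aty[r] - sum(ata[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = s / ata[r][r]
    return coeffs

random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(200)]   # random sample locations
ys = [1 + 2 * x + 3 * x * x for x in xs]           # exact quadratic, no noise
ws = [1.0 for _ in xs]                             # uniform weights for this sketch
coeffs = polynomial_wls(xs, ys, ws)                # should recover [1, 2, 3]
```

The multilevel method of the abstract would combine many such fits computed from samples of different discretization accuracy; the sketch above is only the innermost projection step.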

  4. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  5. Approximation of the inverse G-frame operator

    Indian Academy of Sciences (India)

    ... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximating the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of the G-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.

  6. Integrative analysis of single nucleotide polymorphisms and gene expression efficiently distinguishes samples from closely related ethnic populations

    Directory of Open Access Journals (Sweden)

    Yang Hsin-Chou

    2012-07-01

    Full Text Available Abstract Background Ancestry informative markers (AIMs) are a type of genetic marker that is informative for tracing the ancestral ethnicity of individuals. Application of AIMs has gained substantial attention in population genetics, forensic sciences, and medical genetics. Single nucleotide polymorphisms (SNPs), the materials of AIMs, are useful for classifying individuals from distinct continental origins but cannot discriminate individuals with subtle genetic differences from closely related ancestral lineages. Proof-of-principle studies have shown that gene expression (GE) is also a heritable human variation that exhibits differential intensity distributions among ethnic groups. GE supplies ethnic information supplemental to SNPs; this motivated us to integrate SNP and GE markers to construct AIM panels with a reduced number of required markers and provide high accuracy in ancestry inference. Few studies in the literature have considered GE in this aspect, and none have integrated SNP and GE markers to aid classification of samples from closely related ethnic populations. Results We integrated a forward variable selection procedure into flexible discriminant analysis to identify key SNP and/or GE markers with the highest cross-validation prediction accuracy. By analyzing genome-wide SNP and/or GE markers in 210 independent samples from four ethnic groups in the HapMap II Project, we found that average testing accuracies for a majority of classification analyses were quite high, except for SNP-only analyses that were performed to discern study samples containing individuals from two close Asian populations. The average testing accuracies ranged from 0.53 to 0.79 for SNP-only analyses and increased to around 0.90 when GE markers were integrated together with SNP markers for the classification of samples from closely related Asian populations. Compared to GE-only analyses, integrative analyses of SNP and GE markers showed comparable testing

  7. Local approximation of a metapopulation's equilibrium.

    Science.gov (United States)

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
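For reference, Levins's model dp/dt = c·p·(1 − p) − e·p has the nontrivial equilibrium p* = 1 − e/c (for c > e), which is the quantity the patch occupation probabilities above are compared against. A quick numerical check of the deterministic model (not the spatial stochastic model of the paper):

```python
# Levins's metapopulation model: dp/dt = c*p*(1 - p) - e*p,
# with colonization rate c and extinction rate e.

def levins_equilibrium(c, e, p0=0.1, dt=0.01, steps=20_000):
    """Integrate the occupancy ODE with forward Euler until it settles."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

c, e = 2.0, 1.0
p_star = levins_equilibrium(c, e)   # should approach 1 - e/c = 0.5
```

The paper's bounds quantify how far the spatially explicit, random-patch occupation probabilities can drift from this mean-field value.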

  8. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of an optimal value from solving the transformed model and show that, with probability approaching one at exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
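The core SAA idea, replacing an expected-value objective by an average over N draws and watching the minimizer converge as N grows, can be shown on a toy one-dimensional problem with a known solution (entirely my illustration, not the supply-chain model of the paper):

```python
import random

random.seed(3)

def saa_minimizer(samples):
    """Sample average approximation of min_x E[(x - xi)^2]:
    the SAA objective (1/N) * sum_i (x - xi_i)^2 is minimized at the sample mean."""
    return sum(samples) / len(samples)

# With xi ~ Uniform(0, 1), the true minimizer of E[(x - xi)^2] is E[xi] = 0.5.
true_minimizer = 0.5
errors = []
for n in (10, 100, 10_000):
    draws = [random.random() for _ in range(n)]
    errors.append(abs(saa_minimizer(draws) - true_minimizer))
```

The abstract's convergence result is the rigorous version of this picture: with probability approaching one at an exponential rate, the SAA optimal value approaches the true one as the sample size increases.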

  9. Ancilla-approximable quantum state transformations

    International Nuclear Information System (INIS)

    Blass, Andreas; Gurevich, Yuri

    2015-01-01

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation

  10. Ancilla-approximable quantum state transformations

    Energy Technology Data Exchange (ETDEWEB)

    Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.

  11. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-01-01

    We develop a non-linear approximation of the expensive Bayesian formula. This non-linear approximation is applied directly to Polynomial Chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman update formula is a particular case of this update.

  12. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian formula. This non-linear approximation is applied directly to Polynomial Chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman update formula is a particular case of this update.
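The claim that the Kalman update is a particular case of the Bayesian update can be checked directly in the scalar linear-Gaussian case, where the conjugate Bayes posterior and the Kalman-gain form coincide (a standard textbook identity, sketched here; not the paper's Polynomial Chaos construction):

```python
def bayes_update_gaussian(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian Bayes update: the posterior precision is the
    sum of the prior and likelihood precisions."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

def kalman_update(prior_mean, prior_var, obs, obs_var):
    """Scalar Kalman update with gain K = P / (P + R)."""
    gain = prior_var / (prior_var + obs_var)
    return prior_mean + gain * (obs - prior_mean), (1.0 - gain) * prior_var

# Same prior N(0, 4), same observation y = 2 with noise variance 1.
bayes_post = bayes_update_gaussian(0.0, 4.0, 2.0, 1.0)
kalman_post = kalman_update(0.0, 4.0, 2.0, 1.0)
```

Both routes give posterior mean 1.6 and variance 0.8; the non-linear approximation in the abstract generalizes this linear-Gaussian special case.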

  13. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  14. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  15. Use and Subtleties of Saddlepoint Approximation for Minimum Mean-Square Error Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas; Nuttall, Albert H.; Hansen, Lars Kai

    2008-01-01

    integral representation. However, the examples also demonstrate that when two saddle points are close or coalesce, then saddle-point approximation based on isolated saddle points is not valid. A saddle-point approximation based on two close or coalesced saddle points is derived, and in the examples the validity and accuracy of the derivation is demonstrated...

  16. Sample preparation techniques based on combustion reactions in closed vessels - A brief overview and recent applications

    International Nuclear Information System (INIS)

    Flores, Erico M.M.; Barin, Juliano S.; Mesko, Marcia F.; Knapp, Guenter

    2007-01-01

    In this review, a general discussion of sample preparation techniques based on combustion reactions in closed vessels is presented. Applications for several kinds of samples are described, taking into account the literature data reported in the last 25 years. The operational conditions as well as the main characteristics and drawbacks are discussed for bomb combustion, oxygen flask and microwave-induced combustion (MIC) techniques. Recent applications of MIC techniques are discussed with special attention to samples not well digested by conventional microwave-assisted wet digestion, such as coal, and to the subsequent determination of halogens.

  17. Closed-Form Representations of the Density Function and Integer Moments of the Sample Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Serge B. Provost

    2015-07-01

    Full Text Available This paper provides a simplified representation of the exact density function of R, the sample correlation coefficient. The odd and even moments of R are also obtained in closed forms. Being expressed in terms of generalized hypergeometric functions, the resulting representations are readily computable. Some numerical examples corroborate the validity of the results derived herein.

  18. On approximation and energy estimates for delta 6-convex functions.

    Science.gov (United States)

    Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid

    2018-01-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted [Formula: see text]-norm.

  19. Relative humidity effects on water vapour fluxes measured with closed-path eddy-covariance systems with short sampling lines

    DEFF Research Database (Denmark)

    Fratini, Gerardo; Ibrom, Andreas; Arriga, Nicola

    2012-01-01

    It has been formerly recognised that increasing relative humidity in the sampling line of closed-path eddy-covariance systems leads to increasing attenuation of water vapour turbulent fluctuations, resulting in strong latent heat flux losses. This occurrence has been analyzed for very long (50 m...... from eddy-covariance systems featuring short (4 m) and very short (1 m) sampling lines running at the same clover field and show that relative humidity effects persist also for these setups, and should not be neglected. Starting from the work of Ibrom and co-workers, we propose a mixed method...... and correction method proposed here is deemed applicable to closed-path systems featuring a broad range of sampling lines, and indeed applicable also to passive gases as a special case. The methods described in this paper are incorporated, as processing options, in the free and open-source eddy...

  20. Pipe closing device

    International Nuclear Information System (INIS)

    Klahn, F.C.; Nolan, J.H.; Wills, C.

    1979-01-01

    The closing device closes the upper end of a support tube for monitoring samples. It meshes with the upper connecting piece of the monitoring sample capsule, and loads the capsule within the bore of the support tube, so that it is fixed but can be released. The closing device consists of an interlocking component with a chamber and several ratchets which hang down. The interlocking component surrounds the actuating component for positioning the ratchets. The interlocking and actuating components are movable axially relative to each other. (DG) [de

  1. On approximation and energy estimates for delta 6-convex functions

    Directory of Open Access Journals (Sweden)

    Muhammad Shoaib Saleem

    2018-02-01

    Full Text Available Abstract The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted $L^{2}$-norm.

  2. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the

  3. Sampling based motion planning with reachable volumes: Application to manipulators and closed chain systems

    KAUST Repository

    McMahon, Troy

    2014-09-01

    © 2014 IEEE. Reachable volumes are a geometric representation of the regions the joints of a robot can reach. They can be used to generate constraint satisfying samples for problems including complicated linkage robots (e.g. closed chains and graspers). They can also be used to assist robot operators and to help in robot design. We show that reachable volumes have an O(1) complexity in unconstrained problems as well as in many constrained problems. We also show that reachable volumes can be computed in linear time and that reachable volume samples can be generated in linear time in problems without constraints. We experimentally validate reachable volume sampling, both with and without constraints on end effectors and/or internal joints. We show that reachable volume samples are less likely to be invalid due to self-collisions, making reachable volume sampling significantly more efficient for higher dimensional problems. We also show that these samples are easier to connect than others, resulting in better connected roadmaps. We demonstrate that our method can be applied to 262-dof, multi-loop, and tree-like linkages including combinations of planar, prismatic and spherical joints. In contrast, existing methods either cannot be used for these problems or do not produce good quality solutions.

  4. Sampling based motion planning with reachable volumes: Application to manipulators and closed chain systems

    KAUST Repository

    McMahon, Troy; Thomas, Shawna; Amato, Nancy M.

    2014-01-01

    © 2014 IEEE. Reachable volumes are a geometric representation of the regions the joints of a robot can reach. They can be used to generate constraint satisfying samples for problems including complicated linkage robots (e.g. closed chains and graspers). They can also be used to assist robot operators and to help in robot design. We show that reachable volumes have an O(1) complexity in unconstrained problems as well as in many constrained problems. We also show that reachable volumes can be computed in linear time and that reachable volume samples can be generated in linear time in problems without constraints. We experimentally validate reachable volume sampling, both with and without constraints on end effectors and/or internal joints. We show that reachable volume samples are less likely to be invalid due to self-collisions, making reachable volume sampling significantly more efficient for higher dimensional problems. We also show that these samples are easier to connect than others, resulting in better connected roadmaps. We demonstrate that our method can be applied to 262-dof, multi-loop, and tree-like linkages including combinations of planar, prismatic and spherical joints. In contrast, existing methods either cannot be used for these problems or do not produce good quality solutions.

  5. Quenched Approximation to ΔS = 1 K Decay

    International Nuclear Information System (INIS)

    Christ, Norman H.

    2005-01-01

    The importance of explicit quark loops in the amplitudes contributing to ΔS = 1, K meson decays raises potential ambiguities when these amplitudes are evaluated in the quenched approximation. Using the factorization of these amplitudes into short- and long-distance parts provided by the standard low-energy effective weak Hamiltonian, we argue that the quenched approximation can be conventionally justified if it is applied to the long-distance portion of each amplitude. The result is a reasonably well-motivated definition of the quenched approximation that is close to that employed in the RBC and CP-PACS calculations of these quantities

  6. Photoelectron spectroscopy and the dipole approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others]

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  7. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.

  8. Behaviour and design considerations for continuous flow closed-open-closed liquid microchannels.

    Science.gov (United States)

    Melin, Jessica; van der Wijngaart, Wouter; Stemme, Göran

    2005-06-01

    This paper introduces a method of combining open and closed microchannels in a single component in a novel way which couples the benefits of both open and closed microfluidic systems and introduces interesting on-chip microfluidic behaviour. Fluid behaviour in such a component, based on continuous pressure driven flow and surface tension, is discussed in terms of cross sectional flow behaviour, robustness, flow-pressure performance, and its application to microfluidic interfacing. The closed-open-closed microchannel possesses the versatility of upstream and downstream closed microfluidics along with open fluidic direct access. The device has the advantage of eliminating gas bubbles present upstream when these enter the open channel section. The unique behaviour of this device opens the door to applications including direct liquid sample interfacing without the need for additional and bulky sample tubing.

  9. Sampling Polya-Gamma random variates: alternate and approximate techniques

    OpenAIRE

    Windle, Jesse; Polson, Nicholas G.; Scott, James G.

    2014-01-01

    Efficiently sampling from the Pólya-Gamma distribution, ${PG}(b,z)$, is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the ${PG}(1,z)$ distribution. We build two new samplers that offer improved performance when sampling from the ${PG}(b,z)$ distribution when $b$ is not unity.
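
    As a hedged, illustrative aside (this is not the authors' sampler), the infinite sum-of-gammas representation of ${PG}(b,z)$ can be truncated to draw approximate variates; the function name and truncation level below are choices made for this sketch:

```python
import numpy as np

def polya_gamma_truncated(b, z, trunc=200, rng=None):
    """Approximate draw from PG(b, z) by truncating the infinite
    sum-of-gammas representation at `trunc` terms (a sketch, not
    the exact samplers discussed in the paper)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, trunc + 1)
    g = rng.gamma(shape=b, scale=1.0, size=trunc)   # g_k ~ Gamma(b, 1), i.i.d.
    denom = (k - 0.5) ** 2 + z ** 2 / (4.0 * np.pi ** 2)
    return np.sum(g / denom) / (2.0 * np.pi ** 2)
```

    Truncation leaves a small negative bias relative to the exact mean E[PG(b,z)] = (b/(2z)) tanh(z/2), which is one reason exact rejection samplers for ${PG}(1,z)$ are preferred in practice.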

  10. Rational approximations and quantum algorithms with postselection

    NARCIS (Netherlands)

    Mahadev, U.; de Wolf, R.

    2015-01-01

    We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We

  11. Molecular Detection and Identification of Zoonotic Microsporidia Spore in Fecal Samples of Some Animals with Close-Contact to Human

    Directory of Open Access Journals (Sweden)

    Zeinab ASKARI

    2015-10-01

    Full Text Available Background: Microsporidia species are obligatory intracellular agents that can infect all major animal groups including mammals, birds, fishes and insects. As worldwide reports of human infection are increasing, knowledge of the sources of infection, particularly zoonotic transmission, could be helpful. We aimed to detect zoonotic microsporidia spores in fecal samples from some animals with close contact to humans. Methods: Overall, 142 fecal samples were collected from animals with close contact to humans during 2012-2013. Trichrome-blue staining was performed, and DNA was then extracted from samples identified positive microscopically. Nested PCR was also carried out with primers targeting the SSU rRNA gene, and PCR products were sequenced. Results: From 142 stool samples, microsporidia spores were observed microscopically in 15 (10.56%) samples. En. cuniculi was found in the feces of 3 (15%) small white mice and 1 (10%) laboratory rabbit (totally 2.81%). Moreover, E. bieneusi was detected in 3 (10%) samples of sheep, 2 (5.12%) cattle, 1 (10%) rabbit, 3 (11.53%) cats and 2 (11.76%) ownership dogs (totally 7.74%). Phylogenetic analysis showed interesting data. This is the first study in Iran which identified E. bieneusi and En. cuniculi in fecal samples of laboratory animals with close contact to humans as well as domesticated animals and analyzed them in a phylogenetic tree. Conclusion: E. bieneusi is the most prevalent microsporidia species in animals. Our results can also alert us to the potentially zoonotic transmission of microsporidiosis.

  12. Inspecting close maternal relatedness: Towards better mtDNA population samples in forensic databases.

    Science.gov (United States)

    Bodner, Martin; Irwin, Jodi A; Coble, Michael D; Parson, Walther

    2011-03-01

    Reliable data are crucial for all research fields applying mitochondrial DNA (mtDNA) as a genetic marker. Quality control measures have been introduced to ensure the highest standards in sequence data generation, validation and a posteriori inspection. A phylogenetic alignment strategy has been widely accepted as a prerequisite for data comparability and database searches, for forensic applications, for reconstructions of human migrations and for correct interpretation of mtDNA mutations in medical genetics. There is continuing effort to enhance the number of worldwide population samples in order to contribute to a better understanding of human mtDNA variation. This has often led to the analysis of convenience samples collected for other purposes, which might not meet the quality requirement of random sampling for mtDNA data sets. Here, we introduce an additional quality control measure that deals with one aspect of this limitation: by combining autosomal short tandem repeat (STR) markers with mtDNA information, it helps to avoid the bias introduced by related individuals included in the same (small) sample. By STR analysis of individuals sharing their mitochondrial haplotype, pedigree construction and subsequent software-assisted calculation of likelihood ratios based on the allele frequencies found in the population, closely maternally related individuals can be identified and excluded. We also discuss scenarios that allow related individuals in the same set. An ideal population sample would be representative for its population: this new approach represents another contribution towards this goal. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Subquadratic medial-axis approximation in $\\mathbb{R}^3$

    Directory of Open Access Journals (Sweden)

    Christian Scheffer

    2015-09-01

    Full Text Available We present an algorithm that approximates the medial axis of a smooth manifold in $\\mathbb{R}^3$ which is given by a sufficiently dense point sample. The resulting, non-discrete approximation is shown to converge to the medial axis as the sampling density approaches infinity. While all previous algorithms guaranteeing convergence have a running time quadratic in the size $n$ of the point sample, we achieve a running time of at most $\\mathcal{O}(n\\log^3 n)$. While there is no subquadratic upper bound on the output complexity of previous algorithms for non-discrete medial axis approximation, the output of our algorithm is guaranteed to be of linear size.

  14. Approximating a DSM-5 Diagnosis of PTSD Using DSM-IV Criteria

    Science.gov (United States)

    Rosellini, Anthony J.; Stein, Murray B.; Colpe, Lisa J.; Heeringa, Steven G.; Petukhova, Maria V.; Sampson, Nancy A.; Schoenbaum, Michael; Ursano, Robert J.; Kessler, Ronald C.

    2015-01-01

    Background Diagnostic criteria for DSM-5 posttraumatic stress disorder (PTSD) are in many ways similar to DSM-IV criteria, raising the possibility that it might be possible to closely approximate DSM-5 diagnoses using DSM-IV symptoms. If so, the resulting transformation rules could be used to pool research data based on the two criteria sets. Methods The Pre-Post Deployment Study (PPDS) of the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) administered a blended 30-day DSM-IV and DSM-5 PTSD symptom assessment based on the civilian PTSD Checklist for DSM-IV (PCL-C) and the PTSD Checklist for DSM-5 (PCL-5). This assessment was completed by 9,193 soldiers from three US Army Brigade Combat Teams approximately three months after returning from Afghanistan. PCL-C items were used to operationalize conservative and broad approximations of DSM-5 PTSD diagnoses. The operating characteristics of these approximations were examined compared to diagnoses based on actual DSM-5 criteria. Results The estimated 30-day prevalence of DSM-5 PTSD based on conservative (4.3%) and broad (4.7%) approximations of DSM-5 criteria using DSM-IV symptom assessments were similar to estimates based on actual DSM-5 criteria (4.6%). Both approximations had excellent sensitivity (92.6-95.5%), specificity (99.6-99.9%), total classification accuracy (99.4-99.6%), and area under the receiver operating characteristic curve (0.96-0.98). Conclusions DSM-IV symptoms can be used to approximate DSM-5 diagnoses of PTSD among recently-deployed soldiers, making it possible to recode symptom-level data from earlier DSM-IV studies to draw inferences about DSM-5 PTSD. However, replication is needed in broader trauma-exposed samples to evaluate the external validity of this finding. PMID:25845710
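
    The operating characteristics quoted above reduce to simple confusion-matrix arithmetic; the sketch below (the counts in the usage are hypothetical, not the study's data) shows the computation:

```python
def operating_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity and total classification accuracy of a
    binary approximation against a gold-standard diagnosis."""
    sensitivity = tp / (tp + fn)                # true positives among actual cases
    specificity = tn / (tn + fp)                # true negatives among non-cases
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall agreement
    return sensitivity, specificity, accuracy
```

    For example, hypothetical counts tp=92, fp=4, fn=8, tn=896 give sensitivity 0.92, specificity ≈ 0.996 and accuracy 0.988, the kind of values reported for the conservative and broad approximations.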

  15. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  16. On the approximative normal values of multivalued operators in topological vector space

    International Nuclear Information System (INIS)

    Nguyen Minh Chuong; Khuat van Ninh

    1989-09-01

    In this paper the problem of approximation of normal values of multivalued linear closed operators from topological vector Mackey space into E-space is considered. Existence of normal value and convergence of approximative values to normal value are proved. (author). 4 refs

  17. APPROXIMATION OF FREE-FORM CURVE – AIRFOIL SHAPE

    Directory of Open Access Journals (Sweden)

    CHONG PERK LIN

    2013-12-01

    Full Text Available Approximation of free-form shape is essential in numerous engineering applications, particularly in the automotive and aircraft industries. Commercial CAD software for the approximation of free-form shape is based almost exclusively on parametric polynomials and rational parametric polynomials. A parametric curve is defined by a vector function of one independent variable, R(u) = (x(u), y(u), z(u)), where 0≤u≤1. Bézier representation is one of the parametric functions widely used in approximating free-form shape. Given a string of points assumed to be sufficiently dense to characterise the airfoil shape, it is desirable to approximate the shape with a Bézier representation. The expectation is that the representation function is close to the shape within an acceptable working tolerance. In this paper, the aim is to explore the use of manual and automated methods for approximating the section curve of an airfoil with a Bézier representation.
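
    A minimal version of such a least-squares Bézier fit might look as follows (uniform parameterisation is assumed for simplicity; the paper's manual and automated methods are not reproduced here):

```python
import numpy as np
from math import comb

def bernstein_matrix(u, n):
    """Rows are the Bernstein basis B_{i,n}(u) evaluated at parameters u."""
    u = np.asarray(u)
    return np.stack([comb(n, i) * u ** i * (1 - u) ** (n - i)
                     for i in range(n + 1)], axis=1)

def fit_bezier(points, degree=3):
    """Least-squares Bézier control points for a string of points,
    using a uniform parameterisation (chord-length parameterisation
    is the more common choice for airfoil data)."""
    pts = np.asarray(points, dtype=float)
    u = np.linspace(0.0, 1.0, len(pts))
    B = bernstein_matrix(u, degree)               # design matrix
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl                                   # (degree+1) x dim control points
```

    The fitted curve is then evaluated as bernstein_matrix(u, degree) @ ctrl, and the working tolerance is checked as the maximum deviation from the sampled points.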

  18. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
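
    In one dimension the scheme can be sketched as follows (the Legendre basis and the uniform sampling measure are illustrative choices for this sketch, not the paper's full multivariate setting):

```python
import numpy as np

def least_squares_poly(f, degree, n_samples, rng=None):
    """Least-squares polynomial approximation of f on [-1, 1] from
    i.i.d. uniform random evaluations, in a Legendre basis."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(-1.0, 1.0, size=n_samples)
    V = np.polynomial.legendre.legvander(x, degree)   # random design matrix
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)   # least-squares fit
    return lambda t: np.polynomial.legendre.legval(t, coef)
```

    Quasi-optimality hinges on the number of samples being large enough relative to the dimension of the polynomial space; with too few samples the random design matrix becomes ill-conditioned and the fit degrades.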

  19. Optical bistability without the rotating wave approximation

    Energy Technology Data Exchange (ETDEWEB)

    Sharaby, Yasser A., E-mail: Yasser_Sharaby@hotmail.co [Physics Department, Faculty of Applied Sciences, Suez Canal University, Suez (Egypt); Joshi, Amitabh, E-mail: ajoshi@eiu.ed [Department of Physics, Eastern Illinois University, Charleston, IL 61920 (United States); Hassan, Shoukry S., E-mail: Shoukryhassan@hotmail.co [Mathematics Department, College of Science, University of Bahrain, P.O. Box 32038 (Bahrain)

    2010-04-26

    Optical bistability for two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA) using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to first harmonic. The first harmonic output field component exhibits reversed or closed loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.

  20. Optical bistability without the rotating wave approximation

    International Nuclear Information System (INIS)

    Sharaby, Yasser A.; Joshi, Amitabh; Hassan, Shoukry S.

    2010-01-01

    Optical bistability for two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA) using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to first harmonic. The first harmonic output field component exhibits reversed or closed loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.

  1. An Approximate Solution for Predicting the Heat Extraction and Preventing Heat Loss from a Closed-Loop Geothermal Reservoir

    Directory of Open Access Journals (Sweden)

    Bisheng Wu

    2017-01-01

    Full Text Available Approximate solutions are found for a mathematical model developed to predict the heat extraction from a closed-loop geothermal system which consists of two vertical wells (one for injection and the other for production) and one horizontal well which connects the two vertical wells. Based on the feature of slow heat conduction in rock formation, the fluid flow in the well is divided into three stages, that is, in the injection, horizontal, and production wells. The output temperature of each stage is regarded as the input of the next stage. The results from the present model are compared with those obtained from the numerical simulator TOUGH2 and show first-order agreement with a temperature difference less than 4°C for the case where the fluid circulated for 2.74 years. In the end, a parametric study shows that (1) the injection rate plays the dominant role in affecting the output performance, (2) higher injection temperature produces larger output temperature but decreases the total heat extracted over a given time, (3) the output performance of the geothermal reservoir is insensitive to fluid viscosity, and (4) there exists a critical point that indicates whether the fluid releases heat into or absorbs heat from the surrounding formation.

  2. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM and approximators based on it, approximate a relation in data by applying interpolation between so-called support vectors, being a limited number of samples that have been

  3. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    Science.gov (United States)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  4. Effect of cosine current approximation in lattice cell calculations in cylindrical geometry

    International Nuclear Information System (INIS)

    Mohanakrishnan, P.

    1978-01-01

    It is found that one-dimensional cylindrical geometry reactor lattice cell calculations using cosine angular current approximation at spatial mesh interfaces give results surprisingly close to the results of accurate neutron transport calculations as well as experimental measurements. This is especially true for tight light water moderated lattices. Reasons for this close agreement are investigated here. By re-examining the effects of reflective and white cell boundary conditions in these calculations it is concluded that one major reason is the use of white boundary condition necessitated by the approximation of the two-dimensional reactor lattice cell by a one-dimensional one. (orig.) [de

  5. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
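
    The subspace-sampling idea can be illustrated with a generic Nyström low-rank kernel approximation (a standard technique, not the paper's AKCL algorithm itself; the function name and RBF kernel choice are assumptions of this sketch):

```python
import numpy as np

def nystrom_kernel(X, n_landmarks, gamma=1.0, rng=None):
    """Low-rank approximation K ~= C W^+ C^T of an RBF kernel matrix
    built from a random subsample of landmark points, so the full
    n x n kernel never has to be stored exactly."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    L = X[idx]                                    # sampled landmark points

    def rbf(A, B):                                # Gaussian (RBF) kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    C = rbf(X, L)                                 # n x m cross-kernel
    W = rbf(L, L)                                 # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T            # rank-m approximation of K
```

    With m << n the storage and update costs drop from O(n^2) to O(nm); AKCL couples this kind of sampled subspace with the competitive learning updates.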

  6. On Nash-Equilibria of Approximation-Stable Games

    Science.gov (United States)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We furthermore show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show that all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ^2) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ^2) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. In addition, we give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  7. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple-rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. Finally, we apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.

  8. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    . The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  9. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
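
    The positive-coordination mechanism this abstract builds on can be illustrated in a few lines. The following is a toy sketch of plain Poisson sampling with permanent random numbers, not the authors' list-sequential CP implementation; the unit set and inclusion probabilities are invented for illustration:

```python
import random

def poisson_sample(prn, probs):
    """Poisson sampling: unit i is selected iff its permanent
    random number falls below its inclusion probability."""
    return {i for i, u in prn.items() if u < probs[i]}

random.seed(42)
units = range(10)
prn = {i: random.random() for i in units}   # permanent random numbers, reused

probs1 = {i: 0.5 for i in units}            # first occasion
probs2 = {i: 0.6 for i in units}            # second occasion, larger probabilities

s1 = poisson_sample(prn, probs1)
s2 = poisson_sample(prn, probs2)

# Positive coordination: with shared permanent random numbers and
# nested inclusion probabilities, the first sample is a subset of the second.
print(s1 <= s2)  # True
```

    Because both occasions reuse the same permanent random numbers, any unit selected under the smaller probabilities is automatically selected under the larger ones, which is the maximal-overlap property the abstract starts from; the sample size, however, is random, which is what CP sampling fixes.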

  10. An improved JWKB approximation for multiple-humped fission barriers

    International Nuclear Information System (INIS)

    Martinelli, T.; Menapace, E.; Ventura, A.

    1977-01-01

    Penetrabilities of two- and three-humped fission barriers are calculated in the semiclassical (JWKB) approximation, valid also in the case where classical turning points are close or coincident (excitation energy near the top of some hump). Numerical results are shown for ²³⁴Th. (author)

  11. A short version of the revised 'experience of close relationships questionnaire': investigating non-clinical and clinical samples.

    Science.gov (United States)

    Wongpakaran, Tinakon; Wongpakaran, Nahathai

    2012-01-01

    This study seeks to investigate the psychometric properties of the short version of the revised 'Experience of Close Relationships' questionnaire, comparing non-clinical and clinical samples. In total 702 subjects participated in this study, of whom 531 were non-clinical participants and 171 were psychiatric patients. They completed the short version of the revised 'Experience of Close Relationships' questionnaire (ECR-R-18), the Perceived Stress Scale-10 (PSS-10), the Rosenberg Self-Esteem Scale (RSES) and the UCLA Loneliness Scale. A retest of the ECR-R-18 was then performed after a four-week interval. Confirmatory factor analyses were then performed to test the validity of the new scale. The ECR-R-18 showed fair to good internal consistency (α 0.77 to 0.87) for both samples, and the test-retest reliability was found to be satisfactory (ICC = 0.75). The anxiety sub-scale demonstrated concurrent validity with the PSS-10 and RSES, while the avoidance sub-scale showed concurrent validity with the UCLA Loneliness Scale. Confirmatory factor analysis using method factors yielded two factors with an acceptable model fit for both groups. An invariance test revealed that the ECR-R-18 functioned differently in the clinical and non-clinical groups. The ECR-R-18 questionnaire revealed an overall better level of fit than the original 36-item questionnaire, indicating its suitability for use with a broader range of samples, including clinical samples. The reliability of the ECR-R-18 might be increased if a modified scoring system is used and if our suggestions with regard to future studies are followed up.

  12. Solving Math Problems Approximately: A Developmental Perspective.

    Directory of Open Access Journals (Sweden)

    Dana Ganor-Stern

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate, as computation estimation is needed in many circumstances in daily life. The present study examined the ability of 4th graders, 6th graders and adults to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense-of-magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated-calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far from (vs. close to) it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow children to use their estimation skills in an effective manner.
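
    The approximated-calculation strategy described in this abstract lends itself to a small illustration. The sketch below implements only its rounding step on an invented problem; the function names and the one-significant-digit rounding rule are assumptions, not the study's actual task:

```python
def round_to_leading(n):
    """Round a positive integer to one significant digit."""
    mag = 10 ** (len(str(n)) - 1)
    return round(n / mag) * mag

def approximated_calculation(a, b, reference):
    """Estimate a*b by rounding each operand first, then judge the
    estimate against a reference number."""
    estimate = round_to_leading(a) * round_to_leading(b)
    verdict = "larger" if estimate > reference else "smaller"
    return verdict, estimate

# Is 23 * 68 larger or smaller than 1000?  Rounding gives 20 * 70 = 1400.
print(approximated_calculation(23, 68, 1000))  # ('larger', 1400)
```

    The sense-of-magnitude strategy, by contrast, would answer the same question with no intermediate product at all, which is why it is faster but less accurate when the reference number is close to the exact answer.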

  13. Approximate solutions to Mathieu's equation

    Science.gov (United States)

    Wilkinson, Samuel A.; Vogt, Nicolas; Golubev, Dmitry S.; Cole, Jared H.

    2018-06-01

    Mathieu's equation has many applications throughout theoretical physics. It is especially important to the theory of Josephson junctions, where it is equivalent to Schrödinger's equation. Mathieu's equation can be easily solved numerically; however, no closed-form analytic solution exists. Here we collect various approximations which appear throughout the physics and mathematics literature and examine their accuracy and regimes of applicability. Particular attention is paid to quantities relevant to the physics of Josephson junctions, but the arguments and notation are kept general so as to be of use to the broader physics community.
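
    Since no closed-form solution exists, a numerical reference is the natural baseline against which such approximations are judged. A minimal sketch, assuming a standard RK4 integrator for y'' + (a − 2q cos 2t) y = 0; the q = 0 case reduces to a harmonic oscillator and provides a sanity check:

```python
import math

def mathieu_rk4(a, q, y0=1.0, dy0=0.0, t_end=math.pi, n=10000):
    """Integrate Mathieu's equation y'' + (a - 2 q cos 2t) y = 0 with RK4."""
    h = t_end / n
    t, y, v = 0.0, y0, dy0
    def f(t, y, v):
        return v, -(a - 2.0 * q * math.cos(2.0 * t)) * y
    for _ in range(n):
        k1y, k1v = f(t, y, v)
        k2y, k2v = f(t + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = f(t + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = f(t + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y, v

# Sanity check: q = 0 gives y'' + a y = 0; with a = 1, y(pi) = cos(pi) = -1.
y_pi, _ = mathieu_rk4(a=1.0, q=0.0)
print(round(y_pi, 6))  # -1.0
```

    With q ≠ 0 the same routine gives the reference solution against which perturbative or asymptotic approximations can be checked.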

  14. Generalized Gradient Approximation Made Simple

    International Nuclear Information System (INIS)

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-01-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. copyright 1996 The American Physical Society

  15. Elastic anisotropy of core samples from the Taiwan Chelungpu Fault Drilling Project (TCDP): direct 3-D measurements and weak anisotropy approximations

    Science.gov (United States)

    Louis, Laurent; David, Christian; Špaček, Petr; Wong, Teng-Fong; Fortin, Jérôme; Song, Sheng Rong

    2012-01-01

    The study of seismic anisotropy has become a powerful tool to decipher rock physics attributes in reservoirs or in complex tectonic settings. We compare direct 3-D measurements of P-wave velocity in 132 different directions on spherical rock samples to the prediction of the approximate model proposed by Louis et al. based on a tensorial approach. The data set includes measurements on dry spheres under confining pressure ranging from 5 to 200 MPa for three sandstones retrieved at depths of 850, 1365 and 1394 metres in TCDP hole A (Taiwan Chelungpu Fault Drilling Project). As long as the P-wave velocity anisotropy is weak, we show that the predictions of the approximate model are in good agreement with the measurements. As the tensorial method is designed to work with cylindrical samples cored in three orthogonal directions, a significant gain both in the number of measurements involved and in sample preparation is achieved compared to measurements on spheres. We analysed the pressure dependence of the velocity field and show that as the confining pressure is raised the velocity increases, the anisotropy decreases but remains significant even at high pressure, and the shape of the ellipsoid representing the velocity (or elastic) fabric evolves from elongated to planar. These observations can be accounted for by considering the existence of both isotropic and anisotropic crack distributions and their evolution with applied pressure.

  16. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
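
    PopSizeABC itself couples demographic simulators to AFS and LD summary statistics, but the underlying principle is ordinary ABC rejection, which can be sketched on a toy Gaussian model. The model, prior, summary statistic and tolerance below are invented for illustration:

```python
import random

random.seed(1)

# Toy "observed" data: 50 draws from Normal(theta_true, 1).
theta_true = 3.0
observed = [random.gauss(theta_true, 1.0) for _ in range(50)]
obs_summary = sum(observed) / len(observed)

def simulate_summary(theta, n=50):
    """Simulate a data set under parameter theta and return its summary."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

# ABC rejection: draw theta from a flat prior and keep it whenever the
# simulated summary lands within epsilon of the observed summary.
epsilon = 0.1
accepted = [theta for theta in (random.uniform(0.0, 10.0) for _ in range(20000))
            if abs(simulate_summary(theta) - obs_summary) < epsilon]

posterior_mean = sum(accepted) / len(accepted)
print(len(accepted), round(posterior_mean, 2))
```

    The accepted draws approximate the posterior, and their mean lands near the true parameter; the paper's contribution lies in choosing summary statistics (AFS, LD) that remain informative and robust for real genomic data.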

  17. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  18. Approximate determination of efficiency for activity measurements of cylindrical samples

    Energy Technology Data Exchange (ETDEWEB)

    Helbig, W [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany); Bothe, M [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany)

    1997-03-01

    Some calibration samples with the same geometrical parameters but of different materials are necessary, each containing a known, homogeneously distributed activity A. Their densities are measured; their mass absorption coefficients may be unknown. These calibration samples are positioned in the counting geometry, for instance directly on the detector. The efficiency function ε(E) for each sample is obtained by measuring the gamma spectra and evaluating all usable gamma energy peaks. From these ε(E) the commonly valid ε_geom(E) will be deduced. For this purpose the functions ε_μ(E) for these samples have to be established. (orig.)

  19. Legendre-tau approximation for functional differential equations. Part 2: The linear quadratic optimal control problem

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1984-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  20. Approximating Markov Chains: What and why

    International Nuclear Information System (INIS)

    Pincus, S.

    1996-01-01

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics
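
    The finite-state construction described here (in the spirit of Ulam's method) can be sketched for a one-dimensional example, the logistic map x → 4x(1−x), whose invariant density 1/(π√(x(1−x))) is known in closed form. The bin count, sample points per bin and iteration count are illustrative choices:

```python
def stationary_of_logistic(n_bins=200, pts_per_bin=50, iters=300):
    """Approximate the logistic map x -> 4x(1-x) by a finite-state
    Markov chain on n_bins equal bins of [0, 1]."""
    f = lambda x: 4.0 * x * (1.0 - x)
    # Row i: empirical distribution of the images of sample points in bin i.
    rows = []
    for i in range(n_bins):
        counts = {}
        for k in range(pts_per_bin):
            x = (i + (k + 0.5) / pts_per_bin) / n_bins
            j = min(int(f(x) * n_bins), n_bins - 1)
            counts[j] = counts.get(j, 0) + 1
        rows.append([(j, c / pts_per_bin) for j, c in counts.items()])
    # Power iteration pi <- pi P: a straightforward steady-state calculation.
    pi = [1.0 / n_bins] * n_bins
    for _ in range(iters):
        new = [0.0] * n_bins
        for i, w in enumerate(pi):
            for j, p in rows[i]:
                new[j] += w * p
        pi = new
    return pi

pi = stationary_of_logistic()
mean = sum((i + 0.5) / len(pi) * w for i, w in enumerate(pi))
# The exact invariant density 1/(pi*sqrt(x(1-x))) has mean 1/2 and piles
# up at the endpoints; the chain's steady state reproduces both features.
print(round(mean, 2), pi[0] > pi[len(pi) // 2])
```

    The linear, computable map P replaces repeated iteration of the nonlinear system, which is precisely the speed-up the abstract advertises.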

  1. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
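
    The limiting case checked in this paper, the negative binomial approaching the Poisson, is easy to verify numerically. A sketch assuming the mean/dispersion parameterization (variance = mean + mean²/r), with invented parameter values:

```python
import math

def nb_pmf(k, r, mean):
    """Negative binomial pmf in the mean/dispersion parameterization:
    variance = mean + mean**2 / r; r -> infinity recovers Poisson."""
    p = r / (r + mean)
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1.0 - p))

def poisson_pmf(k, lam):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

lam = 5.0
# With a very large dispersion parameter the NB pmf is numerically Poisson.
max_diff = max(abs(nb_pmf(k, 1e7, lam) - poisson_pmf(k, lam)) for k in range(30))
print(max_diff < 1e-6)  # True
```

    Quantiles and confidence limits then follow by accumulating this pmf; the paper's contribution is an accurate closed-form approximation that avoids the summation.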

  2. Application of the probabilistic approximate analysis method to a turbopump blade analysis. [for Space Shuttle Main Engine

    Science.gov (United States)

    Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.

    1990-01-01

    An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.
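
    The general idea of a probabilistic approximate analysis, propagating input randomness through an approximate (here, closed-form) response model rather than relying on brute-force sampling alone, can be sketched with first-order mean/variance propagation. The response function and input distributions below are invented stand-ins, not the turbopump blade model:

```python
import math, random

# Hypothetical closed-form response of a structural model: g(x, y) = x * y.
g = lambda x, y: x * y

mu_x, sd_x = 10.0, 0.5
mu_y, sd_y = 4.0, 0.2

# First-order (mean value) approximation: linearize g about the input means.
mean_fo = g(mu_x, mu_y)
var_fo = (mu_y * sd_x) ** 2 + (mu_x * sd_y) ** 2  # (dg/dx*sd_x)^2 + (dg/dy*sd_y)^2

# Brute-force Monte Carlo reference.
random.seed(0)
samples = [g(random.gauss(mu_x, sd_x), random.gauss(mu_y, sd_y))
           for _ in range(100000)]
mean_mc = sum(samples) / len(samples)
sd_mc = math.sqrt(sum((s - mean_mc) ** 2 for s in samples) / len(samples))

print(round(mean_fo, 1), round(mean_mc, 1))
print(round(math.sqrt(var_fo), 2), round(sd_mc, 2))
```

    For a finite element model, the closed form is unavailable and the gradients in the linearization must come from the numerical solver, which is the extension the paper demonstrates.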

  3. Validity of the broken-pair approximation for N = 50, even-A nuclei

    International Nuclear Information System (INIS)

    Haq, S.; Gambhir, Y.K.

    1977-01-01

    The validity of the broken-pair approximation as an approximation to the seniority shell model is investigated. The results of the broken-pair approximation and the seniority shell model, obtained by employing identical input information (single-particle levels and their energies, effective two-body matrix elements, ⁸⁸Sr inert core) for N = 50, even-A nuclei, are compared. A close agreement between the calculated broken-pair approximation and seniority shell model energies for ⁹⁰Zr, ⁹²Mo, ⁹⁴Ru, and ⁹⁶Pd nuclei, and large (95-100%) overlaps between the broken-pair approximation and the seniority shell model wave functions for ⁹²Mo, demonstrate the validity of the broken-pair approximation in this region and, in general, its usefulness as a good approximation to the seniority shell model.

  4. Close Online Relationships in a National Sample of Adolescents.

    Science.gov (United States)

    Wolak, Janis; Mitchell, Kimberly J.; Finkelhor, David

    2002-01-01

    Uses data from a national survey of adolescent Internet users to describe online relationships. Fourteen percent of the youths interviewed reported close online friendships during the past year, 7% reported face-to-face meetings, and 2% reported online romances. Few youths reported bad experiences with online friends. (GCP)

  5. Validity of the independent-processes approximation for resonance structures in electron-ion scattering cross sections

    International Nuclear Information System (INIS)

    Badnell, N.R.; Pindzola, M.S.; Griffin, D.C.

    1991-01-01

    The total inelastic cross section for electron-ion scattering may be found in the independent-processes approximation by adding the resonant cross section to the nonresonant background cross section. We study the validity of this approximation for electron excitation of multiply charged ions. The resonant-excitation cross section is calculated independently using distorted waves for various Li-like and Na-like ions using (N+1)-electron atomic-structure methods previously developed for the calculation of dielectronic-recombination cross sections. To check the effects of interference between the two scattering processes, we also carry out detailed close-coupling calculations for the same atomic ions using the R-matrix method. For low ionization stages, interference effects sometimes manifest themselves as strong window features in the close-coupling cross section, which are not present in the independent-processes cross section. For higher ionization stages, however, the resonance features found in the independent-processes approximation are in good agreement with the close-coupling results.

  6. Search for "polarized" instantons in the vacuum

    International Nuclear Information System (INIS)

    Kuchiev, M.Y.

    1996-01-01

    The new phase of a gauge theory in which the instantons are "polarized," i.e., have the preferred orientation, is discussed. A class of gauge theories with the specific condensates of the scalar fields is considered. In these models there exists an interaction between instantons resulting from one-fermion loop corrections. The interaction makes the identical orientation of instantons the most probable, permitting one to expect the system to undergo a phase transition into the state with polarized instantons. The existence of this phase is confirmed in the mean-field approximation, in which there is a first-order phase transition separating the "polarized" phase from the usual nonpolarized one. The considered phase can be important for the description of gravity in the framework of the gauge field theory. copyright 1996 The American Physical Society

  7. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Second, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Third, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
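
    The spectral-matching application mentioned here can be sketched with elementary fuzzy set operations: intersect the unknown spectrum's membership vector with each library entry (pointwise min) and rank by normalized overlap. The compounds and membership values below are invented:

```python
def fuzzy_match_score(sample, reference):
    """Degree of match between two fuzzy membership vectors: cardinality
    of the intersection (pointwise min) over the cardinality of the sample."""
    inter = sum(min(s, r) for s, r in zip(sample, reference))
    return inter / sum(sample)

# Hypothetical membership vectors over a common wavelength grid.
library = {
    "compound_A": [0.9, 0.8, 0.1, 0.0, 0.2],
    "compound_B": [0.1, 0.2, 0.9, 0.8, 0.1],
    "compound_C": [0.0, 0.1, 0.2, 0.1, 0.9],
}
unknown = [0.8, 0.7, 0.2, 0.1, 0.1]

scores = {name: fuzzy_match_score(unknown, ref) for name, ref in library.items()}
best = max(scores, key=scores.get)
print(best)  # compound_A
```

    Ranking by the fuzzy intersection tolerates imprecise peak positions and intensities, which is exactly why fuzzy operations suit noisy spectra better than exact equality tests.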

  8. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.

  9. Modeling Rocket Flight in the Low-Friction Approximation

    Directory of Open Access Journals (Sweden)

    Logan White

    2014-09-01

    In a realistic model for rocket dynamics, in the presence of atmospheric drag and altitude-dependent gravity, the exact kinematic equation cannot be integrated in closed form; even when neglecting friction, the exact solution is a combination of elliptic functions of Jacobi type, which are not easy to use in a computational sense. This project provides a precise analysis of the various terms in the full equation (such as gravity, drag, and exhaust momentum), and the numerical ranges for which various approximations are accurate to within 1%. The analysis leads to optimal approximations expressed through elementary functions, which can be implemented for efficient flight prediction on simple computational devices, such as smartphone applications.
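
    The role of the individual terms can be seen in a small numerical sketch. Assuming a constant-g, flat-Earth model with a quadratic drag term (all parameter values invented), the frictionless closed form Δv = v_e ln(m₀/m_f) − g t_burn serves as the check:

```python
import math

def burnout_velocity(ve, m0, mdot, t_burn, g=9.81, drag=0.0, dt=1e-3):
    """Forward-Euler integration of dv/dt = ve*mdot/m - g - (drag/m)*v**2."""
    v, t = 0.0, 0.0
    while t < t_burn:
        m = m0 - mdot * t          # remaining mass
        v += (ve * mdot / m - g - drag * v * v / m) * dt
        t += dt
    return v

ve, m0, mdot, t_burn = 2000.0, 100.0, 1.0, 50.0
v_closed = ve * math.log(m0 / (m0 - mdot * t_burn)) - 9.81 * t_burn  # frictionless
v_numeric = burnout_velocity(ve, m0, mdot, t_burn)                   # drag = 0

print(abs(v_numeric - v_closed) / v_closed < 0.01)                   # True
print(burnout_velocity(ve, m0, mdot, t_burn, drag=0.05) < v_closed)  # True
```

    With drag switched off the integrator reproduces the elementary closed form; switching it on lowers the burnout velocity, illustrating the regime analysis the project carries out term by term.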

  10. Gaussian-approximation formalism for evaluating decay of NMR spin echoes

    International Nuclear Information System (INIS)

    Recchia, C.H.; Gorny, K.; Pennington, C.H.

    1996-01-01

    We present a formalism for evaluating the amplitude of the NMR spin echo and stimulated echo as a function of pulse spacings, for situations in which the nuclear spins experience an effective longitudinal magnetic field h_z(t) resulting from an arbitrary number of independent sources, each characterized by its own arbitrary time correlation function. The distribution of accumulated phase angles for the ensemble of nuclear spins at the time of the echo is approximated as a Gaussian. The development of the formalism is motivated by the need to understand the transverse relaxation of ⁸⁹Y in YBa₂Cu₃O₇, in which the ⁸⁹Y experiences ⁶³,⁶⁵Cu dipolar fields which fluctuate due to ⁶³,⁶⁵Cu T₁ processes. The formalism is applied successfully to this example, and to the case of nuclei diffusing in a spatially varying magnetic field. Then we examine a situation in which the approximation fails: the classic problem of chemical exchange in dimethylformamide, where the methyl protons experience a chemical shift which fluctuates between two discrete values. In this case the Gaussian approximation yields a monotonic decay of the echo amplitude with increasing pulse spacing, while the exact solution yields distinct "beats" in the echo height, which we confirm experimentally. In light of this final example the limits of validity of the approximation are discussed. copyright 1996 The American Physical Society

  11. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren

    2017-01-01

    , obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose

  12. The wet destruction of dry organic material in a closed quartz tube

    International Nuclear Information System (INIS)

    Faanhof, A.; Das, H.A.

    1977-01-01

    Quantitative, rapid destruction of dry organic materials is necessary in many cases. The application of wet mineralization in a closed quartz tube and the optimization of destruction conditions are described. The procedure takes approximately 3.5 hours for 20 samples. The method was checked by radiotracer and activation experiments. The ratio of the Hg-203 count-rates of spiked and destructed samples to that of standards is 0.991±0.014. The results of the instrumental analysis for tobacco and standard kale are: tobacco (84±13) ng·g⁻¹, standard kale (155±7) ng·g⁻¹. The average of the data for kale reported in the literature is (150±8) ng·g⁻¹. Results after destruction are: tobacco (82±6) ng·g⁻¹, standard kale (158±16) ng·g⁻¹. (N.L.Gy.)

  13. Multilevel Approximations of Markovian Jump Processes with Applications in Communication Networks

    KAUST Repository

    Vilanova, Pedro

    2015-05-04

    This thesis focuses on the development and analysis of efficient simulation and inference techniques for Markovian pure jump processes with a view towards applications in dense communication networks. These techniques are especially relevant for modeling networks of smart devices —tiny, abundant microprocessors with integrated sensors and wireless communication abilities— that form highly complex and diverse communication networks. During 2010, the number of devices connected to the Internet exceeded the number of people on Earth: over 12.5 billion devices. By 2015, Cisco’s Internet Business Solutions Group predicts that this number will exceed 25 billion. The first part of this work proposes novel numerical methods to estimate, in an efficient and accurate way, observables from realizations of Markovian jump processes. In particular, hybrid Monte Carlo type methods are developed that combine exact and approximate simulation algorithms to exploit their respective advantages. These methods are tailored to keep a global computational error below a prescribed global error tolerance and within a given statistical confidence level. Indeed, the computational work of these methods is similar to that of an exact method, but with a smaller constant. Finally, the methods are extended to systems with a disparity of time scales. The second part develops novel inference methods to estimate the parameters of Markovian pure jump processes. First, an indirect inference approach is presented, which is based on upscaled representations and does not require sampling. This method is simpler than dealing directly with the likelihood of the process, which, in general, cannot be expressed in closed form and whose maximization requires computationally intensive sampling techniques. Second, a forward-reverse Monte Carlo Expectation-Maximization algorithm is provided to approximate a local maximum or saddle point of the likelihood function of the parameters given a set of

  14. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  15. Quantum close coupling calculation of transport and relaxation properties for Hg-H_2 system

    International Nuclear Information System (INIS)

    Nemati-Kande, Ebrahim; Maghari, Ali

    2016-01-01

    Highlights: • Several relaxation cross sections are calculated for Hg-H_2 van der Waals complex. • These cross sections are calculated from exact close-coupling method. • Energy-dependent SBE cross sections are calculated for ortho- and para-H_2 + Hg systems. • Viscosity and diffusion coefficients are calculated using Mason-Monchick approximation. • The results obtained by Mason-Monchick approximation are compared to the exact close-coupling results. - Abstract: Quantum mechanical close coupling calculations of the state-to-state transport and relaxation cross sections have been done for the Hg-H_2 molecular system using a high-level ab initio potential energy surface. Rotationally averaged cross sections were also calculated to obtain the energy-dependent Senftleben-Beenakker cross sections in the energy range 0.005–25,000 cm⁻¹. Boltzmann averaging of the energy-dependent Senftleben-Beenakker cross sections showed the temperature dependency over a wide temperature range of 50–2500 K. Interaction viscosity and diffusion coefficients were also calculated using close coupling cross sections and the full classical Mason-Monchick approximation. The results were compared with each other and with the available experimental data. It was found that the Mason-Monchick approximation is more reliable for viscosity than for the diffusion coefficient. Furthermore, from the comparison of the experimental diffusion coefficients with the results of the close coupling and Mason-Monchick approximations, it was found that the Hg-H_2 potential energy surface used in this work can reliably predict diffusion coefficient data.

  16. Fast multigrid solution of the advection problem with closed characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Yavneh, I. [Israel Inst. of Technology, Haifa (Israel); Venner, C.H. [Univ. of Twente, Enschede (Netherlands); Brandt, A. [Weizmann Inst. of Science, Rehovot (Israel)

    1996-12-31

    The numerical solution of the advection-diffusion problem in the inviscid limit with closed characteristics is studied as a prelude to an efficient high Reynolds-number flow solver. It is demonstrated by a heuristic analysis and numerical calculations that using upstream discretization with downstream relaxation-ordering and appropriate residual weighting in a simple multigrid V cycle produces an efficient solution process. We also derive upstream finite-difference approximations to the advection operator, whose truncation terms approximate "physical" (Laplacian) viscosity, thus avoiding spurious solutions to the homogeneous problem when the artificial diffusivity dominates the physical viscosity.

  17. Parallel magnetic resonance imaging as approximation in a reproducing kernel Hilbert space

    International Nuclear Information System (INIS)

    Athalye, Vivek; Lustig, Michael; Martin Uecker

    2015-01-01

    In magnetic resonance imaging, data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. To understand and design k-space sampling patterns, a theoretical framework is needed to analyze how well arbitrary sampling patterns reconstruct unsampled k-space using receive coil information. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a reproducing kernel Hilbert space with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of sample selection beyond the traditional image-domain g-factor noise analysis to both noise amplification and approximation errors in k-space. This is demonstrated with numerical examples. (paper)

  18. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data including real world data as well.
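
    The pipeline described above (estimate a low-dimensional projection from a sparse sample, then approximate the function over the projected space) can be sketched as follows. Everything here is hypothetical: the data lie exactly on a 2-D plane embedded in 10-D space, the projection is PCA via SVD of a 50-point subsample, and a quadratic least-squares model stands in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: data on a 2-D plane embedded in 10-D space, and a
# target function that depends only on the 2 intrinsic coordinates.
n, D, d = 500, 10, 2
Z = rng.uniform(-1, 1, size=(n, d))          # intrinsic coordinates
A = rng.normal(size=(d, D))                  # linear embedding into 10-D
X = Z @ A                                    # observed high-dimensional data
y = np.sin(Z[:, 0]) + Z[:, 1] ** 2           # function defined on the manifold

# Low-dimensional projection estimated from a sparse 50-point subsample
# (PCA via SVD), as the abstract suggests.
sample = X[rng.choice(n, size=50, replace=False)]
_, _, Vt = np.linalg.svd(sample - sample.mean(axis=0), full_matrices=False)
P = Vt[:d].T                                 # top-d principal directions
X_low = X @ P                                # projected data

# Full quadratic-feature least-squares model (a simple stand-in for the
# neural network) fitted in the projected space.
def features(U):
    return np.column_stack([np.ones(len(U)), U, U ** 2, U[:, :1] * U[:, 1:]])

coef, *_ = np.linalg.lstsq(features(X_low), y, rcond=None)
resid = y - features(X_low) @ coef
print("RMS error in projected space:", np.sqrt(np.mean(resid ** 2)))
```

Because the projection was estimated from only 50 of the 500 points, yet the fit in the 2-D projected space is accurate, this mirrors the abstract's claim that a relatively sparse sample of the data manifold suffices for the projection step.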

  19. Approximation of the decay of fission and activation product mixtures

    International Nuclear Information System (INIS)

    Henderson, R.W.

    1991-01-01

    The decay of the exposure rate from a mixture of fission and activation products is a complex function of time. The exact solution of the problem involves the solution of more than 150 tenth order Bateman equations. An approximation of this function is required for the practical solution of problems involving multiple integrations of this function. Historically this has been a power function, or a series of power functions, of time. The approach selected here has been to approximate the decay with a sum of exponential functions. This produces a continuous, single valued function, that can be made to approximate the given decay scheme to any desired degree of closeness. Further, the integral of the sum is easily calculated over any period. 3 refs
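
    As a sketch of the idea (the target decay curve, rates, and fitting scheme are illustrative, not the paper's), a power-law decay can be approximated by a sum of exponentials whose integral over any period is then available in closed form:

```python
import numpy as np

# Hypothetical illustration: approximate a power-law decay R(t) = t**-1.2
# (the classic Way-Wigner rule for fission-product mixtures) on
# 1 <= t <= 1000 by a sum of exponentials with fixed, log-spaced rates.
t = np.logspace(0, 3, 400)
target = t ** -1.2

lam = np.logspace(-3.5, 1.0, 12)              # fixed, log-spaced decay constants
basis = np.exp(-np.outer(t, lam))             # column j: exp(-lam_j * t)

# Weight by 1/target so every decade of t contributes in *relative* terms.
w = 1.0 / target
coef, *_ = np.linalg.lstsq(basis * w[:, None], target * w, rcond=None)

approx = basis @ coef
rel_err = np.max(np.abs(approx - target) / target)
print("max relative error:", rel_err)

# The integral of the exponential sum over any period [a, b] is a
# closed-form expression, which is the practical advantage noted above.
a, b = 1.0, 10.0
integral = np.sum(coef / lam * (np.exp(-lam * a) - np.exp(-lam * b)))
exact = (a ** -0.2 - b ** -0.2) / 0.2
print("integral of fit vs exact:", integral, exact)
```

The resulting approximation is a continuous, single-valued function, and adding more exponential terms tightens the fit to any desired degree, exactly as the abstract describes.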

  20. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs
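
    A minimal simulation in the spirit described above, with hypothetical population and falsification numbers: the required attributes sample size is computed exactly (no limiting approximation) and then cross-checked by Monte Carlo.

```python
import numpy as np
from math import comb

# Minimal sketch (hypothetical numbers): attributes sampling of N items,
# f of which are falsified; "detection" means drawing at least one
# falsified item in a sample of size n taken without replacement.
N, f, target = 200, 10, 0.95

def detect_prob(n):
    # exact hypergeometric probability of seeing >= 1 falsified item
    return 1.0 - comb(N - f, n) / comb(N, n)

# Smallest sample size meeting the required detection probability.
n_req = next(n for n in range(1, N + 1) if detect_prob(n) >= target)
print("required sample size:", n_req)

# Cross-check by direct simulation (items 0..f-1 are the falsified ones).
rng = np.random.default_rng(1)
trials = 20000
hits = sum(np.any(rng.choice(N, size=n_req, replace=False) < f)
           for _ in range(trials))
print("simulated detection probability:", hits / trials)
```

A conservative binomial (with-replacement) approximation of the same problem needs a larger n, which illustrates the paper's point that limiting approximations can be needlessly expensive.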

  1. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), is typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained (albeit, of course, with some information loss), suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
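
    The sampling idea can be illustrated as follows. All numbers, locations, and the distance-decay kernel are hypothetical, and only the simple random sampling variant is shown, not the spatially-stratified schemes:

```python
import numpy as np

# Minimal sketch: the infectious pressure on one susceptible individual is
# a sum of a distance-decay kernel over every infectious individual; a
# simple random subsample, scaled by the inverse sampling fraction,
# estimates that sum at a fraction of the cost.
rng = np.random.default_rng(2)
n_inf = 20000
inf_xy = rng.uniform(0, 100, size=(n_inf, 2))   # infectious locations
s_xy = np.array([50.0, 50.0])                   # one susceptible individual

def kernel(d, alpha=2.0):
    return (d + 1.0) ** -alpha                  # hypothetical spatial kernel

# Exact pressure: O(n_inf) distance evaluations.
d_all = np.linalg.norm(inf_xy - s_xy, axis=1)
pressure_exact = kernel(d_all).sum()

# Approximate pressure from a 10% simple random sample.
m = n_inf // 10
idx = rng.choice(n_inf, size=m, replace=False)
d_sub = np.linalg.norm(inf_xy[idx] - s_xy, axis=1)
pressure_approx = kernel(d_sub).sum() * (n_inf / m)

print("exact:", pressure_exact, "approx:", pressure_approx)
```

Repeating this for every susceptible individual at every MCMC iteration is where the computational savings accumulate; the price is the sampling noise in the estimate, i.e. the "information loss" the abstract mentions.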

  2. Leaky-box approximation to the fractional diffusion model

    International Nuclear Information System (INIS)

    Uchaikin, V V; Sibatov, R T; Saenko, V V

    2013-01-01

    Two models based on fractional differential equations for galactic cosmic ray diffusion are applied to the leaky-box approximation. One of them (Lagutin-Uchaikin, 2000) assumes a finite mean free path of cosmic ray particles; the other (Lagutin-Tyumentsev, 2004) uses a distribution with infinite mean distance between collisions with magnetic clouds, for which the trajectories have a form close to ballistic. Calculations demonstrate that imposing boundary conditions is incompatible with the spatial distributions given by the second model.

  3. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  4. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  5. Measuring radioactive half-lives via statistical sampling in practice

    Science.gov (United States)

    Lorusso, G.; Collins, S. M.; Jagan, K.; Hitt, G. W.; Sadek, A. M.; Aitken-Smith, P. M.; Bridi, D.; Keightley, J. D.

    2017-10-01

    The statistical sampling method for the measurement of radioactive decay half-lives exhibits intriguing features such as that the half-life is approximately the median of a distribution closely resembling a Cauchy distribution. Whilst initial theoretical considerations suggested that in certain cases the method could have significant advantages, accurate measurements by statistical sampling have proven difficult, for they require an exercise in non-standard statistical analysis. As a consequence, no half-life measurement using this method has yet been reported and no comparison with traditional methods has ever been made. We used a Monte Carlo approach to address these analysis difficulties, and present the first experimental measurement of a radioisotope half-life (211Pb) by statistical sampling in good agreement with the literature recommended value. Our work also focused on the comparison between statistical sampling and exponential regression analysis, and concluded that exponential regression generally achieves the highest accuracy.
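
    The core statistical fact behind the method is easy to verify numerically: for an exponential decay law, the median of the individual decay times equals the half-life, so a statistical sample of decay times estimates t_1/2 directly. This is a simplified sketch ignoring background, detector effects, and the non-standard error analysis discussed in the paper; the 211Pb half-life of about 36.1 min is the literature value.

```python
import numpy as np

# Simulate individual decay times for 211Pb (t_1/2 ~ 36.1 min).  For an
# exponential distribution the median is ln(2)/lambda, i.e. exactly the
# half-life, so the sample median is a direct half-life estimator.
rng = np.random.default_rng(3)
t_half = 36.1                      # minutes, literature value for 211Pb
lam = np.log(2) / t_half

decay_times = rng.exponential(scale=1.0 / lam, size=100_000)
print("median estimate of t_1/2 (min):", np.median(decay_times))
```

With 10^5 simulated decays the median lands within a fraction of a minute of the true half-life; quantifying the uncertainty of such a median estimator is exactly where the paper's Monte Carlo analysis comes in.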

  6. Inference Under a Wright-Fisher Model Using an Accurate Beta Approximation

    DEFF Research Database (Denmark)

    Tataru, Paula; Bataillon, Thomas; Hobolth, Asger

    2015-01-01

    frequencies and the influence of evolutionary pressures, such as mutation and selection. Despite its simple mathematical formulation, exact results for the distribution of allele frequency (DAF) as a function of time are not available in closed analytic form. Existing approximations build......, the probability of being on the boundary can be positive, corresponding to the allele being either lost or fixed. Here, we introduce the beta with spikes, an extension of the beta approximation, which explicitly models the loss and fixation probabilities as two spikes at the boundaries. We show that the addition...

  7. Nucleonics, and nuclear matter in 10⁻²⁰ secs. before the close of "Big Bang"

    Energy Technology Data Exchange (ETDEWEB)

    Ayub, S.M.

    1995-10-01

    The Nuclear picture 10⁻²⁰ secs. after the thermonuclear creation of the Universe ~8 Billion years ago (as also evidenced by Hubble Telescope) was published. Relativity concepts predict the Nuclear picture 10⁻²⁰ secs. before the 'G' collapse of the Universe, by the progressive decline of expansion, and H. Constant. No double-nuclei, anymore. Only Neutrinos, as predicted by International Scientists, and fragments of Black Holes. Universe 'r' = 60 Billion Light-Years. At the Zero Point, N. Force, and 3 other Forces merging into Super-G. Time, Space, becoming identical, and all Physical Laws vanishing. The final will be Nuclear Matter compact of ~<13 Km. >10 Km. 'r', P = 10¹⁵-10¹⁸, Temp. > 10¹⁰ Deg. C. P > 10¹⁸ will cause another thermonuclear 'Bang'. Super-computers, also, cannot predict beyond this point. There will be the Creator, and a compact of Nuclear Matter. In the absence of Physical Laws, there can be no further predictability. What initiated by the N. Force, has culminated into a compact of Nuclear Matter - how interesting!

  8. An improved coupled-states approximation including the nearest neighbor Coriolis couplings for diatom-diatom inelastic collision

    Science.gov (United States)

    Yang, Dongzheng; Hu, Xixi; Zhang, Dong H.; Xie, Daiqian

    2018-02-01

    Solving the time-independent close coupling equations of a diatom-diatom inelastic collision system by using the rigorous close-coupling approach is numerically difficult because of its expensive matrix manipulation. The coupled-states approximation decouples the centrifugal matrix by neglecting the important Coriolis couplings completely. In this work, a new approximation method based on the coupled-states approximation is presented and applied to time-independent quantum dynamic calculations. This approach only considers the most important Coriolis coupling with the nearest neighbors and ignores weaker Coriolis couplings with farther K channels. As a result, it reduces the computational costs without a significant loss of accuracy. Numerical tests for para-H_2 + ortho-H_2 and para-H_2 + HD inelastic collisions were carried out and the results showed that the improved method dramatically reduces the errors due to the neglect of the Coriolis couplings in the coupled-states approximation. This strategy should be useful in quantum dynamics of other systems.

  9. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For the chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in the stochastic chemical kinetics, the CLE is seen as the approximation of the SSA, the limit averaging system can be treated as the approximation of the slow reactions. As an application, we examine the reduction of computation complexity for the gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of the weak convergence.

  10. An overview on polynomial approximation of NP-hard problems

    Directory of Open Access Journals (Sweden)

    Paschos Vangelis Th.

    2009-01-01

    Full Text Available The fact that a polynomial time algorithm is very unlikely to be devised for an optimal solving of the NP-hard problems strongly motivates both the researchers and the practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but one solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, the polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial-time approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results. Such results provide limits on the approximability of the problems tackled.
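
    A classical example of the algorithms surveyed here is the maximal-matching 2-approximation for minimum vertex cover: it runs in polynomial (here linear) time and its cover is guaranteed to be at most twice the optimum. The graph below is an arbitrary toy instance:

```python
# Maximal-matching 2-approximation for minimum vertex cover: greedily pick
# any uncovered edge and take *both* endpoints.  Since the chosen edges
# form a matching, any vertex cover must contain at least one endpoint of
# each, so the result is at most twice the optimal cover size.
def vertex_cover_2approx(edges):
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}       # edge joins the maximal matching
            cover |= {u, v}         # both endpoints enter the cover
    return cover

# Toy graph: a star on vertex 0 plus a triangle 4-5-6.
edges = [(0, 1), (0, 2), (0, 3), (4, 5), (5, 6), (4, 6)]
cover = vertex_cover_2approx(edges)
print(sorted(cover))
assert all(u in cover or v in cover for u, v in edges)
```

On this instance the optimum cover has 3 vertices (e.g. {0, 4, 5}) and the algorithm returns 4, within the factor-2 guarantee; no polynomial algorithm with a factor better than about 1.36 is known to be possible unless P = NP, which is the kind of inapproximability result the survey's final section covers.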

  11. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    Science.gov (United States)

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for near optimal RBN weights is created, such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using the Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  12. On exact and approximate exchange-energy densities

    DEFF Research Database (Denmark)

    Springborg, Michael; Dahl, Jens Peder

    1999-01-01

    Based on correspondence rules between quantum-mechanical operators and classical functions in phase space we construct exchange-energy densities in position space. Whereas these are not unique but depend on the chosen correspondence rule, the exchange potential is unique. We calculate this exchange-energy density for 15 closed-shell atoms, and compare it with kinetic- and Coulomb-energy densities. It is found that it has a dominating local-density character, but electron-shell effects are recognizable. The approximate exchange-energy functionals that have been proposed so far are found to account only

  13. Quantum close coupling calculation of transport and relaxation properties for Hg-H_2 system

    Energy Technology Data Exchange (ETDEWEB)

    Nemati-Kande, Ebrahim; Maghari, Ali, E-mail: maghari@ut.ac.ir

    2016-11-10

    Highlights: • Several relaxation cross sections are calculated for Hg-H_2 van der Waals complex. • These cross sections are calculated from exact close-coupling method. • Energy-dependent SBE cross sections are calculated for ortho- and para-H_2 + Hg systems. • Viscosity and diffusion coefficients are calculated using Mason-Monchick approximation. • The results obtained by Mason-Monchick approximation are compared to the exact close-coupling results. - Abstract: Quantum mechanical close coupling calculations of the state-to-state transport and relaxation cross sections have been done for the Hg-H_2 molecular system using a high-level ab initio potential energy surface. Rotationally averaged cross sections were also calculated to obtain the energy-dependent Senftleben-Beenakker cross sections in the energy range 0.005–25,000 cm⁻¹. Boltzmann averaging of the energy-dependent Senftleben-Beenakker cross sections showed the temperature dependency over a wide temperature range of 50–2500 K. Interaction viscosity and diffusion coefficients were also calculated using close coupling cross sections and the full classical Mason-Monchick approximation. The results were compared with each other and with the available experimental data. It was found that the Mason-Monchick approximation is more reliable for viscosity than for the diffusion coefficient. Furthermore, from the comparison of the experimental diffusion coefficients with the results of the close coupling and Mason-Monchick approximations, it was found that the Hg-H_2 potential energy surface used in this work can reliably predict diffusion coefficient data.

  14. Sampling and analyses report for June 1992 semiannual postburn sampling at the RM1 UCG site, Hanna, Wyoming

    International Nuclear Information System (INIS)

    Lindblom, S.R.

    1992-08-01

    The Rocky Mountain 1 (RM1) underground coal gasification (UCG) test was conducted from November 16, 1987 through February 26, 1988 (United Engineers and Constructors 1989) at a site approximately one mile south of Hanna, Wyoming. The test consisted of dual module operation to evaluate the controlled retracting injection point (CRIP) technology, the elongated linked well (ELW) technology, and the interaction of closely spaced modules operating simultaneously. The test caused two cavities to be formed in the Hanna No. 1 coal seam and associated overburden. The Hanna No. 1 coal seam is approximately 30 ft thick and lies at depths between 350 ft and 365 ft below the surface in the test area. The coal seam is overlain by sandstones, siltstones and claystones deposited by various fluvial environments. The groundwater monitoring was designed to satisfy the requirements of the Wyoming Department of Environmental Quality (WDEQ) in addition to providing research data toward the development of UCG technology that minimizes environmental impacts. The June 1992 semiannual groundwater sampling took place from June 10 through June 13, 1992. This event occurred nearly 34 months after the second groundwater restoration at the RM1 site and was the fifteenth sampling event since UCG operations ceased. Samples were collected for analyses of a limited suite of parameters as listed in Table 1. With a few exceptions, the groundwater is near baseline conditions. Data from the field measurements and analysis of samples are presented. Benzene concentrations in the groundwater were below analytical detection limits

  15. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.

  16. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul

    2017-01-01

    is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  17. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
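
    A minimal sketch of such an approximate Bayesian evaluation (all numbers hypothetical): a Laplace (Gaussian) approximation to the posterior of a measurand with a Gaussian prior, using only numerical optimization and simple algebra rather than integration or MCMC, in the spirit the abstract describes.

```python
import numpy as np

# Hypothetical measurement: repeated readings of a quantity with known
# measurement standard deviation, plus a Gaussian prior on the measurand.
data = np.array([10.2, 10.4, 9.9, 10.1, 10.3])
sigma = 0.2                         # known measurement standard deviation
mu0, tau = 10.0, 0.5                # prior mean and prior standard deviation

def neg_log_post(mu):
    # negative log-posterior up to an additive constant
    return (np.sum((data - mu) ** 2) / (2 * sigma ** 2)
            + (mu - mu0) ** 2 / (2 * tau ** 2))

# Optimization step: a dense 1-D grid search is crude but sufficient here.
grid = np.linspace(9, 11, 20001)
mu_hat = grid[np.argmin([neg_log_post(m) for m in grid])]

# Laplace step: curvature at the mode gives the approximate posterior
# standard uncertainty, u = 1/sqrt(d^2/dmu^2 of the negative log-posterior).
h = 1e-4
curv = (neg_log_post(mu_hat + h) - 2 * neg_log_post(mu_hat)
        + neg_log_post(mu_hat - h)) / h ** 2
u = 1.0 / np.sqrt(curv)
print("estimate:", mu_hat, "standard uncertainty:", u)
```

In this conjugate Gaussian case the Laplace approximation reproduces the exact posterior mean and standard deviation, which makes it a convenient check; for non-Gaussian posteriors it remains an approximation of exactly the easy-to-implement kind the abstract advocates.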

  18. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    Science.gov (United States)

    Liu, Haofei; Sun, Wei

    2017-08-01

    Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of material models, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in ABAQUS.

  19. A novel Cs-(129)Xe atomic spin gyroscope with closed-loop Faraday modulation.

    Science.gov (United States)

    Fang, Jiancheng; Wan, Shuangai; Qin, Jie; Zhang, Chen; Quan, Wei; Yuan, Heng; Dong, Haifeng

    2013-08-01

    We report a novel Cs-(129)Xe atomic spin gyroscope (ASG) with a closed-loop Faraday modulation method. The ASG requires approximately 30 min to start up and operates at 110 °C. A closed-loop Faraday modulation method was used to measure the optical rotation in this ASG. The method uses an additional Faraday modulator to suppress fluctuations in laser intensity and thermally induced fluctuations of the Faraday modulator. We validated the method theoretically and experimentally in the Cs-(129)Xe ASG and achieved a bias stability of approximately 3.25 °/h.

  20. Approximate self-consistent potentials for density-functional-theory exchange-correlation functionals

    International Nuclear Information System (INIS)

    Cafiero, Mauricio; Gonzalez, Carlos

    2005-01-01

    We show that potentials for exchange-correlation functionals within the Kohn-Sham density-functional-theory framework may be written as potentials for simpler functionals multiplied by a factor close to unity and, in a self-consistent field calculation, these effective potentials find the correct self-consistent solutions. This simple theory is demonstrated with self-consistent exchange-only calculations of the atomization energies of some small molecules using the Perdew-Kurth-Zupan-Blaha (PKZB) meta-generalized-gradient-approximation (meta-GGA) exchange functional. The atomization energies obtained with our method agree with or surpass previous meta-GGA calculations performed in a non-self-consistent manner. The results of this work suggest the utility of this simple theory for approximating exchange-correlation potentials corresponding to energy functionals too complicated to yield closed forms for their potentials. We hope that this method will encourage the development of complex functionals which have correct boundary conditions and are free of self-interaction errors, without the worry that the functionals are too complex to differentiate to obtain potentials.

  1. Closed orbit analysis for RHIC

    International Nuclear Information System (INIS)

    Milutinovic, J.; Ruggiero, A.G.

    1989-01-01

    We examine the effects of four types of errors in the RHIC dipoles and quadrupoles on the on-momentum closed orbit in the machine. We use PATRIS both to handle statistically the effects of kick-modeled errors and to check the performance of the Fermilab correcting scheme in the framework of a more realistic modeling. On the basis of the accepted rms values of the lattice errors, we conclude that in about 40% of all studied cases the lattice must be pre-corrected to some extent in the framework of the so-called ''first turn around'' strategy, in order to obtain a closed orbit within the aperture limitations at all; furthermore, for approximately 2/3 of the remaining cases we find that a single pass of the Fermilab scheme algorithm is not sufficient to bring closed-orbit distortions down to acceptable levels. We have modified the scheme to allow repeated applications of the otherwise unchanged three-bump method, and in doing so we have been able to correct the orbit satisfactorily. 4 refs., 2 figs., 3 tabs.

  2. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspections, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in allocating samples to up to three verification methods. The objective of an IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for 1. sample approximate-allocation with the correctly applied standard binomial approximation, 2. sample approximate-allocation with the improved binomial approximation, and 3. sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
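
    The gap between the with-replacement and without-replacement models can be seen directly by comparing detection probabilities; the numbers below are illustrative, not taken from IAEA practice:

```python
from math import comb

def p_detect_hypergeom(N, D, n):
    """P(at least one defect in a sample of n items, drawn without
    replacement from N items of which D are defective)."""
    if n > N - D:
        return 1.0
    return 1.0 - comb(N - D, n) / comb(N, n)

def p_detect_binom(N, D, n):
    """Same probability under the with-replacement (binomial) approximation."""
    return 1.0 - (1.0 - D / N) ** n

N, D, n = 100, 5, 20               # population, defectives, sample size (made up)
ph = p_detect_hypergeom(N, D, n)   # ~0.681
pb = p_detect_binom(N, D, n)       # ~0.642
```

    Sampling without replacement always detects at least as often, so the binomial shortcut understates the detection probability and hence inflates the sample size needed to reach a target detection goal.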

  3. Transverse signal decay under the weak field approximation: Theory and validation.

    Science.gov (United States)

    Berman, Avery J L; Pike, G Bruce

    2018-07-01

    To derive an expression for the transverse signal time course from systems in the motional narrowing regime, such as water diffusing in blood. This was validated in silico and experimentally with ex vivo blood samples. A closed-form solution (CFS) for transverse signal decay under any train of refocusing pulses was derived using the weak field approximation. The CFS was validated via simulations of water molecules diffusing in the presence of spherical perturbers, with a range of sizes and under various pulse sequences. The CFS was compared with more conventional fits assuming monoexponential decay, including chemical exchange, using ex vivo blood Carr-Purcell-Meiboom-Gill data. From simulations, the CFS was shown to be valid in the motional narrowing regime and partially into the intermediate dephasing regime, with increased accuracy with increasing Carr-Purcell-Meiboom-Gill refocusing rate. In theoretical calculations of the CFS, fitting for the transverse relaxation rate (R2) gave excellent agreement with the weak field approximation expression for R2 for Carr-Purcell-Meiboom-Gill sequences, but diverged for free induction decay. These same results were confirmed in the ex vivo analysis. Transverse signal decay in the motional narrowing regime can be accurately described analytically. This theory has applications in areas such as tissue iron imaging, relaxometry of blood, and contrast agent imaging. Magn Reson Med 80:341-350, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Measuring the critical current in superconducting samples made of NT-50 under pulse irradiation by high-energy particles

    International Nuclear Information System (INIS)

    Vasilev, P.G.; Vladimirova, N.M.; Volkov, V.I.; Goncharov, I.N.; Zajtsev, L.N.; Zel'dich, B.D.; Ivanov, V.I.; Kleshchenko, E.D.; Khvostov, V.B.

    1981-01-01

    The results of tests of superconducting samples of an uninsulated wire of 0.5 mm diameter, containing 1045 superconducting filaments of 10 μm diameter made of NT-50 superconductor in a copper matrix, are given. The upper part of the sample (''closed'') is placed between two glass-cloth-base laminate plates of 50 mm length, and the lower part (''open''), of 45 mm length, is immersed in liquid helium. The sample is located perpendicular to the magnetic field of a superconducting solenoid and is irradiated by charged-particle beams with energies of several GeV. Measurement results are given for the permissible energy release in the sample as a function of subcriticality (I/I_c, where I is the operating current through the sample and I_c is the critical current in the absence of the beam) and of the particle flux density, as well as for the maximum permissible fluence as a function of subcriticality. In the case of the ''closed'' sample irradiated by short pulses (approximately 1 ms) for I/I_c [ru

  5. The random phase approximation

    International Nuclear Information System (INIS)

    Schuck, P.

    1985-01-01

    RPA is the adequate theory to describe vibrations of the nucleus of very small amplitude. These vibrations can either be forced by an external electromagnetic field or can be eigenmodes of the nucleus. In a one-dimensional analogue, the potential corresponding to such eigenmodes of very small amplitude should be rather stiff; otherwise the motion risks becoming a large-amplitude one and entering a region where the approximation is not valid. This means that nuclei which are supposedly well described by RPA must have a very stable ground-state configuration (must, e.g., be very stiff against deformation). This is usually the case for doubly magic or near-magic nuclei, and for nuclei in the middle of proton and neutron shells, which develop a very stable ground-state deformation; we take deformation as an example, but there are many other possible degrees of freedom, such as compression modes, isovector degrees of freedom, spin degrees of freedom, and many more.

  6. A proof of the Woodward-Lawson sampling method for a finite linear array

    Science.gov (United States)

    Somers, Gary A.

    1993-01-01

    An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
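
    The discrete analogue of this reconstruction can be sketched with a DFT: for an N-element uniform array, N samples of the array factor at equispaced points in the variable ψ determine the excitations, and hence the factor everywhere (a generic sketch, not the paper's derivation; the excitation values are made up):

```python
import cmath

def array_factor(excitations, psi):
    """AF(psi) = sum_n a_n * exp(j*n*psi) for a uniform linear array."""
    return sum(a * cmath.exp(1j * n * psi) for n, a in enumerate(excitations))

def reconstruct_excitations(samples):
    """Recover the N excitations from the N samples AF(2*pi*k/N)
    via an inverse DFT; no matrix inversion is needed."""
    N = len(samples)
    return [sum(samples[k] * cmath.exp(-1j * n * 2 * cmath.pi * k / N)
                for k in range(N)) / N
            for n in range(N)]

a = [1.0, 0.5 + 0.2j, -0.3, 0.8j, 0.1]        # arbitrary (made-up) excitations
samples = [array_factor(a, 2 * cmath.pi * k / len(a)) for k in range(len(a))]
a_rec = reconstruct_excitations(samples)
# a_rec matches a, so AF is available in closed form at every angle.
```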

  7. Function approximation using combined unsupervised and supervised learning.

    Science.gov (United States)

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires an exponentially increasing volume of data as the dimensionality of the data increases. At the same time, high-dimensional data is often arranged around a much lower-dimensional manifold. Here we propose breaking the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower-dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower-dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single-hidden-layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that the neural networks using combined unsupervised and supervised learning indeed outperform, in most cases, the neural networks that learn the function approximation using the original high-dimensional data.
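
    A minimal sketch of the two-step idea, with a tiny 1-D SOM for the unsupervised mapping and a per-node lookup table standing in for the supervised network (the quarter-circle data set, node count, and learning schedule are all illustrative assumptions, not the paper's setup):

```python
import math
import random

random.seed(7)

# Data on a 1-D manifold (a quarter circle) embedded in 2-D, with target
# values that depend only on the manifold coordinate t.
pts = []
for _ in range(400):
    t = random.random() * math.pi / 2
    pts.append(((math.cos(t), math.sin(t)), math.sin(3 * t)))

def winner(nodes, x):
    return min(range(len(nodes)),
               key=lambda i: (nodes[i][0] - x[0]) ** 2 + (nodes[i][1] - x[1]) ** 2)

# Step 1: unsupervised mapping -- a tiny 1-D SOM with 20 nodes.
nodes = [(random.random(), random.random()) for _ in range(20)]
for epoch in range(30):
    lr = 0.3 * (1 - epoch / 30)                 # shrinking learning rate
    radius = max(1.0, 4 * (1 - epoch / 30))     # shrinking neighborhood
    for x, _ in pts:
        w = winner(nodes, x)
        for i, (nx, ny) in enumerate(nodes):
            h = math.exp(-((i - w) ** 2) / (2 * radius ** 2))
            nodes[i] = (nx + lr * h * (x[0] - nx), ny + lr * h * (x[1] - ny))

# Step 2: supervised fit on the mapped coordinate (mean target per node,
# a crude stand-in for the hidden-layer network in the paper).
sums, cnts = [0.0] * len(nodes), [0] * len(nodes)
for x, y in pts:
    w = winner(nodes, x)
    sums[w] += y
    cnts[w] += 1
table = [s / c if c else 0.0 for s, c in zip(sums, cnts)]

def predict(x):
    return table[winner(nodes, x)]
```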

  8. Supernova explosions in close binary systems. Pt. 2

    International Nuclear Information System (INIS)

    Sutantyo, W.

    1975-01-01

    The effect of a spherically symmetric explosion on the runaway velocity of a close binary system with an initial circular orbit is considered. It is shown that the runaway velocity is completely determined by the final orbital parameters, regardless of the initial conditions. The galactic z distribution of the known massive X-ray binaries indicates that the runaway velocities of these systems are very probably smaller than approximately 100 km/s, with the most likely values approximately 25-50 km/s. Such runaway velocities can be obtained if the post-explosion eccentricities are less than approximately 0.25. This has the consequence that the mass of the exploded star which produced the neutron stars in the massive X-ray binaries can in most cases not have been larger than approximately 7-8 M_⊙, with the most likely values approximately 3-4 M_⊙, if the supergiants in these systems have masses (M_2) of approximately 20 M_⊙. For Cyg X-1, the upper mass limit of the exploded star is found to be approximately 16 M_⊙. For M_2 = 30 M_⊙ these upper limits become approximately 9-10 M_⊙ and 19 M_⊙, respectively. (orig.) [de

  9. Approximation of the Thomas-Fermi-Dirac potential for neutral atoms

    International Nuclear Information System (INIS)

    Jablonski, A.

    1992-01-01

    The frequently used analytical expression of Bonham and Strand approximating the Thomas-Fermi-Dirac (TFD) potential is closely analyzed. This expression does not satisfy the boundary conditions of the TFD differential equation; in particular, it does not incorporate the finite radius of the TFD potential. A modification of the analytical expression is proposed to adjust it to the boundary conditions. A new fit is made on the basis of the variational formulation of the TFD problem. An attempt is also made in the present work to develop a new numerical procedure providing very accurate solutions of this problem. Such solutions form a reference against which to check the quality of analytical approximations. Exemplary calculations of the elastic scattering cross sections are made for different expressions approximating the TFD potential to visualize the influence of the inaccuracies of the fit. It seems that elastic scattering calculations should be based on extensive tables of accurate values of the TFD screening function rather than on fitted analytical expressions. (orig.)

  10. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, such methods, e.g., Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
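
    The core move, climbing a gradient estimated from noisy simulations, can be sketched with a Kiefer-Wolfowitz-style finite-difference scheme (a simplified relative of the authors' algorithm; the simulator, summary statistic, and gain sequences below are made up):

```python
import random

random.seed(0)

def simulate_summary(theta, m=50):
    """Hypothetical simulator: mean of m draws from N(theta, 1)."""
    return sum(random.gauss(theta, 1.0) for _ in range(m)) / m

s_obs = 1.3   # observed summary statistic (made-up data)

def noisy_objective(theta):
    # Negative squared distance between simulated and observed summaries;
    # each call re-runs the simulator, so the value is noisy.
    return -(simulate_summary(theta) - s_obs) ** 2

theta = 0.0
for t in range(1, 2001):
    c = 0.5 / t ** 0.25    # finite-difference half-width, shrinking slowly
    a = 0.5 / t ** 0.6     # step size, shrinking faster (Kiefer-Wolfowitz gains)
    g = (noisy_objective(theta + c) - noisy_objective(theta - c)) / (2 * c)
    theta += a * g
# theta ends up near s_obs, the maximizer of the expected objective
```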

  11. Validation of a 16-Item Short Form of the Czech Version of the Experiences in Close Relationships Revised Questionnaire in a Representative Sample

    Czech Academy of Sciences Publication Activity Database

    Kaščáková, N.; Husárová, D.; Hašto, J.; Kolarčik, P.; Poláčková Šolcová, Iva; Madarasová Gecková, A.; Tavel, P.

    2016-01-01

    Vol. 119, No. 3 (2016), pp. 804-825 ISSN 0033-2941 Institutional support: RVO:68081740 Keywords: Short form of the ECR-R * Experiences in Close Relationships Revised Questionnaire * validation * attachment anxiety * attachment avoidance * attachment styles * representative sample Subject RIV: AN - Psychology Impact factor: 0.629, year: 2016

  12. Pornography use and closeness with others in women

    Directory of Open Access Journals (Sweden)

    Popović Miodrag

    2011-01-01

    Full Text Available Introduction. Closeness/intimacy and pornography are sometimes linked and frequently presented as competing with each other. They have been the subject of some research, but many issues in the area remain controversial and indeterminate. Objective. The aim of this pilot study was to establish whether female pornography users' and non-users' ratings of socio-emotional closeness differed, i.e. to examine the association between pornography use and aspects of socio-emotional closeness in a non-clinical sample of females. Methods. Sixty-six females participated in the study. Their actual and ideal socio-emotional closeness was measured by the Perceived Interpersonal Closeness Scale (PICS), while their pornography use was examined by the Background and Pornography Use Information Questionnaire. Potential links between the two variables and comparisons with the relevant results obtained by males are presented. Results. The results showed that there were no significant differences between self-reported female pornography users and non-users in terms of total closeness numbers and scores, and also in specific socio-emotional closeness with the most significant adults in their lives (i.e., partners, closest friends, mothers and fathers). Conclusion. The results confirmed that there were differences between females' and males' approaches to pornography and closeness; females had lower interest in pornography, and their use of it was not associated with higher total closeness numbers and scores. Due to the participant group's size (N) limitations, this sample served rather for preliminary investigations providing some elementary insight into females' relevant behaviours. Further investigation of pornography's complex links with socio-emotional and sexual closeness in larger samples may allow more reliable comparisons between gender and pornography-user groups.

  13. Impulse approximation in solid helium

    International Nuclear Information System (INIS)

    Glyde, H.R.

    1985-01-01

    The incoherent dynamic form factor S_i(Q, ω) is evaluated in solid helium for comparison with the impulse approximation (IA). The purpose is to determine the Q values for which the IA is valid for systems such as helium, where the atoms interact via a potential having a steeply repulsive but not infinite hard core. For ³He, S_i(Q, ω) is evaluated from first principles, beginning with the pair potential. The density of states g(ω) is evaluated using self-consistent phonon theory, and S_i(Q, ω) is expressed in terms of g(ω). For solid ⁴He, reasonable models of g(ω) using observed input parameters are used to evaluate S_i(Q, ω). In both cases S_i(Q, ω) is found to approach the impulse approximation S_IA(Q, ω) closely for wave vector transfers Q ≳ 20 Å⁻¹. The difference between S_i and S_IA, which is due to final-state interactions of the scattering atom with the remainder of the atoms in the solid, is also predominantly antisymmetric in (ω − ω_R), where ω_R is the recoil frequency. This suggests that the symmetrization procedure proposed by Sears to eliminate final-state contributions should work well in solid helium.

  14. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distributions of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample such irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
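
    A generic one-point-per-step scheme in this spirit is the randomized Kaczmarz update for basis-function coefficients: each iteration touches a single data point and needs no matrix operations (a stand-in sketch, not the authors' SA/NNR algorithms; the basis, target, and irregular point set are illustrative):

```python
import random

random.seed(1)

def basis(x):
    return [1.0, x, x * x]                    # simple polynomial basis (illustrative)

def target(x):
    return 0.5 - 1.0 * x + 2.0 * x * x        # lies in the span, so exact recovery is possible

# Highly irregular point set: a cluster near 0 plus a few far points.
xs = [random.random() * 0.1 for _ in range(30)] + [0.7, 0.8, 0.95]
data = [(x, target(x)) for x in xs]

c = [0.0, 0.0, 0.0]
for _ in range(20000):
    x, y = random.choice(data)                # one data point per step
    phi = basis(x)
    r = y - sum(ci * pi for ci, pi in zip(c, phi))   # residual at this point
    s = sum(p * p for p in phi)
    c = [ci + r * pi / s for ci, pi in zip(c, phi)]  # Kaczmarz projection step
# c approaches the true coefficients (0.5, -1.0, 2.0)
```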

  15. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    International Nuclear Information System (INIS)

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-01-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm

  16. Mathieu functions and its useful approximation for elliptical waveguides

    Science.gov (United States)

    Pillay, Shamini; Kumar, Deepak

    2017-11-01

    The standard form of the Mathieu differential equation is y″ + (a − 2q cos 2x)y = 0, where a and q are real parameters and q > 0. In this paper we obtain closed formulas for the generic terms of expansions of modified Mathieu functions in terms of Bessel and modified Bessel functions in the following cases. Let ξ₀ = ξ₀ᵢ, where i can take the values 1 and 2, corresponding to the first and the second boundary. These approximations also provide alternative methods for the numerical evaluation of Mathieu functions.

  17. Determination of the complex refractive index segments of turbid sample with multispectral spatially modulated structured light and models approximation

    Science.gov (United States)

    Meitav, Omri; Shaul, Oren; Abookasis, David

    2017-09-01

    Spectral data enabling the derivation of a biological tissue sample's complex refractive index (CRI) can provide a range of valuable information in the clinical and research contexts. Specifically, changes in the CRI reflect alterations in tissue morphology and chemical composition, enabling its use as an optical marker during diagnosis and treatment. In the present work, we report a method for estimating the real and imaginary parts of the CRI of a biological sample using Kramers-Kronig (KK) relations in the spatial frequency domain. In this method, phase-shifted sinusoidal patterns at a single high spatial frequency are serially projected onto the sample surface at different near-infrared wavelengths while a camera mounted normal to the sample surface acquires the reflected diffuse light. In the offline analysis pipeline, recorded images at each wavelength are converted to spatial phase maps using KK analysis and are then calibrated against phase models derived from the diffusion approximation. The amplitude of the reflected light, together with the phase data, is then introduced into the Fresnel equations to resolve both real and imaginary parts of the CRI at each wavelength. The technique was validated in tissue-mimicking phantoms with known optical parameters and in mouse models of ischemic injury and heat stress. Experimental data obtained indicate variations in the CRI in brain tissue suffering from injury. CRI fluctuations correlated with alterations in the scattering and absorption coefficients of the injured tissue are demonstrated. This technique for deriving dynamic changes in the CRI of tissue may be further developed as a clinical diagnostic tool and for biomedical research applications. To the best of our knowledge, this is the first report of the estimation of the spectral CRI of a mouse head following injury obtained in the spatial frequency domain.
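
    The Fresnel step at the end has a simple special case worth keeping in mind: at normal incidence, reflectance follows from the complex index n + ik in closed form (generic optics, not the paper's full pipeline; the index values are illustrative):

```python
def normal_incidence_reflectance(n, k):
    """Fresnel reflectance at normal incidence from air onto a medium
    with complex refractive index n + i*k: R = |(1 - m)/(1 + m)|^2."""
    m = complex(n, k)
    r = (1 - m) / (1 + m)     # complex amplitude reflection coefficient
    return abs(r) ** 2

# Example: tissue-like index (illustrative values, not measured data).
R = normal_incidence_reflectance(1.4, 0.01)
# A nonzero extinction coefficient k raises R slightly over the lossless case,
# which is what lets reflectance measurements constrain the imaginary part.
```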

  18. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
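
    The usual first-order technique here is the delta method, Var[f(X)] ≈ Σ_i (∂f/∂x_i)² Var[x_i], and for a product of two independent inputs the neglected term is known exactly, so the approximation error can be computed without any sampling (the input numbers are illustrative, not from the paper):

```python
# First-order ("delta method") variance propagation vs. the exact result
# for f(x, y) = x * y with independent inputs.
mx, vx = 1e-3, (5e-4) ** 2     # mean and variance of one failure probability
my, vy = 2e-3, (1e-3) ** 2     # mean and variance of the other

approx = my ** 2 * vx + mx ** 2 * vy   # first-order propagation
exact = approx + vx * vy               # Var(XY) = my^2*vx + mx^2*vy + vx*vy

rel_err = (exact - approx) / exact
# The neglected term vx*vy makes the approximation about 11% low here;
# the error grows with the inputs' coefficients of variation.
```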

  19. Complete hierarchies of efficient approximations to problems in entanglement theory

    International Nuclear Information System (INIS)

    Eisert, Jens; Hyllus, Philipp; Guehne, Otfried; Curty, Marcos

    2004-01-01

    We investigate several problems in entanglement theory from the perspective of convex optimization. This list of problems comprises (A) the decision whether a state is multiparty entangled, (B) the minimization of expectation values of entanglement witnesses with respect to pure product states, (C) the closely related evaluation of the geometric measure of entanglement to quantify pure multiparty entanglement, (D) the test whether states are multiparty entangled on the basis of witnesses based on second moments and on the basis of linear entropic criteria, and (E) the evaluation of instances of maximal output purities of quantum channels. We show that these problems can be formulated as certain optimization problems: as polynomially constrained problems employing polynomials of degree 3 or less. We then apply recently established methods from the theory of semidefinite relaxations to the formulated optimization problems. By this construction we arrive at a hierarchy of efficiently solvable approximations to the solution, approximating the exact solution as closely as desired, in a way that is asymptotically complete. For example, this results in a hierarchy of efficiently decidable sufficient criteria for multiparticle entanglement, such that every entangled state will necessarily be detected in some step of the hierarchy. Finally, we present numerical examples to demonstrate the practical accessibility of this approach.

  20. Fast and Analytical EAP Approximation from a 4th-Order Tensor.

    Science.gov (United States)

    Ghosh, Aurobrata; Deriche, Rachid

    2012-01-01

    Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) profiles using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example the orientation distribution function (ODF), since it is considerably difficult to recover the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.

  1. Respiratory Motion Correction for Compressively Sampled Free Breathing Cardiac MRI Using Smooth l1-Norm Approximation

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2018-01-01

    Full Text Available Transformed-domain sparsity of Magnetic Resonance Imaging (MRI) has recently been used to reduce acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during MR scans results in strong blurring and ghosting artifacts in the recovered MR images. To improve the quality of the recovered images, motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free-breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem with a gradient descent algorithm. The l1-norm based regularizer used in the optimization problem is approximated by a hyperbolic tangent function. In the second step, a block matching algorithm, known as Adaptive Rood Pattern Search (ARPS), is exploited to estimate and correct respiratory motion among the recovered images. The framework is tested on free-breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE) at different acceleration factors for the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
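
    The smoothing trick can be shown in isolation: |x| is replaced by x·tanh(βx), which is differentiable everywhere, so plain gradient descent applies (a generic sketch of a tanh-based l1 surrogate; β, λ, the step size, and the scalar example are assumptions, not the article's settings):

```python
import math

def smooth_abs(x, beta=50.0):
    """Smooth surrogate for |x|: x*tanh(beta*x), differentiable everywhere."""
    return x * math.tanh(beta * x)

def smooth_abs_grad(x, beta=50.0):
    t = math.tanh(beta * x)
    return t + beta * x * (1.0 - t * t)

# Tiny 1-D illustration of the resulting shrinkage: minimize
# 0.5*(x - y)^2 + lam * smooth_abs(x) by plain gradient descent.
y, lam = 0.05, 0.1
x = y                                   # start at the data value
for _ in range(500):
    x -= 0.1 * ((x - y) + lam * smooth_abs_grad(x))
# x is pulled toward zero relative to y, mimicking l1 shrinkage
```

    Away from zero the surrogate tracks |x| closely for moderate β, while its gradient vanishes smoothly at x = 0, which is what removes the non-differentiability that blocks ordinary gradient descent on the true l1 norm.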

  2. Implementation of the CCGM approximation for surface diffraction using Wigner R-matrix theory

    International Nuclear Information System (INIS)

    Lauderdale, J.G.; McCurdy, C.W.

    1983-01-01

    The CCGM approximation for surface scattering proposed by Cabrera, Celli, Goodman, and Manson [Surf. Sci. 19, 67 (1970)] is implemented for realistic surface interaction potentials using Wigner R-matrix theory. The resulting procedure is highly efficient computationally and is in no way limited to hard-wall or purely repulsive potentials. Comparison is made with the results of close-coupling calculations of other workers which include the same diffraction channels, in order to fairly evaluate the CCGM approximation, which is an approximation to the coupled-channels Lippmann-Schwinger equation for the T matrix. The shapes of selective adsorption features, whether maxima or minima, in the scattered intensity are well represented in this approach for cases in which the surface corrugation is not too strong.

  3. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
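As a rough illustration of the idea (not the authors' algorithm), the sketch below runs ensemble sampling on a toy Gaussian bandit: instead of sampling from an exact posterior as Thompson sampling would, it keeps a small ensemble of independently perturbed estimates and acts greedily under one member drawn at random each step. The perturbation scheme and hyperparameters are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_sampling(true_means, n_models=10, horizon=500, noise=1.0):
    k = len(true_means)
    # Each ensemble member starts from an independent perturbed prior draw.
    models = rng.normal(0.0, 1.0, size=(n_models, k))
    counts = np.zeros((n_models, k))
    for _ in range(horizon):
        m = rng.integers(n_models)        # sample one member uniformly
        arm = int(np.argmax(models[m]))   # act greedily under that member
        reward = true_means[arm] + rng.normal(0.0, noise)
        for j in range(n_models):
            # Each member trains on its own perturbed copy of the observation,
            # which keeps the ensemble spread out (a stand-in for posterior
            # uncertainty).
            counts[j, arm] += 1
            perturbed = reward + rng.normal(0.0, noise)
            models[j, arm] += (perturbed - models[j, arm]) / counts[j, arm]
    return models.mean(axis=0)            # ensemble-averaged value estimates

est = ensemble_sampling(np.array([0.1, 0.5, 0.9]))
print(est)
```

Sampling a member and acting greedily replaces the posterior draw of Thompson sampling, which is what keeps the method tractable when each "model" is something expensive like a neural network.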

  4. Hydrogen atom in a magnetic field: Ghost orbits, catastrophes, and uniform semiclassical approximations

    International Nuclear Information System (INIS)

    Main, J.; Wunner, G.

    1997-01-01

    Applying closed-orbit theory to the recurrence spectra of the hydrogen atom in a magnetic field, one can interpret most, but not all, structures semiclassically in terms of closed classical orbits. In particular, conventional closed-orbit theory fails near bifurcations of orbits where semiclassical amplitudes exhibit unphysical divergences. Here we analyze the role of ghost orbits living in complex phase space. The ghosts can explain resonance structures in the spectra of the hydrogen atom in a magnetic field at positions where no real orbits exist. For three different types of catastrophes, viz. fold, cusp, and butterfly catastrophes, we construct uniform semiclassical approximations and demonstrate that these solutions are completely determined by classical parameters of the real orbits and complex ghosts. © 1997 The American Physical Society.

  5. Strong semiclassical approximation of Wigner functions for the Hartree dynamics

    KAUST Repository

    Athanassoulis, Agissilaos; Paul, Thierry; Pezzotti, Federica; Pulvirenti, Mario

    2011-01-01

    We consider the Wigner equation corresponding to a nonlinear Schrödinger evolution of the Hartree type in the semiclassical limit ℏ → 0. Under appropriate assumptions on the initial data and the interaction potential, we show that the Wigner function is close in L² to its weak limit, the solution of the corresponding Vlasov equation. The strong approximation allows the construction of semiclassical operator-valued observables, approximating their quantum counterparts in Hilbert-Schmidt topology. The proof makes use of a pointwise-positivity manipulation, which seems necessary in working with the L² norm and the precise form of the nonlinearity. We employ the Husimi function as a pivot between the classical probability density and the Wigner function, which, as is well known, is not pointwise positive in general.

  6. New fuzzy approximate model for indirect adaptive control of distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2014-06-01

    This paper studies the problem of controlling a parabolic solar collector, which consists of forcing the outlet oil temperature to track a set reference despite possible environmental disturbances. An approximate model is proposed to simplify the controller design. The presented controller is an indirect adaptive law designed on the fuzzy model with soft sensing of the solar irradiance intensity. The proposed approximate model yields a simple, low-dimensional set of nonlinear ordinary differential equations that reproduces the dynamical behavior of the system while taking its infinite dimension into account. Stability of the closed-loop system is ensured by resorting to Lyapunov control functions for an indirect adaptive controller.

  7. New fuzzy approximate model for indirect adaptive control of distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2014-01-01

    This paper studies the problem of controlling a parabolic solar collector, which consists of forcing the outlet oil temperature to track a set reference despite possible environmental disturbances. An approximate model is proposed to simplify the controller design. The presented controller is an indirect adaptive law designed on the fuzzy model with soft sensing of the solar irradiance intensity. The proposed approximate model yields a simple, low-dimensional set of nonlinear ordinary differential equations that reproduces the dynamical behavior of the system while taking its infinite dimension into account. Stability of the closed-loop system is ensured by resorting to Lyapunov control functions for an indirect adaptive controller.

  8. Links between Adolescents’ Closeness to Adoptive Parents and Attachment Style in Young Adulthood

    Science.gov (United States)

    Grant-Marsney, Holly A.; Grotevant, Harold D.; Sayer, Aline G.

    2014-01-01

    This study examined whether adolescents’ closeness to adoptive parents (APs) predicted attachment styles in close relationships outside their family during young adulthood. In a longitudinal study of domestic infant adoptions, closeness to adoptive mother and adoptive father was assessed in 156 adolescents (M = 15.7 years). Approximately nine years later (M = 25.0 years), closeness to parents was assessed again as well as attachment style in their close relationships. Multilevel modeling was used to predict attachment style in young adulthood from the average and discrepancy of closeness to adolescents’ adoptive mothers and fathers and the change over time in closeness to APs. Less avoidant attachment style was predicted by stronger closeness to both APs during adolescence. Increased closeness to APs over time was related to less anxiety in close relationships. Higher closeness over time to either AP was related to less avoidance and anxiety in close relationships. PMID:25859067

  9. Links between Adolescents' Closeness to Adoptive Parents and Attachment Style in Young Adulthood.

    Science.gov (United States)

    Grant-Marsney, Holly A; Grotevant, Harold D; Sayer, Aline G

    2015-04-01

    This study examined whether adolescents' closeness to adoptive parents (APs) predicted attachment styles in close relationships outside their family during young adulthood. In a longitudinal study of domestic infant adoptions, closeness to adoptive mother and adoptive father was assessed in 156 adolescents (M = 15.7 years). Approximately nine years later (M = 25.0 years), closeness to parents was assessed again as well as attachment style in their close relationships. Multilevel modeling was used to predict attachment style in young adulthood from the average and discrepancy of closeness to adolescents' adoptive mothers and fathers and the change over time in closeness to APs. Less avoidant attachment style was predicted by stronger closeness to both APs during adolescence. Increased closeness to APs over time was related to less anxiety in close relationships. Higher closeness over time to either AP was related to less avoidance and anxiety in close relationships.

  10. The triangular density to approximate the normal density: decision rules-of-thumb

    International Nuclear Information System (INIS)

    Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.

    2003-01-01

    In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that is surprisingly accurate in many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extension of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that the use of the approximation generates. Such rules-of-thumb, which are useful in risk and reliability analysis and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will provoke other practitioners to explore its possible use in various domains and applications.
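A minimal sketch of the basic symmetric case: matching the triangle's variance to the normal's (one common convention; the paper's exact matching rule is not stated in the abstract) gives half-width c = σ√6, since a symmetric triangle of half-width c has variance c²/6:

```python
import math

def tri_pdf(x, mu=0.0, sigma=1.0):
    # Symmetric triangular density matched to N(mu, sigma^2) by variance:
    # half-width c = sigma * sqrt(6), peak height 1/c at x = mu.
    c = sigma * math.sqrt(6.0)
    d = abs(x - mu)
    return max(0.0, (c - d) / c ** 2)

def norm_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Peak heights: triangle 1/sqrt(6) ~ 0.408 vs normal ~ 0.399; unlike the
# normal, the triangle has bounded support [mu - c, mu + c].
print(tri_pdf(0.0), norm_pdf(0.0))
```

The piecewise-linear density integrates in closed form, which is the tractability the paper exploits for embedded-normal problems.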

  11. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the previously obtained self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.

  12. Asteroseismic effects in close binary stars

    Science.gov (United States)

    Springer, Ofer M.; Shaviv, Nir J.

    2013-09-01

    Turbulent processes in the convective envelopes of the Sun and stars have been shown to be a source of internal acoustic excitations. In single stars, acoustic waves having frequencies below a certain cut-off frequency propagate nearly adiabatically and are effectively trapped below the photosphere where they are internally reflected. This reflection essentially occurs where the local wavelength becomes comparable to the pressure scale height. In close binary stars, the sound speed is a constant on equipotentials, while the pressure scale height, which depends on the local effective gravity, varies on equipotentials and may be much greater near the inner Lagrangian point (L1). As a result, waves reaching the vicinity of L1 may propagate unimpeded into low-density regions, where they tend to dissipate quickly due to non-linear and radiative effects. We study the three-dimensional propagation and enhanced damping of such waves inside a set of close binary stellar models using a WKB approximation of the acoustic field. We find that these waves can have much higher damping rates in close binaries, compared to their non-binary counterparts. We also find that the relative distribution of acoustic energy density at the visible surface of close binaries develops a ring-like feature at specific acoustic frequencies and binary separations.

  13. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (the modulated Padé approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)

  14. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two-, three-, six-, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  15. A Bayesian Method for Weighted Sampling

    OpenAIRE

    Lo, Albert Y.

    1993-01-01

    Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...
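The record builds on Bayesian bootstrap ideas; a minimal sketch of the unweighted i.i.d. case (Rubin's original Bayesian bootstrap, not the BBC extension itself) draws posterior samples of the mean by reweighting the data:

```python
import numpy as np

rng = np.random.default_rng(1)

def bayesian_bootstrap_mean(data, n_draws=2000):
    # Bayesian bootstrap (Rubin, 1981): each posterior draw reweights the
    # observed data with flat Dirichlet(1, ..., 1) weights; the weighted
    # average is one draw from the posterior of the mean.
    data = np.asarray(data, dtype=float)
    w = rng.dirichlet(np.ones(len(data)), size=n_draws)  # (n_draws, n)
    return w @ data

draws = bayesian_bootstrap_mean([1.0, 2.0, 3.0, 4.0])
print(draws.mean())   # centers near the sample average 2.5
```

The spread of `draws` approximates posterior uncertainty about the mean without assuming a parametric likelihood, which is the building block the BBC approximation refines for weighted sampling.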

  16. Dispersive and Covalent Interactions between Graphene and Metal Surfaces from the Random Phase Approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Yan, Jun; Mortensen, Jens Jørgen

    2011-01-01

    We calculate the potential energy surfaces for graphene adsorbed on Cu(111), Ni(111), and Co(0001) using density functional theory and the random phase approximation (RPA). For these adsorption systems covalent and dispersive interactions are equally important, and while commonly used approximations for exchange-correlation functionals give inadequate descriptions of either van der Waals or chemical bonds, RPA accounts accurately for both. It is found that the adsorption is a delicate competition between a weak chemisorption minimum close to the surface and a physisorption minimum further from the surface.

  17. Fast and Analytical EAP Approximation from a 4th-Order Tensor

    Directory of Open Access Journals (Sweden)

    Aurobrata Ghosh

    2012-01-01

    Full Text Available Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.

  18. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    Science.gov (United States)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
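A generic sample-based CHO computation (not the authors' closed-form MAP expression) can be sketched as follows; the synthetic images and channel templates are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def cho_snr(signal_imgs, noise_imgs, channels):
    # Channelize: project each image onto a few channel templates.
    vs = signal_imgs @ channels              # (n_samples, n_channels)
    vn = noise_imgs @ channels
    dmean = vs.mean(axis=0) - vn.mean(axis=0)
    # Hotelling template w = S^{-1} (mean difference), with S the
    # average intra-class channel covariance.
    s = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(s, dmean)
    return float(np.sqrt(dmean @ w))

# Synthetic check: orthonormal channels and unit-variance pixel noise give
# channel covariance ~ I; a signal equal to the first channel template then
# yields an SNR close to 1.
n_pix, n_ch, n = 100, 3, 4000
channels, _ = np.linalg.qr(rng.normal(size=(n_pix, n_ch)))
noise_imgs = rng.normal(size=(n, n_pix))
signal_imgs = rng.normal(size=(n, n_pix)) + channels[:, 0]
print(cho_snr(signal_imgs, noise_imgs, channels))
```

The point of the paper's approximation is to obtain `dmean` and `s` analytically for MAP reconstructions instead of estimating them from a large Monte Carlo sample as done here.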

  19. Approximation of the Monte Carlo Sampling Method for Reliability Analysis of Structures

    Directory of Open Access Journals (Sweden)

    Mahdi Shadab Far

    2016-01-01

    Full Text Available Structural load types, on the one hand, and structural capacity to withstand these loads, on the other hand, are of a probabilistic nature as they cannot be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches towards the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved to be very convenient and efficient. However, it does suffer from certain disadvantages, the biggest one being the requirement of a very large number of samples to handle small probabilities, leading to a high computational cost. In this paper, a simple algorithm was proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. This revised approach was then presented in a step-by-step flowchart, for the purpose of easy programming and implementation.
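The rare-event difficulty the paper addresses can be seen in a crude Monte Carlo sketch with an assumed toy limit state (this is the baseline method, not the paper's revised algorithm):

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_failure_probability(n_samples=200_000):
    # Crude Monte Carlo for a toy limit state g = R - S:
    # resistance R ~ N(5, 1), load S ~ N(2, 1), failure when g < 0.
    # Exact: P(R - S < 0) = Phi(-3 / sqrt(2)) ~ 0.0169, small enough to
    # show why plain sampling needs many draws for rare events.
    r = rng.normal(5.0, 1.0, n_samples)
    s = rng.normal(2.0, 1.0, n_samples)
    return float(np.mean(r - s < 0.0))

print(mc_failure_probability())   # ~ 0.017
```

The relative error of the estimate scales like 1/sqrt(n·p), so for failure probabilities of 1e-5 or below the sample count becomes prohibitive, which motivates the small-sample algorithm proposed in the record.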

  20. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

    For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we have used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.

  1. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    Full Text Available In the usual Bayesian approach to survey sampling the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We will show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  2. Random-phase approximation and broken symmetry

    International Nuclear Information System (INIS)

    Davis, E.D.; Heiss, W.D.

    1986-01-01

    The validity of the random-phase approximation (RPA) in broken-symmetry bases is tested in an appropriate many-body system for which exact solutions are available. Initially the regions of stability of the self-consistent quasiparticle bases in this system are established and depicted in a 'phase' diagram. It is found that only stable bases can be used in an RPA calculation. This is particularly true for those RPA modes which are not associated with the onset of instability of the basis; it is seen that these modes do not describe any excited state when the basis is unstable, although from a formal point of view they remain acceptable. The RPA does well in a stable broken-symmetry basis provided one is not too close to a point where a phase transition occurs. This is true for both energies and matrix elements. (author)

  3. GEO-MIX-SELF calculations of the elastic properties of a textured graphite sample at different hydrostatic pressures

    International Nuclear Information System (INIS)

    Matthies, Siegfried

    2012-01-01

    The recently developed GEO-MIX-SELF approximation (GMS) is applied to interpret the pressure dependence of the longitudinal ultrasonic wave velocities in a polycrystalline graphite sample that has already been investigated in a wide range of experimental contexts. Graphite single crystals have extremely anisotropic elastic properties, making this sample a challenging test to demonstrate the potential of the GMS method. GMS combines elements of well known self-consistent algorithms and of the geometric mean approximation. It is able to consider mixtures of different polycrystalline phases, each with its own nonspherical grain shape and preferred orientation (texture). Pores and 'cracks', typical for bulk graphite, are modeled as phases with 'empty' grains. The pressure dependence (up to 150 MPa) of the experimental wave velocities can be well explained using the known texture of the sample by fitting the shape parameters and volume fractions of the graphite grains, cracks and spherical pores. The pressure dependence of these parameters describes a reasonable scenario for the closing of the cracks and pores with increasing pressure. (orig.)

  4. Approximate controllability of the Navier-Stokes system in unbounded domains

    International Nuclear Information System (INIS)

    Shorygin, P O

    2003-01-01

    The question of the approximate controllability for the 2- and the 3-dimensional Navier-Stokes system defined in the exterior of a bounded domain ω or in the entire space is studied. It is shown that one can find boundary controls or locally distributed controls (having support in a prescribed bounded domain) defined on the right-hand side of the system such that in prescribed time the solution of the Navier-Stokes system becomes arbitrarily close to an arbitrary prescribed divergence-free vector field

  5. Piecewise quadratic Lyapunov functions for stability verification of approximate explicit MPC

    Directory of Open Access Journals (Sweden)

    Morten Hovd

    2010-04-01

    Full Text Available Explicit MPC of constrained linear systems is known to result in a piecewise affine controller and therefore also piecewise affine closed-loop dynamics. The complexity of such analytic formulations of the control law can grow exponentially with the prediction horizon. The suboptimal solutions offer a trade-off in terms of complexity, and several approaches can be found in the literature for the construction of approximate MPC laws. In the present paper a piecewise quadratic (PWQ) Lyapunov function is used for the stability verification of approximate explicit Model Predictive Control (MPC). A novel relaxation method is proposed for the LMI criteria on the Lyapunov function design. This relaxation is applicable to the design of PWQ Lyapunov functions for discrete-time piecewise affine systems in general.

  6. An explicit approximate solution to the Duffing-harmonic oscillator by a cubication method

    International Nuclear Information System (INIS)

    Belendez, A.; Mendez, D.I.; Fernandez, E.; Marini, S.; Pascual, I.

    2009-01-01

    The nonlinear oscillations of a Duffing-harmonic oscillator are investigated by an approximate method based on the 'cubication' of the initial nonlinear differential equation. In this cubication method the restoring force is expanded in Chebyshev polynomials and the original nonlinear differential equation is approximated by a Duffing equation in which the coefficients of the linear and cubic terms depend on the initial amplitude, A. The replacement of the original nonlinear equation by an approximate Duffing equation allows us to obtain explicit approximate formulas for the frequency and the solution as a function of the complete elliptic integral of the first kind and the Jacobi elliptic function, respectively. These explicit formulas are valid for all values of the initial amplitude, and we conclude this cubication method works very well for the whole range of initial amplitudes. Excellent agreement of the approximate frequencies and periodic solutions with the exact ones is demonstrated and discussed, and the relative error for the approximate frequency is as low as 0.071%. Unlike other approximate methods applied to this oscillator, which are not capable of reproducing exactly the behaviour of the approximate frequency when A tends to zero, the cubication method used in this Letter predicts exactly the behaviour of the approximate frequency not only when A tends to infinity, but also when A tends to zero. Finally, a closed-form expression for the approximate frequency is obtained in terms of elementary functions. To do this, the relationship between the complete elliptic integral of the first kind and the arithmetic-geometric mean, as well as Legendre's formula to approximately obtain this mean, are used.
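The cubication step can be sketched numerically: with x = A cos t the Chebyshev coefficients of the restoring force reduce to first- and third-harmonic projections, since cos(kt) = T_k(cos t). The discretization below is an illustrative implementation of that step, not the authors' closed-form derivation:

```python
import numpy as np

def cubicate(force, amplitude, n_nodes=64):
    # Replace a restoring force f(x) on [-A, A] by a*x + b*x^3 using a
    # Chebyshev expansion truncated after T1 and T3.
    t = np.linspace(0.0, 2.0 * np.pi, n_nodes, endpoint=False)
    f = force(amplitude * np.cos(t))
    c1 = 2.0 / n_nodes * np.sum(f * np.cos(t))        # T1 projection
    c3 = 2.0 / n_nodes * np.sum(f * np.cos(3.0 * t))  # T3 projection
    # With u = x/A: T1(u) = u and T3(u) = 4u^3 - 3u, so
    # f(x) ~ (c1 - 3*c3)/A * x + 4*c3/A^3 * x^3.
    a = (c1 - 3.0 * c3) / amplitude
    b = 4.0 * c3 / amplitude ** 3
    return a, b

# Duffing-harmonic restoring force f(x) = x^3 / (1 + x^2), amplitude A = 1
a, b = cubicate(lambda x: x ** 3 / (1.0 + x ** 2), amplitude=1.0)
print(a, b)
```

The resulting Duffing equation ẍ + a x + b x³ = 0 then has the exact Jacobi-elliptic solution the abstract refers to; note that a and b depend on the amplitude A through the projections.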

  7. Non-critical Poincare invariant bosonic string backgrounds and closed string tachyons

    International Nuclear Information System (INIS)

    Alvarez, Enrique; Gomez, Cesar; Hernandez, Lorenzo

    2001-01-01

    A new family of non-critical bosonic string backgrounds in arbitrary space-time dimension D and with ISO(1,D-2) Poincaré invariance is presented. The metric warping factor and dilaton agree asymptotically with the linear dilaton background. The closed string tachyon equation of motion enjoys, in the linear approximation, an exact solution of 'kink' type interpolating between different expectation values. A renormalization group flow interpretation, based on a closed string tachyon potential of type -T^2 e^{-T}, is suggested.

  8. Robust approximation-free prescribed performance control for nonlinear systems and its application

    Science.gov (United States)

    Sun, Ruisheng; Na, Jing; Zhu, Bin

    2018-02-01

    This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of prescribed performance function (PPF) is incorporated into the control design, such that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which provides a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in the nonlinear control designs are not needed. The proposed control design leads to a robust yet simplified function approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are all rigorously proved. Finally, comparative simulations are conducted based on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
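A minimal sketch of the PPF machinery the abstract describes: the tracking error is confined to a shrinking funnel, and an invertible transform maps the constrained error to an unconstrained variable on which a simple proportional-like law acts. The funnel shape, gain, and transform below are illustrative assumptions, not the paper's design:

```python
import numpy as np

def rho(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    # Exponentially decaying performance funnel: the error must satisfy
    # |e(t)| < rho(t) for all t, which fixes both the transient envelope
    # and the steady-state bound rho_inf.
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, t):
    # Map the constrained error e in (-rho, rho) to an unconstrained
    # variable via atanh; as e approaches the funnel boundary the
    # transformed error (and hence the control effort) blows up,
    # pushing the error back inside.
    return np.arctanh(e / rho(t))

# Proportional-like, approximation-free control action (illustrative gain):
u = -5.0 * transformed_error(0.3, t=0.0)
print(u)   # finite as long as |e| < rho(t)
```

Because the control law acts on the transformed error directly, no neural network or fuzzy approximator of the unknown dynamics is needed, which is the "approximation-free" aspect emphasized in the record.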

  9. Inelastic collisions between an atom and a diatomic molecule. I. Theoretical and numerical considerations for the close coupling approximation

    International Nuclear Information System (INIS)

    Choi, B.H.; Tang, K.T.

    1975-01-01

    The close-coupled differential equations for rotational excitation in collisions between an atom and a diatomic molecule are reformulated. Although it is equivalent to other formulations, the present one is computationally more convenient and gives a simpler expression for differential cross sections. Questions concerning real boundary conditions and the unitarity of the S matrix are discussed. Störmer's algorithm for solving coupled differential equations is introduced for molecular scattering. This numerical procedure, which is known to be very useful in nuclear scattering problems, has to be modified for molecular systems. It is capable of treating the case where all channels are open as well as the case where some of the channels are closed. This algorithm is compared with other typical procedures for solving coupled differential equations.

  10. Accuracy of the ''decoupled l-dominant'' approximation for atom--molecule scattering

    International Nuclear Information System (INIS)

    Green, S.

    1976-01-01

    Cross sections for rotational excitation and spectral pressure broadening of HD, HCl, CO, and HCN due to collisions with low energy He atoms have been computed within the ''decoupled l-dominant'' (DLD) approximation recently suggested by DePristo and Alexander. These are compared with accurate close coupling results and also with two similar approximations, the effective potential of Rabitz and the coupled states of McGuire and Kouri. These collision systems are all dominated by short-range repulsive interactions although they have varying degrees of anisotropy and inelasticity. The coupled states method is expected to be valid for such systems, but they should be a severe test to the DLD approximation which is expected to be better for long-range interactions. Nonetheless, DLD predictions of state-to-state cross sections are rather good, being only slightly less accurate than coupled states results. DLD is far superior to either the coupled states or effective potential methods for pressure broadening calculations, although it may not be uniformly of the quantitative accuracy desirable for obtaining intermolecular potentials from experimental data

  11. Corrigendum to “Relative humidity effects on water vapour fluxes measured with closed-path eddy-covariance systems with short sampling lines” [Agric. Forest Meteorol. 165 (2012) 53–63]

    DEFF Research Database (Denmark)

    Fratini, Gerardo; Ibrom, Andreas; Arriga, Nicola

    2012-01-01

    It has previously been recognised that increasing relative humidity in the sampling line of closed-path eddy-covariance systems leads to increasing attenuation of water vapour turbulent fluctuations, resulting in strong latent heat flux losses. This occurrence has been analyzed for very long (50 m...... from eddy-covariance systems featuring short (4 m) and very short (1 m) sampling lines running at the same clover field and show that relative humidity effects persist also for these setups, and should not be neglected. Starting from the work of Ibrom and co-workers, we propose a mixed method...... and correction method proposed here is deemed applicable to closed-path systems featuring a broad range of sampling lines, and indeed applicable also to passive gases as a special case. The methods described in this paper are incorporated, as processing options, in the free and open-source eddy...

  12. Analytical approximation of the erosion rate and electrode wear in micro electrical discharge machining

    International Nuclear Information System (INIS)

    Kurnia, W; Tan, P C; Yeo, S H; Wong, M

    2008-01-01

    Theoretical models have been used to predict process performance measures in electrical discharge machining (EDM), namely the material removal rate (MRR), tool wear ratio (TWR) and surface roughness (SR). However, these contributions are mainly applicable to conventional EDM due to limits on the range of energy and pulse-on-time adopted by the models. This paper proposes an analytical approximation of micro-EDM performance measures, based on the crater prediction using a developed theoretical model. The results show that the analytical approximation of the MRR and TWR is able to provide a close approximation with the experimental data. The approximation results for the MRR and TWR are found to have a variation of up to 30% and 24%, respectively, from their associated experimental values. Since the voltage and current input used in the computation are captured in real time, the method can be applied as a reliable online monitoring system for the micro-EDM process

  13. Generation, combination and extension of random set approximations to coherent lower and upper probabilities

    International Nuclear Information System (INIS)

    Hall, Jim W.; Lawry, Jonathan

    2004-01-01

    Random set theory provides a convenient mechanism for representing uncertain knowledge including probabilistic and set-based information, and extending it through a function. This paper focuses upon the situation when the available information is in terms of coherent lower and upper probabilities, which are encountered, for example, when a probability distribution is specified by interval parameters. We propose an Iterative Rescaling Method (IRM) for constructing a random set with corresponding belief and plausibility measures that are a close outer approximation to the lower and upper probabilities. The approach is compared with the discrete approximation method of Williamson and Downs (sometimes referred to as the p-box), which generates a closer approximation to lower and upper cumulative probability distributions but in most cases a less accurate approximation to the lower and upper probabilities on the remainder of the power set. Four combination methods are compared by application to example random sets generated using the IRM

  14. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

  15. Relativistic time-dependent local-density approximation theory and applications to atomic physics

    International Nuclear Information System (INIS)

    Parpia, F.Z.

    1984-01-01

    A time-dependent linear-response theory appropriate to the relativistic local-density approximation (RLDA) to quantum electrodynamics (QED) is developed. The resulting theory, the relativistic time-dependent local-density approximation (RTDLDA) is specialized to the treatment of electric excitations in closed-shell atoms. This formalism is applied to the calculation of atomic photoionization parameters in the dipole approximation. The static-field limit of the RTDLDA is applied to the calculation of dipole polarizabilities. Extensive numerical calculations of the photoionization parameters for the rare gases neon, argon, krypton, and xenon, and for mercury from the RTDLDA are presented and compared in detail with the results of other theories, in particular the relativistic random-phase approximation (RRPA), and with experimental measurements. The predictions of the RTDLDA are comparable with the RRPA calculations made to date. This is remarkable in that the RTDLDA entails appreciably less computational effort. Finally, the dipole polarizabilities predicted by the static-field RTDLDA are compared with other determinations of these quantities. In view of its simplicity, the static-field RTDLDA demonstrates itself to be one of the most powerful theories available for the calculation of dipole polarizabilities

  16. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    Directory of Open Access Journals (Sweden)

    Rachmawati Ro’fah Nur

    2014-03-01

    Full Text Available Options are derivative instruments that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine the price of an option does not involve a skewness factor, and it is difficult to apply in the computing process because it produces oscillation for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using a Shifted Poisson model, and transform it into the form of a Linear Approximation in the complete market to reduce the oscillation. The results show that the Linear Approximation formula can predict the price of an option very accurately and successfully reduces the oscillations in the calculation process.
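The baseline that the paper modifies can be sketched as the textbook Black-Scholes European call price. This is a minimal sketch of the standard formula only; the shifted-Poisson skewness correction and the linear approximation described above are not reproduced here, and the parameter values are illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes European call price (no skewness term)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, 1 year, 5% rate, 20% volatility.
price = bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
print(round(price, 4))  # → 10.4506
```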

  17. Minimal-Approximation-Based Distributed Consensus Tracking of a Class of Uncertain Nonlinear Multiagent Systems With Unknown Control Directions.

    Science.gov (United States)

    Choi, Yun Ho; Yoo, Sung Jin

    2017-03-28

    A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of the approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown control direction problem in the minimal-approximation-based distributed consensus tracking framework and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.

  18. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  19. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  20. Detection of cracks in shafts with the Approximated Entropy algorithm

    Science.gov (United States)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    Approximate Entropy is a statistical measure used primarily in the fields of Medicine, Biology, and Telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by Fracture Mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
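A minimal sketch of the Approximate Entropy calculation described above, with `m` standing in for the paper's vector-length parameter p and `f` for the tolerance fraction. The signals below are synthetic stand-ins, not the simulated rotor data; the expectation is simply that a regular signal scores lower than an irregular one.

```python
import numpy as np

def approx_entropy(signal, m=2, f=0.2):
    """Approximate Entropy ApEn(m, r) with tolerance r = f * std(signal).

    m plays the role of the paper's parameter p (points per sample vector),
    f is the fraction of the standard deviation defining the tolerance."""
    x = np.asarray(signal, dtype=float)
    r = f * x.std()

    def phi(mm):
        n = len(x) - mm + 1
        # Embed the signal as overlapping vectors of length mm.
        vecs = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between every pair of vectors.
        dists = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        # Fraction of vectors within tolerance r (self-matches included).
        c = (dists <= r).sum(axis=1) / n
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # predictable signal
noisy = rng.standard_normal(500)                   # irregular signal
print(approx_entropy(regular), approx_entropy(noisy))
```

A more regular signal yields a lower ApEn value, which is the property the crack-detection scheme exploits.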

  1. Formulation and Analysis of an Approximate Expression for Voltage Sensitivity in Radial DC Distribution Systems

    Directory of Open Access Journals (Sweden)

    Ho-Yong Jeong

    2015-08-01

    Full Text Available Voltage is an important variable that reflects system conditions in DC distribution systems and affects many characteristics of a system. In a DC distribution system there is a close relationship between the real power and the voltage magnitude, and this is one of the major differences from the characteristics of AC distribution systems. One such relationship is expressed as the voltage sensitivity, and an understanding of voltage sensitivity is very useful for describing DC distribution systems. In this paper, a formulation for a novel approximate expression for the voltage sensitivity in a radial DC distribution system is presented. The approximate expression is derived from the power flow equation with some additional assumptions. The results of the approximate expression are compared with an exact calculation, and the relations between the voltage sensitivity and electrical quantities are analyzed analytically using both the exact form and the approximate voltage sensitivity equation.

  2. Predicted macroinvertebrate response to water diversion from a montane stream using two-dimensional hydrodynamic models and zero flow approximation

    Science.gov (United States)

    Holmquist, Jeffrey G.; Waddle, Terry J.

    2013-01-01

    We used two-dimensional hydrodynamic models for the assessment of water diversion effects on benthic macroinvertebrates and associated habitat in a montane stream in Yosemite National Park, Sierra Nevada Mountains, CA, USA. We sampled the macroinvertebrate assemblage via Surber sampling, recorded detailed measurements of bed topography and flow, and coupled a two-dimensional hydrodynamic model with macroinvertebrate indicators to assess habitat across a range of low flows in 2010 and representative past years. We also made zero flow approximations to assess response of fauna to extreme conditions. The fauna of this montane reach had a higher percentage of Ephemeroptera, Plecoptera, and Trichoptera (%EPT) than might be expected given the relatively low faunal diversity of the study reach. The modeled responses of wetted area and area-weighted macroinvertebrate metrics to decreasing discharge indicated precipitous declines in metrics as flows approached zero. Changes in area-weighted metrics closely approximated patterns observed for wetted area, i.e., area-weighted invertebrate metrics contributed relatively little additional information above that yielded by wetted area alone. Loss of habitat area in this montane stream appears to be a greater threat than reductions in velocity and depth or changes in substrate, and the modeled patterns observed across years support this conclusion. Our models suggest that step function losses of wetted area may begin when discharge in the Merced falls to 0.02 m3/s; proportionally reducing diversions when this threshold is reached will likely reduce impacts in low flow years.

  3. Comment on the accuracy of Rabitz' effective potential approximation for rotational excitation by collisions

    International Nuclear Information System (INIS)

    Green, S.

    1975-01-01

    Cross sections for rotational excitation of HCN by low energy collisions with He have been computed with the effective potential approximation of Rabitz and compared with accurate quantum close-coupling results. Elastic cross sections are found to agree to about 20%; inelastic cross sections agree in general magnitude but not in detailed values for specific quantum transitions

  4. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  5. On the validity of localized approximations for Bessel beams: All N-Bessel beams are identically equal to zero

    International Nuclear Information System (INIS)

    Gouesbet, Gérard

    2016-01-01

    Localized approximation procedures are efficient ways to evaluate beam shape coefficients of a laser beam. They are particularly useful when other methods are ineffective or inefficient. Several papers in the literature have reported the use of such procedures to evaluate the beam shape coefficients of Bessel beams. Relying on the concept of N-beams, it is demonstrated that care must be taken when constructing a localized approximation for a Bessel beam, namely a localized Bessel beam is satisfactorily close enough to the intended beam only when the axicon angle is small enough. - Highlights: • Localized approximation has been used to evaluate BSCs of Bessel beams. • N-beam procedure fails to provide a localized approximation for Bessel beams. • Localized approximation should be used only for small axicon angles.

  6. A study on the weather sampling method for probabilistic consequence analysis

    International Nuclear Information System (INIS)

    Oh, Hae Cheol

    1996-02-01

    The main task of a probabilistic accident consequence analysis model is to predict the radiological situation and to provide a reliable quantitative data base for making decisions on countermeasures. The magnitude of the accident consequence depends on the characteristics of the accident and the coincident weather. In probabilistic accident consequence analysis, it is necessary to repeat the atmospheric dispersion calculation with several hundred weather sequences to predict the full distribution of consequences which may occur following a postulated accidental release. It is desirable to select a representative sample of weather sequences from a meteorological record which is typical of the area over which the released radionuclides will disperse and which spans a sufficiently long period. The selection is done by means of sampling techniques applied to a full year of hourly weather data characteristic of the plant site. In this study, the proposed weighted importance sampling method selects weather sequences in proportion to each bin size, to closely approximate the true frequency distribution of weather conditions at the site. The weighted importance sampling method results in substantially less sampling uncertainty than the previous technique, and can thus improve confidence in risk estimates
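The proportional-to-bin-size selection can be illustrated with a toy stratified sampler. The hourly record, the bin-labelling function, and the sample count below are illustrative assumptions for the sketch, not the study's meteorological data or binning scheme.

```python
import random

def weighted_bin_sample(sequences, bin_of, n_samples, seed=0):
    """Stratified sampling: draw from each weather bin in proportion
    to its size, so the sample mirrors the bin frequency distribution."""
    rng = random.Random(seed)
    bins = {}
    for s in sequences:
        bins.setdefault(bin_of(s), []).append(s)

    total = len(sequences)
    picked = []
    for label, members in sorted(bins.items()):
        # Allocate samples proportional to bin size (at least one per bin).
        k = max(1, round(n_samples * len(members) / total))
        picked.extend(rng.sample(members, min(k, len(members))))
    return picked

# Toy record: 1000 "hours", binned by a fake 6-class weather category.
hours = list(range(1000))
sample = weighted_bin_sample(hours, bin_of=lambda h: h % 6, n_samples=60)
print(len(sample))  # → 60
```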

  7. Leveraging Gaussian process approximations for rapid image overlay production

    CSIR Research Space (South Africa)

    Burke, Michael

    2017-10-01

    Full Text Available ... value, x_s = argmax_{x*} [ K(x*, x*) − K(x*, x) K(x, x)^{-1} K(x, x*) ] (10). Figure 2 illustrates this sampling strategy more clearly. This selection process can be slow, but could be bootstrapped using Latin hypercube sampling [16]. 3 RESULTS: Empirical... a 240-sample Gaussian process approximation takes roughly the same amount of time to compute as the full blanked overlay. [Figure: boxplot of storyboard ratings (0–10) by method, comparing GP approximations with 50–400 samples against the full overlay and the Itti-Koch method.]
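The posterior-variance selection rule quoted in this fragment can be sketched on a toy 1-D problem. The squared-exponential kernel, length scale, and candidate grid below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel on 1-D inputs (unit prior variance)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def next_sample(x_train, x_cand, noise=1e-6):
    """Pick the candidate with the largest GP posterior variance,
    x_s = argmax_x* [K(x*,x*) - K(x*,x) K(x,x)^-1 K(x,x*)]."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_cand, x_train)
    # Posterior variance at each candidate (prior variance is 1 here).
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    return x_cand[np.argmax(var)]

x_train = np.array([0.0, 1.0])       # already-sampled locations
x_cand = np.linspace(0.0, 1.0, 101)  # candidate grid
print(float(next_sample(x_train, x_cand)))  # → 0.5, the most uncertain point
```

Repeating this greedily yields the sequential sampling strategy the fragment describes; the Latin hypercube bootstrap would simply seed `x_train` before the greedy loop starts.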

  8. The Incorporation of Truncated Fourier Series into Finite Difference Approximations of Structural Stability Equations

    Science.gov (United States)

    Hannah, S. R.; Palazotto, A. N.

    1978-01-01

    A new trigonometric approach to the finite difference calculus was applied to the problem of beam buckling as represented by virtual work and equilibrium equations. The trigonometric functions were varied by adjusting a wavelength parameter in the approximating Fourier series. Values of the critical force obtained from the modified approach for beams with a variety of boundary conditions were compared to results using the conventional finite difference method. The trigonometric approach produced significantly more accurate approximations for the critical force than the conventional approach for a relatively wide range in values of the wavelength parameter; and the optimizing value of the wavelength parameter corresponded to the half-wavelength of the buckled mode shape. It was found from a modal analysis that the most accurate solutions are obtained when the approximating function closely represents the actual displacement function and matches the actual boundary conditions.

  9. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    Energy Technology Data Exchange (ETDEWEB)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
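The central linear-algebra step, finding the null space of a generator, can be illustrated on a toy 3-state rate matrix. The matrix below is an assumption for illustration only, not one of the paper's reaction networks, and the null vector here gives the stationary distribution rather than a full effective generator.

```python
import numpy as np

# Generator (rate matrix) of a toy 3-state Markov jump process:
# rows sum to zero, off-diagonal entries are transition rates.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])

# The stationary distribution pi satisfies pi Q = 0, i.e. pi spans the
# left null space of Q. Extract it from the SVD of Q^T: the singular
# vector for the (near-)zero singular value.
_, s, vt = np.linalg.svd(Q.T)
pi = vt[np.argmin(s)]                # null vector of Q^T
pi = np.abs(pi) / np.abs(pi).sum()   # normalise to a probability vector
print(np.round(pi, 4))
```

For this Q the balance equations give pi proportional to (1, 2, 4); the same null-space computation, applied to the constrained generator, is what yields the effective dynamics in the paper.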

  10. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
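The beta case, for which the approximation is stated to be exact, can be illustrated by recovering a shape parameter from a second moment. This is a hedged sketch using only the standard variance formula for a symmetric Beta(a, a) density scaled to [-1, 1], Var = 1/(2a + 1); the test density (the arcsine law, Beta(1/2, 1/2)) is an assumption chosen for illustration.

```python
import numpy as np

def beta_shape_from_variance(var):
    """Shape a of a scaled symmetric Beta(a, a) on [-1, 1] matching
    the given variance, from Var = 1 / (2a + 1)."""
    return (1.0 / var - 1.0) / 2.0

# Illustrative check: cos(U) with U ~ Uniform(0, 2*pi) follows the
# arcsine density on [-1, 1], which is the scaled Beta(1/2, 1/2)
# distribution with variance 1/2.
rng = np.random.default_rng(1)
x = np.cos(rng.uniform(0.0, 2.0 * np.pi, 200_000))
a_hat = beta_shape_from_variance(x.var())
print(a_hat)  # close to a = 0.5
```

Matching higher moments in the same spirit is what the Pearson-type construction in the abstract generalizes to non-beta densities.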

  11. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM.

    Science.gov (United States)

    Singh, Brajesh K; Srivastava, Vineet K

    2015-04-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
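A minimal sketch of the reduced differential transform idea for the integer-order special case (alpha = 1, the classical 1-D heat equation u_t = u_xx): the transform turns the PDE into the recurrence U_{k+1}(x) = (1/(k+1)) d²U_k/dx², with u = Σ U_k(x) t^k. For fractional order the factorials become Gamma-function ratios; that generalization is not reproduced here. With u(x, 0) = sin(x), every U_k stays proportional to sin(x), so the recurrence reduces to coefficients.

```python
import math

def frdtm_heat(x, t, terms=10):
    """Truncated RDTM series for u_t = u_xx with u(x, 0) = sin(x).
    Since d^2/dx^2 sin(x) = -sin(x), the recurrence
    U_{k+1} = (d^2 U_k/dx^2) / (k + 1) gives coefficients c_{k+1} = -c_k/(k+1)."""
    c, total = 1.0, 0.0
    for k in range(terms):
        total += c * math.sin(x) * t**k
        c = -c / (k + 1)  # transform of the second x-derivative
    return total

# The truncated series should approach the exact solution sin(x) e^{-t}.
exact = math.sin(1.0) * math.exp(-0.5)
print(abs(frdtm_heat(1.0, 0.5) - exact))  # tiny truncation error
```

The series the recurrence generates is sin(x) Σ (−t)^k / k!, i.e. exactly sin(x) e^{−t}, which is why a 10-term truncation is already accurate to many digits.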

  12. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.

  13. Derivation of fluid dynamics from kinetic theory with the 14-moment approximation

    International Nuclear Information System (INIS)

    Denicol, G.S.; Molnar, E.; Niemi, H.; Rischke, D.H.

    2012-01-01

    We review the traditional derivation of the fluid-dynamical equations from kinetic theory according to Israel and Stewart. We show that their procedure to close the fluid-dynamical equations of motion is not unique. Their approach contains two approximations, the first being the so-called 14-moment approximation to truncate the single-particle distribution function. The second consists in the choice of equations of motion for the dissipative currents. Israel and Stewart used the second moment of the Boltzmann equation, but this is not the only possible choice. In fact, there are infinitely many moments of the Boltzmann equation which can serve as equations of motion for the dissipative currents. All resulting equations of motion have the same form, but the transport coefficients are different in each case. (orig.)

  14. Black-pigmented anaerobic rods in closed periapical lesions.

    Science.gov (United States)

    Bogen, G; Slots, J

    1999-05-01

    This study determined the frequency of Porphyromonas endodontalis, Porphyromonas gingivalis, Prevotella intermedia and Prevotella nigrescens in 20 closed periapical lesions associated with symptomatic and asymptomatic refractory endodontic disease. To delineate possible oral sources of P. endodontalis, the presence of the organism was assessed in selected subgingival sites and saliva in the same study patients. Periapical samples were obtained by paper points during surgical endodontic procedures using methods designed to minimize contamination by non-endodontic microorganisms. Subgingival plaque samples were obtained by paper points from three periodontal pockets and from the pocket of the tooth associated with the closed periapical lesion. Unstimulated saliva was collected from the surface of the soft palate. Bacterial identification was performed using a species-specific polymerase chain reaction (PCR) detection method. P. endodontalis was not identified in any periapical lesion, even though subgingival samples from eight patients (40%) revealed the P. endodontalis-specific amplicon. P. gingivalis occurred in one periapical lesion that was associated with moderate pain. P. nigrescens and P. intermedia were not detected in any periapical lesion studied. Black-pigmented anaerobic rods appear to be infrequent inhabitants of closed periapical lesions.

  15. A method for the approximate solutions of the unsteady boundary layer equations

    International Nuclear Information System (INIS)

    Abdus Sattar, Md.

    1990-12-01

    The approximate integral method proposed by Bianchini et al. to solve the unsteady boundary layer equations is considered here with a simple modification to the scale function for the similarity variable. This is done by introducing a time dependent length scale. The closed form solutions, thus obtained, give satisfactory results for the velocity profile and the skin friction in a limiting case, in comparison with the results of past investigators. (author). 7 refs, 2 figs

  16. Electron impact excitation of positive ions calculated in the Coulomb-Born approximation

    International Nuclear Information System (INIS)

    Nakazaki, Shinobu; Hashino, Tasuke

    1979-08-01

    Theoretical results on the electron impact excitation of positive ions are surveyed through the end of 1978. As a guide to the available data, a list of references is made. The list shows ion species, transitions, energy range and methods of calculation for the respective data. Based on the literature survey, the validity of the Coulomb-Born approximation is investigated. Comparisons with the results of the close-coupling and the distorted-wave methods are briefly summarized. (author)

  17. ISO 3171 production hydrocarbons allocation sampling for challenging ''tie-ins'' with pressures close to RVP breakout

    Energy Technology Data Exchange (ETDEWEB)

    Jiskoot, Mark

    2005-07-01

    There are an increasing number of applications where sampling is required for crude oils and condensates close to vapour breakout in production environments. These are typified where the quality measurement must be extracted/made in the ''oil'' leg of a separator. Exerting back pressure (pressure loss) on the liquid leg to expand the operating envelope is generally not acceptable because of the effect it would have on separator efficiency, and this frequently precludes the use of normal metering technologies. These conditions represent some of the hardest for quality determination. In conventional separators, velocities are low to promote separation of the gas and liquids, and therefore the gas pressure within the liquid phase is close to breakout. Frequently there is an envelope of less than 0.5 bar below the operating pressure at which the fluid will start to cavitate. Concurrent with the low velocity is the expectation of some free water that would require mixing to ensure representativity. There are frequently also severe restrictions on the space and piping arrangement in which to achieve a representative off-take. The ''rules'' that are applied to representative sampling are equally applicable to the use of water monitors (water cut, OWD) and to densitometers. With careful design, an integrated quality measurement system can be achieved that prevents any impact on the process and also provides the mixing required to extract a representative off-take for physical samplers as well as for process instrumentation such as densitometers, viscometers and water cut monitors. Jiskoot has adapted its CoJetix technology to meet these goals. (author) (tk)

  18. Mean-field approximations of fixation time distributions of evolutionary game dynamics on graphs

    Science.gov (United States)

    Ying, Li-Min; Zhou, Jie; Tang, Ming; Guan, Shu-Guang; Zou, Yong

    2018-02-01

    The mean fixation time is often not accurate for describing the timescales of fixation probabilities of evolutionary games taking place on complex networks. We simulate the game dynamics on top of complex network topologies and approximate the fixation time distributions using a mean-field approach. We assume that there are two absorbing states. Numerically, we show that the mean fixation time is sufficient to characterize the evolutionary timescales when network structures are close to the well-mixed condition. In contrast, the mean fixation time shows large inaccuracies when networks become sparse. The approximation accuracy is determined by the network structure, and hence by the suitability of the mean-field approach. The numerical results show good agreement with the theoretical predictions.

  19. Approximal sealings on lesions in neighbouring teeth requiring operative treatment: an in vitro study.

    Science.gov (United States)

    Cartagena, Alvaro; Bakhshandeh, Azam; Ekstrand, Kim Rud

    2018-02-07

    With this in vitro study we aimed to assess the possibility of precise application of sealant on accessible artificial white spot lesions (WSL) on approximal surfaces next to a tooth surface under operative treatment. A secondary aim was to evaluate whether the use of magnifying glasses improved the application precision. Fifty-six extracted premolars were selected; approximal WSL were created with 15% HCl gel, and standardized photographs were taken. The premolars were mounted in plaster-models in contact with a neighbouring molar with Class II/I-II restoration (Sample 1) or approximal, cavitated dentin lesion (Sample 2). The restorations or the lesion were removed, and Clinpro Sealant was placed over the WSL. Magnifying glasses were used when sealing half the study material. The sealed premolar was removed from the plaster-model and photographed. Adobe Photoshop was used to measure the size of the WSL and the sealed area, and the degree of match between the areas was determined in Photoshop. Interclass agreement for WSL, sealed, and matched areas was excellent (κ = 0.98-0.99). The sealant covered 48-100% of the WSL-area (median = 93%) in Sample 1 and 68-100% of the WSL-area (median = 95%) in Sample 2. No statistical differences were observed concerning uncovered proportions of the WSL-area between groups with and without using magnifying glasses (p values ≥ .19). However, overextended sealed areas were more pronounced when magnification was used (p = .01). The precision did not differ between the samples (p = .31). It was possible to seal accessible approximal lesions with high precision. Use of magnifying glasses did not improve the precision.

  20. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Run-times are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity.

  1. Thermal probe design for Europa sample acquisition

    Science.gov (United States)

    Horne, Mera F.

    2018-01-01

    The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to 10 cm deep and to collect five ice samples of approximately 7 cm³ each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and the energy to melt 7 cm³ of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized that would require approximately 6.4 Wh to penetrate the top 10 cm of ice, and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operations and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
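    The quoted melt energy can be sanity-checked with a rough heat balance. The property values below (ice density, mean specific heat, latent heat, an assumed ~100 K starting temperature) are illustrative assumptions, not figures from the record:

```python
# Rough heat-balance check of the quoted ~1.2 Wh to melt 7 cm^3 of ice.
# Assumed property values (not from the source): surface ice near 100 K,
# density 0.92 g/cm^3, mean specific heat ~1.6 J/(g*K) over 100-273 K,
# latent heat of fusion 334 J/g.
RHO_ICE = 0.92       # g/cm^3
CP_ICE = 1.6         # J/(g*K), rough average over 100-273 K
L_FUSION = 334.0     # J/g
T_START, T_MELT = 100.0, 273.0  # K

def melt_energy_wh(volume_cm3: float) -> float:
    """Energy (Wh) to warm ice from T_START to T_MELT and then melt it."""
    mass = RHO_ICE * volume_cm3                  # g
    sensible = mass * CP_ICE * (T_MELT - T_START)  # J
    latent = mass * L_FUSION                     # J
    return (sensible + latent) / 3600.0          # J -> Wh

print(f"{melt_energy_wh(7.0):.2f} Wh")  # on the order of the quoted ~1.2 Wh
```

    The result is dominated by the latent heat term; uncertainty in the starting temperature and specific heat shifts it by a few tenths of a watt-hour, consistent with the ~1.2 Wh figure.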

  2. A Poisson process approximation for generalized K-S confidence regions

    Science.gov (United States)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The bandwidth of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  3. Theory of inelastic electron tunneling from a localized spin in the impulsive approximation.

    Science.gov (United States)

    Persson, Mats

    2009-07-31

    A simple expression for the conductance steps in inelastic electron tunneling from spin excitations in a single magnetic atom adsorbed on a nonmagnetic metal surface is derived. The inelastic coupling between the tunneling electron and the spin is via the exchange coupling and is treated in an impulsive approximation using the Tersoff-Hamann approximation for the tunneling between the tip and the sample.

  4. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  5. Spurious results from Fourier analysis of data with closely spaced frequencies

    International Nuclear Information System (INIS)

    Loumos, G.L.; Deeming, T.J.

    1978-01-01

    It is shown how erroneous results can occur using some period-finding methods, such as Fourier analysis, on data containing closely spaced frequencies. The frequency spacing accurately resolvable with data of length T is increased from the standard value of about 1/T quoted in the literature to approximately 1.5/T. (Auth.)

  6. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  7. Radiographic diagnoses and treatment decisions on approximal caries

    International Nuclear Information System (INIS)

    Espelid, I.

    1987-01-01

    Mineral loss which represents a threshold value for radiographic diagnosis, cannot be defined exactly. For clinical use 10% mineral loss in the direction of the X-ray beam may constitute a border line lesion for radiographic detection, and caries lesions without cavitation seemed to be beyond this diagnostic threshold. The degree of caries estimated by using radiographs is fairly closely related to the depth of the tissue changes recorded in the prepared cavity. Radiographic examinations more often lead to underestimation than overestimation of the degree of caries. Radiographic caries diagnoses made at different degrees of penetration toward the pulp showed insignificant variations with respect to quality, but the observers were more confident of caries being present (used more strict criterion) when they scored caries in inner dentin. Consensus on diagnostic criteria and improved diagnostic quality are considerably more important to the quality of therapeutic decisions on approximal caries than viewing conditions and film density. A semi-radiopaque material in Class II fillings seems to offer advantages compared to amalgam in respect of the diagnosis of secondary caries and marginal defects. There is a danger that dentists will restore approximal caries lesions too early and before these can be diagnosed in dentin radiographically

  8. Wave vector modification of the infinite order sudden approximation

    International Nuclear Information System (INIS)

    Sachs, J.G.; Bowman, J.M.

    1980-01-01

    A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pni→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=|nf-ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison

  9. Wave vector modification of the infinite order sudden approximation

    Science.gov (United States)

    Sachs, Judith Grobe; Bowman, Joel M.

    1980-10-01

    A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pn1→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=‖nf-ni‖ is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.

  10. Quantitative microwave impedance microscopy with effective medium approximations

    Directory of Open Access Journals (Sweden)

    T. S. Jones

    2017-02-01

    Microwave impedance microscopy (MIM) is a scanning probe technique to measure local changes in tip-sample admittance. The imaginary part of the reported change is calibrated with finite element simulations and physical measurements of a standard capacitive sample, and thereafter the output ΔY is given a reference value in siemens. Simulations also provide a means of extracting sample conductivity and permittivity from admittance, a procedure verified by comparing the estimated permittivity of polytetrafluoroethylene (PTFE) to the accepted value. Simulations published by others have investigated the tip-sample system for permittivity at a given conductivity, or conversely for conductivity at a given permittivity; here we supply the full behavior for multiple values of both parameters. Finally, the well-known effective medium approximation of Bruggeman is considered as a means of estimating the volume fractions of the constituents in inhomogeneous two-phase systems. Specifically, we consider the estimation of porosity in carbide-derived carbon, a nanostructured material known for its use in energy storage devices.
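    The Bruggeman step described above can be sketched numerically. The following is a minimal illustration of the symmetric Bruggeman formula for spherical inclusions, with a closed-form inversion for the volume fraction; the permittivity values are illustrative, not taken from the paper:

```python
import math

def bruggeman_eps(f1: float, eps1: float, eps2: float) -> float:
    """Effective permittivity of a two-phase mixture (symmetric Bruggeman,
    spherical inclusions): f1*(e1-ee)/(e1+2ee) + (1-f1)*(e2-ee)/(e2+2ee) = 0.
    This reduces to the quadratic 2*ee^2 - b*ee - e1*e2 = 0 with
    b = f1*(2*e1 - e2) + (1 - f1)*(2*e2 - e1); take the positive root."""
    b = f1 * (2 * eps1 - eps2) + (1 - f1) * (2 * eps2 - eps1)
    return (b + math.sqrt(b * b + 8 * eps1 * eps2)) / 4.0

def volume_fraction(eps_eff: float, eps1: float, eps2: float) -> float:
    """Invert the Bruggeman relation for the phase-1 volume fraction,
    e.g. to estimate porosity from a measured effective permittivity."""
    num = (eps_eff - eps2) * (eps1 + 2 * eps_eff)
    den = ((eps1 - eps_eff) * (eps2 + 2 * eps_eff)
           - (eps2 - eps_eff) * (eps1 + 2 * eps_eff))
    return num / den

# Round trip: 30% pores (eps = 1) in a matrix of eps = 10.
ee = bruggeman_eps(0.3, 1.0, 10.0)     # -> 6.25 for these inputs
print(ee, volume_fraction(ee, 1.0, 10.0))  # recovers f1 = 0.3
```

    The same inversion applies to effective conductivity in the purely conductive limit, since the mixing rule has the same algebraic form.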

  11. Two-body density matrix for closed s-d shell nuclei

    International Nuclear Information System (INIS)

    Dimitrova, S.S.; Kadrev, D.N.; Antonov, A.N.; Stoitsov, M.V.

    2000-01-01

    The two-body density matrix for 4 He, 16 O and 40 Ca within the Low-order approximation of the Jastrow correlation method is considered. Closed analytical expressions for the two-body density matrix, the center of mass and relative local densities and momentum distributions are presented. The effects of the short-range correlations on the two-body nuclear characteristics are investigated. (orig.)

  12. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
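    The regression described in this record can be sketched end to end on synthetic data. The sketch below blurs a white-noise "sample image" with an exact Gaussian OTF so the recovered PSF width can be checked; the image size, PSF width, and fitting band are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 256, 2.0                     # image size, true PSF width (pixels)

# Synthetic "sample image": white noise blurred in the Fourier domain.
# The OTF of a Gaussian PSF with std sigma is exp(-2*pi^2*sigma^2*k^2),
# with spatial frequency k in cycles/pixel.
k = np.fft.fftfreq(n)
k2 = k[:, None] ** 2 + k[None, :] ** 2  # squared distance from the origin
otf = np.exp(-2 * np.pi**2 * sigma**2 * k2)
img = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * otf).real

# Method from the abstract: the log squared norm of the Fourier transform,
# plotted against squared frequency, is linear for a Gaussian PSF.
log_power = np.log(np.abs(np.fft.fft2(img)) ** 2)
mask = (k2 > 0) & (k2 < 0.15**2)        # fit the well-conditioned low-k band
slope = np.polyfit(k2[mask], log_power[mask], 1)[0]

# |OTF|^2 contributes slope -4*pi^2*sigma^2; the white-noise image spectrum
# only adds a constant offset plus scatter, so the slope recovers sigma.
sigma_est = np.sqrt(-slope / (4 * np.pi**2))
print(round(sigma_est, 2))              # close to the true value 2.0
```

    On real images the underlying scene spectrum is not flat, so this recovers the PSF only approximately; the record's comparison against an edge-profile MTF plays the role of the ground truth used here.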

  13. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  14. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency
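    The "deterministic selection of sample values and their random permutation" that defines DS can be sketched in a few lines. The example below contrasts DS with CMCS for an exponential input variable; the sample size and distribution are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Crude Monte Carlo: fully random uniforms pushed through the inverse CDF.
u_cmcs = rng.random(n)

# Descriptive sampling: deterministic, evenly spaced quantile points on (0,1),
# then randomly permuted.  The permutation matters when several input
# variables are paired component-wise within a simulation run.
u_ds = rng.permutation((np.arange(n) + 0.5) / n)

# Example input variable: X = -ln(1-U) is Exp(1), so E[X] = 1 exactly.
x_cmcs = -np.log1p(-u_cmcs)
x_ds = -np.log1p(-u_ds)

# DS removes the sampling variability of the empirical distribution itself,
# so its mean estimate is typically far closer to 1 than the CMCS one.
print(abs(x_cmcs.mean() - 1.0), abs(x_ds.mean() - 1.0))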

  15. Dynamic hysteresis of a uniaxial superparamagnet: Semi-adiabatic approximation

    International Nuclear Information System (INIS)

    Poperechny, I.S.; Raikher, Yu.L.; Stepanov, V.I.

    2014-01-01

    The semi-adiabatic theory of magnetic response of a uniaxial single-domain ferromagnetic particle is presented. The approach is developed in the context of the kinetic theory and allows for any orientation of the external field. Within this approximation, the dynamic magnetic hysteresis loops in an ac field are calculated. It is demonstrated that they very closely resemble those obtained by the full kinetic theory. The behavior of the effective coercive force is analyzed in detail, and for it a simple formula is proposed. This relation accounts not only for the temperature behavior of the coercive force, as the previous ones do, but also yields the dependence on the frequency and amplitude of the applied field

  16. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2013-01-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration, and its numerical evaluation in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.
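    The closed-form inner integral rests on the classical Laplace approximation of a peaked integrand. A one-dimensional sketch, with an illustrative integrand that is not the paper's model, shows the idea and its O(1/n) error:

```python
import numpy as np

# Laplace approximation of I(n) = ∫ exp(-n*g(x)) dx around the minimizer x0:
#   I(n) ≈ exp(-n*g(x0)) * sqrt(2*pi / (n * g''(x0))).
# Illustrative integrand (not from the paper): g(x) = x^2/2 + x^4,
# minimized at x0 = 0 with g(0) = 0 and g''(0) = 1.
n = 200
x = np.linspace(-1.0, 1.0, 200_001)
numeric = np.sum(np.exp(-n * (x**2 / 2 + x**4))) * (x[1] - x[0])
laplace = np.sqrt(2 * np.pi / n)   # exp(0) * sqrt(2*pi/(n*1))

rel_err = abs(laplace - numeric) / numeric
print(rel_err)  # small, and shrinks roughly like 1/n as n grows
```

    In the paper's setting the same quadratic expansion is applied to the posterior pdf, turning the inner integral of the double loop into a closed form so that only the outer integration remains.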

  17. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan

    2013-06-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration, and its numerical evaluation in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.

  18. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
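    The core of DCDS — oversampling the reset and signal levels and combining them with a digital filter — can be illustrated with a toy discrete-time model. The flat-weight (mean-difference) filter, white-noise assumption, and numeric parameters below are illustrative simplifications, not the optimized filter the record analyses:

```python
import numpy as np

rng = np.random.default_rng(0)

def dcds_read(n_samples: int, signal: float = 100.0, read_noise: float = 5.0,
              trials: int = 20000) -> np.ndarray:
    """Toy DCDS estimate: mean of n oversampled signal-level ADC samples
    minus mean of n reset-level samples (a flat-weight digital filter; real
    systems optimize the weights).  White read noise only, arbitrary ADU."""
    reset = rng.normal(0.0, read_noise, (trials, n_samples))
    video = rng.normal(signal, read_noise, (trials, n_samples))
    return video.mean(axis=1) - reset.mean(axis=1)

# With white noise, averaging n samples per level cuts the output noise by
# sqrt(n): the std of the estimate is read_noise * sqrt(2/n).
for n in (1, 4, 16):
    print(n, round(dcds_read(n).std(), 2))  # ~7.1, ~3.5, ~1.8 ADU
```

    Real CCD video also carries correlated (e.g. 1/f) noise, which is exactly why the filter shape, sampling frequency, and ADC resolution interact in the non-trivial way the record models.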

  19. Approximation of rejective sampling inclusion probabilities and application to high order correlations

    NARCIS (Netherlands)

    Boistard, H.; Lopuhää, H.P.; Ruiz-Gazen, A.

    2012-01-01

    This paper is devoted to rejective sampling. We provide an expansion of joint inclusion probabilities of any order in terms of the inclusion probabilities of order one, extending previous results by Hájek (1964) and Hájek (1981) and making the remainder term more precise. Following Hájek (1981), the

  20. Maintaining close relationships: Gratitude as a motivator and a detector of maintenance behavior

    NARCIS (Netherlands)

    Kubacka, K.E.; Finkenauer, C.; Rusbult, C.E.; Keijsers, L.

    2011-01-01

    This research examined the dual function of gratitude for relationship maintenance in close relationships. In a longitudinal study among married couples, the authors tested the dyadic effects of gratitude over three time points for approximately 4 years following marriage. They found that feelings

  1. Accelerating Approximate Bayesian Computation with Quantile Regression: application to cosmological redshift distributions

    Science.gov (United States)

    Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.

    2018-02-01

    Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is then repeated as more simulations are available. We apply it to the practical problem of estimation of redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as the basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
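    The basic ABC that qABC accelerates is rejection ABC: draw parameters from the prior, simulate, and keep draws whose summary statistic lands close to the observed one. A minimal sketch on a toy normal-mean model (not the record's redshift pipeline) follows; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data from a toy model: 50 draws from N(mu_true, 1).
# We pretend the likelihood is unavailable and only simulation is possible.
mu_true, n_data = 2.0, 50
obs = rng.normal(mu_true, 1.0, n_data)

# Rejection ABC with the sample mean as summary statistic.  Shortcut: the
# mean of n_data simulated points is distributed N(theta, 1/n_data), so we
# draw it directly instead of simulating n_data points per parameter draw.
n_draws, epsilon = 20000, 0.1
theta = rng.uniform(-5.0, 5.0, n_draws)               # flat prior
sim_means = rng.normal(theta, 1.0 / np.sqrt(n_data))
accepted = theta[np.abs(sim_means - obs.mean()) < epsilon]

print(len(accepted), round(accepted.mean(), 2))  # posterior sample near obs.mean()
```

    The inefficiency is visible in the acceptance count: most prior draws are wasted far from the posterior, which is exactly the waste the quantile-regression surrogate in qABC is designed to avoid.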

  2. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    Science.gov (United States)

    Li, X.; Li, S. W.

    2012-07-01

    In this paper, an efficient global optimization algorithm in the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by an American social psychologist J. Kennedy and an electrical engineer R.C. Eberhart, is a stochastic global optimization method based on swarm intelligence, which was inspired by the social behavior of bird flocking or fish schooling. The strategy of obtaining the approximate values of exterior orientation elements using PSO is as follows: in terms of the image coordinate observed values and the space coordinates of a few control points, the equations for calculating the image coordinate residual errors can be given. The sum of the absolute values of the image coordinate residual errors is minimized as the objective function, where the residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equation. First, a gross area for the exterior orientation elements is given, and the other parameters are then adjusted to make the particles fly within this area. After iterative computation for a certain number of times, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures like positioning and measuring space control points in close range photogrammetry can be avoided. Obviously, this method can improve the surveying efficiency greatly and at the same time decrease the surveying cost. During such a process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements for the spatial distribution of control points.
In order to verify the effectiveness of this algorithm, two experiments are

  3. OBTAINING APPROXIMATE VALUES OF EXTERIOR ORIENTATION ELEMENTS OF MULTI-INTERSECTION IMAGES USING PARTICLE SWARM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    X. Li

    2012-07-01

    Full Text Available In this paper, an efficient global optimization algorithm from the field of artificial intelligence, Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain approximate values of the exterior orientation elements when multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R.C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining approximate values of the exterior orientation elements using PSO is as follows: from the observed image coordinates and the space coordinates of a few control points, equations for the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed from the collinearity condition equations, and the objective function to be minimized is the sum of the absolute values of these residual errors. First, a coarse search region for the exterior orientation elements is given, and the remaining parameters are then adjusted so that the particles fly within this region. After a number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. In this way, the procedures of positioning and measuring space control points in close range photogrammetry can be avoided, which greatly improves surveying efficiency and at the same time decreases surveying cost. During such a process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of the control points. In order to verify the effectiveness of this algorithm
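The optimization strategy described in this record can be illustrated with a minimal, generic PSO sketch. The inertia/cognitive/social weights `w`, `c1`, `c2` and the toy L1 objective standing in for the sum of absolute image-coordinate residuals are illustrative assumptions, not the authors' photogrammetric implementation:

```python
import random

def pso_minimize(f, bounds, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box; bounds is a list of (lo, hi) per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                # keep particles flying inside the coarse search region
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the photogrammetric objective: sum of absolute
# residuals, with a hypothetical "true" parameter vector.
target = [1.0, -2.0, 0.5]
residual_sum = lambda x: sum(abs(xi - ti) for xi, ti in zip(x, target))
best, best_val = pso_minimize(residual_sum, [(-10.0, 10.0)] * 3)
```

In the photogrammetric setting the objective would instead evaluate the collinearity equations at each candidate set of exterior orientation elements.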

  4. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

    It is well known that the change-point problem is an important part of statistical model analysis. Most existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A test at a single quantile is proposed based on the saddlepoint approximation method. In order to use the information at different quantiles of the sequence, we further construct a "composite quantile test" that calculates, for every location in the sequence, the probability of its being a change-point. The location of the change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and perform sensitively on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint approximation based distribution of the test statistic developed in this paper may be of independent interest to readers in this research area.
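As a rough illustration of the mean-shift setting (a naive CUSUM-style scan over candidate locations, not the saddlepoint-based quantile test itself; the sequence and shift size are hypothetical):

```python
import random
import statistics

def mean_shift_changepoint(x):
    """Return the split index k (0 < k < len(x)) maximising a
    standardised difference of segment means, with a CUSUM-style
    weight so splits near the edges are not unfairly favoured."""
    n = len(x)
    sd = statistics.pstdev(x) or 1.0
    best_k, best_score = 1, float("-inf")
    for k in range(2, n - 1):
        diff = abs(statistics.fmean(x[:k]) - statistics.fmean(x[k:]))
        score = diff * (k * (n - k) / n) ** 0.5 / sd
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Sequence with a mean shift from 0 to 3 at index 50.
rng = random.Random(1)
x = ([rng.gauss(0.0, 1.0) for _ in range(50)]
     + [rng.gauss(3.0, 1.0) for _ in range(50)])
k, score = mean_shift_changepoint(x)
```

Unlike the composite quantile test above, this sketch uses only the mean and is therefore far less robust for shifts in the tails.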

  5. The infinite limit as an eliminable approximation for phase transitions

    Science.gov (United States)

    Ardourel, Vincent

    2018-05-01

    It is generally claimed that infinite idealizations are required for explaining phase transitions within statistical mechanics (e.g. Batterman 2011). Nevertheless, Menon and Callender (2013) have outlined theoretical approaches that describe phase transitions without using the infinite limit. This paper closely investigates one of these approaches, which consists of studying the complex zeros of the partition function (Borrmann et al., 2000). Based on this theory, I argue for the plausibility of eliminating the infinite limit in the study of phase transitions. I offer a new account of phase transitions in finite systems, and I argue for the use of the infinite limit as an approximation when studying phase transitions in large systems.

  6. Reliable Approximation of Long Relaxation Timescales in Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2017-07-01

    Full Text Available Many interesting rare events in molecular systems, like ligand association, protein folding or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based free energy estimation have attracted huge attention recently. In this article we analyze the reliability of such approaches. How precise is an estimate of long relaxation timescales of molecular systems resulting from various forms of rare event approximation methods? Our results give a theoretical answer to this question by relating it with the transfer operator approach to molecular dynamics. By doing so we also allow for understanding deep connections between the different approaches.

  7. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    Science.gov (United States)

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
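For contrast with the constant-time, constant-memory algorithm described above, the classic buffered piecewise linear approximation baseline (whose per-sample cost grows with the segment's length) can be sketched as follows; the error bound `eps` and the test signal are illustrative:

```python
def greedy_pla(y, eps):
    """Classic buffered PLA: grow each segment until the straight line
    between its endpoints exceeds `eps` at some interior sample. This is
    the kind of O(segment length) baseline that the constant-complexity
    algorithm above improves on."""
    segments = []        # (start_index, end_index), end inclusive
    start = 0
    n = len(y)
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            cand = end + 1
            slope = (y[cand] - y[start]) / (cand - start)
            # max interpolation error of the chord over interior samples
            err = max(abs(y[i] - (y[start] + slope * (i - start)))
                      for i in range(start + 1, cand))
            if err > eps:
                break
            end = cand
        segments.append((start, end))
        start = end
    return segments

# A signal made of two straight lines with a knee at index 10.
y = ([0.1 * i for i in range(11)]
     + [1.0 - 0.05 * (i - 10) for i in range(11, 21)])
segs = greedy_pla(y, eps=1e-6)
```

Because each candidate extension rescans the buffered segment, worst-case work per sample is linear in segment length, which is exactly what rules this approach out on small microcontrollers.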

  8. Convergence acceleration for time-independent first-order PDE using optimal PNB-approximations

    Energy Technology Data Exchange (ETDEWEB)

    Holmgren, S.; Branden, H. [Uppsala Univ. (Sweden)

    1996-12-31

    We consider solving time-independent (steady-state) flow problems in 2D or 3D governed by hyperbolic or "almost hyperbolic" systems of partial differential equations (PDE). Examples of such PDE are the Euler and the Navier-Stokes equations. The PDE is discretized using a finite difference or finite volume scheme with arbitrary order of accuracy. If the matrix B describes the discretized differential operator and u denotes the approximate solution, the discrete problem is given by a large system of equations.

  9. Approximate analytical solution to the Boussinesq equation with a sloping water-land boundary

    Science.gov (United States)

    Tang, Yuehao; Jiang, Qinghui; Zhou, Chuangbing

    2016-04-01

    An approximate solution is presented to the 1-D Boussinesq equation (BEQ) characterizing transient groundwater flow in an unconfined aquifer subject to a constant water variation at the sloping water-land boundary. The flow equation is decomposed to a linearized BEQ and a head correction equation. The linearized BEQ is solved using a Laplace transform. By means of the frozen-coefficient technique and Gauss function method, the approximate solution for the head correction equation can be obtained, which is further simplified to a closed-form expression under the condition of local energy equilibrium. The solutions of the linearized and head correction equations are discussed from physical concepts. Especially for the head correction equation, the well posedness of the approximate solution obtained by the frozen-coefficient method is verified to demonstrate its boundedness, which can be further embodied as the upper and lower error bounds to the exact solution of the head correction by statistical analysis. The advantage of this approximate solution is in its simplicity while preserving the inherent nonlinearity of the physical phenomenon. Comparisons between the analytical and numerical solutions of the BEQ validate that the approximation method can achieve desirable precisions, even in the cases with strong nonlinearity. The proposed approximate solution is applied to various hydrological problems, in which the algebraic expressions that quantify the water flow processes are derived from its basic solutions. The results are useful for the quantification of stream-aquifer exchange flow rates, aquifer response due to the sudden reservoir release, bank storage and depletion, and front position and propagation speed.
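The linearized BEQ underlying this analysis is a diffusion-type equation, h_t = D h_xx with an effective aquifer diffusivity D. As a purely illustrative numerical cross-check (an explicit finite-difference sketch with hypothetical coefficients, not the paper's analytic solution), one can simulate the aquifer response to a sudden water-level change at the boundary:

```python
def boussinesq_linearized(n, dx, dt, steps, D, h_left):
    """Explicit finite differences for the linearised Boussinesq
    equation h_t = D * h_xx on n nodes: fixed head h_left at the
    water-land boundary, zero-gradient (no-flow) at the far end."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    h = [0.0] * n                 # initially flat water table
    h[0] = h_left                 # sudden water-level rise at the boundary
    for _ in range(steps):
        new = h[:]
        new[0] = h_left
        for i in range(1, n - 1):
            new[i] = h[i] + r * (h[i + 1] - 2.0 * h[i] + h[i - 1])
        new[-1] = new[-2]         # no-flow far boundary
        h = new
    return h

h = boussinesq_linearized(n=51, dx=0.1, dt=0.004, steps=500,
                          D=1.0, h_left=1.0)
```

The head profile decays monotonically away from the boundary and, by the discrete maximum principle, stays between the initial and boundary values; the paper's contribution is the head-correction term that restores the nonlinearity this linearization discards.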

  10. A bidirectional brain-machine interface algorithm that approximates arbitrary force-fields.

    Directory of Open Access Journals (Sweden)

    Alessandro Vato

    Full Text Available We examine bidirectional brain-machine interfaces that control external devices in a closed loop by decoding motor cortical activity to command the device and by encoding the state of the device by delivering electrical stimuli to sensory areas. Although it is possible to design this artificial sensory-motor interaction while maintaining two independent channels of communication, here we propose a rule that closes the loop between flows of sensory and motor information in a way that approximates a desired dynamical policy expressed as a field of forces acting upon the controlled external device. We previously developed a first implementation of this approach based on linear decoding of neural activity recorded from the motor cortex into a set of forces (a force field) applied to a point mass, and on encoding of the position of the point mass into patterns of electrical stimuli delivered to somatosensory areas. However, this previous algorithm had the limitation that it only worked in situations when the position-to-force map to be implemented is invertible. Here we overcome this limitation by developing a new non-linear form of the bidirectional interface that can approximate a virtually unlimited family of continuous fields. The new algorithm bases both the encoding of position information and the decoding of motor cortical activity on an explicit map between spike trains and the state space of the device computed with multi-dimensional scaling. We present a detailed computational analysis of the performance of the interface and a validation of its robustness by using synthetic neural responses in a simulated sensory-motor loop.

  11. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's theorem, which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev approximation are also presented. Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  12. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Abstract Background Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. 
It turns out that the computational cost of the
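The "traditional Monte Carlo sampling" baseline that the four analytic techniques are compared against can be sketched with a pick-freeze estimator of first-order variance-based (Sobol') indices; the additive toy model and sample size below are illustrative assumptions, not the MAPK cascade of the paper:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for f with independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(a) for a in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # evaluate on B with coordinate i "frozen" to its value in A
        yABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = (sum(ya * yb for ya, yb in zip(yA, yABi)) / n
               - mean * (sum(yABi) / n))
        indices.append(cov / var)
    return indices

# Additive toy model y = x1 + 2*x2: the variance splits 1/12 vs 4/12,
# so the exact first-order indices are S = [0.2, 0.8].
S = sobol_first_order(lambda x: x[0] + 2.0 * x[1], dim=2)
```

The cost scales as (dim + 1) model evaluations per sample, which is exactly why analytic approximations like DA, PA, GHI, and OHA become attractive for large reaction networks.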

  13. Conversion and matched filter approximations for serial minimum-shift keyed modulation

    Science.gov (United States)

    Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.

    1982-01-01

    Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using series filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10⁻⁶.

  14. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    Science.gov (United States)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread option. 
Since the final formula is in closed form, all the hedging parameters can also be derived in

  15. Preadolescents' and Adolescents' Online Communication and Their Closeness to Friends

    Science.gov (United States)

    Valkenburg, Patti M.; Peter, Jochen

    2007-01-01

    The 1st goal of this study was to investigate how online communication is related to the closeness of existing friendships. Drawing from a sample of 794 preadolescents and adolescents, the authors found that online communication was positively related to the closeness of friendships. However, this effect held only for respondents who primarily…

  16. Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
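A minimal SPSA sketch, using the symmetric Bernoulli ±1 perturbation distribution that this line of work analyzes; the gain sequences and the quadratic toy loss are illustrative assumptions:

```python
import random

def spsa_minimize(loss, theta0, n_iters=1000, a=0.1, c=0.1, seed=0):
    """Minimal SPSA: every coordinate is perturbed simultaneously by a
    random +/-1 (symmetric Bernoulli) vector, and the full gradient is
    approximated from only two loss evaluations per iteration,
    regardless of dimension."""
    rng = random.Random(seed)
    theta = list(theta0)
    for k in range(1, n_iters + 1):
        ak = a / k ** 0.602              # standard SPSA gain decays
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        loss_plus = loss([t + ck * d for t, d in zip(theta, delta)])
        loss_minus = loss([t - ck * d for t, d in zip(theta, delta)])
        ghat = [(loss_plus - loss_minus) / (2.0 * ck * d) for d in delta]
        theta = [t - ak * g for t, g in zip(theta, ghat)]
    return theta

# Noise-free quadratic toy loss with minimum at (3, -1); only loss
# values (no analytic gradient) are used.
theta = spsa_minimize(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
                      [0.0, 0.0])
```

The ±1 Bernoulli choice matters: the perturbation distribution must have finite inverse moments, which is why, e.g., Gaussian perturbations are invalid here.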

  17. Collective coordinate approximation to the scattering of solitons in modified NLS and sine-Gordon models

    International Nuclear Information System (INIS)

    Baron, H.E.; Zakrzewski, W.J.

    2016-01-01

    We investigate the validity of collective coordinate approximations to the scattering of two solitons in several classes of (1+1) dimensional field theory models. We consider models which are deformations of the sine-Gordon (SG) or the nonlinear Schrödinger (NLS) model which possess soliton solutions (which are topological (SG) or non-topological (NLS)). Our deformations preserve their topology (SG), but change their integrability properties, either completely or partially (models become ‘quasi-integrable’). As the collective coordinate approximation does not allow for the radiation of energy out of a system we look, in some detail, at how the approximation fares in models which are ‘quasi-integrable’ and therefore have asymptotically conserved charges (i.e. charges Q(t) for which Q(t→−∞)=Q(t→∞)). We find that our collective coordinate approximation, based on geodesic motion etc., works amazingly well in all cases where it is expected to work. This is true for the physical properties of the solitons and even for their quasi-conserved (or not) charges. The only time the approximation is not very reliable (and even then the qualitative features are reasonable, but some details are not reproduced well) involves the processes when the solitons come very close together (within one width of each other) during their scattering.

  18. Attachment predicts cortisol response and closeness in dyadic social interaction.

    Science.gov (United States)

    Ketay, Sarah; Beck, Lindsey A

    2017-06-01

    The present study examined how the interplay of partners' attachment styles influences cortisol response, actual closeness, and desired closeness during friendship initiation. Participants provided salivary cortisol samples at four timepoints throughout either a high or low closeness task that facilitated high or low levels of self-disclosure with a potential friend (i.e., another same-sex participant). Levels of actual closeness and desired closeness following the task were measured via inclusion of other in the self. Results from multi-level modeling indicated that the interaction of both participants' attachment avoidance predicted cortisol response patterns, with participants showing the highest cortisol response when there was a mismatch between their own and their partners' attachment avoidance. Further, the interaction between both participants' attachment anxiety predicted actual closeness and desired closeness, with participants both feeling and wanting the most closeness with partners when both they and their partners were low in attachment anxiety. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. The Close AGN Reference Survey (CARS)

    Science.gov (United States)

    Husemann, B.; Tremblay, G.; Davis, T.; Busch, G.; McElroy, R.; Neumann, J.; Urrutia, T.; Krumpe, M.; Scharwächter, J.; Powell, M.; Perez-Torres, M.; The CARS Team

    2017-09-01

    The role of active galactic nuclei (AGN) in the evolution of galaxies remains a mystery. The energy released by these accreting supermassive black holes can vastly exceed the entire binding energy of their host galaxies, yet it remains unclear how this energy is dissipated throughout the galaxy, and how that might couple to the galaxy's evolution. The Close AGN Reference Survey (CARS) is a multi-wavelength survey of a representative sample of luminous Type I AGN at redshifts 0.01 < z < 0.06, designed to probe the AGN-host galaxy connection. These AGN are more luminous than very nearby AGN but are still close enough for spatially resolved mapping at sub-kpc scales with various state-of-the-art facilities and instruments, such as VLT-MUSE, ALMA, JVLA, Chandra, SOFIA, and many more. In this article we showcase the power of CARS with examples of a multi-phase AGN outflow, diverse views on star formation activity and a unique changing-look AGN. CARS will provide an essential low-redshift reference sample for ongoing and forthcoming AGN surveys at high redshift.

  20. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    OpenAIRE

    Wutthiphong Tara; Chairoj Rattanakawin

    2012-01-01

    The purpose of this research was to preliminarily study the Mae Moh lignite grindability tests, emphasizing Hardgrove grindability and approximate work index determination respectively. Firstly, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using the A...

  1. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. The optimal 1-mean is approximated by the centroid of a random sample (Inaba et al.): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
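The sampling procedure stated above can be sketched directly; the point set, ε, and sampling with replacement are illustrative choices:

```python
import random

def sample_centroid(P, eps, rng):
    """Centroid of a uniform random sample (with replacement) of size
    about 1/eps, following the Inaba et al. guarantee quoted above."""
    m = max(1, int(round(1.0 / eps)))
    S = [rng.choice(P) for _ in range(m)]
    dim = len(P[0])
    return [sum(p[d] for p in S) / m for d in range(dim)]

def kmeans_cost(P, c):
    """Sum of squared distances from every point of P to centre c."""
    return sum(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for p in P)

rng = random.Random(42)
P = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(2000)]
exact = [sum(p[d] for p in P) / len(P) for d in range(2)]  # optimal 1-mean
approx = sample_centroid(P, eps=0.05, rng=rng)
ratio = kmeans_cost(P, approx) / kmeans_cost(P, exact)     # >= 1 by optimality
```

Since the true centroid minimises the 1-mean cost exactly, `ratio` is always at least 1, and the guarantee says it is below 1 + ε with constant probability.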

  2. Approximate cohomology in Banach algebras | Pourabbas ...

    African Journals Online (AJOL)

    We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...

  3. A spectral mean for random closed curves

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2016-01-01

    We propose a spectral mean for closed sets described by sample points on their boundaries subject to mis-alignment and noise. We derive maximum likelihood estimators for the model and noise parameters in the Fourier domain. We estimate the unknown mean boundary curve by

  4. Convergent close-coupling calculations of electron-hydrogen scattering

    International Nuclear Information System (INIS)

    Bray, Igor; Stelbovics, A.T.

    1992-04-01

    The convergence of the close-coupling formalism is studied by expanding the target states in an orthogonal L² Laguerre basis. The theory is without approximation and convergence is established by simply increasing the basis size. The convergent elastic, 2s, and 2p differential cross sections, spin asymmetries, and angular correlation parameters for the 2p excitation at 35, 54.4, and 100 eV are calculated. Integrated and total cross sections as well as T-matrix elements for the first five partial waves are also given. 30 refs., 3 tabs., 9 figs

  5. Hot sample archiving. Revision 3

    International Nuclear Information System (INIS)

    McVey, C.B.

    1995-01-01

    This Engineering Study revision evaluated the alternatives for retaining tank waste characterization analytical samples for the time period recommended by the Tank Waste Remediation Systems Program. The recommendation is to store 40 ml segment samples for a period of approximately 18 months (6 months past the approval date of the Tank Characterization Report) and then to composite the core segment material in 125 ml containers for a period of five years. The study considers storage at the 222-S facility. It was determined that the critical storage constraint is in the hot cell area. The 40 ml sample container holds approximately 3 times the material required for a complete laboratory re-analysis. The final result is that 222-S can meet the sample archive storage requirements. At a 100% capture rate the capacity of the hot cell area is exceeded, but quick, inexpensive options are available to meet the requirements

  6. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  7. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
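The multi-level idea, a crude low-accuracy estimate plus coupled correction terms, can be sketched on a toy problem: estimating E[S_T] of geometric Brownian motion with Euler discretizations (the levels, sample counts, and model parameters are illustrative assumptions, not the biochemical setting of the paper):

```python
import math
import random

def mlmc_gbm_mean(levels, n_per_level, T=1.0, mu=0.05, sigma=0.2,
                  S0=1.0, seed=0):
    """Multi-level Monte Carlo estimate of E[S_T] for geometric Brownian
    motion: a cheap level-0 Euler estimate plus coupled fine-minus-coarse
    correction terms (the Anderson-Higham telescoping sum)."""
    rng = random.Random(seed)

    def coupled_sample(l):
        """One sample of (fine level-l value, coarse level-(l-1) value),
        both driven by the same Brownian increments."""
        nf = 2 ** l
        dt = T / nf
        s_fine = s_coarse = S0
        dw_sum = 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            s_fine += mu * s_fine * dt + sigma * s_fine * dw
            dw_sum += dw
            if l > 0 and step % 2 == 1:     # coarse path: double-size step
                s_coarse += (mu * s_coarse * (2.0 * dt)
                             + sigma * s_coarse * dw_sum)
                dw_sum = 0.0
        return s_fine, s_coarse

    # Level 0: many cheap, biased samples.
    est = sum(coupled_sample(0)[0]
              for _ in range(n_per_level[0])) / n_per_level[0]
    # Higher levels: fewer samples of the small fine-coarse differences.
    for l in range(1, levels + 1):
        n = n_per_level[l]
        corr = 0.0
        for _ in range(n):
            fine, coarse = coupled_sample(l)
            corr += fine - coarse
        est += corr / n
    return est

est = mlmc_gbm_mean(levels=4, n_per_level=[20000, 4000, 1000, 400, 200])
# exact answer for comparison: S0 * exp(mu * T)
```

Because the coupled fine and coarse paths share Brownian increments, the correction terms have small variance and need few samples, which is the source of the cost reduction the abstracts above describe.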

  8. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  9. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  10. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  11. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  12. Applying Agrep to r-NSA to solve multiple sequences approximate matching.

    Science.gov (United States)

    Ni, Bing; Wong, Man-Hon; Lam, Chi-Fai David; Leung, Kwong-Sak

    2014-01-01

    This paper addresses the approximate matching problem in a database consisting of multiple DNA sequences, where the proposed approach applies Agrep to a new truncated suffix array, r-NSA. The construction time of the structure is linear in the database size, and indexing a substring in the structure takes constant time. The number of characters processed in applying Agrep is analysed theoretically, and the theoretical upper bound closely approximates the empirical number of characters, which is obtained by enumerating the characters in the actual structure built. Experiments are carried out using (synthetic) random DNA sequences, as well as (real) genome sequences including Hepatitis-B Virus and X-chromosome. Experimental results show that, compared to the straightforward approach that applies Agrep to multiple sequences individually, the proposed approach solves the matching problem in much shorter time. The speed-up of our approach depends on the sequence patterns, and for highly similar homologous genome sequences, which are the common cases in real-life genomes, it can be up to several orders of magnitude.
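
    The core of Agrep is the Wu-Manber bit-parallel extension of the shift-and algorithm: one bitmask row per allowed error count, updated with shifts and ORs per text character. A minimal sketch of that search kernel (the standard algorithm, not the paper's r-NSA structure), valid for patterns shorter than a machine word:

```python
def bitap_search(text, pattern, k):
    """Bit-parallel approximate search with up to k edit errors
    (Wu-Manber scheme, the algorithm behind agrep). Returns end
    positions in text of approximate occurrences of pattern."""
    m = len(pattern)
    mask = {}
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << i)
    R = [0] * (k + 1)   # bit i of R[d]: pattern[:i+1] matches with <= d errors
    hits = []
    for pos, c in enumerate(text):
        s = mask.get(c, 0)
        old = R[0]
        R[0] = ((R[0] << 1) | 1) & s
        for d in range(1, k + 1):
            tmp = R[d]
            # match | insertion | substitution/deletion | empty prefix
            R[d] = ((R[d] << 1) & s) | old | ((old | R[d - 1]) << 1) | 1
            old = tmp
        if R[k] & (1 << (m - 1)):
            hits.append(pos)
    return hits
```

For example, `bitap_search("abcdefg", "bXd", 1)` reports a hit ending at position 3, where "bcd" matches "bXd" with one substitution.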

  13. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

    Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as existence results for optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem using exponential membership functions.
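
    The Wang-style construction described above is short to write down: each rule contributes a membership weight mu_j(x) and a consequent value y_j, and the centroid defuzzifier returns the weighted average. A sketch with Gaussian memberships approximating sin on [0, pi] (illustrative parameters, not from the paper):

```python
import math

def fuzzy_approximator(centers, width, y_values):
    """Fuzzy system with Gaussian memberships, product inference, and
    centroid defuzzification: f(x) = sum_j y_j mu_j(x) / sum_j mu_j(x)."""
    def mu(x, c):
        return math.exp(-((x - c) / width) ** 2)
    def f(x):
        w = [mu(x, c) for c in centers]
        return sum(wi * yi for wi, yi in zip(w, y_values)) / sum(w)
    return f

# 20 rules centered on a uniform grid over [0, pi], consequents y_j = sin(c_j)
centers = [math.pi * i / 19 for i in range(20)]
f = fuzzy_approximator(centers, 0.1, [math.sin(c) for c in centers])
err = max(abs(f(x) - math.sin(x)) for x in [0.05 * i for i in range(63)])
```

Refining the rule grid drives `err` toward zero, which is the content of the universal approximation results cited in the abstract.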

  14. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  15. New closed-form approximation for skin chromophore mapping.

    Science.gov (United States)

    Välisuo, Petri; Kaartinen, Ilkka; Tuchin, Valery; Alander, Jarmo

    2011-04-01

    The concentrations of blood and melanin in skin can be estimated from the reflectance of light. Many models have been built for this estimation, such as Monte Carlo simulation, diffusion models, and the differential modified Beer-Lambert law. Optimization-based methods are too slow for chromophore mapping of high-resolution spectral images, and the differential modified Beer-Lambert law is often not accurate enough. Optimal coefficients for the differential Beer-Lambert model are calculated by differentiating the diffusion model, optimized to the normal skin spectrum. The derivatives are then used to predict the difference in chromophore concentrations from the difference in absorption spectra. The accuracy of the method is tested both computationally, using a Monte Carlo multilayer simulation model, and experimentally, with data measured from the palm of a hand during an Allen's test, which modulates the blood content of skin. The correlations between the given and predicted blood, melanin, and oxygen saturation levels are r = 0.94, r = 0.99, and r = 0.73, respectively. Predicting the concentrations for all pixels in a 1-megapixel image takes ∼20 min, which is orders of magnitude faster than methods that require optimization during prediction.
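
    The prediction step in such differential models is linear: stack the sensitivities dA(lambda)/dc_i into a matrix J and recover concentration changes from absorbance differences by least squares. A minimal sketch; the matrix entries below are made up for illustration (in the paper they come from differentiating a diffusion model):

```python
import numpy as np

# Hypothetical sensitivity matrix J[l, i] = dA(lambda_l)/dc_i: change in
# absorbance at wavelength l per unit change of chromophore i.
J = np.array([[1.2, 0.4],
              [0.8, 0.6],
              [0.3, 0.9]])
dc_true = np.array([0.05, -0.02])   # true concentration changes
dA = J @ dc_true                    # resulting absorbance differences
# recover the concentration changes by linear least squares
dc_est, *_ = np.linalg.lstsq(J, dA, rcond=None)
```

Because the prediction is a single small linear solve per pixel, it scales to megapixel spectral images where per-pixel optimization would not.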

  16. A spectral mean for random closed curves

    NARCIS (Netherlands)

    van Lieshout, Maria Nicolette Margaretha

    2016-01-01

    We propose a spectral mean for closed sets described by sample points on their boundaries subject to mis-alignment and noise. We derive maximum likelihood estimators for the model and noise parameters in the Fourier domain. We estimate the unknown mean boundary curve by back-transformation and

  17. Cosmological applications of Padé approximant

    International Nuclear Information System (INIS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we apply the Padé approximant to two issues. First, we obtain an analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
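
    The claim that a Padé approximant often beats the truncated Taylor series can be checked on the standard textbook example (not from the paper): the [2/2] Padé approximant of e^x is built from the same five Taylor coefficients as the fourth-order Taylor polynomial, yet is noticeably more accurate at x = 1:

```python
import math

def taylor4(x):
    # fourth-order Taylor polynomial of exp at 0
    return 1 + x + x**2 / 2 + x**3 / 6 + x**4 / 24

def pade22(x):
    # [2/2] Pade approximant of exp, derived from the same Taylor coefficients
    return (1 + x / 2 + x**2 / 12) / (1 - x / 2 + x**2 / 12)

x = 1.0
err_taylor = abs(taylor4(x) - math.exp(x))   # ~9.9e-3
err_pade = abs(pade22(x) - math.exp(x))      # ~4.0e-3
```

The advantage grows for functions with nearby singularities, where the rational form can remain accurate outside the Taylor series' radius of convergence.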

  18. Cosmological applications of Padé approximant

    Science.gov (United States)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we apply the Padé approximant to two issues. First, we obtain an analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.

  19. Solutions to aggregation-diffusion equations with nonlinear mobility constructed via a deterministic particle approximation

    OpenAIRE

    Fagioli, Simone; Radici, Emanuela

    2018-01-01

    We investigate the existence of weak type solutions for a class of aggregation-diffusion PDEs with nonlinear mobility obtained as large particle limit of a suitable nonlocal version of the follow-the-leader scheme, which is interpreted as the discrete Lagrangian approximation of the target continuity equation. We restrict the analysis to nonnegative initial data in $L^{\\infty} \\cap BV$ away from vacuum and supported in a closed interval with zero-velocity boundary conditions. The main novelti...

  20. The self-consistent effective medium approximation (SEMA): New tricks from an old dog

    International Nuclear Information System (INIS)

    Bergman, David J.

    2007-01-01

    The fact that the self-consistent effective medium approximation (SEMA) leads to incorrect values for the percolation threshold, as well as for the critical exponents which characterize that threshold, has led to a decline in using that approximation. In this article I argue that SEMA has the unique capability, which is lacking in other approximation schemes for the macroscopic response of composite media, of leading to the discovery or prediction of new critical points. This is due to the fact that SEMA can often lead to explicit equations for the macroscopic response of a composite medium, even when that medium has a rather complicated character. In such cases, the SEMA equations are usually coupled and nonlinear, often even transcendental in character. Thus there is no question of finding exact solutions. Nevertheless, a useful ansatz, leading to a closed-form asymptotic solution, can often be made. In this way, singularities in the macroscopic response can be identified from a theoretical or mathematical treatment of the physical problem. This is demonstrated for two problems of magneto-transport in a composite medium, where the SEMA equations are solved using asymptotic analysis, leading to new types of critical points and critical behavior.
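
    The self-consistency structure of SEMA is visible already in its simplest instance, the two-phase Bruggeman equation for the effective conductivity: each phase is embedded in the (unknown) effective medium and the average polarization is required to vanish. A sketch that solves it numerically by bisection (textbook example, not the magneto-transport equations of the paper):

```python
def sema_conductivity(s1, s2, f, d=3, tol=1e-12):
    """Two-phase self-consistent effective medium (Bruggeman) conductivity:
    solve f*(s1-se)/(s1+(d-1)*se) + (1-f)*(s2-se)/(s2+(d-1)*se) = 0
    for the effective conductivity se by bisection."""
    def g(se):
        return (f * (s1 - se) / (s1 + (d - 1) * se)
                + (1 - f) * (s2 - se) / (s2 + (d - 1) * se))
    lo, hi = min(s1, s2), max(s1, s2)   # the root is bracketed by s1 and s2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:                  # g is positive below the root
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 50/50 mixture of a good (1.0) and a poor (0.01) conductor in 3D
se = sema_conductivity(1.0, 0.01, 0.5)  # ~0.27
```

This two-phase case admits a closed-form (quadratic) solution; the point of the abstract is that richer SEMA systems do not, yet still yield asymptotic closed forms near their critical points (here, the percolation threshold at volume fraction 1/d).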

  1. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations that are free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  2. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations that are free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  3. Generalization of first-principles thermodynamic model: Application to hexagonal close-packed ε-Fe3N

    DEFF Research Database (Denmark)

    Bakkedal, Morten B.; Shang, Shu- Li; Liu, Zi-Kui

    2016-01-01

    A complete first-principles thermodynamic model was developed and applied to hexagonal close-packed structure ε-Fe3N. The electronic structure was calculated using density functional theory and the quasiharmonic phonon approximation to determine macroscopic thermodynamic properties at finite...

  4. CLOSED-LOOP STRIPPING ANALYSIS (CLSA) OF ...

    Science.gov (United States)

    Synthetic musk compounds have been found in surface water, fish tissues, and human breast milk. Current techniques for separating these compounds from fish tissues require tedious sample clean-up procedures. A simple method for the determination of these compounds in fish tissues has been developed. Closed-loop stripping of saponified fish tissues in a 1-L Wheaton purge-and-trap vessel is used to strip compounds with high vapor pressures, such as synthetic musks, from the matrix onto a solid sorbent (Abselut Nexus). This technique is useful for screening biological tissues that contain lipids for musk compounds. Analytes are desorbed from the sorbent trap sequentially with polar and nonpolar solvents, concentrated, and directly analyzed by high-resolution gas chromatography coupled to a mass spectrometer operating in the selected ion monitoring mode. In this paper, we analyzed two homogenized samples of whole fish tissues spiked with synthetic musk compounds using closed-loop stripping analysis (CLSA) and pressurized liquid extraction (PLE). The analytes were not recovered quantitatively, but the extraction yield was sufficiently reproducible for at least semi-quantitative purposes (screening). The method was less expensive to implement and required significantly less sample preparation than the PLE technique. The research focused on in the subtasks is the development and application of state-of-the-art technologies to meet the needs of the public, Office of Water,

  5. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  6. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research, including theoretic developments, new computational alg

  7. A surprise in the first Born approximation for electron scattering

    International Nuclear Information System (INIS)

    Treacy, M.M.J.; Van Dyck, D.

    2012-01-01

    A standard textbook derivation for the scattering of electrons by a weak potential under the first Born approximation suggests that the far-field scattered wave should be in phase with the incident wave. However, it is well known that waves scattered from a weak phase object should be phase-shifted by π/2 relative to the incident wave. A disturbing consequence of this missing phase is that, according to the Optical Theorem, the total scattering cross section would be zero in the first Born approximation. We resolve this mystery pedagogically by showing that the first Born approximation fails to conserve electrons even to first order. Modifying the derivation to conserve electrons introduces the correct phase without changing the scattering amplitude. We also show that the far-field expansion for the scattered waves used in many texts is inappropriate for computing an exit wave from a sample, and that the near-field expansion also gives the appropriately phase-shifted result. -- Highlights: ► The first Born approximation is usually invoked as the theoretical physical basis for kinematical electron scattering theory. ► Although it predicts the correct scattering amplitude, it predicts the wrong phase; the scattered wave is missing a prefactor of i. ► We show that this arises because the standard textbook version of the first Born approximation does not conserve electrons. ► We show how this can be fixed.

  8. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    Science.gov (United States)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization, and recombination. A simple and computationally fast way to calculate electron collisional excitation cross sections for ions is to use the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross section similar to that from the more time-consuming Coulomb-Born approximation. We compare this new approximation with other commonly employed correction procedures. We also examine some further modifications to our Born-Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb-Born approximation for singly charged ions, which more accurately approximates convergent close-coupling calculations.

  9. An asymptotically consistent approximant for the equatorial bending angle of light due to Kerr black holes

    Science.gov (United States)

    Barlow, Nathaniel S.; Weinstein, Steven J.; Faber, Joshua A.

    2017-07-01

    An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21-48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations.

  10. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.
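
    The SPSA idea is that one random simultaneous perturbation of all parameters yields a gradient estimate from just two loss evaluations, and a projection keeps the iterate feasible. A sketch with a simple box constraint set (the paper handles general inequality constraints; gains and tolerances below are illustrative):

```python
import random

def project_box(theta, lo, hi):
    # project onto the box constraints lo <= theta_i <= hi
    return [min(max(t, lo), hi) for t in theta]

def spsa_minimize(loss, theta, lo, hi, iters=2000, a=0.1, c=0.1, seed=1):
    """Projected SPSA: estimate the gradient from two loss evaluations under
    a simultaneous Bernoulli perturbation, step, then project back onto the
    feasible set. Illustrative sketch, not the paper's algorithm."""
    rng = random.Random(seed)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                 # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1, 1)) for _ in theta]
        tp = [t + ck * d for t, d in zip(theta, delta)]
        tm = [t - ck * d for t, d in zip(theta, delta)]
        diff = (loss(tp) - loss(tm)) / (2.0 * ck)
        theta = project_box([t - ak * diff / d
                             for t, d in zip(theta, delta)], lo, hi)
    return theta

# minimize (x - 2)^2 + (y + 1)^2 subject to -0.5 <= x, y <= 0.5;
# the constrained minimizer is the corner (0.5, -0.5)
loss = lambda th: (th[0] - 2.0) ** 2 + (th[1] + 1.0) ** 2
theta = spsa_minimize(loss, [0.0, 0.0], -0.5, 0.5)
```

The iterate settles on the boundary of the feasible set, consistent with convergence to a Kuhn-Tucker point of the constrained problem.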

  11. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an

  12. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

    This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq, and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build; among others, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered...

  13. Bounded-Degree Approximations of Stochastic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.

  14. A Sobolev-Type Upper Bound for Rates of Approximation by Linear Combinations of Heaviside Plane Waves

    Czech Academy of Sciences Publication Activity Database

    Kainen, P.C.; Kůrková, Věra; Vogt, A.

    2007-01-01

    Roč. 147, č. 1 (2007), s. 1-10 ISSN 0021-9045 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : characteristic functions of closed half-spaces * perceptron neural networks * integral formulas * variation with respect to half-spaces * Radon transform * Gaussian function * rates of approximation Subject RIV: IN - Informatics, Computer Science Impact factor: 0.697, year: 2007

  15. On H-closed and U-closed functions | Cammaroto | Quaestiones ...

    African Journals Online (AJOL)

    In this article, we extend the work on H-closed functions started by Cammaroto, Fedorchuk and Porter in 1998. Also, U-closed functions are introduced and characterized in terms of filters and adherence. The hereditary and productivity properties are examined and developed for both H-closed and U-closed functions.

  16. A simple multistage closed-(box+reservoir) model of chemical evolution

    Directory of Open Access Journals (Sweden)

    Caimmi R.

    2011-01-01

    Full Text Available Simple closed-box (CB) models of chemical evolution are extended in two respects, namely (i) simple closed-(box+reservoir) (CBR) models allowing gas outflow from the box into the reservoir (Hartwick 1976) or gas inflow into the box from the reservoir (Caimmi 2007) with rate proportional to the star formation rate, and (ii) simple multistage closed-(box+reservoir) (MCBR) models allowing different stages of evolution characterized by different inflow or outflow rates. The theoretical differential oxygen abundance distribution (TDOD) predicted by the model remains close to a continuous broken straight line. An application is made where a fictitious sample is built up from two distinct samples of halo stars and taken as representative of the inner Galactic halo. The related empirical differential oxygen abundance distribution (EDOD) is represented, to an acceptable extent, as a continuous broken line for two viable [O/H]-[Fe/H] empirical relations. The slopes and the intercepts of the regression lines are determined, and then used as input parameters to MCBR models. Within the errors (±σ), regression line slopes correspond to a large inflow during the earlier stage of evolution and to low or moderate outflow during the subsequent stages. A possible inner halo - outer (metal-poor) bulge connection is also briefly discussed. Quantitative results cannot be considered for applications to the inner Galactic halo, unless selection effects and disk contamination are removed from halo samples, and discrepancies between different oxygen abundance determination methods are explained.
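
    The simple closed-box model that these CBR/MCBR models extend has a well-known closed-form solution (standard textbook result under instantaneous recycling with constant yield y; not the paper's extended model): the gas metallicity is Z = y ln(1/mu) for gas fraction mu, and the cumulative mass of stars formed below metallicity Z is proportional to 1 - exp(-Z/y):

```python
import math

def closed_box_metallicity(mu, y=0.01):
    """Gas metallicity in the simple closed-box model: Z = y * ln(1/mu),
    with mu the gas mass fraction and y the (constant) yield."""
    return y * math.log(1.0 / mu)

def stars_below(Z, y=0.01):
    """Unnormalized cumulative mass fraction of stars formed with
    metallicity below Z: 1 - exp(-Z/y) (instantaneous recycling)."""
    return 1.0 - math.exp(-Z / y)
```

Differentiating the cumulative form gives a stellar metallicity distribution proportional to exp(-Z/y), i.e. a straight line in (Z, log dN/dZ); the broken straight lines of the CBR/MCBR abundance distributions generalize exactly this behavior across evolutionary stages.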

  17. Whole-genome gene expression profiling of formalin-fixed, paraffin-embedded tissue samples.

    Directory of Open Access Journals (Sweden)

    Craig April

    2009-12-01

    Full Text Available We have developed a gene expression assay (Whole-Genome DASL) capable of generating whole-genome gene expression profiles from degraded samples such as formalin-fixed, paraffin-embedded (FFPE) specimens. We demonstrated a similar level of sensitivity in gene detection between matched fresh-frozen (FF) and FFPE samples, with the number and overlap of probes detected in the FFPE samples being approximately 88% and 95% of that in the corresponding FF samples, respectively; 74% of the differentially expressed probes overlapped between the FF and FFPE pairs. The WG-DASL assay is also able to detect 1.3-1.5-fold and 1.5-2-fold changes in intact and FFPE samples, respectively. The dynamic range of the assay is approximately 3 logs. Comparing the WG-DASL assay with an in vitro transcription-based labeling method yielded fold-change correlations of R² ≈ 0.83, while fold-change comparisons with quantitative RT-PCR assays yielded R² ≈ 0.86 and R² ≈ 0.55 for intact and FFPE samples, respectively. Additionally, the WG-DASL assay yielded high self-correlations (R² > 0.98) with low intact RNA inputs ranging from 1 ng to 100 ng; reproducible expression profiles were also obtained with 250 pg total RNA (R² ≈ 0.92), with approximately 71% of the probes detected in 100 ng total RNA also detected at the 250 pg level. When FFPE samples were assayed, 1 ng total RNA yielded self-correlations of R² ≈ 0.80, while still maintaining a correlation of R² ≈ 0.75 with standard FFPE inputs (200 ng). Taken together, these results show that the WG-DASL assay provides a reliable platform for genome-wide expression profiling in archived materials. It also possesses utility within clinical settings where only limited quantities of samples may be available (e.g. microdissected material) or when minimally invasive procedures are performed (e.g. biopsied specimens).

  18. Tube closure device, especially for sample irradiation

    International Nuclear Information System (INIS)

    Klahn, F.C.; Nolan, J.H.; Wills, C.

    1979-01-01

    Device for closing the outlet of a bore and temporarily locking components in this bore. Specifically, it concerns a device for closing a tube containing a set of samples for monitoring irradiation in a nuclear reactor [fr]

  19. Closed-form expressions for integrals of MKdV and sine-Gordon maps

    International Nuclear Information System (INIS)

    Kamp, Peter H van der; Rojas, O; Quispel, G R W

    2007-01-01

    We present closed-form expressions for approximately N integrals of 2N-dimensional maps. The maps are obtained by travelling wave reductions of the modified Korteweg-de Vries equation and of the sine-Gordon equation, respectively. We provide the integrating factors corresponding to the integrals. Moreover we show how the integrals and the integrating factors relate to the staircase method

  20. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  1. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is attained for all functions in balls of the reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Exact fluctuations of nonequilibrium steady states from approximate auxiliary dynamics

    OpenAIRE

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2017-01-01

    We describe a framework to significantly reduce the computational effort to evaluate large deviation functions of time integrated observables within nonequilibrium steady states. We do this by incorporating an auxiliary dynamics into trajectory based Monte Carlo calculations, through a transformation of the system's propagator using an approximate guiding function. This procedure importance samples the trajectories that most contribute to the large deviation function, mitigating the exponenti...

  3. Distributed approximating functional fit of the H3 ab initio potential-energy data of Liu and Siegbahn

    International Nuclear Information System (INIS)

    Frishman, A.; Hoffman, D.K.; Kouri, D.J.

    1997-01-01

    We report a distributed approximating functional (DAF) fit of the ab initio potential-energy data of Liu [J. Chem. Phys. 58, 1925 (1973)] and Siegbahn and Liu [ibid. 68, 2457 (1978)]. The DAF-fit procedure is based on a variational principle, and is systematic and general. Only two adjustable parameters occur in the DAF, leading to a fit which is both accurate (to the level inherent in the input data; RMS error of 0.2765 kcal/mol) and smooth ("well-tempered" in DAF terminology). In addition, the LSTH surface of Truhlar and Horowitz based on this same data [J. Chem. Phys. 68, 2466 (1978)] is itself approximated using only the values of the LSTH surface on the same grid coordinate points as the ab initio data, and the same DAF parameters. The purpose of this exercise is to demonstrate that the DAF delivers a well-tempered approximation to a known function that closely mimics the true potential-energy surface. As is to be expected, since there is only roundoff error present in the LSTH input data, even more significant figures of fitting accuracy are obtained. The RMS error of the DAF fit of the LSTH surface at the input points is 0.0274 kcal/mol, and a smooth fit, accurate to better than 1 cm⁻¹, can be obtained using more than 287 input data points. copyright 1997 American Institute of Physics

  4. Leachate characterization of active and closed dump sites in Port ...

    African Journals Online (AJOL)

    This study characterizes the leachate quality of both active and closed dump sites in Port Harcourt City. Leachates were sampled from the base of the dump sites and analysed; pH, dissolved oxygen (DO), electrical conductivity and total dissolved solids were determined on the samples in-situ, while chloride, sulphate ...

  5. Spatial patterns of close relationships across the lifespan

    Science.gov (United States)

    Jo, Hang-Hyun; Saramäki, Jari; Dunbar, Robin I. M.; Kaski, Kimmo

    2014-11-01

    The dynamics of close relationships is important for understanding the migration patterns of individual life-courses. The bottom-up approach to this subject by social scientists has been limited by sample size, while the more recent top-down approach using large-scale datasets suffers from a lack of detail about the human individuals. We incorporate the geographic and demographic information of millions of mobile phone users with their communication patterns to study the dynamics of close relationships and its effect on their life-course migration. We demonstrate how the close age- and sex-biased dyadic relationships are correlated with the geographic proximity of the pair of individuals, e.g., young couples tend to live further from each other than old couples. In addition, we find that emotionally closer pairs are living geographically closer to each other. These findings imply that the life-course framework is crucial for understanding the complex dynamics of close relationships and their effect on the migration patterns of human individuals.

  6. RADIAL VELOCITY STUDIES OF CLOSE BINARY STARS. XIV

    International Nuclear Information System (INIS)

    Pribulla, Theodor; Rucinski, Slavek M.; DeBond, Heide; De Ridder, Archie; Karmo, Toomas; Thomson, J. R.; Croll, Bryce; Ogloza, Waldemar; Pilecki, Bogumil; Siwak, Michal

    2009-01-01

    Radial velocity (RV) measurements and sine curve fits to the orbital RV variations are presented for 10 close binary systems: TZ Boo, VW Boo, EL Boo, VZ CVn, GK Cep, RW Com, V2610 Oph, V1387 Ori, AU Ser, and FT UMa. Our spectroscopy revealed two quadruple systems, TZ Boo and V2610 Oph, while three stars showing small photometric amplitudes, EL Boo, V1387 Ori, and FT UMa, were found to be triple systems. GK Cep is a close binary with a faint third component. While most of the studied eclipsing systems are contact binaries, VZ CVn and GK Cep are detached or semidetached double-lined binaries, and EL Boo, V1387 Ori, and FT UMa are close binaries of uncertain binary type. The large fraction of triple and quadruple systems found in this sample supports the hypothesis of formation of close binaries in multiple stellar systems; it also demonstrates that low photometric amplitude binaries are a fertile ground for further discoveries of multiple systems.

  7. J_z-preserving propensities in molecular collisions. I. Quantal coupled states and classical impulsive approximations

    International Nuclear Information System (INIS)

    Khare, V.; Kouri, D.J.; Hoffman, D.K.

    1981-01-01

    The occurrence of j_z-preserving propensities in atom–linear molecule collisions is considered within the contexts of the quantum mechanical CS approximation and of a classical model collision system. The latter involves an impulsive interaction which is the extreme limit of the class of potentials for which the CS approximation is expected to be valid. The classical model results in exact conservation of j_z along a "kinematic apse." Quantum mechanically, the CS approximation is reformulated in a manner that clearly shows the relationship between the l choice and the degree and direction of j_z preservation. Away from the forward direction, the simplest choice obeying time-reversal symmetry, l-bar = (l + l')/2, is shown to result in a propensity for preserving j_z along a "geometric apse" which coincides with the kinematic apse in the energy-sudden limit, and for non-energy-sudden systems differs significantly from it only close to the forward direction

  8. Closed Loop Control of Oxygen Delivery and Oxygen Generation

    Science.gov (United States)

    2017-08-01

    were used for this study and were connected via a USB cable to allow communication. The ventilator was modified to allow closed loop control of oxygen based on the oxygen saturation by oximetry (SpO2) and intermittent arterial blood sampling for arterial oxygen tension (partial pressure of oxygen [PaO2]) ...

  9. USING CLOSE WHITE DWARF + M DWARF STELLAR PAIRS TO CONSTRAIN THE FLARE RATES IN CLOSE STELLAR BINARIES

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Dylan P.; West, Andrew A. [Astronomy Department, Boston University, 725 Commonwealth Ave, Boston, MA 02215 (United States); Becker, Andrew C., E-mail: dpmorg@bu.edu [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States)

    2016-05-01

    We present a study of the statistical flare rates of M dwarfs (dMs) with close white dwarf (WD) companions (WD+dM; typical separations <1 au). Our previous analysis demonstrated that dMs with close WD companions are more magnetically active than their field counterparts. One likely implication of having a close binary companion is increased stellar rotation through disk-disruption, tidal effects, and/or angular momentum exchange; increased stellar rotation has long been associated with an increase in stellar activity. Previous studies show a strong correlation between dMs that are magnetically active (showing H α in emission) and the frequency of stellar flare rates. We examine the difference between the flare rates observed in close WD+dM binary systems and field dMs. Our sample consists of a subset of 181 close WD+dM pairs from Morgan et al. observed in the Sloan Digital Sky Survey Stripe 82, where we obtain multi-epoch observations in the Sloan ugriz bands. We find an increase in the overall flaring fraction in the close WD+dM pairs (0.09 ± 0.03%) compared to the field dMs (0.0108 ± 0.0007%) and a lower flaring fraction for active WD+dMs (0.05 ± 0.03%) compared to active dMs (0.28 ± 0.05%). We discuss how our results constrain both the single and binary dM flare rates. Our results also constrain dM multiplicity, our knowledge of the Galactic transient background, and may be important for the habitability of attending planets around dMs with close companions.

  10. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing normal difference as a parameter in a weighted metric in Metro’s sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what the human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
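The baseline against which the weighted metric is compared, the Hausdorff distance between point samples, can be sketched in a few lines. This is a plain, unweighted version; the normal-difference weighting described above is not reproduced here:

```python
import math

def directed_hausdorff(A, B):
    """Largest distance from any point of sample A to its nearest point in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 3-D point samples."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Two small point samples: one model and a copy shifted by 0.1 along x.
cube = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
shifted = [(x + 0.1, y, z) for (x, y, z) in cube]
print(round(hausdorff(cube, shifted), 3))  # 0.1
```

In practice (as in Metro) the points are sampled densely over the mesh surfaces rather than taken from the vertices alone.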

  11. Identifying multiple influential spreaders based on generalized closeness centrality

    Science.gov (United States)

    Liu, Huan-Li; Ma, Chuang; Xiang, Bing-Bing; Tang, Ming; Zhang, Hai-Feng

    2018-02-01

    To maximize the spreading influence of multiple spreaders in complex networks, one important fact cannot be ignored: the multiple spreaders should be dispersively distributed in networks, which can effectively reduce the redundancy of information spreading. For this purpose, we define a generalized closeness centrality (GCC) index by generalizing the closeness centrality index to a set of nodes. The problem then converts to how to identify multiple spreaders such that an objective function has the minimal value. By comparing with the K-means clustering algorithm, we find that the optimization problem is very similar to the problem of minimizing the objective function in the K-means method. Therefore, finding multiple nodes with the highest GCC value can be approximately solved by the K-means method. Two typical transmission dynamics, the epidemic spreading process and the rumor spreading process, are implemented in real networks to verify the good performance of our proposed method.
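The intuition that dispersed spreaders score higher can be illustrated with a toy set-closeness measure. The definition below (inverse mean hop distance from every other node to its nearest seed) is an illustrative variant, not necessarily the paper's exact GCC formula:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def set_closeness(adj, seeds):
    """Closeness of a node *set*: inverse mean distance from the other
    nodes to their nearest seed (illustrative variant of a generalized
    closeness centrality)."""
    nearest = {}
    for s in seeds:
        for v, d in bfs_dist(adj, s).items():
            if v not in nearest or d < nearest[v]:
                nearest[v] = d
    dists = [d for v, d in nearest.items() if v not in seeds]
    return len(dists) / sum(dists)

# Path graph 0-1-2-3-4-5: dispersed seeds cover the network better
# than two adjacent seeds.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(set_closeness(path, {0, 5}) > set_closeness(path, {0, 1}))  # True
```

Selecting the seed set that maximizes such a score is what the K-means analogy above approximates.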

  12. Closed capture-recapture sampling

    Indian Academy of Sciences (India)

    lyn

    A modern economy growing at 6-9 % per year. “Natural” forests ... TIGER SOCIAL ORGANIZATION AND LAND TENURE. PATTERNS .... survey data). • Hierarchical models under a Bayesian Approach ... science independent of management ...

  13. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
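The voting scheme can be illustrated with Boolean functions standing in for circuits. The reference and approximate "circuits" below are hypothetical two-input examples; the point is that a single disagreeing circuit is always masked by the majority:

```python
def majority_vote(bits):
    """Return the majority value of an odd-length list of 0/1 signals."""
    return int(sum(bits) > len(bits) // 2)

# Hypothetical reference circuit (XOR) and three approximate variants,
# one of which disagrees with the reference on input (1, 1).
ref = lambda a, b: a ^ b
approx_circuits = [
    lambda a, b: a ^ b,              # matches the reference
    lambda a, b: (a ^ b) | (a & b),  # wrong on (1, 1): outputs 1, not 0
    lambda a, b: a ^ b,              # matches the reference
]

for a in (0, 1):
    for b in (0, 1):
        voted = majority_vote([c(a, b) for c in approx_circuits])
        assert voted == ref(a, b)  # the faulty minority is outvoted
print("majority output matches the reference on all inputs")
```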

  14. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.
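The starting point of any such mapping is the standard hyperbolic moveout equation t(x) = sqrt(t0^2 + x^2 / v_nmo^2); the generalized and TTI-mapped approximations discussed above refine this form. A minimal sketch with illustrative parameter values:

```python
import math

def hyperbolic_moveout(t0, x, v_nmo):
    """Two-way traveltime at offset x under the hyperbolic moveout
    approximation (t0 = zero-offset time in s, v_nmo = NMO velocity in m/s)."""
    return math.sqrt(t0 ** 2 + (x / v_nmo) ** 2)

# Illustrative values: t0 = 1 s, v_nmo = 2000 m/s.
for x in (0.0, 1000.0, 2000.0):
    print(f"offset {x:6.0f} m -> t = {hyperbolic_moveout(1.0, x, 2000.0):.4f} s")
```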

  15. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

    A method for analytical approximation of experimental neutron-physics data by rational functions based on the Pade approximation is suggested. It is shown that the specific properties of the Pade approximation in polar zones constitute an extremely favourable analytical property, essentially extending the convergence range and increasing its rate as compared with polynomial approximation. The Pade approximation is a particularly natural instrument for resonance-curve processing, as the resonances conform to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction of the stored numerical information as compared with point-by-point tabulation at the same accuracy
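The compactness of rational approximation is easy to demonstrate on a textbook example (unrelated to the BOSPOR data): the [2/2] Padé approximant of exp(x) is built from the same five Taylor coefficients as the fourth-order polynomial, yet is distinctly more accurate:

```python
import math

def pade_exp_2_2(x):
    """Classical [2/2] Pade approximant of exp(x); it matches the Taylor
    series of exp through the x**4 term."""
    num = 1 + x / 2 + x ** 2 / 12
    den = 1 - x / 2 + x ** 2 / 12
    return num / den

def taylor_exp_4(x):
    """Fourth-order Taylor polynomial of exp(x), using the same data."""
    return sum(x ** k / math.factorial(k) for k in range(5))

x = 1.0
print(abs(pade_exp_2_2(x) - math.exp(x)))   # about 4.0e-3
print(abs(taylor_exp_4(x) - math.exp(x)))   # about 9.9e-3
```

For functions with nearby poles (such as resonance curves) the advantage of the rational form is far larger than in this mild example.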

  16. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  17. Online Adaptive Optimal Control of Vehicle Active Suspension Systems Using Single-Network Approximate Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Zhi-Jun Fu

    2017-01-01

    Full Text Available In view of the performance requirements (e.g., ride comfort, road holding, and suspension space limitation) for vehicle suspension systems, this paper proposes an adaptive optimal control method for a quarter-car active suspension system using the approximate dynamic programming (ADP) approach. An online optimal control law is obtained by using a single adaptive critic neural network (NN) to approximate the solution of the Hamilton-Jacobi-Bellman (HJB) equation. Stability of the closed-loop system is proved by Lyapunov theory. Compared with the classic linear quadratic regulator (LQR) approach, the proposed ADP-based adaptive optimal control method demonstrates improved performance in the presence of parametric uncertainties (e.g., sprung mass) and unknown road displacement. Numerical simulation results of a sedan suspension system are presented to verify the effectiveness of the proposed control strategy.
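The LQR baseline mentioned above comes from solving a Riccati equation. A scalar discrete-time sketch (with illustrative numbers, not the four-state quarter-car model) shows the fixed-point iteration and the stabilizing feedback gain it yields:

```python
def lqr_gain_scalar(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati equation
    p <- q + a*p*a - (a*p*b)**2 / (r + b*p*b) for x[k+1] = a*x[k] + b*u[k],
    then return the optimal state-feedback gain k in u = -k*x."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return (b * p * a) / (r + b * p * b)

# An unstable plant (a = 1.1) is stabilized by the LQR feedback.
k = lqr_gain_scalar(a=1.1, b=1.0, q=1.0, r=1.0)
print(abs(1.1 - 1.0 * k) < 1.0)  # closed-loop pole inside the unit circle: True
```

The ADP approach above learns the equivalent of p online with a critic network instead of iterating the model-based recursion.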

  18. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

    Hartree-Fock, and Tamm-Dancoff approximations are tested for angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadripole interactions are studied following the Elliott model. Results are applied to Ne 20 [pt

  19. Parallelizable approximate solvers for recursions arising in preconditioning

    Energy Technology Data Exchange (ETDEWEB)

    Shapira, Y. [Israel Inst. of Technology, Haifa (Israel)

    1996-12-31

    For the recursions used in the Modified Incomplete LU (MILU) preconditioner, namely, the incomplete decomposition, forward elimination and back substitution processes, a parallelizable approximate solver is presented. The present analysis shows that the solutions of the recursions depend only weakly on their initial conditions and may be interpreted to indicate that the inexact solution is close, in some sense, to the exact one. The method is based on a domain decomposition approach, suitable for parallel implementations with message passing architectures. It requires a fixed number of communication steps per preconditioned iteration, independently of the number of subdomains or the size of the problem. The overlapping subdomains are either cubes (suitable for mesh-connected arrays of processors) or constructed by the data-flow rule of the recursions (suitable for line-connected arrays with possibly SIMD or vector processors). Numerical examples show that, in both cases, the overhead in the number of iterations required for convergence of the preconditioned iteration is small relative to the speed-up gained.

  20. Rapid Pneumatic Transport of Radioactive Samples - RaPToRS

    Science.gov (United States)

    Padalino, S.; Barrios, M.; Sangster, C.

    2005-10-01

    Some ICF neutron activation diagnostics require quick retrieval of the activated sample. Minimizing retrieval times is particularly important when the half-life of the activated material is on the order of the transport time or the degree of radioactivity is close to the background counting level. These restrictions exist in current experiments performed at the Laboratory for Laser Energetics, thus motivating the development of the RaPToRS system. The system has been designed to minimize transportation time while requiring no human intervention during transport or counting. These factors will be important if the system is to be used at the NIF, where radiological hazards will be present following activation. The sample carrier is pneumatically transported via a 4 inch ID PVC pipe to a remote location in excess of 100 meters from the activation site at a speed of approximately 7 m/s. It arrives at an end station where it is dismounted robotically from the carrier and removed from its hermetic package. The sample is then placed by the robot in a counting station. This system is currently being developed to measure back-to-back gamma rays produced by positron annihilation which were emitted by activated graphite. Funded in part by the U.S. DOE under subcontract with LLE at the University of Rochester.

  1. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Directory of Open Access Journals (Sweden)

    Andreas Steimer

    Full Text Available Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have been conducted that deal with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing

  2. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Science.gov (United States)

    Steimer, Andreas; Schindler, Kaspar

    2015-01-01

    Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have been conducted that deal with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
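The sampling mechanism is easy to reproduce with a minimal Euler-Maruyama simulation of the EIF model, where each recorded ISI is one "sample". All parameter values below are illustrative choices, not the ones used in the paper:

```python
import math
import random

def eif_isis(i_ext, t_max=2000.0, dt=0.01, seed=1):
    """Euler-Maruyama simulation of a noisy exponential integrate-and-fire
    neuron; returns the list of interspike intervals in ms. All parameters
    are illustrative."""
    random.seed(seed)
    tau, e_l, v_t, d_t = 10.0, -65.0, -50.0, 2.0   # ms, mV
    v_reset, v_spike = -65.0, -30.0                 # reset and spike cutoff
    sigma = 2.0                                     # noise amplitude (mV)
    v, last_spike, isis, t = e_l, 0.0, [], 0.0
    while t < t_max:
        # Leak plus the exponential sodium term that kicks in near threshold.
        dv = (-(v - e_l) + d_t * math.exp((v - v_t) / d_t) + i_ext) / tau
        v += dv * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
        if v >= v_spike:
            isis.append(t - last_spike)  # one "sample" drawn
            last_spike, v = t, v_reset
    return isis

isis = eif_isis(i_ext=16.0)
print(len(isis) > 0)  # the neuron fires repeatedly: True
```

The histogram of `isis` is the distribution the theory relates to the input current.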

  3. An asymptotically consistent approximant for the equatorial bending angle of light due to Kerr black holes

    International Nuclear Information System (INIS)

    Barlow, Nathaniel S; Faber, Joshua A; Weinstein, Steven J

    2017-01-01

    An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21–48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations. (paper)

  4. Continuous approximation for interaction energy of adamantane encapsulated inside carbon nanotubes

    Science.gov (United States)

    Baowan, Duangkamon; Hill, James M.; Bacsa, Wolfgang

    2018-02-01

    The interaction energy for two adjacent adamantane molecules, and that of adamantane molecules encapsulated inside carbon nanotubes, are investigated considering only dipole-dipole induced interaction. The Lennard-Jones potential and the continuous approximation are utilised to derive analytical expressions for these interaction energies. The equilibrium distance of 3.281 Å between two adamantane molecules is determined. The smallest carbon nanotube radius b0 that can encapsulate the adamantane molecule and the tube radius bmax that gives the maximum suction energy, both of which depend linearly on the adamantane radius, are calculated. For larger diameter tubes, the off-axis position has been calculated, and the equilibrium distance between the molecule and the tube wall is found to be close to the interlayer spacing in graphene.
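The pair potential used above is the standard 12-6 Lennard-Jones expression, whose minimum sits at r = 2^(1/6)·σ. In the sketch below, σ is chosen hypothetically so that the minimum falls near the paper's 3.281 Å equilibrium distance; ε and σ are not fitted adamantane parameters:

```python
def lj(r, eps=1.0, sigma=2.924):
    """12-6 Lennard-Jones pair potential
    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
    eps and sigma here are illustrative, not fitted adamantane values."""
    s = sigma / r
    return 4.0 * eps * (s ** 12 - s ** 6)

# Locate the minimum by a fine scan and compare with the analytic 2**(1/6)*sigma.
rs = [2.5 + 0.001 * i for i in range(2000)]   # 2.5 ... 4.5 angstrom
r_min = min(rs, key=lj)
print(round(r_min, 3), round(2 ** (1 / 6) * 2.924, 3))
```

The continuous approximation in the paper integrates this same pair potential over the molecular surfaces instead of summing over atom pairs.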

  5. First occurrence of close-to-ideal Kirkiite at Vulcano (Aeolian Islands, Italy)

    DEFF Research Database (Denmark)

    Pinto, Daniela; Balic-Zunic, Tonci; Garavelli, anna

    2006-01-01

    Samples of kirkiite from the high-temperature fumaroles of La Fossa crater of Vulcano (Aeolian Islands, Italy) were chemically and structurally investigated in this work. Associated minerals are vurroite, bismuthinite, galenobismutite, cannizzarite, lillianite, heyrovskýite, galena, and other less abundant phases. ... The structure of the close-to-ideal kirkiite from Vulcano has been compared with the structure of the type specimen. The comparison reveals a variation in As-Bi substitution, with samples from Vulcano probably being close to the maximum possible Bi and the minimum As content for this structure type. This is reflected ...

  6. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  7. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most ...
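The single-stream building block behind such protocols is reservoir sampling (Algorithm R), which maintains a uniform without-replacement sample in O(k) space; the communication-efficient distributed protocols described above extend this idea across sites:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform without-replacement sample of size k over a
    stream using O(k) memory (classic Algorithm R)."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)        # fill the reservoir first
        else:
            j = rng.randrange(i + 1)   # item survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

random.seed(0)
print(reservoir_sample(range(1000), 5))
```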

  8. THE CLOSE BINARY FRACTION OF DWARF M STARS

    International Nuclear Information System (INIS)

    Clark, Benjamin M.; Blake, Cullen H.; Knapp, Gillian R.

    2012-01-01

    We describe a search for close spectroscopic dwarf M star binaries using data from the Sloan Digital Sky Survey to address the question of the rate of occurrence of multiplicity in M dwarfs. We use a template-fitting technique to measure radial velocities from 145,888 individual spectra obtained for a magnitude-limited sample of 39,543 M dwarfs. Typically, the three or four spectra observed for each star are separated in time by less than four hours, but for ∼17% of the stars, the individual observations span more than two days. In these cases we are sensitive to large-amplitude radial velocity variations on timescales comparable to the separation between the observations. We use a control sample of objects having observations taken within a four-hour period to make an empirical estimate of the underlying radial velocity error distribution and simulate our detection efficiency for a wide range of binary star systems. We find the frequency of binaries among the dwarf M stars with a < 0.4 AU to be 3%-4%. Comparison with other samples of binary stars demonstrates that the close binary fraction, like the total binary fraction, is an increasing function of primary mass.

  9. THE CLOSE BINARY FRACTION OF DWARF M STARS

    Energy Technology Data Exchange (ETDEWEB)

    Clark, Benjamin M. [Penn Manor High School, 100 East Cottage Avenue, Millersville, PA 17551 (United States); Blake, Cullen H.; Knapp, Gillian R. [Princeton University, Department of Astrophysical Sciences, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States)

    2012-01-10

    We describe a search for close spectroscopic dwarf M star binaries using data from the Sloan Digital Sky Survey to address the question of the rate of occurrence of multiplicity in M dwarfs. We use a template-fitting technique to measure radial velocities from 145,888 individual spectra obtained for a magnitude-limited sample of 39,543 M dwarfs. Typically, the three or four spectra observed for each star are separated in time by less than four hours, but for ∼17% of the stars, the individual observations span more than two days. In these cases we are sensitive to large-amplitude radial velocity variations on timescales comparable to the separation between the observations. We use a control sample of objects having observations taken within a four-hour period to make an empirical estimate of the underlying radial velocity error distribution and simulate our detection efficiency for a wide range of binary star systems. We find the frequency of binaries among the dwarf M stars with a < 0.4 AU to be 3%-4%. Comparison with other samples of binary stars demonstrates that the close binary fraction, like the total binary fraction, is an increasing function of primary mass.

  10. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Science.gov (United States)

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2014-01-01

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). There is currently considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms simultaneously offer fast convergence, low computational complexity per iteration, and guaranteed convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and satisfies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
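
    The two-matrix closed form mentioned above can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' AJD code; the function names are ours):

```python
import numpy as np

def spd_sqrt(A):
    """Matrix square root of a symmetric positive definite (SPD) matrix
    via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean_2(A, B):
    """Closed-form Fisher-metric geometric mean of two SPD matrices:
    A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

# For commuting (e.g. diagonal) matrices the mean reduces to the
# elementwise geometric mean sqrt(a_i * b_i) of the eigenvalues.
M = geometric_mean_2(np.diag([4.0, 1.0]), np.diag([9.0, 16.0]))
```

    For more than two matrices no such closed form exists, which is where the iterative algorithms and the AJD-based approximation discussed in the abstract come in.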

  11. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Directory of Open Access Journals (Sweden)

    Marco Congedo

    Full Text Available We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). There is currently considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms simultaneously offer fast convergence, low computational complexity per iteration, and guaranteed convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and satisfies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.

  12. Modelling of heating and evaporation of gasoline fuel droplets: A comparative analysis of approximations

    KAUST Repository

    Elwardani, Ahmed Elsaid

    2013-09-01

    Modelling of gasoline fuel droplet heating and evaporation processes is investigated using several approximations of this fuel. These are quasi-components used in the quasi-discrete model and the approximations of these quasi-components (Surrogate I (molar fractions: 83.0% n-C6H14 + 15.6% n-C10H22 + 1.4% n-C14H30) and Surrogate II (molar fractions: 83.0% n-C7H16 + 15.6% n-C11H24 + 1.4% n-C15H32)). Also, we have used Surrogate A (molar fractions: 56% n-C7H16 + 28% iso-C8H18 + 17% C7H8) and Surrogate B (molar fractions: 63% n-C7H16 + 20% iso-C8H18 + 17% C7H8), originally introduced based on the closeness of the ignition delay of surrogates to that of gasoline fuel. The predictions of droplet radii and temperatures based on three quasi-components and their approximations (Surrogates I and II) are shown to be much more accurate than the predictions using Surrogates A and B. © 2013 Elsevier Ltd. All rights reserved.

  13. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend that approximation skills and approximate number processing be the subject of future investigations of decision making under risk.

  14. Wigner’s phase-space function and atomic structure: II. Ground states for closed-shell atoms

    DEFF Research Database (Denmark)

    Springborg, Michael; Dahl, Jens Peder

    1987-01-01

    We present formulas for reduced Wigner phase-space functions for atoms, with an emphasis on the first-order spinless Wigner function. This function can be written as the sum of separate contributions from single orbitals (the natural orbitals). This allows a detailed study of the function. Here we display and analyze the function for the closed-shell atoms helium, beryllium, neon, argon, and zinc in the Hartree-Fock approximation. The quantum-mechanical exact results are compared with those obtained with the approximate Thomas-Fermi description of electron densities in phase space.

  15. ABCtoolbox: a versatile toolkit for approximate Bayesian computations

    Directory of Open Access Journals (Sweden)

    Neuenschwander Samuel

    2010-03-01

    Full Text Available Abstract Background The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations not requiring likelihood computations. Results Here we present ABCtoolbox, a series of open source programs to perform Approximate Bayesian Computations (ABC). It implements various ABC algorithms including rejection sampling, MCMC without likelihood, a Particle-based sampler and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can also interact with most simulation and summary statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice, and validation of the estimation procedure, to visualization of the results.
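
    The rejection-sampling flavour of ABC implemented in ABCtoolbox can be illustrated with a minimal sketch (our illustration, not ABCtoolbox code; the toy model, tolerance, and function names are assumptions):

```python
import random
import statistics

def abc_rejection(observed_stat, prior_sample, simulate, n_draws=20000, eps=0.05):
    """Basic rejection ABC: draw parameters from the prior, simulate data,
    and keep parameters whose summary statistic lands within eps of the
    observed one. The accepted draws approximate the posterior."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed_stat) < eps:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a normal (known sd = 1) from a sample mean.
random.seed(1)
true_mean, n = 2.0, 50
obs = statistics.fmean(random.gauss(true_mean, 1.0) for _ in range(n))

post = abc_rejection(
    observed_stat=obs,
    prior_sample=lambda: random.uniform(-5, 5),  # flat prior on the mean
    simulate=lambda th: statistics.fmean(random.gauss(th, 1.0) for _ in range(n)),
)
```

    The accepted sample `post` concentrates near the true mean; tightening `eps` sharpens the approximation at the cost of a lower acceptance rate.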

  16. Self-consistent GW0 results for the electron gas: Fixed screened potential W0 within the random-phase approximation

    International Nuclear Information System (INIS)

    von Barth, U.; Holm, B.

    1996-01-01

    With the aim of properly understanding the basis for and the utility of many-body perturbation theory as applied to extended metallic systems, we have calculated the electronic self-energy of the homogeneous electron gas within the GW approximation. The calculation has been carried out in a self-consistent way; i.e., the one-electron Green function obtained from Dyson's equation is the same as that used to calculate the self-energy. The self-consistency is restricted in the sense that the screened interaction W is kept fixed and equal to that of the random-phase approximation for the gas. We have found that the final results are marginally affected by the broadening of the quasiparticles, and that their self-consistent energies are still close to their free-electron counterparts as they are in non-self-consistent calculations. The reduction in strength of the quasiparticles and the development of satellite structure (plasmons) gives, however, a markedly smaller dynamical self-energy leading to, e.g., a smaller reduction in the quasiparticle strength as compared to non-self-consistent results. The relatively bad description of plasmon structure within the non-self-consistent GW approximation is marginally improved. A first attempt at including W in the self-consistency cycle leads to an even broader and structureless satellite spectrum in disagreement with experiment. copyright 1996 The American Physical Society

  17. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    Science.gov (United States)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  18. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    International Nuclear Information System (INIS)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation

  19. An Approximation Algorithm for the Facility Location Problem with Lexicographic Minimax Objective

    Directory of Open Access Journals (Sweden)

    Ľuboš Buzna

    2014-01-01

    Full Text Available We present a new approximation algorithm for the discrete facility location problem providing solutions that are close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows one to find an equitable location of facilities serving a large number of customers. The algorithm is independent of general purpose solvers and instead uses algorithms originally designed to solve the p-median problem. By numerical experiments, we demonstrate that our algorithm makes it possible to increase the size of solvable problems and provides high-quality solutions. The algorithm found an optimal solution for all tested instances where we could compare the results with the exact algorithm.

  20. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    Science.gov (United States)

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only works well with binary, or close-to-binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL) instead of the negative free energy, which also makes it possible to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
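
    The two value-function approximators contrasted above differ only in how each hidden unit contributes: a softplus term for the negative free energy (FERL) versus a sigmoid-weighted activation for the negative expected energy (EERL). A minimal plain-Python sketch (ours, illustrative; not the authors' implementation):

```python
import math

def negative_free_energy(s, b, c, W):
    """FERL value estimate: -F(s) = b.s + sum_j softplus(c_j + sum_i W[i][j] s_i),
    the negative free energy of an RBM with visible (state) vector s,
    visible biases b, hidden biases c, and weight matrix W[i][j]."""
    visible = sum(bi * si for bi, si in zip(b, s))
    hidden = 0.0
    for j, cj in enumerate(c):
        act = cj + sum(W[i][j] * s[i] for i in range(len(s)))
        hidden += math.log1p(math.exp(act))  # softplus(act); stable for act <= 0
    return visible + hidden

def negative_expected_energy(s, b, c, W):
    """EERL value estimate: replace softplus(x) by sigmoid(x) * x, i.e. each
    hidden unit contributes its expected (rather than free) energy term."""
    visible = sum(bi * si for bi, si in zip(b, s))
    hidden = 0.0
    for j, cj in enumerate(c):
        act = cj + sum(W[i][j] * s[i] for i in range(len(s)))
        hidden += act / (1.0 + math.exp(-act))  # sigmoid(act) * act
    return visible + hidden
```

    With all weights and biases zero, the softplus term gives log 2 per hidden unit while the expected-energy term gives 0, which makes the difference between the two approximators concrete.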

  1. Approximate models for neutral particle transport calculations in ducts

    International Nuclear Information System (INIS)

    Ono, Shizuca

    2000-01-01

    The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)

  2. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...
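
    For a general redundant dictionary, computing the best m-term approximation is hard; in the special case of an orthonormal basis it reduces to keeping the m largest-magnitude coefficients. A sketch of that special case (our illustration, not the paper's general dictionary setting):

```python
def best_m_term(coeffs, m):
    """Best m-term approximation in an orthonormal basis: keep the m
    largest-magnitude coefficients and zero out the rest."""
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    keep = set(order[:m])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

c = [0.1, -3.0, 0.5, 2.0, -0.2]
approx = best_m_term(c, 2)  # keeps -3.0 and 2.0
# In an orthonormal basis the squared l2 error is the sum of the
# squared discarded coefficients.
error2 = sum((a - b) ** 2 for a, b in zip(c, approx))
```

    The approximation classes studied in the paper are defined by how fast this error decays as m grows.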

  3. Spectra theory for nuclei with closed shells (1962)

    International Nuclear Information System (INIS)

    Gillet, V.

    1962-01-01

    A unified theory for the spectra of nuclei with closed shells, based on the elementary particle-hole excitation of these systems, is applied to a study of carbon-12, oxygen-16 and calcium-40. Two approximations are made. The first consists in diagonalizing the residual two-body interaction in a limited sub-space having one-particle and one-hole configurations. Its validity depends on the high energy necessary for exciting a particle-hole pair. The second approximation consists in re-summing the infinite sub-series of the particle-hole diagrams. It is equivalent to the time-dependent Hartree-Fock method, or to the Quasi-Boson method. Its domain of validity in the nuclear case is not thoroughly understood. The summed diagrams are preponderant at the high density limit, when the nuclear density is about unity. The violation of the Pauli principle in this approximation is only justified if the number of excited pairs is small with respect to the number of particle states available; in the case of light nuclei the degeneracies of the shells are small. Nevertheless this approximation, which postulates the existence of an average nuclear field varying slowly in time with respect to the nucleons' periods, has the merit of being self-consistent, of giving eigenstates orthogonal to the non-physical centre-of-mass state, and of improving the calculation of the summation rules. In order to determine and to limit the role of phenomenology in the results obtained using these approximations, a maximum amount of experimental data is calculated. By applying the method of least squares to fourteen energy levels of oxygen and carbon, the region of optimum agreement in the effective interaction parameters is determined. This region is in part a function of the numerical approximations made. We hope that it will keep its significance when the theory is improved. It is compatible with certain characteristics of free nucleon-nucleon scattering. The present research favours the

  4. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
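
    A spline constructed from truncated polynomials, as treated in Part 1, amounts to a least-squares fit in the truncated power basis. A minimal NumPy sketch (ours; the knot positions and test curve are illustrative assumptions):

```python
import numpy as np

def truncated_power_design(x, knots, degree=3):
    """Design matrix for the truncated power basis:
    1, x, ..., x^p, (x - t_1)_+^p, ..., (x - t_m)_+^p."""
    cols = [x ** k for k in range(degree + 1)]
    cols += [np.clip(x - t, 0.0, None) ** degree for t in knots]
    return np.column_stack(cols)

def fit_spline(x, y, knots, degree=3):
    """Least-squares spline approximation of scattered data."""
    A = truncated_power_design(x, knots, degree)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Noisy samples of a smooth curve, approximated by a cubic spline with two knots.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, x.size)
knots = [1 / 3, 2 / 3]
coef = fit_spline(x, y, knots)
fitted = truncated_power_design(x, knots) @ coef
rms = np.sqrt(np.mean((fitted - np.sin(2 * np.pi * x)) ** 2))
```

    As the article notes, this basis is simple but can be numerically unstable for many knots, which motivates the B-spline formulation of Part 2.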

  5. How do we decide whether the first Born approximation applies to inelastic collisions of charged particles with an atom or molecule

    International Nuclear Information System (INIS)

    Inokuti, M.; Manson, S.T.

    1985-01-01

    A motivation of our study is to help resolve a general issue in atomic-collision physics. There are two major sources of uncertainties in the evaluation of cross sections. First, one uses an approximation for treating the collision process, e.g., the FBA, the distorted-wave approximation, or the close-coupling approximation. Second, explicit evaluation of cross sections within any of these approximations must use as input eigenfunctions for the target in the initial state and in the final state at least, and possibly in the intermediate states. It is important to distinguish these two sources of uncertainties as clearly as possible. For instance, once the authors are sure that the FBA holds, the uncertainties in the cross-section evaluation are fully attributable to the uncertainties in the target eigenfunctions. Strong plausibility arguments are given for the validity of the FBA

  6. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-04-08

    In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
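
    For contrast with AESAMC, plain simulated annealing, one of the baselines it is compared against, can be sketched as follows (our illustration; the test function, step size, and cooling schedule are assumptions, and this is not the AESAMC algorithm itself):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.999, n_iter=20000, seed=0):
    """Plain simulated annealing on a 1-D energy landscape: propose Gaussian
    moves and accept uphill moves with probability exp(-dE / T) while the
    temperature T is geometrically lowered."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Multimodal test function; the global minimum lies near x = 0.
energy = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
best, fbest = simulated_annealing(energy, x0=4.0)
```

    AESAMC's self-adjusting target distribution is designed precisely to escape the local minima that trap a fixed-schedule sampler like this one.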

  7. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    Science.gov (United States)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally

  8. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
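
    The general idea, fix an FDR level, convert it into an approximate per-test significance level, then increase the sample size until a target average power is reached, can be sketched with a normal-approximation toy calculation. This is our illustration, not the voom-based procedure of the paper or the ssizeRNA package; the conversion formula alpha* = fdr·m1·power / ((m - m1)(1 - fdr)) and all numbers are assumptions:

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def power_z(n, effect, alpha):
    """Power of a two-sided z-test per gene, n samples per group,
    standardized effect size `effect`."""
    z_crit = Z.inv_cdf(1 - alpha / 2)
    noncentral = effect * sqrt(n / 2)
    return 1 - Z.cdf(z_crit - noncentral) + Z.cdf(-z_crit - noncentral)

def sample_size_fdr(effect, m, m1, fdr=0.05, target_power=0.8):
    """Smallest n per group reaching the target average power when the
    per-test level is backed out from the desired FDR (m tests, m1 of
    them truly differentially expressed)."""
    alpha = fdr * m1 * target_power / ((m - m1) * (1 - fdr))
    n = 2
    while power_z(n, effect, alpha) < target_power:
        n += 1
    return n

n = sample_size_fdr(effect=1.5, m=10000, m1=500, fdr=0.05, target_power=0.8)
```

    The stringent per-test level induced by FDR control is why RNA-seq designs with few replicates struggle to reach useful power for modest effect sizes.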

  9. Born approximation to a perturbative numerical method for the solution of the Schroedinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-01-01

    A step function perturbative numerical method (SF-PN method) is developed for the solution of the Cauchy problem for the second order linear differential equation in normal form. An important point stressed in the present paper, which seems to have been previously ignored in the literature devoted to the PN methods, is the close connection between the first order perturbation theory of the PN approach and the well-known Born approximation, and, in general, the connection between the various orders of the PN corrections and the Neumann series. (author)

  10. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...
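
    The inverse-inclusion-probability weighting idea can be illustrated with a toy Hansen-Hurwitz-type estimator in which a subject's selection probability is taken as proportional to its number of in-links (our illustration, not the paper's estimators):

```python
def inverse_degree_mean(values, degrees):
    """Weighted mean with weights 1/degree: subjects with many in-links,
    who are over-represented in a snowball sample, are down-weighted."""
    weights = [1.0 / d for d in degrees]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# The third subject has twice the in-links, so it is twice as likely to be
# recruited; weighting corrects the over-representation.
values = [10.0, 10.0, 20.0]
degrees = [1, 1, 2]
est = inverse_degree_mean(values, degrees)
```

    The unweighted mean here is about 13.3, while the weighted estimate is pulled back toward the low-degree subjects; in practice the degrees themselves must be approximated, which is the paper's central difficulty.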

  11. Insight into structural phase transitions from the decoupled anharmonic mode approximation.

    Science.gov (United States)

    Adams, Donat J; Passerone, Daniele

    2016-08-03

    We develop a formalism (decoupled anharmonic mode approximation, DAMA) that allows calculation of the vibrational free energy using density functional theory even for materials which exhibit negative curvature of the potential energy surface with respect to atomic displacements. We investigate vibrational modes beyond the harmonic approximation and approximate the potential energy surface with the superposition of the accurate potential along each normal mode. We show that the free energy can stabilize crystal structures at finite temperatures which appear dynamically unstable at T = 0. The DAMA formalism is computationally fast because it avoids statistical sampling through molecular dynamics calculations, and is in principle completely ab initio. It is free of statistical uncertainties and independent of model parameters, but can give insight into the mechanism of a structural phase transition. We apply the formalism to the perovskite cryolite, and investigate the temperature-driven phase transition from the P21/n to the Immm space group. We calculate a phase transition temperature between 710 and 950 K, in fair agreement with the experimental value of 885 K. This can be related to the underestimation of the interaction of the vibrational states. We also calculate the main axes of the thermal ellipsoid and can explain the experimentally observed increase of its volume for fluorine by 200-300% throughout the phase transition. Our calculations suggest the appearance of tunneling states in the high-temperature phase. The convergence of the vibrational DOS and of the critical temperature with respect to reciprocal-space sampling is investigated using the polarizable-ion model.
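
    In the decoupled-mode spirit of DAMA, once the energy levels of each (possibly anharmonic) one-dimensional mode problem are known, the vibrational free energy is a sum of independent single-mode partition-function terms. A sketch (ours; the units, level count, and harmonic check are illustrative, and the hard part, solving each 1-D mode problem ab initio, is assumed done):

```python
import math

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def mode_free_energy(levels_ev, temperature):
    """Free energy (eV) of one vibrational mode from its energy levels:
    F = -kT ln sum_n exp(-E_n / kT), evaluated with log-sum-exp for stability."""
    kt = KB * temperature
    e0 = min(levels_ev)
    z = sum(math.exp(-(e - e0) / kt) for e in levels_ev)
    return e0 - kt * math.log(z)

def total_free_energy(modes, temperature):
    """Decoupled-mode sum: each mode is treated as an independent 1-D problem."""
    return sum(mode_free_energy(levels, temperature) for levels in modes)

# Harmonic sanity check: levels (n + 1/2) * hw reproduce the textbook formula
# F = hw/2 + kT ln(1 - exp(-hw/kT)).
hw, T = 0.05, 300.0  # eV, K
levels = [(n + 0.5) * hw for n in range(200)]
f_numeric = mode_free_energy(levels, T)
kt = KB * T
f_analytic = hw / 2 + kt * math.log(1 - math.exp(-hw / kt))
```

    For an anharmonic mode the levels E_n come from the accurate potential along that mode rather than from a frequency, which is what lets the free energy stabilize structures that look unstable at T = 0.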

  12. Tunneling effects in electromagnetic wave scattering by nonspherical particles: A comparison of the Debye series and physical-geometric optics approximations

    Science.gov (United States)

    Bi, Lei; Yang, Ping

    2016-07-01

    The accuracy of the physical-geometric optics (PG-O) approximation is examined for the simulation of electromagnetic scattering by nonspherical dielectric particles. This study seeks a better understanding of the tunneling effect on the phase matrix by employing the invariant imbedding method to rigorously compute the zeroth-order Debye series, from which the tunneling efficiency and the phase matrix corresponding to the diffraction and external reflection are obtained. The tunneling efficiency is shown to be a factor quantifying the relative importance of the tunneling effect over the Fraunhofer diffraction near the forward scattering direction. Due to the tunneling effect, different geometries with the same projected cross section might have different diffraction patterns, which are traditionally assumed to be identical according to the Babinet principle. For particles with a fixed orientation, the PG-O approximation yields the external reflection pattern with reasonable accuracy, but ordinarily fails to predict the locations of peaks and minima in the diffraction pattern. The larger the tunneling efficiency, the worse the PG-O accuracy is at scattering angles less than 90°. If the particles are assumed to be randomly oriented, the PG-O approximation yields the phase matrix close to the rigorous counterpart, primarily due to error cancellations in the orientation-average process. Furthermore, the PG-O approximation based on an electric field volume-integral equation is shown to usually be much more accurate than the Kirchhoff surface integral equation at side-scattering angles, particularly when the modulus of the complex refractive index is close to unity. Finally, tunneling efficiencies are tabulated for representative faceted particles.

  13. Outlier Detection in Regression Using an Iterated One-Step Approximation to the Huber-Skip Estimator

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Bent

    2013-01-01

    In regression we can delete outliers based upon a preliminary estimator and re-estimate the parameters by least squares based upon the retained observations. We study the properties of an iteratively defined sequence of estimators based on this idea. We relate the sequence to the Huber-skip estimator. It is shown that the normalized estimation errors are tight and are close to a linear function of the kernel, thus providing a stochastic expansion of the estimators, which is the same as for the Huber-skip. This implies that the iterated estimator is a close approximation of the Huber-skip.

  14. Scattering Light by a Cylindrical Capsule with Arbitrary End Caps in the Rayleigh-Gans-Debye Approximation

    Directory of Open Access Journals (Sweden)

    K. A. Shapovalov

    2015-01-01

    Full Text Available The paper concerns the light scattering problem for biological objects of complicated structure. It considers optically "soft" (having a refractive index close to that of the surrounding medium), homogeneous cylindrical capsules composed of three parts: a central cylindrical section and two symmetrical rounding end caps. Such capsules can model a broader class of biological objects than the ordinary spheroid or sphere shapes. Unfortunately, if a particle has other than a regular geometrical shape, it is very difficult or impossible to solve the scattering problem analytically in its most general form, which obliges us to use numerical and approximate analytical methods. One such approximate analytical method is the Rayleigh-Gans-Debye approximation (the first Born approximation). The Rayleigh-Gans-Debye approximation is valid for objects with sizes from nanometers to millimeters, depending on the wavelength and the refractive index of the object, under a small phase shift of the central ray. The formulas for the light scattering amplitude of a cylindrical capsule with arbitrary end caps in the Rayleigh-Gans-Debye approximation are obtained in scalar form. Then the light scattering phase function [element f11 of the scattering matrix] for natural (unpolarized) or arbitrarily polarized incident light is calculated. Numerical results for the light scattering phase functions of cylindrical capsules with conical, spheroidal, and paraboloidal ends in the Rayleigh-Gans-Debye approximation are compared. Numerical results for the light scattering phase function of a cylindrical capsule with conical ends in the Rayleigh-Gans-Debye approximation and in the Purcell-Pennypacker (Discrete Dipole) method are also compared. Good agreement within the application range of the Rayleigh-Gans-Debye approximation is obtained. A possible continuation of the work is the consideration of a multilayer cylindrical capsule in the Rayleigh-Gans-Debye approximation.
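As a minimal, hedged illustration of the Rayleigh-Gans-Debye regime described above, the sketch below evaluates the classical RGD form factor of a homogeneous sphere (a standard closed-form result; the capsule amplitudes of the paper itself are not reproduced here, and the function name is mine):

```python
import numpy as np

def rgd_sphere_form_factor(q, R):
    """Rayleigh-Gans-Debye form factor P(q) of a homogeneous sphere of
    radius R: P(q) = [3 (sin x - x cos x) / x^3]^2 with x = q R."""
    x = np.asarray(q, dtype=float) * R
    with np.errstate(invalid="ignore", divide="ignore"):
        p = (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2
    # P -> 1 in the forward-scattering limit x -> 0
    return np.where(x < 1e-6, 1.0, p)
```

The first minimum of P(q) sits at qR ≈ 4.493 (the first root of tan x = x), which is the kind of structure the phase-function comparisons above probe.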

  15. Computational modeling of fully-ionized, magnetized plasmas using the fluid approximation

    Science.gov (United States)

    Schnack, Dalton

    2005-10-01

    Strongly magnetized plasmas are rich in spatial and temporal scales, making a computational approach useful for studying these systems. The most accurate model of a magnetized plasma is based on a kinetic equation that describes the evolution of the distribution function for each species in six-dimensional phase space. However, the high dimensionality renders this approach impractical for computations for long time scales in relevant geometry. Fluid models, derived by taking velocity moments of the kinetic equation [1] and truncating (closing) the hierarchy at some level, are an approximation to the kinetic model. The reduced dimensionality allows a wider range of spatial and/or temporal scales to be explored. Several approximations have been used [2-5]. Successful computational modeling requires understanding the ordering and closure approximations, the fundamental waves supported by the equations, and the numerical properties of the discretization scheme. We review and discuss several ordering schemes, their normal modes, and several algorithms that can be applied to obtain a numerical solution. The implementation of kinetic parallel closures is also discussed [6].[1] S. Chapman and T.G. Cowling, ``The Mathematical Theory of Non-Uniform Gases'', Cambridge University Press, Cambridge, UK (1939).[2] R.D. Hazeltine and J.D. Meiss, ``Plasma Confinement'', Addison-Wesley Publishing Company, Redwood City, CA (1992).[3] L.E. Sugiyama and W. Park, Physics of Plasmas 7, 4644 (2000).[4] J.J. Ramos, Physics of Plasmas, 10, 3601 (2003).[5] P.J. Catto and A.N. Simakov, Physics of Plasmas, 11, 90 (2004).[6] E.D. Held et al., Phys. Plasmas 11, 2419 (2004)

  16. Approximation of Corrected Calcium Concentrations in Advanced Chronic Kidney Disease Patients with or without Dialysis Therapy

    Directory of Open Access Journals (Sweden)

    Yoshio Kaku

    2015-08-01

    Full Text Available Background: The following calcium (Ca) correction formula (Payne) is conventionally used for serum Ca estimation: corrected total Ca (TCa) (mg/dl) = TCa (mg/dl) + [4 - albumin (g/dl)]; however, it is inapplicable to advanced chronic kidney disease (CKD) patients. Methods: 1,922 samples from CKD G4 + G5 patients and 341 samples from CKD G5D patients were collected. Levels of TCa (mg/dl), ionized Ca2+ (iCa2+) (mmol/l) and other clinical parameters were measured. We assumed the corrected TCa to be equal to eight times the iCa2+ value (measured corrected TCa). We subsequently performed stepwise multiple linear regression analysis using the clinical parameters. Results: The following formulas were devised from multiple linear regression analysis. For CKD G4 + G5 patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - P) + 0.22. For CKD G5D patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 0.1 × (6 - P) + 0.05 × (24 - HCO3-) + 0.35. Receiver operating characteristic analysis showed high values of the area under the curve of the approximated corrected TCa for the detection of measured corrected TCa ≥8.4 mg/dl and ≤10.4 mg/dl for each CKD sample. Both intraclass correlation coefficients for each CKD sample demonstrated superior agreement using the new formula compared to the previously reported formulas. Conclusion: Compared to other formulas, the approximated corrected TCa values calculated from the new formula for patients with CKD G4 + G5 and CKD G5D demonstrate superior agreement with the measured corrected TCa.
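The correction formulas quoted in the abstract translate directly into code. The sketch below implements them verbatim (function names are mine; units follow the abstract: TCa and P in mg/dl, albumin in g/dl, HCO3- in mmol/l):

```python
def corrected_tca_g4g5(tca, albumin, ph, p):
    """Approximated corrected total Ca (mg/dl) for CKD G4 + G5 patients,
    per the regression formula quoted in the abstract."""
    return tca + 0.25 * (4 - albumin) + 4 * (7.4 - ph) + 0.1 * (6 - p) + 0.22

def corrected_tca_g5d(tca, albumin, p, hco3):
    """Approximated corrected total Ca (mg/dl) for dialysis (CKD G5D) patients."""
    return tca + 0.25 * (4 - albumin) + 0.1 * (6 - p) + 0.05 * (24 - hco3) + 0.35

def corrected_tca_payne(tca, albumin):
    """Conventional Payne correction, shown for comparison."""
    return tca + (4 - albumin)
```

At the reference point (albumin 4 g/dl, pH 7.4, P 6 mg/dl, HCO3- 24 mmol/l) the new formulas reduce to TCa plus the fitted intercept (0.22 or 0.35 mg/dl).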

  17. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
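For context, the Flory approximation for the self-avoiding chain gives the swelling exponent in the standard closed form ν = 3/(d + 2). The quick check below compares it with the accepted d = 3 value, consistent with the 2-5% accuracy quoted above (the comparison value 0.5876 is from the wider literature, not this record):

```python
def flory_nu(d):
    """Flory estimate of the self-avoiding-walk exponent nu in d dimensions:
    nu = 3 / (d + 2). Exact for d = 1, 2 and 4; approximate for d = 3."""
    return 3.0 / (d + 2)

for d in (1, 2, 3, 4):
    print(d, flory_nu(d))
```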

  18. A closed unventilated chamber for the measurement of transepidermal water loss.

    Science.gov (United States)

    Nuutinen, Jouni; Alanen, Esko; Autio, Pekka; Lahtinen, Marjo-Riitta; Harvima, Ilkka; Lahtinen, Tapani

    2003-05-01

    Open chamber systems for measuring transepidermal water loss (TEWL) have limitations related to ambient and body-induced airflows near the probe, probe size, measurement sites and angles, and measurement range. The aim of the present investigation was to develop a closed chamber system for TEWL measurement without significant blocking of normal evaporation through the skin. Additionally, in order to use the evaporimeter to measure evaporation rates through other biological and non-biological specimens and in field applications, a small portable, battery-operated device was a design criterion. A closed unventilated chamber (inner volume 2.0 cm³) was constructed. For the skin measurement, the chamber with one side open (open surface area 1.0 cm²) is placed on the skin. The skin application time was investigated at low and high evaporation rates in order to assess the blocking effect of the chamber on normal evaporation. From the rising linear part of the relative humidity (RH) in the chamber the slope was registered. The slope was calibrated into a TEWL value by evaporating water at different temperatures and measuring the water loss of heated samples with a laboratory scale. The closed chamber evaporation technique was compared with a conventional evaporimeter based on an open chamber method (DermaLab, Cortex Technology, Hadsund, Denmark). The reproducibility of the closed chamber method was measured with the water samples and with the volar forearm and palm of the hand in 10 healthy volunteers. The skin application time varied between 7 and 9 s and the linear slope region between 3 and 5 s at evaporation rates of 3-220 g/m² h. The correlation coefficient between the TEWL value from the closed chamber measurements and the readings of the laboratory scale was 0.99. The reproducibility of the measurements with the water samples was 4.0% at the evaporation rate of 40 g/m² h.
A correlation coefficient of the TEWL values between the closed chamber and open chamber measurements was 0
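The slope-calibration step described above (fit the linear rise of RH in the chamber, then map the slope to a TEWL value via weighed water samples) can be sketched as follows; the calibration constant used here is purely hypothetical:

```python
import numpy as np

def tewl_from_rh(times_s, rh_percent, cal_slope_to_tewl):
    """Estimate TEWL (g/m^2 h) from the linear rise of relative humidity in
    a closed chamber: fit RH = a*t + b over the linear region and scale the
    slope a by a calibration factor obtained from weighed water samples.
    `cal_slope_to_tewl` is a hypothetical calibration constant."""
    a, _b = np.polyfit(np.asarray(times_s, float), np.asarray(rh_percent, float), 1)
    return a * cal_slope_to_tewl

# synthetic linear region (3-5 s, as in the abstract): RH rising 2 %/s,
# with a made-up calibration of 20 (g/m^2 h) per (%/s)
t = np.linspace(3.0, 5.0, 21)
rh = 40.0 + 2.0 * t
print(tewl_from_rh(t, rh, 20.0))
```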

  19. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics in some L^p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given relating to fast decreasing polynomials, the asymptotic behavior of orthogonal polynomials, and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  20. Bent approximations to synchrotron radiation optics

    International Nuclear Information System (INIS)

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors

  1. Sample Preprocessing For Atomic Spectrometry

    International Nuclear Information System (INIS)

    Kim, Sun Tae

    2004-08-01

    This book describes atomic spectrometry. It covers atomic absorption spectrometry (the Maxwell-Boltzmann equation and the Beer-Lambert law, solvent extraction, HGAAS, ETAAS, and CVAAS), inductively coupled plasma emission spectrometry (basic principles, plasma generation, devices and equipment, and interferences), and inductively coupled plasma mass spectrometry (instrumentation, the pros and cons of ICP/MS, and sample analysis). It also covers reagents (water, acids, and fluxes), experimental materials, sampling, sample disassembly, and contamination and loss in open and closed systems.
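Of the relations the book covers, the Beer-Lambert law A = ε·l·c is the simplest to show in code; the sketch below solves it for the analyte concentration (function and parameter names are mine):

```python
def concentration_from_absorbance(absorbance, epsilon_l_per_mol_cm, path_cm):
    """Beer-Lambert law A = epsilon * l * c, solved for the concentration
    c (mol/l), given the molar absorptivity epsilon (l/(mol*cm)) and the
    optical path length l (cm)."""
    return absorbance / (epsilon_l_per_mol_cm * path_cm)

# e.g. A = 0.5 with epsilon = 1e4 l/(mol*cm) over a 1 cm cell
print(concentration_from_absorbance(0.5, 1.0e4, 1.0))
```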

  2. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
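The split of the weak error into a time-discretisation part and a statistical (finite-sampling) part can be illustrated on a scalar toy model; the sketch below uses geometric Brownian motion as a stand-in for the infinite-dimensional HJM dynamics (all names are mine, and this is not the error estimator of the paper):

```python
import numpy as np

def euler_weak_mean(x0, mu, sigma, T, n_steps, n_paths, rng):
    """Monte Carlo-Euler estimate of E[X_T] for dX = mu X dt + sigma X dW.
    The weak error combines a time-discretisation bias O(dt) with a
    statistical error O(n_paths^{-1/2})."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x += mu * x * dt + sigma * x * dw
    return x.mean()

rng = np.random.default_rng(0)
est = euler_weak_mean(1.0, 0.05, 0.2, 1.0, 50, 200_000, rng)
exact = np.exp(0.05)  # known exact mean E[X_T] = x0 * exp(mu * T)
print(est, exact)
```

Comparing `est` against the known exact mean mirrors the role of the "numerical examples with known exact solution" mentioned above.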

  3. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.

    2012-01-01

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.

  4. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    Science.gov (United States)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited for problems with linear dynamic equations, therefore perfectly fitting the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications related to spacecraft flying in close relative motion are shown.
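The core SOS idea is that a polynomial is a sum of squares exactly when it admits a positive semidefinite Gram matrix representation p(t) = z(t)^T Q z(t) over a monomial basis z(t). This can be checked by hand on a toy instance (real SOS tools find Q by semidefinite programming; the Q below is hand-picked, and the example is mine):

```python
import numpy as np

# Verify p(t) = t^4 + 2 t^2 + 1 = (t^2 + 1)^2 is SOS: with z = [1, t, t^2],
# z^T Q z = Q00 + 2*Q02*t^2 + Q22*t^4 reproduces p, and Q is PSD.
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])
eigvals = np.linalg.eigvalsh(Q)
assert eigvals.min() >= -1e-12  # PSD certificate => p is a sum of squares

t = 0.7
z = np.array([1.0, t, t**2])
assert abs(z @ Q @ z - (t**4 + 2 * t**2 + 1)) < 1e-12
```

A PSD Gram matrix certifies nonnegativity of p everywhere, which is how polynomial bounds become convex (semidefinite) constraints.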

  5. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  6. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  7. LOCFES-B: Solving the one-dimensional transport equation with user-selected spatial approximations

    International Nuclear Information System (INIS)

    Jarvis, R.D.; Nelson, P.

    1993-01-01

    Closed linear one-cell functional (CLOF) methods constitute an abstractly defined class of spatial approximations to the one-dimensional discrete ordinates equations of linear particle transport that encompass, as specific instances, the vast majority of the spatial approximations that have been either used or suggested in the computational solution of these equations. A specific instance of the class of CLOF methods is defined by a (typically small) number of functions of the cell width, total cross section, and direction cosine of particle motion. The LOCFES code takes advantage of the latter observation by permitting the use, within a more-or-less standard source iteration solution process, of an arbitrary CLOF method as defined by a user-supplied subroutine. The design objective of LOCFES was to provide automated determination of the order of accuracy (i.e., order of the discretization error) in the fine-mesh limit for an arbitrary user-selected CLOF method. This asymptotic order of accuracy is one widely used measure of the merit of a spatial approximation. This paper discusses LOCFES-B, which is a code that uses methods developed in LOCFES to solve one-dimensional linear particle transport problems with any user-selected CLOF method. LOCFES-B provides automatic solution of a given problem to within an accuracy specified by user input and provides comparison of the computational results against results from externally provided benchmark results

  8. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    Science.gov (United States)

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- and square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
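For reference, the classical LMM-based ICC that the negative binomial ICC is compared against can be estimated from one-way ANOVA mean squares. The simulation below (Gaussian data and parameter choices of my own, not the paper's negative binomial setting) recovers a known true ICC:

```python
import numpy as np

def anova_icc(data):
    """One-way ANOVA estimate of the intraclass correlation for an
    (n_groups, k) array of repeated measures:
    ICC = (MSB - MSW) / (MSB + (k - 1) MSW)."""
    n, k = data.shape
    group_means = data.mean(axis=1)
    grand = data.mean()
    msb = k * ((group_means - grand) ** 2).sum() / (n - 1)
    msw = ((data - group_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
sigma_b, sigma_w, k = 1.0, 1.0, 2
true_icc = sigma_b**2 / (sigma_b**2 + sigma_w**2)  # 0.5 by construction
groups = rng.normal(0, sigma_b, size=(5000, 1))
data = groups + rng.normal(0, sigma_w, size=(5000, k))
print(anova_icc(data), true_icc)
```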

  9. Binary-State Dynamics on Complex Networks: Pair Approximation and Beyond

    Directory of Open Access Journals (Sweden)

    James P. Gleeson

    2013-04-01

    Full Text Available A wide class of binary-state dynamics on networks—including, for example, the voter model, the Bass diffusion model, and threshold models—can be described in terms of transition rates (spin-flip probabilities) that depend on the number of nearest neighbors in each of the two possible states. High-accuracy approximations for the emergent dynamics of such models on uncorrelated, infinite networks are given by recently developed compartmental models or approximate master equations (AMEs). Pair approximations (PAs) and mean-field theories can be systematically derived from the AME. We show that PA and AME solutions can coincide under certain circumstances, and numerical simulations confirm that PA is highly accurate in these cases. For monotone dynamics (where transitions out of one nodal state are impossible, e.g., susceptible-infected disease spread or Bass diffusion), PA and the AME give identical results for the fraction of nodes in the infected (active) state for all time, provided that the rate of infection depends linearly on the number of infected neighbors. In the more general nonmonotone case, we derive a condition—which proves to be equivalent to a detailed balance condition on the dynamics—for PA and AME solutions to coincide in the limit t→∞. This equivalence permits bifurcation analysis, yielding explicit expressions for the critical (ferromagnetic or paramagnetic) transition point of such dynamics, closely analogous to the critical temperature of the Ising spin model. Finally, the AME for threshold models of propagation is shown to reduce to just two differential equations and to give excellent agreement with numerical simulations. As part of this work, the Octave or Matlab code for implementing and solving the differential-equation systems is made available for download.
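The lowest rung of the mean-field < PA < AME hierarchy discussed above is an ordinary mean-field equation; for SI dynamics with infection rate linear in the number of infected neighbors it reduces to a logistic ODE on a k-regular network. The sketch below (a simplification of mine, not the AME of the paper) integrates it and checks against the logistic closed form:

```python
import numpy as np

def mean_field_si(rho0, lam, k, T, n_steps):
    """Mean-field approximation to SI spread on a k-regular network with
    per-edge infection rate lam: d rho/dt = lam * k * rho * (1 - rho),
    integrated with forward Euler."""
    dt = T / n_steps
    rho = rho0
    for _ in range(n_steps):
        rho += dt * lam * k * rho * (1.0 - rho)
    return rho

rho0, lam, k, T = 0.01, 0.5, 4, 2.0
# logistic closed form for the same ODE
exact = rho0 * np.exp(lam * k * T) / (1 - rho0 + rho0 * np.exp(lam * k * T))
approx = mean_field_si(rho0, lam, k, T, 100_000)
print(approx, exact)
```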

  10. P3-approximation for gaseous media and vacuum

    International Nuclear Information System (INIS)

    Raevskaya, V.E.

    1986-01-01

    The problems connected with the calculation of the neutron field in a fuel assembly (FA) of a gas-cooled reactor are discussed. The applicability of the P3-approximation for the description of neutron fields in closed vacuum and gas volumes is considered. Under the assumption of azimuthal symmetry of the field, formulas are derived for the determination of the field in a cylindrical vacuum layer of a multizone FA, as well as the solution for the cluster central zone, where rods with vacuum between them are placed. Because the voids surrounded by the medium are finite, it is possible to use the condition of neutron flux density continuity as the boundary condition at the interface with vacuum. For the representation of the boundary conditions on the rod surfaces and in the cluster central zone with vacuum, addition theorems for the field in the vacuum between the rods are derived. Formulas for the mean neutron fluxes in the vacuum cylindrical layer and in the vacuum between the rods are derived. Numerical calculations performed with various programs confirmed the validity of the derived formulas.

  11. File Detection On Network Traffic Using Approximate Matching

    Directory of Open Access Journals (Sweden)

    Frank Breitinger

    2014-09-01

    Full Text Available In recent years, Internet technologies have changed enormously and now allow faster Internet connections, higher data rates and mobile usage. Hence, it is possible to send huge amounts of data/files easily, which is often exploited by insiders or attackers to steal intellectual property. As a consequence, data leakage prevention systems (DLPS) have been developed, which analyze network traffic and alert in case of a data leak. Although the overall concepts of the detection techniques are known, the systems are mostly closed and commercial. Within this paper we present a new technique for network traffic analysis based on approximate matching (a.k.a. fuzzy hashing), which is very common in digital forensics to correlate similar files. This paper demonstrates how to optimize and apply it to single network packets. Our contribution is a straightforward concept which does not need a comprehensive configuration: hash the file and store the digest in the database. Within our experiments we obtained false positive rates between 10^-4 and 10^-5 and an algorithm throughput of over 650 Mbit/s.
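To illustrate the approximate-matching idea (without reproducing the paper's packet-level algorithm), the toy sketch below hashes fixed-size chunks of a byte stream and scores similarity as the Jaccard overlap of the chunk-hash sets; production tools such as ssdeep or sdhash use rolling hashes and compact similarity digests instead:

```python
import hashlib

def chunk_digest(data: bytes, chunk: int = 64):
    """Toy approximate-matching digest: the set of (truncated) SHA-1 hashes
    of fixed-size chunks. Fixed windows only illustrate the idea; real
    approximate matching aligns chunks with rolling hashes."""
    return {hashlib.sha1(data[i:i + chunk]).hexdigest()[:8]
            for i in range(0, len(data), chunk)}

def similarity(d1, d2):
    """Jaccard similarity of two digests, in [0, 1]."""
    return len(d1 & d2) / max(1, len(d1 | d2))

a = bytes(i % 251 for i in range(2048))          # 2 KiB "file", 32 chunks
b = a[:1024] + bytes(64) + a[1088:]              # same file, one chunk wiped
print(similarity(chunk_digest(a), chunk_digest(b)))
```

A digest of the protected file would be stored in the database; traffic whose similarity score exceeds a threshold then raises a leak alert.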

  12. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting

  13. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
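The computational virtue of circulant matrices exploited above is FFT diagonalisation: a circulant matrix-vector product costs O(n log n) instead of O(n^2). A single-level sketch (variable names mine), checked against the dense product:

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply by the circulant matrix whose first column is c, in
    O(n log n) via the diagonalisation C = F^* diag(F c) F, i.e. a
    circular convolution computed with the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c, x = rng.normal(size=n), rng.normal(size=n)
# dense reference: C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(circulant_matvec(c, x), C @ x)
```

Multilevel circulant matrices apply the same trick with multidimensional FFTs, which is what makes the quasi-linear kernel-selection cost quoted above possible.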

  14. The quasi-diffusive approximation in transport theory: Local solutions

    International Nuclear Information System (INIS)

    Celaschi, M.; Montagnini, B.

    1995-01-01

    The one velocity, plane geometry integral neutron transport equation is transformed into a system of two equations, one of them being the equation of continuity and the other a generalized Fick's law, in which the usual diffusion coefficient is replaced by a self-adjoint integral operator. As the kernel of this operator is very close to the Green function of a diffusion equation, an approximate inversion by means of a second order differential operator allows to transform these equations into a purely differential system which is shown to be equivalent, in the simplest case, to a diffusion-like equation. The method, the principles of which have been exposed in a previous paper, is here extended and applied to a variety of problems. If the inversion is properly performed, the quasi-diffusive solutions turn out to be quite accurate, even in the vicinity of the interface between different material regions, where elementary diffusion theory usually fails. 16 refs., 3 tabs

  15. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
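The subset-of-rays idea can be mimicked on a small least-squares problem: compute the approximate error and gradient from a random subset of rows only, then take a descent step. This is a hedged stand-in of mine (plain subsampled gradient descent, not the exact conjugate-gradient embodiment described above):

```python
import numpy as np

def subset_gradient_step(A, b, x, idx, step):
    """One descent step for 0.5 * ||A x - b||^2 using only the rows in
    `idx` (a stand-in for the 'subset of rays'): the approximate gradient
    A_S^T (A_S x - b_S) is rescaled to the full system size."""
    As, bs = A[idx], b[idx]
    g = (len(b) / len(idx)) * As.T @ (As @ x - bs)
    return x - step * g

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 10))
x_true = rng.normal(size=10)
b = A @ x_true                      # consistent system
x = np.zeros(10)
for _ in range(200):
    idx = rng.choice(200, size=40, replace=False)
    x = subset_gradient_step(A, b, x, idx, step=1e-3)
res_final = np.linalg.norm(A @ x - b)
print(res_final)
```

Because each step touches only 20% of the rows, the per-iteration cost drops accordingly while the residual still shrinks toward zero.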

  16. Classical and Quantum Models in Non-Equilibrium Statistical Mechanics: Moment Methods and Long-Time Approximations

    Directory of Open Access Journals (Sweden)

    Ramon F. Alvarez-Estrada

    2012-02-01

    Full Text Available We consider non-equilibrium open statistical systems, subject to potentials and to external "heat baths" (hb) at thermal equilibrium at temperature T (either with ab initio dissipation or without it). Boltzmann's classical equilibrium distributions generate, as Gaussian weight functions in momenta, orthogonal polynomials in momenta (the position-independent Hermite polynomials Hn). The moments of non-equilibrium classical distributions, implied by the Hn's, fulfill a hierarchy: for long times, the lowest moment dominates the evolution towards thermal equilibrium, either with dissipation or without it (but under a certain approximation). We revisit that hierarchy, whose solution depends on operator continued fractions. We review our generalization of that moment method to classical closed many-particle interacting systems with neither a hb nor ab initio dissipation: with initial states describing thermal equilibrium at T at large distances but non-equilibrium at finite distances, the moment method yields, approximately, irreversible thermalization of the whole system at T, for long times. Generalizations to non-equilibrium quantum interacting systems meet additional difficulties. Three of them are: (i) equilibrium distributions (represented through Wigner functions) are neither Gaussian in momenta nor known in closed form; (ii) they may depend on dissipation; and (iii) the orthogonal polynomials in momenta generated by them depend also on positions. We generalize the moment method, dealing with (i), (ii) and (iii), to some non-equilibrium one-particle quantum interacting systems. Open problems are discussed briefly.
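The Gaussian-weight orthogonality underlying the moment hierarchy can be verified numerically: the probabilists' Hermite polynomials He_n are orthogonal under exp(-p^2/2), with squared norm n!·sqrt(2π). A quick quadrature check (notation and setup mine):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-HermiteE quadrature integrates against the weight exp(-x^2/2),
# i.e. the Gaussian equilibrium weight in momenta mentioned above.
nodes, weights = hermegauss(20)

def inner(m, n):
    """Weighted inner product <He_m, He_n> under exp(-x^2/2)."""
    cm = [0] * m + [1]   # coefficient vector selecting He_m
    cn = [0] * n + [1]
    return np.sum(weights * hermeval(nodes, cm) * hermeval(nodes, cn))

print(inner(2, 3))                      # orthogonality: ~0
print(inner(3, 3), 6 * np.sqrt(2 * np.pi))  # norm: 3! * sqrt(2*pi)
```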

  17. Analytical solutions for the surface response to small amplitude perturbations in boundary data in the shallow-ice-stream approximation

    Directory of Open Access Journals (Sweden)

    G. H. Gudmundsson

    2008-07-01

    Full Text Available New analytical solutions describing the effects of small-amplitude perturbations in boundary data on flow in the shallow-ice-stream approximation are presented. These solutions are valid for a non-linear Weertman-type sliding law and for Newtonian ice rheology. Comparison is made with corresponding solutions of the shallow-ice-sheet approximation, and with solutions of the full Stokes equations. The shallow-ice-stream approximation is commonly used to describe large-scale ice stream flow over a weak bed, while the shallow-ice-sheet approximation forms the basis of most current large-scale ice sheet models. It is found that the shallow-ice-stream approximation overestimates the effects of bed topography perturbations on surface profile for wavelengths less than about 5 to 10 ice thicknesses, the exact number depending on values of surface slope and slip ratio. For high slip ratios, the shallow-ice-stream approximation gives a very simple description of the relationship between bed and surface topography, with the corresponding transfer amplitudes being close to unity for any given wavelength. The shallow-ice-stream estimates for the timescales that govern the transient response of ice streams to external perturbations are considerably more accurate than those based on the shallow-ice-sheet approximation. In particular, in contrast to the shallow-ice-sheet approximation, the shallow-ice-stream approximation correctly reproduces the short-wavelength limit of the kinematic phase speed given by solving a linearised version of the full Stokes system. In accordance with the full Stokes solutions, the shallow-ice-stream approximation predicts surface fields to react weakly to spatial variations in basal slipperiness with wavelengths less than about 10 to 20 ice thicknesses.

  18. New realisation of Preisach model using adaptive polynomial approximation

    Science.gov (United States)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasingly demanding accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model for describing hysteresis and can be represented by an infinite but countable set of first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by the least-squares approximation or an adaptive identification algorithm, which also opens the possibility of accurately tracking the hysteresis model parameters.
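    The table-versus-polynomial trade-off can be sketched with a least-squares fit. The FORC data below are a synthetic tanh branch standing in for one row of a CPM look-up table; the data, the polynomial degree, and the storage comparison are illustrative assumptions, not the authors' setup.

    ```python
    import numpy as np

    # Hypothetical first-order reversal curve (FORC): a smooth tanh branch
    # standing in for one row of the CPM look-up table (illustrative data only).
    h = np.linspace(-1.0, 1.0, 50)        # input-field samples
    forc = np.tanh(2.0 * h) + 0.1 * h     # stand-in hysteresis branch

    # Replace the 50-entry table row with a degree-9 least-squares polynomial;
    # only the 10 coefficients need to be stored.
    coeffs = np.polyfit(h, forc, deg=9)
    approx = np.polyval(coeffs, h)

    max_err = np.max(np.abs(approx - forc))
    print(f"table entries: {h.size} -> stored coefficients: {coeffs.size}")
    print(f"max fit error: {max_err:.2e}")
    ```

    In an adaptive setting, the same coefficients could be updated online (e.g. by recursive least squares) instead of refitted, which is the identification route the abstract alludes to.
    
    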

  19. Electrical crosstalk-coupling measurement and analysis for digital closed loop fibre optic gyro

    International Nuclear Information System (INIS)

    Jing, Jin; Hai-Ting, Tian; Xiong, Pan; Ning-Fang, Song

    2010-01-01

    The phase modulation and the closed-loop controller can generate electrical crosstalk-coupling in a digital closed-loop fibre optic gyro. Four electrical cross-coupling paths are verified by an open-loop testing approach. It is found that the variation of the ramp amplitude leads to an alternation of the gyro bias. The amplitude and phase parameters of the electrical crosstalk signal are measured by a lock-in amplifier, and the variation of the gyro bias is confirmed to be caused by the alternation of phase with the amplitude of the ramp. A digital closed-loop fibre optic gyro electrical crosstalk-coupling model is built by approximating the electrical cross-coupling paths as a proportional-integral segment. The results of simulation and experiment show that the modulation-signal electrical crosstalk-coupling can cause a dead zone of the gyro when a small angular velocity is input, and it can also lead to a periodic vibration of the bias error of the gyro when a large angular velocity is input.

  20. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.
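    Schematically (in our notation, not necessarily the authors'), a self-similar continued root extrapolates a truncated series f(x) ≈ 1 + a₁x + a₂x² + … by the nested form

    ```latex
    f_k^{*}(x) \;=\; \Bigl(1 + A_1 x\,\bigl(1 + A_2 x\,(\,\cdots\,(1 + A_k x)^{s}\cdots)^{s}\bigr)^{s}\Bigr)^{s},
    ```

    where the parameters A_j and the exponent s are fixed by re-expanding f_k* in powers of x and matching the known series coefficients. For s = −1 the nesting collapses into a continued fraction, which is one way to see how continued fractions and Padé approximants arise as the particular cases mentioned in the abstract.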

  1. State-selective charge exchange in slow collisions of Si3+ ions with H atoms: A molecular state close coupling treatment

    International Nuclear Information System (INIS)

    Joseph, Dwayne C; Saha, Bidhan C

    2012-01-01

    Charge transfer cross sections are calculated by employing both the quantal and semiclassical ε(R) molecular orbital close coupling (MOCC) approximations in the adiabatic representation and compared with other theoretical and experimental results.

  2. State-selective charge exchange in slow collisions of Si3+ ions with H atoms: A molecular state close coupling treatment*)

    Science.gov (United States)

    Joseph, Dwayne C.; Saha, Bidhan C.

    2012-11-01

    Charge transfer cross sections are calculated by employing both the quantal and semiclassical ɛ(R) molecular orbital close coupling (MOCC) approximations in the adiabatic representation and compared with other theoretical and experimental results.

  3. Density functional formulation of the random-phase approximation for inhomogeneous fluids: Application to the Gaussian core and Coulomb particles.

    Science.gov (United States)

    Frydel, Derek; Ma, Manman

    2016-06-01

    Using the adiabatic connection, we formulate the free energy in terms of the correlation function of a fictitious system, h_λ(r, r′), in which the interactions λu(r, r′) are gradually switched on as λ changes from 0 to 1. The function h_λ(r, r′) is then obtained from the inhomogeneous Ornstein-Zernike equation, and the two equations constitute a general liquid-state framework for treating inhomogeneous fluids. The two equations do not yet constitute a closed set. In the present work we use the closure c_λ(r, r′) ≈ −λβu(r, r′), known as the random-phase approximation (RPA). We demonstrate that the RPA is identical to the variational Gaussian approximation derived within the field-theoretical framework, originally derived for and applied to charged particles. We apply our generalized RPA approximation to the Gaussian core model and to Coulomb charges.
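    Written out, the pair of relations named in the abstract takes the standard inhomogeneous Ornstein-Zernike form (this transcription uses the textbook form of the OZ equation, with ρ the one-body density; it is not quoted from the paper):

    ```latex
    h_\lambda(\mathbf{r},\mathbf{r}') \;=\; c_\lambda(\mathbf{r},\mathbf{r}')
      \;+\; \int \mathrm{d}\mathbf{r}''\, c_\lambda(\mathbf{r},\mathbf{r}'')\,
      \rho(\mathbf{r}'')\, h_\lambda(\mathbf{r}'',\mathbf{r}'),
    \qquad
    c_\lambda(\mathbf{r},\mathbf{r}') \;\approx\; -\lambda\,\beta\, u(\mathbf{r},\mathbf{r}').
    ```

    The RPA closure on the right is what turns the open OZ hierarchy into a solvable closed set.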

  4.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...
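    The paper's own corrections are not reproduced here; as a generic illustration of the resampling idea (removing the leading O(1/n) bias term of an estimator), a jackknife sketch on the variance MLE, where the correction happens to be exact:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=2.0, size=200)   # true variance is 4

    # A biased "plug-in" estimator: the variance MLE (divides by n, not n - 1).
    def plug_in_var(sample):
        return np.mean((sample - np.mean(sample)) ** 2)

    n = x.size
    theta_hat = plug_in_var(x)

    # Jackknife: recompute the estimator on each leave-one-out sample and
    # subtract the leading O(1/n) bias term.
    loo = np.array([plug_in_var(np.delete(x, i)) for i in range(n)])
    theta_jack = n * theta_hat - (n - 1) * loo.mean()

    # For the variance MLE the jackknife correction is exact: it reproduces
    # the unbiased (n - 1)-denominator sample variance.
    unbiased = np.var(x, ddof=1)
    print(theta_hat, theta_jack, unbiased)
    ```

    The same leave-one-out recipe applies to any smooth estimator; for estimators whose bias is exactly c/n, as here, the correction removes it completely rather than just its leading term.
    
    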

  5. Adaptive Angular Sampling for SPECT Imaging

    OpenAIRE

    Li, Nan; Meng, Ling-Jian

    2011-01-01

    This paper presents an analytical approach for performing adaptive angular sampling in single photon emission computed tomography (SPECT) imaging. It allows for a rapid determination of the optimum sampling strategy that minimizes image variance in regions-of-interest (ROIs). The proposed method consists of three key components: (a) a set of closed-form equations for evaluating image variance and resolution attainable with a given sampling strategy, (b) a gradient-based algor...

  6. Power, attraction, and reference in macrolevel social relations: An analysis of closed groups and closed societies based on the psychology of the “Soviet person”

    Directory of Open Access Journals (Sweden)

    Radina Nadezhda K.

    2017-03-01

    Full Text Available In this article the features of social-relationship systems are analyzed based on the data from a sociopsychological empirical study conducted in two stages (2002 and 2014) on a large sample with the help of G. Kelly’s Repertory Grid Technique. A. V. Petrovsky’s three-factor interpersonal-relationships model as interpreted for closed groups by M. Yu. Kondratev and the concept of the closed society as described by Karl Popper provide the foundation for the theoretical hypothesis we tested. The empirical data obtained in 2002 came from 391 participants of different ages who were living in provincial towns in the Nizhny Novgorod region. The elderly respondents (232 people) had lived almost all their lives under the Soviet regime; the middle-aged respondents (159 people) got their education and started their careers in the USSR. Soviet society is considered to be closed because of its authoritarian and collectivist nature, static social structure, and dogmatic ideology. It is argued that both closed societies and closed groups are characterized by a rigid hierarchical social structure, isolation from other systems, and depersonalization of social relations. We have proved that members of a closed group and citizens of a closed society have similar social-relationship matrices.

  7. Canonical sampling of a lattice gas

    International Nuclear Information System (INIS)

    Mueller, W.F.

    1997-01-01

    It is shown that a sampling algorithm, recently proposed in conjunction with a lattice-gas model of nuclear fragmentation, samples the canonical ensemble only in an approximate fashion. A residual weight factor has to be taken into account to calculate correct thermodynamic averages. Then, however, the algorithm is numerically inefficient. copyright 1997 The American Physical Society

  8. A hybrid solution approach for a multi-objective closed-loop logistics network under uncertainty

    Science.gov (United States)

    Mehrbod, Mehrdad; Tu, Nan; Miao, Lixin

    2015-06-01

    The design of closed-loop logistics (forward and reverse logistics) has attracted growing attention with the stringent pressures of customer expectations, environmental concerns and economic factors. This paper considers a multi-product, multi-period and multi-objective closed-loop logistics network model with regard to facility expansion as a facility location-allocation problem, which more closely approximates real-world conditions. A multi-objective mixed integer nonlinear programming formulation is linearized by defining new variables and adding new constraints to the model. By considering the aforementioned model under uncertainty, this paper develops a hybrid solution approach by combining an interactive fuzzy goal programming approach and robust counterpart optimization based on three well-known robust counterpart optimization formulations. Finally, this paper compares the results of the three formulations using different test scenarios and parameter-sensitive analysis in terms of the quality of the final solution, CPU time, the level of conservatism, the degree of closeness to the ideal solution, the degree of balance involved in developing a compromise solution, and satisfaction degree.

  9. Exact and approximate multiple diffraction calculations

    International Nuclear Information System (INIS)

    Alexander, Y.; Wallace, S.J.; Sparrow, D.A.

    1976-08-01

    A three-body potential scattering problem is solved in the fixed scatterer model, exactly and approximately, to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to the Glauber approximation, eikonal-expansion results and a non-eikonal approximation.

  10. Hanford site environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1998-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE). Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, "General Environmental Protection Program," and DOE Order 5400.5, "Radiation Protection of the Public and the Environment." The sampling methods are described in the Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office, DOE/RL-91-50, Rev. 2, U.S. Department of Energy, Richland, Washington. This document contains the 1998 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. Each section of this document describes the planned sampling schedule for a specific medium (air, surface water, biota, soil and vegetation, sediment, and external radiation) and includes the sample location, sample type, and analyses to be performed on the sample. In some cases, samples are scheduled on a rotating basis and may not be planned for 1998, in which case the anticipated year for collection is provided. In addition, a map is included for each medium showing sample locations.

  11. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Full Text Available Let (U';C') be a subspace of a covering approximation space (U;C) and X⊂U'. In this paper, we show that and B'(X)⊂B(X)∩U'. Also, iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U';C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.

  12. A coupled-cluster study of photodetachment cross sections of closed-shell anions

    Science.gov (United States)

    Cukras, Janusz; Decleva, Piero; Coriani, Sonia

    2014-11-01

    We investigate the performance of Stieltjes Imaging applied to Lanczos pseudo-spectra generated at the coupled cluster singles and doubles, coupled cluster singles and approximate iterative doubles, and coupled cluster singles levels of theory in modeling the photodetachment cross sections of the closed-shell anions H⁻, Li⁻, Na⁻, F⁻, Cl⁻, and OH⁻. The accurate description of double excitations is found to play a much more important role than in the case of photoionization of neutral species.

  13. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    Science.gov (United States)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, and easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
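    As a baseline for the moment-matching idea (this is the classical Fenton-Wilkinson scheme for independent summands, not the paper's modified method; the portfolio weights and lognormal parameters below are made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical portfolio: weights and lognormal parameters of each asset.
    w = np.array([0.5, 0.3, 0.2])
    mu = np.array([0.0, 0.1, -0.2])
    sigma = np.array([0.25, 0.40, 0.30])

    # Fenton-Wilkinson: match mean and variance of the weighted sum of
    # independent lognormals with a single lognormal LN(mu_s, sigma_s2).
    mean_i = w * np.exp(mu + sigma**2 / 2)
    var_i = w**2 * np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1)
    m, v = mean_i.sum(), var_i.sum()          # independence assumed
    sigma_s2 = np.log(1 + v / m**2)
    mu_s = np.log(m) - sigma_s2 / 2

    # Sanity check against Monte Carlo simulation of the sum.
    n = 200_000
    samples = (w * rng.lognormal(mu, sigma, size=(n, 3))).sum(axis=1)
    mc_mean = samples.mean()
    fw_mean = np.exp(mu_s + sigma_s2 / 2)     # equals m by construction
    print(f"MC mean {mc_mean:.4f} vs matched-lognormal mean {fw_mean:.4f}")
    ```

    Matching only two moments leaves the tails poorly captured, which is exactly the gap the paper's polynomial series-expansion correction is meant to close.
    
    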

  14. Accuracy of microbial community diversity estimated by closed- and open-reference OTUs

    Directory of Open Access Journals (Sweden)

    Robert C. Edgar

    2017-10-01

    Full Text Available Next-generation sequencing of 16S ribosomal RNA is widely used to survey microbial communities. Sequences are typically assigned to Operational Taxonomic Units (OTUs. Closed- and open-reference OTU assignment matches reads to a reference database at 97% identity (closed, then clusters unmatched reads using a de novo method (open. Implementations of these methods in the QIIME package were tested on several mock community datasets with 20 strains using different sequencing technologies and primers. Richness (number of reported OTUs was often greatly exaggerated, with hundreds or thousands of OTUs generated on Illumina datasets. Between-sample diversity was also found to be highly exaggerated in many cases, with weighted Jaccard distances between identical mock samples often close to one, indicating very low similarity. Non-overlapping hyper-variable regions in 70% of species were assigned to different OTUs. On mock communities with Illumina V4 reads, 56% to 88% of predicted genus names were false positives. Biological inferences obtained using these methods are therefore not reliable.
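    The between-sample diversity measure used above can be stated concretely; a minimal sketch of the weighted Jaccard distance on two hypothetical OTU abundance profiles (illustrative counts, not the study's data):

    ```python
    import numpy as np

    # Hypothetical OTU abundance profiles of two samples.
    a = np.array([10.0, 5.0, 0.0, 2.0])
    b = np.array([8.0, 0.0, 3.0, 2.0])

    def weighted_jaccard_distance(x, y):
        # 1 - sum(min)/sum(max): 0 for identical profiles, 1 for disjoint ones.
        return 1.0 - np.minimum(x, y).sum() / np.maximum(x, y).sum()

    print(weighted_jaccard_distance(a, a))  # identical samples -> 0.0
    print(weighted_jaccard_distance(a, b))  # -> 0.5
    ```

    Under this metric, two technically identical mock samples should score near 0; distances close to 1, as reported above, indicate that OTU assignment has split the same community into largely non-overlapping unit sets.
    
    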

  15. THE REAL ISSUE WITH WALL DEPOSITS IN CLOSED FILTER CASSETTES - WHAT'S THE SAMPLE?

    Energy Technology Data Exchange (ETDEWEB)

    Brisson, M.

    2009-09-12

    The measurement of aerosol dusts has long been utilized to assess the exposure of workers to metals. Tools used to sample and measure aerosol dusts have gone through many transitions over the past century. In particular, there have been several different techniques used to sample for beryllium, not all of which might be expected to produce the same result. Today, beryllium samples are generally collected using filters housed in holders of several different designs, some of which are expected to produce a sample that mimics the human capacity for dust inhalation. The presence of dust on the interior walls of cassettes used to hold filters during metals sampling has been discussed in the literature for a number of metals, including beryllium, with widely varying data. It appears that even in the best designs, particulates can enter the sampling cassette and deposit on the interior walls rather than on the sampling medium. The causes are not well understood but are believed to include particle bounce, electrostatic forces, particle size, particle density, and airflow turbulence. Historically, the filter catch has been considered to be the sample, but the presence of wall deposits, and the potential that the filter catch is not representative of the exposure to the worker, puts that historical position into question. This leads to a fundamental question: What is the sample? This article reviews the background behind the issue, poses the above-mentioned question, and discusses options and a possible path forward for addressing that question.

  16. Global sensitivity analysis using low-rank tensor approximations

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
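    For reference, the Sobol' indices that the meta-model coefficients are compared against can be estimated by plain Monte Carlo; a pick-freeze sketch on a toy additive model with known exact indices (the model and estimator choice are illustrative, not the paper's case studies):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy additive model f(x1, x2) = 4*x1 + x2 with independent U(0,1) inputs;
    # the exact first-order Sobol' indices are S1 = 16/17 and S2 = 1/17.
    def f(x):
        return 4.0 * x[:, 0] + x[:, 1]

    n, d = 100_000, 2
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()

    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                        # column i from B, rest from A
        S[i] = np.mean(fB * (f(AB) - fA)) / var   # Saltelli (2010) estimator
    print(S)  # close to the exact values [16/17, 1/17]
    ```

    The cost here grows with the number of model runs, n·(d + 2) in the full pick-freeze design, which is precisely what post-processing meta-model coefficients analytically avoids.
    
    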

  17. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal in terms of polynomial expansions in the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
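    As context for the DSR relation named above, its textbook homogeneous-medium form (not taken from this paper) for an image point at lateral position x and depth z, with source at x_s and receiver at x_r in a medium of velocity v, reads:

    ```latex
    t(x_s, x_r) \;=\; \frac{\sqrt{z^2 + (x_s - x)^2}}{v} \;+\; \frac{\sqrt{z^2 + (x_r - x)^2}}{v}.
    ```

    Each square root becomes singular for horizontally traveling energy (z → 0 with finite offset), which is the critical singularity the angle-expansion solutions are designed to avoid.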

  18. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  19. Comparison of the Born series and rational approximants in potential scattering. [Pade approximants, Yukawa and exponential potential]

    Energy Technology Data Exchange (ETDEWEB)

    Garibotti, C R; Grinstein, F F [Rosario Univ. Nacional (Argentina). Facultad de Ciencias Exactas e Ingenieria

    1976-05-08

    The real utility of the Born series for the calculation of atomic collision processes in the Born approximation is discussed. It is suggested to make use of Padé approximants, and it is shown that this approach provides very fast convergent sequences over the whole energy range studied. The Yukawa and exponential potentials are explicitly considered and the results are compared with high-order Born approximations.
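    The series-to-rational step can be sketched generically; a minimal [L/M] Padé construction from Taylor coefficients, illustrated on the exponential series rather than the paper's scattering amplitudes (the choice of function and orders is an assumption for demonstration only):

    ```python
    import math
    import numpy as np

    def pade(c, L, M):
        """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
        Returns (a, b): numerator / denominator coefficients, ascending powers."""
        c = np.asarray(c, dtype=float)
        # Denominator b (with b[0] = 1) solves sum_j b[j] c[L+k-j] = 0, k = 1..M.
        A = np.array([[c[L + k - j] for j in range(1, M + 1)]
                      for k in range(1, M + 1)])
        rhs = -c[L + 1 : L + M + 1]
        b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
        # Numerator follows by matching the low-order series coefficients.
        a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                      for i in range(L + 1)])
        return a, b

    # Taylor coefficients of exp(x), playing the role of a perturbation series.
    c = [1.0 / math.factorial(n) for n in range(5)]
    a, b = pade(c, 2, 2)

    x = 1.0
    pade_val = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
    taylor_val = np.polyval(np.asarray(c)[::-1], x)
    print(abs(pade_val - math.e), abs(taylor_val - math.e))
    ```

    With the same five coefficients, the [2/2] approximant beats the truncated series at x = 1; the gain grows where the series converges slowly or diverges, which is the regime the paper exploits.
    
    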

  20. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    Full Text Available In this paper we introduce a Jackson-type theorem for functions in Lp spaces on the sphere and study the best approximation of functions in Lp spaces defined on the unit sphere. Our central problem is to describe the approximation behaviour of functions in these spaces by the modulus of smoothness.

  1. Emotional Behavior Problems, Parent Emotion Socialization, and Gender as Determinants of Teacher-Child Closeness

    Science.gov (United States)

    Bardack, Sarah; Obradović, Jelena

    2017-01-01

    Research Findings: Drawing from a diverse community sample of 89 children, ages 4-6, their primary caregivers and teachers, this study examined the interplay of child emotional behavior problems, parent emotion socialization practices, and gender in predicting teacher-child closeness. Teachers reported on perceptions of closeness with children.…

  2. Integrated sampling and analysis plan for samples measuring >10 mrem/hour

    International Nuclear Information System (INIS)

    Haller, C.S.

    1992-03-01

    This integrated sampling and analysis plan was prepared to assist in planning and scheduling of Hanford Site sampling and analytical activities for all waste characterization samples that measure greater than 10 mrem/hour. This report also satisfies the requirements of the renegotiated Interim Milestone M-10-05 of the Hanford Federal Facility Agreement and Consent Order (the Tri-Party Agreement). For purposes of comparing the various analytical needs with the Hanford Site laboratory capabilities, the analytical requirements of the various programs were normalized by converting the required laboratory effort for each type of sample to a common unit of work, the standard analytical equivalency unit (AEU). The AEU approximates the amount of laboratory resources required to perform an extensive suite of analyses on five core segments individually, plus one additional suite of analyses on a composite sample derived from a mixture of the five core segments, and to prepare a validated RCRA-type data package.

  3. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximations are studied for two- and three-body nuclear collisions with the goal to extend the range of validity of this approximation to beam energies of 10 MeV/nucleon. Wallace's correction does not improve much the elastic-scattering cross sections obtained at the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes the impact parameter by a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility to analyze data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  4. A novel and highly sensitive real-time nested RT-PCR assay in a single closed tube for detection of enterovirus.

    Science.gov (United States)

    Shen, Xin-Xin; Qiu, Fang-Zhou; Zhao, Huai-Long; Yang, Meng-Jie; Hong, Liu; Xu, Song-Tao; Zhou, Shuai-Feng; Li, Gui-Xia; Feng, Zhi-Shan; Ma, Xue-Jun

    2018-03-01

    The sensitivity of the qRT-PCR assay is not adequate for the detection of samples with lower viral loads, particularly in the cerebrospinal fluid (CSF) of patients. Here, we present the development of a highly sensitive real-time nested RT-PCR (RTN RT-PCR) assay in a single closed tube for detection of human enterovirus (HEV). The clinical performance of both RTN RT-PCR and qRT-PCR was also tested and compared using 140 CSF and fecal specimens. The sensitivities of the RTN RT-PCR assay for EV71, Coxsackievirus A (CVA)16, CVA6 and CVA10 reached the 10⁻⁸ dilution, with corresponding Ct values of 38.20, 36.45, 36.75, and 36.45, respectively, which is equal to that of the traditional two-step nested RT-PCR assay and approximately 2-10-fold lower than that of the qRT-PCR assay. The specificity of the RTN RT-PCR assay was extensively analyzed in silico and subsequently verified using the reference isolates and clinical samples. Sixteen qRT-PCR-negative samples were detected by RTN RT-PCR, and a variety of enterovirus serotypes were identified by sequencing of the inner PCR products. We conclude that RTN RT-PCR is more sensitive than qRT-PCR for the detection of HEV in clinical samples. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Mathematics of epidemics on networks from exact to approximate models

    CERN Document Server

    Kiss, István Z; Simon, Péter L

    2017-01-01

    This textbook provides an exciting new addition to the area of network science featuring a stronger and more methodical link of models to their mathematical origin and explains how these relate to each other with special focus on epidemic spread on networks. The content of the book is at the interface of graph theory, stochastic processes and dynamical systems. The authors set out to make a significant contribution to closing the gap between model development and the supporting mathematics. This is done by: Summarising and presenting the state-of-the-art in modeling epidemics on networks with results and readily usable models signposted throughout the book; Presenting different mathematical approaches to formulate exact and solvable models; Identifying the concrete links between approximate models and their rigorous mathematical representation; Presenting a model hierarchy and clearly highlighting the links between model assumptions and model complexity; Providing a reference source for advanced undergraduate...

  6. Comparison between powder and slices diffraction methods in teeth samples

    Energy Technology Data Exchange (ETDEWEB)

    Colaco, Marcos V.; Barroso, Regina C. [Universidade do Estado do Rio de Janeiro (IF/UERJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Aplicada; Porto, Isabel M. [Universidade Estadual de Campinas (FOP/UNICAMP), Piracicaba, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia; Gerlach, Raquel F. [Universidade de Sao Paulo (FORP/USP), Ribeirao Preto, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia, Estomatologia e Fisiologia; Costa, Fanny N. [Coordenacao dos Programas de Pos-Graduacao de Engenharia (LIN/COPPE/UFRJ), RJ (Brazil). Lab. de Instrumentacao Nuclear

    2011-07-01

    Proposing different methods to obtain crystallographic information about biological materials is important, since the powder method is a nondestructive method. Slices are an approximation of what would be an in vivo analysis. Effects of sample preparation cause differences in scattering profiles compared with the powder method. The main inorganic component of bones and teeth is a calcium phosphate mineral whose structure closely resembles hydroxyapatite (HAp). The hexagonal symmetry, however, seems to work well with the powder diffraction data, and the crystal structure of HAp is usually described in space group P63/m. Ten third molar teeth were analyzed. Five teeth were separated into enamel, dentin and circumpulpal dentin powder, and five into slices. All the scattering profile measurements were carried out at the X-ray diffraction beamline (XRD1) at the National Synchrotron Light Laboratory - LNLS, Campinas, Brazil. The LNLS synchrotron light source is composed of a 1.37 GeV electron storage ring, delivering approximately 4x10¹⁰ photons/s at 8 keV. A double-crystal Si(111) pre-monochromator, upstream of the beamline, was used to select a small energy bandwidth at 11 keV. Scattering signatures were obtained at intervals of 0.04 deg for angles from 24 deg to 52 deg. The experimental crystallite sizes for human enamel obtained in this work were 30(3) nm (112 reflection) and 30(3) nm (300 reflection). These values were obtained from measurements of powdered enamel. When comparing the diffraction patterns of the enamel slice, which yielded 58(8) nm (112 reflection) and 37(7) nm (300 reflection), with those generated by the powder specimens, a few differences emerge. This work shows the differences between the powder and slice methods, separating the characteristics of the sample from the influence of the method. (author)

  7. Comparison between powder and slices diffraction methods in teeth samples

    International Nuclear Information System (INIS)

    Colaco, Marcos V.; Barroso, Regina C.; Porto, Isabel M.; Gerlach, Raquel F.; Costa, Fanny N.

    2011-01-01

    Proposing different methods to obtain crystallographic information about biological materials is important, since the powder method is a destructive technique. Slices are an approximation of what an in vivo analysis would be. Effects of sample preparation cause differences in scattering profiles compared with the powder method. The main inorganic component of bones and teeth is a calcium phosphate mineral whose structure closely resembles hydroxyapatite (HAp). The hexagonal symmetry, however, seems to work well with the powder diffraction data, and the crystal structure of HAp is usually described in space group P63/m. Ten third molar teeth were analyzed. Five teeth were separated into enamel, dentin and circumpulpal dentin powder, and five into slices. All the scattering profile measurements were carried out at the X-ray diffraction beamline (XRD1) of the National Synchrotron Light Laboratory - LNLS, Campinas, Brazil. The LNLS synchrotron light source is composed of a 1.37 GeV electron storage ring, delivering approximately 4x10^10 photons/s at 8 keV. A double-crystal Si(111) pre-monochromator, upstream of the beamline, was used to select a small energy bandwidth at 11 keV. Scattering signatures were obtained at intervals of 0.04 deg for angles from 24 deg to 52 deg. The experimental human enamel crystallite sizes obtained in this work were 30(3) nm (112 reflection) and 30(3) nm (300 reflection), obtained from measurements of powdered enamel. When the enamel slice diffraction patterns, which yielded 58(8) nm (112 reflection) and 37(7) nm (300 reflection), are compared with those generated by the powder specimens, a few differences emerge. This work shows the differences between the powder and slice methods, separating characteristics of the sample from the influence of the method. (author)

  8. An analytical approach to the CMB polarization in a spatially closed background

    Science.gov (United States)

    Niazy, Pedram; Abbassi, Amir H.

    2018-03-01

    The scalar mode polarization of the cosmic microwave background is derived in a spatially closed universe from the Boltzmann equation using the line-of-sight integral method. The EE and TE multipole coefficients have been extracted analytically by considering some tolerable approximations, such as treating the evolution of perturbations hydrodynamically and assuming a sudden transition from opacity to transparency at the time of last scattering. As the major advantage of analytic expressions, C_EE,ℓ^S and C_TE,ℓ explicitly show the dependencies on baryon density ΩB, matter density ΩM, curvature ΩK, primordial spectral index ns, primordial power spectrum amplitude As, optical depth τreion, recombination width σt and recombination time tL. Using a realistic set of cosmological parameters taken from a fit to data from Planck, the closed-universe EE and TE power spectra in the scalar mode are compared with numerical results from the CAMB code and also the latest observational data. The analytic results agree with the numerical ones on large and moderate scales. The peak positions are in good agreement with the numerical results on these scales, while the peak heights agree to within 20%, due to the approximations considered in these derivations. Also, several interesting properties of CMB polarization are revealed by the analytic spectra.

  9. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm...... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general...
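    A minimal ABC-MCMC sketch in the spirit of the record, applied to a deliberately simple Gaussian-mean toy model rather than the protein-folding SDE (which the record does not reproduce); with a flat prior and a symmetric proposal, the Metropolis-Hastings ratio reduces to the ABC distance check alone. All tolerances and step sizes here are illustrative choices.

```python
import random, statistics

def abc_mcmc(data, n_iter=2000, tol=0.3, step=0.5, seed=1):
    """Minimal ABC-MCMC for the mean of a unit-variance Gaussian:
    accept a proposed parameter when data simulated from it has a
    summary statistic within `tol` of the observed one."""
    rng = random.Random(seed)
    obs = statistics.mean(data)       # observed summary statistic
    theta = obs                       # pilot initialisation at the observed summary
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0, step)            # random-walk proposal
        sim = [rng.gauss(prop, 1.0) for _ in data]   # forward-simulate the model
        if abs(statistics.mean(sim) - obs) < tol:    # ABC acceptance test
            theta = prop                             # (flat prior, symmetric proposal)
        chain.append(theta)
    return chain

rng = random.Random(0)
data = [rng.gauss(2.0, 1.0) for _ in range(50)]
chain = abc_mcmc(data)
print(round(statistics.mean(chain), 2))  # close to the sample mean of the data
```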

  10. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    Directory of Open Access Journals (Sweden)

    Wutthiphong Tara

    2012-02-01

    Full Text Available The purpose of this research was to preliminarily study the Mae Moh lignite grindability tests, emphasizing Hardgrove grindability and approximate work index determination respectively. Firstly, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using Aplan's formula. The approximate work indices were determined by running a batch dry-grinding test using a laboratory ball mill. Finally, the work indices obtained from both methods were compared. It was found that all samples could be ranked as lignite B, using the heating value as the criterion, if the content of mineral matter is neglected. Similarly, all samples can be classified as lignite, with Hardgrove grindability indices ranging from about 40 to 50. However, there is a significant difference between the work indices derived from the Hardgrove and simplified Bond grindability tests. This may be due to differences in the variability of lignite properties and in the test procedures. To obtain more accurate values of the lignite work index, the time-consuming Bond procedure should be performed with a number of corrections for different milling conditions. With the Hardgrove grindability indices and the work indices calculated from Aplan's formula, the capacity of the roller-race pulverizer and the grindability of the Mae Moh lignite should be investigated in detail further.
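    The record cites Aplan's formula but does not reproduce it. As an illustration of the kind of HGI-to-work-index conversion involved, the sketch below uses the widely cited Bond correlation Wi = 435 / HGI^0.91 (kWh per short ton); treat the formula choice and the numbers as assumptions, not the paper's actual calculation.

```python
def bond_work_index_from_hgi(hgi):
    """Approximate Bond work index (kWh/short ton) from the Hardgrove
    grindability index, using the commonly cited Bond correlation
    Wi = 435 / HGI**0.91 (not necessarily Aplan's formula from the paper)."""
    return 435.0 / hgi ** 0.91

# HGI range of roughly 40 to 50 is reported for the Mae Moh lignite samples.
for hgi in (40, 45, 50):
    print(hgi, round(bond_work_index_from_hgi(hgi), 1))
```

Note the inverse relationship: a harder-to-grind coal (lower HGI) maps to a higher work index.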

  11. Coupled states approximation for scattering of two diatoms

    International Nuclear Information System (INIS)

    Heil, T.G.; Green, S.; Kouri, D.J.

    1978-01-01

    The coupled states (CS) approximation is developed in detail for the general case of two colliding diatomic molecules. The high energy limit of the exact Lippmann-Schwinger equation is used to obtain the CS equations so that the sufficiency conditions of Kouri, Heil, and Shimoni apply. In addition, care is taken to ensure correct treatment of parity in the CS, as well as correct labeling of the CS by an effective orbital angular momentum. The analysis follows that given by Shimoni and Kouri for atom-diatom collisions, where the coupled rotor angular momentum j_12 and projection lambda_12 replace the single diatom angular momentum j and projection lambda. The result is an expression for the differential scattering amplitude which is a generalization of the highly successful McGuire-Kouri differential scattering amplitude for atom-diatom collisions. Also, the opacity function is found to be a generalization of the Clebsch-Gordan weighted atom-diatom expression of Shimoni and Kouri. The diatom-diatom CS body frame T matrix T^J(j_1' j_2' j_12' lambda_12' | j_1 j_2 j_12 lambda_12) is also found to be nondiagonal in lambda_12', lambda_12, just as in the atom-diatom case. The parity and identical molecule interchange symmetries are also considered in detail in both the exact close coupling and CS approximations. Symmetrized expressions for all relevant quantities are obtained, along with the symmetrized coupled equations one must solve. The properly labeled and symmetrized CS equations have not been derived before the present work. The present correctly labeled CS theory is tested computationally by applications to three different diatom-diatom potentials. First we carry out calculations for para-para, ortho-ortho, and ortho-para H2-H2 collisions using the experimental potential of Farrar and Lee

  12. Application of Latin hypercube sampling to RADTRAN 4 truck accident risk sensitivity analysis

    International Nuclear Information System (INIS)

    Mills, G.S.; Neuhauser, K.S.; Kanipe, F.L.

    1994-01-01

    The sensitivity of calculated dose estimates to various RADTRAN 4 inputs is an available output for incident-free analysis because the defining equations are linear and sensitivity to each variable can be calculated in closed mathematical form. However, the necessary linearity is not characteristic of the equations used in calculation of accident dose risk, making a similar tabulation of sensitivity for RADTRAN 4 accident analysis impossible. Therefore, a study of sensitivity of accident risk results to variation of input parameters was performed using representative routes, isotopic inventories, and packagings. It was determined that, of the approximately two dozen RADTRAN 4 input parameters pertinent to accident analysis, only a subset of five or six has significant influence on typical analyses or is subject to random uncertainties. These five or six variables were selected as candidates for Latin Hypercube Sampling applications. To make the effect of input uncertainties on calculated accident risk more explicit, distributions and limits were determined for two variables which had approximately proportional effects on calculated doses: Pasquill Category probability (PSPROB) and link population density (LPOPD). These distributions and limits were used as input parameters to Sandia's Latin Hypercube Sampling code to generate 50 sets of RADTRAN 4 input parameters used together with point estimates of other necessary inputs to calculate 50 observations of estimated accident dose risk. Tabulations of the RADTRAN 4 accident risk input variables and their influence on output, plus illustrative examples of the LHS calculations for truck transport situations typical of past experience, will be presented
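    The Latin hypercube step described above can be sketched in a few lines: each variable's range is split into equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently per variable. The two-input, 50-point design mirrors the record's set-up, but the mapping to the actual PSPROB/LPOPD distributions is omitted; this is a generic sketch, not Sandia's code.

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Latin hypercube sample on [0,1)^n_vars: each variable's range is
    split into n_samples equal strata, one point per stratum, with the
    strata randomly permuted independently for each variable."""
    rng = random.Random(seed)
    sample = []
    for _ in range(n_vars):
        perm = list(range(n_samples))
        rng.shuffle(perm)                                   # shuffle stratum order
        col = [(p + rng.random()) / n_samples for p in perm]  # one point per stratum
        sample.append(col)
    return list(zip(*sample))  # rows = design points

# e.g. 50 design points over two uncertain inputs; each coordinate would
# later be mapped from [0,1) through its variable's own distribution.
pts = latin_hypercube(50, 2)
print(len(pts))  # 50
```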

  13. Tunneling effects in electromagnetic wave scattering by nonspherical particles: A comparison of the Debye series and physical-geometric optics approximations

    International Nuclear Information System (INIS)

    Bi, Lei; Yang, Ping

    2016-01-01

    The accuracy of the physical-geometric optics (PG-O) approximation is examined for the simulation of electromagnetic scattering by nonspherical dielectric particles. This study seeks a better understanding of the tunneling effect on the phase matrix by employing the invariant imbedding method to rigorously compute the zeroth-order Debye series, from which the tunneling efficiency and the phase matrix corresponding to the diffraction and external reflection are obtained. The tunneling efficiency is shown to be a factor quantifying the relative importance of the tunneling effect over the Fraunhofer diffraction near the forward scattering direction. Due to the tunneling effect, different geometries with the same projected cross section might have different diffraction patterns, which are traditionally assumed to be identical according to the Babinet principle. For particles with a fixed orientation, the PG-O approximation yields the external reflection pattern with reasonable accuracy, but ordinarily fails to predict the locations of peaks and minima in the diffraction pattern. The larger the tunneling efficiency, the worse the PG-O accuracy is at scattering angles less than 90°. If the particles are assumed to be randomly oriented, the PG-O approximation yields the phase matrix close to the rigorous counterpart, primarily due to error cancellations in the orientation-average process. Furthermore, the PG-O approximation based on an electric field volume-integral equation is shown to usually be much more accurate than the Kirchhoff surface integral equation at side-scattering angles, particularly when the modulus of the complex refractive index is close to unity. Finally, tunneling efficiencies are tabulated for representative faceted particles. - Highlights: • Concepts of diffraction, reflection and tunneling are refined. • The diffraction together with reflection is rigorously treated. • An improved invariant imbedding method is employed to compute the Debye

  14. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  15. Higher-Order Approximations of Motion of a Nonlinear Oscillator Using the Parameter Expansion Technique

    Science.gov (United States)

    Ganji, S. S.; Domairry, G.; Davodi, A. G.; Babazadeh, H.; Seyedalizadeh Ganji, S. H.

    The main objective of this paper is to apply the parameter expansion technique (a modified Lindstedt-Poincaré method) to calculate the first-, second-, and third-order approximations of motion of a nonlinear oscillator arising in a rigid rod rocking back and forth. The dynamics and frequency of motion of this nonlinear mechanical system are analyzed. Meticulous attention is given to the study of the effects of the introduced nonlinearity on the amplitudes of the oscillatory states and on the bifurcation structures. We examine the synchronization and the frequency of the systems using both the strong and special method. Numerical simulations confirm and complement the results obtained by the analytical approach. The approach offers a way to overcome the difficulty of computing the periodic behavior of oscillation problems in engineering. The solutions of this method are compared with the exact ones in order to validate the approach and assess the accuracy of the solutions. In particular, APL-PM works well for the whole range of oscillation amplitudes, and excellent agreement of the approximate frequency with the exact one has been demonstrated. The approximate period derived here is accurate and close to the exact solution. This method has a distinguished feature which makes it simple to use, and it also agrees with the exact solutions for various parameters.
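    The kind of approximate-versus-exact frequency comparison described above can be illustrated on the cubic Duffing oscillator x'' + x + eps*x^3 = 0, whose first-order Lindstedt-Poincaré frequency is the textbook result omega = 1 + 3*eps*a^2/8; the Duffing equation stands in for the rocking-rod equation, which the record does not reproduce, and the "exact" value is obtained here by quadrature of the energy integral.

```python
import math

def lp_frequency(eps, a):
    """First-order Lindstedt-Poincare frequency for x'' + x + eps*x**3 = 0
    at amplitude a."""
    return 1.0 + 3.0 * eps * a * a / 8.0

def exact_frequency(eps, a, n=200000):
    """'Exact' frequency via quadrature of the energy integral
    T = 4 * int_0^a dx / sqrt(2*(V(a) - V(x))), V(x) = x^2/2 + eps*x^4/4."""
    V = lambda x: 0.5 * x * x + 0.25 * eps * x ** 4
    E = V(a)
    h = a / n
    # midpoint rule avoids the integrable singularity at x = a
    T = 4.0 * sum(h / math.sqrt(2.0 * (E - V((i + 0.5) * h))) for i in range(n))
    return 2.0 * math.pi / T

eps, a = 0.5, 1.0
print(round(lp_frequency(eps, a), 4), round(exact_frequency(eps, a), 4))
```

Even at the fairly strong nonlinearity eps = 0.5, the first-order frequency is within about 1.5% of the exact one.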

  16. A coupled-cluster study of photodetachment cross sections of closed-shell anions

    International Nuclear Information System (INIS)

    Cukras, Janusz; Decleva, Piero; Coriani, Sonia

    2014-01-01

    We investigate the performance of Stieltjes Imaging applied to Lanczos pseudo-spectra generated at the coupled cluster singles and doubles, coupled cluster singles and approximate iterative doubles, and coupled cluster singles levels of theory in modeling the photodetachment cross sections of the closed-shell anions H−, Li−, Na−, F−, Cl−, and OH−. The accurate description of double excitations is found to play a much more important role than in the case of photoionization of neutral species

  17. The Behavior of Opening and Closing Prices Noise and Overreaction

    Directory of Open Access Journals (Sweden)

    Sumiyana Sumiyana

    2009-01-01

    Full Text Available This study extends several previous studies concluding that noise and overreaction occur in intraday data. Those studies have yet to make clear which kind of price accounts for this noise and overreaction. This study examines opening and closing price behavior, and tries to explain the noise and overreaction on the Indonesia Stock Exchange using intraday data at 30-minute intervals. The sample consists of firms listed in the LQ45 index. Sequentially, the sample is filtered to the stocks most actively traded on the Indonesia Stock Exchange, based on trading frequency, in an observation period from January to December 2006. This research finds that noise and overreaction phenomena always occur in the opening and closing prices. In addition, investors actually correct the noise and overreaction simultaneously at the first 30-minute interval of every trading day.

  18. Approximate stresses in 2-D flat elastic contact fretting problems

    Science.gov (United States)

    Urban, Michael Rene

    Fatigue results from the cyclic loading of a solid body. If the body subject to fatigue is in contact with another body and relative sliding motion occurs between these two bodies, then rubbing surface damage can accelerate fatigue failure. The acceleration of fatigue failure is especially important if the relative motion between the two bodies results in surface damage without excessive surface removal via wear. The situation just described is referred to as fretting fatigue. Understanding of fretting fatigue is greatly enhanced if the stress state associated with fretting can be characterized. For Hertzian contact, this can readily be done. Unfortunately, simple stress formulae are not available for flat body contact. The primary result of the present research is the development of a new, reasonably accurate, approximate closed form expression for 2-dimensional contact stresses which has been verified using finite element modeling. This expression is also combined with fracture mechanics to provide a simple method of determining when a crack is long enough to no longer be affected by the contact stress field. Lower bounds on fatigue life can then easily be calculated using fracture mechanics. This closed form expression can also be used to calculate crack propagation within the contact stress field. The problem of determining the cycles required to generate an initial crack and what to choose as an initial crack size is unresolved as it is in non-fretting fatigue.

  19. Development of nodal interface conditions for a PN approximation nodal model

    International Nuclear Information System (INIS)

    Feiz, M.

    1993-01-01

    A relation was developed for approximating higher-order odd moments from lower-order odd moments at the nodal interfaces of a Legendre polynomial nodal model. Two sample problems were tested using different-order P_N expansions in adjacent nodes. The developed relation proved to be adequate and matched the nodal interface flux accurately. The development allows the use of different-order expansions in adjacent nodes, and will be used in a hybrid diffusion-transport nodal model. (author)

  20. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...
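    The DCM (Polya) likelihood itself is simple to state, and a small computation illustrates the burstiness property the record refers to: with small concentration parameters, a count vector with all mass on one word is far more probable than a spread-out vector with the same total. The parameter values below are illustrative only.

```python
import math

def dcm_log_pmf(counts, alpha):
    """Log-probability of a count vector under the Dirichlet compound
    multinomial (Polya) distribution with parameter vector alpha:
    P(x) = n!/prod(x_i!) * Gamma(A)/Gamma(n+A) * prod Gamma(x_i+a_i)/Gamma(a_i)."""
    n, A = sum(counts), sum(alpha)
    lp = math.lgamma(n + 1) + math.lgamma(A) - math.lgamma(n + A)
    for x, a in zip(counts, alpha):
        lp += math.lgamma(x + a) - math.lgamma(a) - math.lgamma(x + 1)
    return lp

# Burstiness: with small alpha, ten occurrences of a single word are far
# more probable than the same ten counts spread over three words.
bursty, spread = [10, 0, 0], [4, 3, 3]
alpha = [0.1, 0.1, 0.1]
print(dcm_log_pmf(bursty, alpha) > dcm_log_pmf(spread, alpha))  # True
```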

  1. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
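    The coherence parameter discussed above can be estimated empirically for a one-dimensional normalized Hermite basis under its natural Gaussian sampling distribution: it is the largest value of max_j |psi_j(x)|^2 over the sample, and it grows with the polynomial order because high-order Hermite polynomials are large in the Gaussian tails. The sample size and order below are arbitrary illustrative choices, not the paper's experiments.

```python
import math, random

def hermite_norm(n, x):
    """Probabilists' Hermite polynomial He_n(x), normalised so that
    E[He_n(X)^2] = 1 for X ~ N(0,1) (divide by sqrt(n!))."""
    if n == 0:
        return 1.0
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0   # three-term recurrence He_{k+1} = x*He_k - k*He_{k-1}
    return h1 / math.sqrt(math.factorial(n))

rng = random.Random(0)
order = 6
samples = [rng.gauss(0, 1) for _ in range(2000)]
# Empirical coherence: sup over the sample of max_j psi_j(x)^2, the
# quantity whose bound controls the sample count needed for sparse recovery.
mu = max(max(hermite_norm(j, x) ** 2 for j in range(order + 1)) for x in samples)
print(mu > 1.0)  # True: tail samples inflate the coherence well above 1
```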

  2. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  3. Tank 241-TX-105 vapor sampling and analysis tank characterization report

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1995-01-01

    Tank 241-TX-105 headspace gas and vapor samples were collected and analyzed to help determine the potential risks to tank farm workers due to fugitive emissions from the tank. The drivers and objectives of waste tank headspace sampling and analysis are discussed in "Program Plan for the Resolution of Tank Vapor Issues." Tank 241-TX-105 was vapor sampled in accordance with "Data Quality Objectives for Generic In-Tank Health and Safety Issue Resolution."

  4. Tank 241-BY-111 vapor sampling and analysis tank characterization report

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1995-01-01

    Tank 241-BY-111 headspace gas and vapor samples were collected and analyzed to help determine the potential risks to tank farm workers due to fugitive emissions from the tank. The drivers and objectives of waste tank headspace sampling and analysis are discussed in "Program Plan for the Resolution of Tank Vapor Issues." Tank 241-BY-111 was vapor sampled in accordance with "Data Quality Objectives for Generic In-Tank Health and Safety Issue Resolution."

  5. Tank 241-TX-118 vapor sampling and analysis tank characterization report

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1995-01-01

    Tank 241-TX-118 headspace gas and vapor samples were collected and analyzed to help determine the potential risks to tank farm workers due to fugitive emissions from the tank. The drivers and objectives of waste tank headspace sampling and analysis are discussed in "Program Plan for the Resolution of Tank Vapor Issues." Tank 241-TX-118 was vapor sampled in accordance with "Data Quality Objectives for Generic In-Tank Health and Safety Issue Resolution."

  6. Tank 241-BY-112 vapor sampling and analysis tank characterization report

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1995-01-01

    Tank 241-BY-112 headspace gas and vapor samples were collected and analyzed to help determine the potential risks to tank farm workers due to fugitive emissions from the tank. The drivers and objectives of waste tank headspace sampling and analysis are discussed in "Program Plan for the Resolution of Tank Vapor Issues." Tank 241-BY-112 was vapor sampled in accordance with "Data Quality Objectives for Generic In-Tank Health and Safety Issue Resolution."

  7. Tank 241-C-104 vapor sampling and analysis tank characterization report

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1995-01-01

    Tank 241-C-104 headspace gas and vapor samples were collected and analyzed to help determine the potential risks to tank farm workers due to fugitive emissions from the tank. The drivers and objectives of waste tank headspace sampling and analysis are discussed in "Program Plan for the Resolution of Tank Vapor Issues." Tank 241-C-104 was vapor sampled in accordance with "Data Quality Objectives for Generic In-Tank Health and Safety Issue Resolution."

  8. Tank 241-BY-103 vapor sampling and analysis tank characterization report

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1995-01-01

    Tank 241-BY-103 headspace gas and vapor samples were collected and analyzed to help determine the potential risks to tank farm workers due to fugitive emissions from the tank. The drivers and objectives of waste tank headspace sampling and analysis are discussed in "Program Plan for the Resolution of Tank Vapor Issues." Tank 241-BY-103 was vapor sampled in accordance with "Data Quality Objectives for Generic In-Tank Health and Safety Issue Resolution."

  9. Tank 241-U-107 vapor sampling and analysis tank characterization report

    Energy Technology Data Exchange (ETDEWEB)

    Huckaby, J.L.

    1995-05-31

    Tank 241-U-107 headspace gas and vapor samples were collected and analyzed to help determine the potential risks to tank farm workers due to fugitive emissions from the tank. The drivers and objectives of waste tank headspace sampling and analysis are discussed in "Program Plan for the Resolution of Tank Vapor Issues." Tank 241-U-107 was vapor sampled in accordance with "Data Quality Objectives for Generic In-Tank Health and Safety Issue Resolution."

  10. An approximation for kanban controlled assembly systems

    NARCIS (Netherlands)

    Topan, E.; Avsar, Z.M.

    2011-01-01

    An approximation is proposed to evaluate the steady-state performance of kanban-controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated while keeping the model exact, and this aggregate model is approximated

  11. Multiple electron capture in close ion-atom collisions

    International Nuclear Information System (INIS)

    Schlachter, A.S.; Stearns, J.W.; Berkner, K.H.

    1989-01-01

    Collisions in which a fast highly charged ion passes within the orbit of a K electron of a target gas atom are selected by the emission of a K x-ray from the projectile or target. Measurement of the projectile charge state after the collision, in coincidence with the K x-ray, gives the charge-transfer probability during these close collisions. When the projectile velocity is approximately the same as that of the target electrons, a large number of electrons can be transferred to the projectile in a single collision. The electron-capture probability is found to be a linear function of the number of vacancies in the projectile L shell for 47-MeV calcium ions in an Ar target. 18 refs., 9 figs

  12. Closed cycle high-repetition-rate pulsed HF laser

    Science.gov (United States)

    Harris, Michael R.; Morris, A. V.; Gorton, Eric K.

    1997-04-01

    The design and performance of a closed cycle high repetition rate HF laser is described. A short pulse, glow discharge is formed in a 10 SF6:1 H2 gas mixture at a total pressure of approximately 110 torr within a 15 by 0.5 by 0.5 cm3 volume. Transverse, recirculated gas flow adequate to enable repetitive operation up to 3 kHz is imposed by a centrifugal fan. The fan also forces the gas through a scrubber cell to eliminate ground state HF from the gas stream. An automated gas make-up system replenishes spent gas removed by the scrubber. Typical mean laser output powers up to 3 W can be maintained for extended periods of operation.

  13. Exact solutions for fermionic Green's functions in the Bloch-Nordsieck approximation of QED

    International Nuclear Information System (INIS)

    Kernemann, A.; Stefanis, N.G.

    1989-01-01

    A set of new closed-form solutions for fermionic Green's functions in the Bloch-Nordsieck approximation of QED is presented. A manifestly covariant phase-space path-integral method is applied for calculating the n-fermion Green's function in a classical external field. In the case of one and two fermions, explicit expressions for the full Green's functions are analytically obtained, with renormalization carried out in the modified minimal subtraction scheme. The renormalization constants and the corresponding anomalous dimensions are determined. The mass-shell behavior of the two-fermion Green's function is investigated in detail. No assumptions are made concerning the structure of asymptotic states and no IR cutoff is used in the calculations

  14. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  15. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  16. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    Science.gov (United States)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with state-space explosion that makes the exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by that of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k_0; (b) the second phase computes the probability of satisfying the k_0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as IPv4 zeroconf protocol and dining philosopher protocol modeled as Discrete Time Markov chains.
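
    The two-phase idea above bottoms out in ordinary Monte Carlo estimation of a bounded-until probability. The sketch below is not the authors' implementation; the toy chain, sets, and function names are invented for illustration. It estimates P(safe U<=k goal) on a small discrete-time Markov chain by sampling runs:

```python
import random

def estimate_bounded_until(transitions, start, safe, goal, k, n_runs, seed=0):
    """Monte Carlo estimate of P(safe U<=k goal) for a small DTMC."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        state = start
        for _ in range(k + 1):
            if state in goal:
                hits += 1
                break
            if state not in safe:
                break          # left the safe set before reaching the goal
            succ, probs = transitions[state]
            state = rng.choices(succ, weights=probs)[0]
    return hits / n_runs

# Toy 4-state chain: from 0 we reach the goal 3 via 1, or get trapped in 2.
transitions = {
    0: ([0, 1, 2], [0.5, 0.4, 0.1]),
    1: ([3], [1.0]),
    3: ([3], [1.0]),           # state 2 is absorbing-bad; runs stop there
}
p_hat = estimate_bounded_until(transitions, 0, {0, 1}, {3}, k=50, n_runs=20000)
# exact unbounded probability: p = 0.5 p + 0.4, i.e. p = 0.8
```

    For this chain the k = 50 bounded probability already agrees with the unbounded one to within sampling error, which is exactly the situation the paper's choice of k_0 is meant to certify.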

  17. Renormalization in self-consistent approximation schemes at finite temperature I: theory

    International Nuclear Information System (INIS)

    Hees, H. van; Knoll, J.

    2001-07-01

    Within finite temperature field theory, we show that truncated non-perturbative self-consistent Dyson resummation schemes can be renormalized with local counter-terms defined at the vacuum level. The requirements are that the underlying theory is renormalizable and that the self-consistent scheme follows Baym's Φ-derivable concept. The scheme generates both, the renormalized self-consistent equations of motion and the closed equations for the infinite set of counter terms. At the same time the corresponding 2PI-generating functional and the thermodynamic potential can be renormalized, in consistency with the equations of motion. This guarantees the standard Φ-derivable properties like thermodynamic consistency and exact conservation laws also for the renormalized approximation scheme to hold. The proof uses the techniques of BPHZ-renormalization to cope with the explicit and the hidden overlapping vacuum divergences. (orig.)

  18. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be

    2009-06-19

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential.

  19. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    International Nuclear Information System (INIS)

    Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien

    2009-01-01

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential
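
    The closed formulae themselves are in the paper; as an illustration only, approximate eigenenergies of such potentials can be cross-checked numerically. The sketch below is an independent finite-difference calculation, not the auxiliary field method. It finds the lowest l = 0 level of the Yukawa potential V(r) = -α exp(-βr)/r with ħ = m = 1; in the β → 0 limit the level approaches the Coulomb value -α²/2:

```python
import numpy as np

def ground_state_energy(alpha, beta, rmax=40.0, n=1200):
    """Lowest l = 0 eigenvalue of -(1/2) u'' + V(r) u = E u with
    V(r) = -alpha * exp(-beta * r) / r and u(0) = u(rmax) = 0."""
    r = np.linspace(0.0, rmax, n + 2)[1:-1]      # interior grid points
    h = r[1] - r[0]
    diag = 1.0 / h**2 - alpha * np.exp(-beta * r) / r
    off = -0.5 / h**2 * np.ones(n - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

e_coulomb = ground_state_energy(1.0, 0.0)   # beta -> 0: pure Coulomb, E = -1/2
e_yukawa = ground_state_energy(1.0, 0.2)    # screening raises the level
```

    Increasing β weakens the binding until, past the critical height discussed in the paper, the bound state disappears altogether.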

  20. Improved Dutch Roll Approximation for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Liang-Liang Yin

    2014-06-01

    Full Text Available An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximations, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which causes the large error of the conventional approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show the approximations work well, with errors below 10%.

  1. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...
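
    Since the abstract positions SPARROW against k-nearest neighbors regression, a minimal k-NNR baseline helps fix ideas. The sketch below is the plain baseline only, not SPARROW itself; the data and names are illustrative:

```python
def knn_regress(x_train, y_train, x, k=3):
    """Plain k-nearest-neighbors regression in 1D: average the
    regressands of the k training points closest to x."""
    nearest = sorted(range(len(x_train)), key=lambda i: abs(x_train[i] - x))[:k]
    return sum(y_train[i] for i in nearest) / k

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 4.0, 9.0, 16.0]          # y = x^2 at the sample points
y_hat = knn_regress(xs, ys, 2.1, k=3)    # averages y at x = 2, 3, 1
```

    SPARROW's departure from this baseline is that the set of neighbors (and its size) comes from a sparse approximation of the query point, rather than a fixed k.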

  2. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
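
    A toy version of the reduction can be sketched as a small constrained optimization: find a degree-(1,1) rational function p/q whose values at the sample points lie inside the uncertainty intervals, with q kept positive there. The objective and normalization below are simplified stand-ins (the paper's strictly convex objective differs), and the data and solver choice are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
mid = 1.0 / (1.0 + x)                 # data generated from a known rational
lo, up = mid - 0.2, mid + 0.2         # uncertainty interval per observation

def pq(c):
    """Numerator and denominator values of (a0 + a1 x)/(b0 + b1 x)."""
    p = c[0] + c[1] * x
    q = c[2] + c[3] * x
    return p, q

constraints = [
    {"type": "ineq", "fun": lambda c: pq(c)[0] - lo * pq(c)[1]},  # r >= lo
    {"type": "ineq", "fun": lambda c: up * pq(c)[1] - pq(c)[0]},  # r <= up
    {"type": "ineq", "fun": lambda c: pq(c)[1] - 1.0},            # q >= 1
]
res = minimize(lambda c: np.dot(c, c), x0=np.array([1.0, 0.0, 1.0, 1.0]),
               constraints=constraints, method="SLSQP")
p, q = pq(res.x)
r = p / q                             # intersects every interval
```

    Note that the interval constraints become linear in the coefficients once multiplied through by the (positive) denominator, which is the key step that makes the quadratic programming reduction possible.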

  3. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations are considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximation is made.

  4. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
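
    For context, scoring a fixed tree under the MP criterion is classically done with Fitch's small-parsimony algorithm; the NP-hard part treated in the paper is the search over trees, which the Steiner-tree formulation addresses. A minimal Fitch scorer for one character, with an invented toy tree:

```python
def fitch(tree, leaf_state):
    """Fitch small parsimony for one character on a rooted binary tree.
    Leaves are strings; internal nodes are (left, right) tuples.
    Returns (candidate state set at the node, minimum number of changes)."""
    if isinstance(tree, str):
        return {leaf_state[tree]}, 0
    (s1, c1), (s2, c2) = fitch(tree[0], leaf_state), fitch(tree[1], leaf_state)
    common = s1 & s2
    if common:
        return common, c1 + c2           # no extra change needed here
    return s1 | s2, c1 + c2 + 1          # a substitution on one child edge

tree = (("a", "b"), ("c", "d"))
states = {"a": "A", "b": "A", "c": "G", "d": "A"}
_, cost = fitch(tree, states)            # a single substitution suffices
```

    Summing this score over all characters gives the parsimony length of the tree, the quantity the MP criterion minimizes over tree topologies.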

  5. Local density approximations for relativistic exchange energies

    International Nuclear Information System (INIS)

    MacDonald, A.H.

    1986-01-01

    The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented

  6. Non-Equilibrium Liouville and Wigner Equations: Moment Methods and Long-Time Approximations

    Directory of Open Access Journals (Sweden)

    Ramon F. Álvarez-Estrada

    2014-03-01

    Full Text Available We treat the non-equilibrium evolution of an open one-particle statistical system, subject to a potential and to an external “heat bath” (hb with negligible dissipation. For the classical equilibrium Boltzmann distribution, Wc,eq, a non-equilibrium three-term hierarchy for moments fulfills Hermiticity, which allows one to justify an approximate long-time thermalization. That gives partial dynamical support to Boltzmann’s Wc,eq, out of the set of classical stationary distributions, Wc;st, also investigated here, for which neither Hermiticity nor that thermalization hold, in general. For closed classical many-particle systems without hb (by using Wc,eq, the long-time approximate thermalization for three-term hierarchies is justified and yields an approximate Lyapunov function and an arrow of time. The largest part of the work treats an open quantum one-particle system through the non-equilibrium Wigner function, W. Weq for a repulsive finite square well is reported. W’s (< 0 in various cases are assumed to be quasi-definite functionals regarding their dependences on momentum (q. That yields orthogonal polynomials, HQ,n(q, for Weq (and for stationary Wst, non-equilibrium moments, Wn, of W and hierarchies. For the first excited state of the harmonic oscillator, its stationary Wst is a quasi-definite functional, and the orthogonal polynomials and three-term hierarchy are studied. In general, the non-equilibrium quantum hierarchies (associated with Weq for the Wn’s are not three-term ones. As an illustration, we outline a non-equilibrium four-term hierarchy and its solution in terms of generalized operator continued fractions. Such structures also allow one to formulate long-time approximations, but make it more difficult to justify thermalization. For large thermal and de Broglie wavelengths, the dominant Weq and a non-equilibrium equation for W are reported: the non-equilibrium hierarchy could plausibly be a three-term one and possibly not

  7. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
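
    A stripped-down version of the surrogate idea: fit a Gaussian Process to a few expensive forward evaluations, then query its posterior mean instead of the solver. Everything below (the stand-in forward operator, kernel length scale, noise level) is an invented illustration, not the authors' code:

```python
import numpy as np

def forward(m):
    """Stand-in for an expensive forward solver (illustrative only)."""
    return np.sin(3.0 * m) + 0.5 * m

def gp_mean(m_train, d_train, m_query, length=0.5, noise=1e-8):
    """Posterior mean of a zero-mean GP with an RBF kernel: a cheap
    surrogate for the forward operator once trained."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(m_train, m_train) + noise * np.eye(len(m_train))
    return k(m_query, m_train) @ np.linalg.solve(K, d_train)

m_train = np.linspace(0.0, 2.0, 15)      # a few expensive evaluations
d_train = forward(m_train)
m_query = np.array([0.33, 1.17])
surrogate = gp_mean(m_train, d_train, m_query)
exact = forward(m_query)                 # surrogate tracks this closely
```

    During sampling, candidate models far from the training set (where the GP posterior variance is large) would be flagged for an exact evaluation, which is then added to the training set: that is the refinement loop the abstract describes.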

  8. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  9. Biosphere II: engineering of manned, closed ecological systems.

    Science.gov (United States)

    Dempster, W F

    1991-01-01

    Space Biospheres Ventures, a private, for-profit firm, has undertaken a major research and development project in the study of biospheres, with the objective of creating and producing biospheres. Biosphere II, scheduled for completion in March 1991, will be essentially isolated from the existing biosphere by a closed structure, composed of components derived from the existing biosphere. Like the biosphere of the Earth, Biosphere II will be essentially closed to exchanges of material or living organisms with the surrounding environment and open to energy and information exchanges. Also, like the biosphere of the Earth, Biosphere II will contain five kingdoms of life, a variety of ecosystems, plus humankind, culture, and technics. The system is designed to be complex, stable and evolving throughout its intended 100-year lifespan, rather than static. Biosphere II will cover approximately 1.3 hectare and contain 200,000 m3 in volume, with seven major biomes: tropical rainforest, tropical savannah, marsh, marine, desert, intensive agriculture, and human habitat. An interdisciplinary team of leading scientific, ecological, management, architectural, and engineering consultants has been contracted by Space Biospheres Ventures for the project. Potential applications for biospheric systems include scientific and ecological management research, refuges for endangered species, and life habitats for manned stations on spacecraft or other planets.

  10. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. A nice estimate relating entropy and approximation numbers for noncompact maps is given.

  11. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...

  12. Improved approximate inspirals of test bodies into Kerr black holes

    International Nuclear Information System (INIS)

    Gair, Jonathan R; Glampedakis, Kostas

    2006-01-01

    We present an improved version of the approximate scheme for generating inspirals of test bodies into a Kerr black hole recently developed by Glampedakis, Hughes and Kennefick. Their original 'hybrid' scheme was based on combining exact relativistic expressions for the evolution of the orbital elements (the semilatus rectum p and eccentricity e) with an approximate, weak-field, formula for the energy and angular momentum fluxes, amended by the assumption of constant inclination angle ι during the inspiral. Despite the fact that the resulting inspirals were overall well behaved, certain pathologies remained for orbits in the strong-field regime and for orbits which are nearly circular and/or nearly polar. In this paper we eliminate these problems by incorporating an array of improvements in the approximate fluxes. First, we add certain corrections which ensure the correct behavior of the fluxes in the limit of vanishing eccentricity and/or 90 deg. inclination. Second, we use higher order post-Newtonian formulas, adapted for generic orbits. Third, we drop the assumption of constant inclination. Instead, we first evolve the Carter constant by means of an approximate post-Newtonian expression and subsequently extract the evolution of ι. Finally, we improve the evolution of circular orbits by using fits to the angular momentum and inclination evolution determined by Teukolsky-based calculations. As an application of our improved scheme, we provide a sample of generic Kerr inspirals which we expect to be the most accurate to date, and for the specific case of nearly circular orbits we locate the critical radius where orbits begin to decircularize under radiation reaction. These easy-to-generate inspirals should become a useful tool for exploring LISA data analysis issues and may ultimately play a role in the detection of inspiral signals in the LISA data

  13. The approximation function of bridge deck vibration derived from the measured eigenmodes

    Directory of Open Access Journals (Sweden)

    Sokol Milan

    2017-12-01

    Full Text Available This article deals with a method for acquiring approximate displacement vibration functions. Input values are discrete, experimentally obtained mode shapes. A new improved approximation method based on the modal vibrations of the deck is derived using the least-squares method. An alternative approach employed in this paper is to approximate the displacement vibration function by a sum of sine functions whose periodicity is determined by spectral analysis adapted for non-uniformly sampled data and where the parameters of scale and phase are estimated as usual by the least-squares method. Moreover, this periodic component is supplemented by a cubic regression spline (fitted on its residuals) that captures individual displacements between piers. The statistical evaluation of the stiffness parameter is performed using more vertical modes obtained from experimental results. The previous method (Sokol and Flesch, 2005), which was derived for the areas near the piers, has been extended to the whole length of the bridge. The experimental data describing the mode shapes are not appropriate for direct use; in particular, the higher derivatives calculated from these data are very sensitive to data precision.

  14. Active Fault Diagnosis in Sampled-data Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2015-01-01

    The focus in this paper is on active fault diagnosis (AFD) in closed-loop sampleddata systems. Applying the same AFD architecture as for continuous-time systems does not directly result in the same set of closed-loop matrix transfer functions. For continuous-time systems, the LFT (linear fractional...... transformation) structure in the connection between the parametric faults and the matrix transfer function (also known as the fault signature matrix) applied for AFD is not directly preserved for sampled-data system. As a consequence of this, the AFD methods cannot directly be applied for sampled-data systems....... Two methods are considered in this paper to handle the fault signature matrix for sampled-data systems such that standard AFD methods can be applied. The first method is based on a discretization of the system such that the LFT structure is preserved resulting in the same LFT structure in the fault...

  15. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....

  16. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
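
    A least-squares cubic spline with equidistant knots spaced wider than the fluctuation period illustrates the basic filtering step. This is the standard single-spline method the abstract starts from, not the repeating spline approach; the synthetic data and knot spacing are invented:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

t = np.linspace(0.0, 10.0, 501)
background = 0.5 * t                            # slowly varying state
wave = 0.3 * np.sin(2.0 * np.pi * t / 0.8)      # superimposed fluctuation
series = background + wave

# Equidistant interior knots spaced wider than the wave period, so the
# least-squares spline follows the background but not the fluctuation.
knots = np.linspace(1.0, 9.0, 7)
spline = LSQUnivariateSpline(t, series, knots, k=3)
residuals = series - spline(t)                  # should recover the wave
```

    Shrinking the knot spacing toward the wave period is exactly where the artificial oscillations discussed in the paper appear: the spline starts to alias part of the fluctuation into the background estimate.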

  17. A Case Study on Air Combat Decision Using Approximated Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Yaofei Ma

    2014-01-01

    Full Text Available As a continuous state space problem, air combat is difficult to resolve by traditional dynamic programming (DP) with a discretized state space. The approximated dynamic programming (ADP) approach is studied in this paper to build a high-performance decision model for air combat in a 1-versus-1 scenario, in which the iterative process for policy improvement is replaced by mass sampling from history trajectories and utility function approximation, eventually leading to highly efficient policy improvement. A continuous reward function is also constructed to better guide the plane to find its way to the “winner” state from any initial situation. According to our experiments, the plane is more offensive when following the policy derived from the ADP approach rather than the baseline Min-Max policy: the “time to win” is reduced greatly, but the cumulative probability of being killed by the enemy is higher. The reason is analyzed in this paper.
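
    The sampling-plus-function-approximation loop described above can be sketched in miniature: fitted value iteration on a toy one-dimensional steering problem, with a quadratic feature approximation of the value function refit by least squares from sampled states. All problem details below are invented for illustration; this is not the air-combat model:

```python
import numpy as np

# Toy continuous-state problem: steer s in [0, 1] toward the target 0.5.
gamma = 0.9
actions = np.array([-0.1, 0.0, 0.1])
feats = lambda s: np.vstack([np.ones_like(s), s, s * s]).T  # quadratic features

def step(s, a):
    s2 = np.clip(s + a, 0.0, 1.0)
    return s2, -np.abs(s2 - 0.5)           # reward: penalize distance to target

rng = np.random.default_rng(1)
w = np.zeros(3)                            # value-function weights
for _ in range(60):
    s = rng.uniform(0.0, 1.0, 200)         # mass sampling of states
    target = np.full(s.shape, -np.inf)
    for a in actions:                      # Bellman backup over the actions
        s2, r = step(s, a)
        target = np.maximum(target, r + gamma * feats(s2) @ w)
    w, *_ = np.linalg.lstsq(feats(s), target, rcond=None)  # refit V estimate

def greedy_action(s):
    """Pick the action maximizing r + gamma * V_hat(s')."""
    scores = []
    for a in actions:
        s2, r = step(np.array([s]), a)
        scores.append(r[0] + gamma * (feats(s2) @ w)[0])
    return actions[int(np.argmax(scores))]
```

    After a few dozen sweeps the greedy policy steers toward the target from either side, which is the continuous-state analogue of the policy improvement the paper obtains without discretizing the state space.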

  18. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are then characterized by axioms.

  19. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... In this paper, we propose a definition of approximation property which is called the metric invariant translation approximation property for a countable discrete metric space. Moreover, we use ... Department of Applied Mathematics, Shanghai Finance University, Shanghai 201209, People's Republic of China ...

  20. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.

    2008-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  1. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.; Holub, J.; Zdárek, J.

    2006-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern

  2. Time line cell tracking for the approximation of lagrangian coherent structures with subgrid accuracy

    KAUST Repository

    Kuhn, Alexander

    2013-12-05

    Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCS is to compute height ridges in the finite-time Lyapunov exponent field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCS on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamic examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations. © 2013 The Authors Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
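
    For reference, the "standard way" mentioned above (finite-time Lyapunov exponents from the flow-map gradient on a grid) can be sketched for a flow where the answer is known in closed form: for the steady saddle u = (x, -y), the flow-map Jacobian is diag(e^T, e^(-T)) and the FTLE is exactly 1 everywhere. The flow, grid, and integration settings below are illustrative:

```python
import numpy as np

def flow_map(x0, y0, T=2.0, steps=200):
    """Integrate dx/dt = x, dy/dt = -y (a steady saddle flow) with RK4."""
    h = T / steps
    x, y = x0.copy(), y0.copy()
    f = lambda x, y: (x, -y)
    for _ in range(steps):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * h * k1x, y + 0.5 * h * k1y)
        k3x, k3y = f(x + 0.5 * h * k2x, y + 0.5 * h * k2y)
        k4x, k4y = f(x + h * k3x, y + h * k3y)
        x = x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y = y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    return x, y

def ftle(T=2.0, n=21):
    """Standard-method FTLE: flow-map gradient by central differences
    on a grid, then (1/(2T)) * log of the largest Cauchy-Green eigenvalue."""
    s = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(s, s)
    FX, FY = flow_map(X, Y, T)
    d = s[1] - s[0]
    # central differences on the interior of the grid
    J11 = (FX[1:-1, 2:] - FX[1:-1, :-2]) / (2 * d)
    J12 = (FX[2:, 1:-1] - FX[:-2, 1:-1]) / (2 * d)
    J21 = (FY[1:-1, 2:] - FY[1:-1, :-2]) / (2 * d)
    J22 = (FY[2:, 1:-1] - FY[:-2, 1:-1]) / (2 * d)
    # largest eigenvalue of the Cauchy-Green tensor J^T J at each point
    C11 = J11 * J11 + J21 * J21
    C12 = J11 * J12 + J21 * J22
    C22 = J12 * J12 + J22 * J22
    lam = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22) ** 2 + C12 ** 2)
    return np.log(lam) / (2 * T)

field = ftle()
```

    The dependence on finite-difference flow-map gradients over a dense enough grid is precisely the cost that the paper's geometric flow-map reconstruction avoids.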

  3. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    Full Text Available The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz' result to the case in which the approximating function is αe^(−ux) ∑_{k=N}^{∞} (ux)^(kα+β−1)/Γ(kα+β) f(kα/u). The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
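
    The Poisson-based operator of Szasz can be evaluated directly; for f(t) = t² it satisfies the exact identity S_n(f)(x) = x² + x/n, which makes the convergence visible. A small sketch (the recursive Poisson-weight update is a numerical device to avoid overflow, not part of the theory):

```python
import math

def szasz(f, n, x, kmax=None):
    """Szasz-Mirakyan operator S_n(f)(x) = e^(-nx) * sum_k (nx)^k/k! * f(k/n),
    the Poisson-distribution analogue of Bernstein approximation on [0, inf)."""
    if kmax is None:
        kmax = int(n * x + 10.0 * math.sqrt(n * x) + 20)  # cover the Poisson mass
    term = math.exp(-n * x)          # k = 0 Poisson weight
    total = term * f(0.0)
    for k in range(1, kmax + 1):
        term *= n * x / k            # update e^(-nx) (nx)^k / k! recursively
        total += term * f(k / n)
    return total

f = lambda t: t * t
approx = szasz(f, n=200, x=1.5)      # exact value: x^2 + x/n = 2.2575
```

    As n grows, the extra x/n term vanishes and S_n(f)(x) → f(x), mirroring how Bernstein polynomials converge on [a, b].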

  4. DOE methods for evaluating environmental and waste management samples

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K. [eds.]

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  5. 'LTE-diffusion approximation' for arc calculations

    International Nuclear Information System (INIS)

    Lowke, J J; Tanaka, M

    2006-01-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  6. Bayesian posterior sampling via stochastic gradient Fisher scoring

    NARCIS (Netherlands)

    Ahn, S.; Korattikara, A.; Welling, M.; Langford, J.; Pineau, J.

    2012-01-01

    In this paper we address the following question: "Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?". An algorithm based on the Langevin equation with stochastic gradients (SGLD) was
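The SGLD update referenced here takes a half-step along a minibatch estimate of the log-posterior gradient and injects Gaussian noise of matching scale (the Welling-Teh form). A toy sketch for a hypothetical Gaussian-mean posterior; all names, data, and tuning parameters are illustrative, not from the paper:

```python
import math
import random

random.seed(0)

# Hypothetical toy problem: infer the mean mu of N(mu, 1) data, prior N(0, 100).
N, sigma2, tau2 = 2000, 1.0, 100.0
data = [random.gauss(2.0, 1.0) for _ in range(N)]

def sgld(steps=20000, batch=50, eps=1e-4):
    """Stochastic Gradient Langevin Dynamics: each step touches only a
    minibatch, rescales its gradient by N/batch, and adds N(0, eps) noise."""
    mu, samples = 0.0, []
    for t in range(steps):
        idx = [random.randrange(N) for _ in range(batch)]
        # Minibatch estimate of the log-posterior gradient w.r.t. mu:
        grad = -mu / tau2 + (N / batch) * sum((data[i] - mu) / sigma2 for i in idx)
        mu += 0.5 * eps * grad + random.gauss(0.0, math.sqrt(eps))
        if t >= steps // 2:            # discard burn-in
            samples.append(mu)
    return samples

samples = sgld()
posterior_mean = sum(samples) / len(samples)
```

With a conjugate Gaussian prior the exact posterior mean is available in closed form, so the chain's long-run average can be checked against it.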

  7. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
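The efficiency loss from varying cluster sizes can be illustrated with a first-order, design-effect style calculation (a hedged sketch of the general idea only; the paper's second-order PQL adjustment is more refined, and the function names here are invented for illustration):

```python
def cluster_info(n, rho):
    """Effective information contributed by one cluster of size n under an
    exchangeable intracluster correlation rho (classic design-effect form)."""
    return n / (1.0 + (n - 1) * rho)

def relative_efficiency(sizes, rho):
    """Efficiency of a trial with varying cluster sizes relative to one with
    the same number of clusters all at the mean size. Since cluster_info is
    concave in n, this ratio is at most 1."""
    k = len(sizes)
    nbar = sum(sizes) / k
    return sum(cluster_info(n, rho) for n in sizes) / (k * cluster_info(nbar, rho))

sizes = [10, 20, 30, 40, 50]          # hypothetical cluster sizes
re = relative_efficiency(sizes, rho=0.05)
extra_clusters = 1.0 / re - 1.0       # fraction of extra clusters needed to repair the loss
```

For these illustrative numbers the efficiency loss is a few per cent, in the same spirit as the "14 per cent more clusters" rule of thumb quoted in the abstract.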

  8. Permeability of gypsum samples dehydrated in air

    Science.gov (United States)

    Milsch, Harald; Priegnitz, Mike; Blöcher, Guido

    2011-09-01

    We report on changes in rock permeability induced by devolatilization reactions using gypsum as a reference analog material. Cylindrical samples of natural alabaster were dehydrated in air (dry) for up to 800 h at ambient pressure and temperatures between 378 and 423 K. Subsequently, the reaction kinetics, the induced changes in porosity, and the concurrent evolution of sample permeability were constrained. Weighing the heated samples at predefined time intervals yielded the reaction progress; the stoichiometric mass balance indicated an ultimate and complete dehydration to anhydrite regardless of temperature. Porosity was found to increase continuously with reaction progress from approximately 2% to 30%, whilst the initial bulk volume remained unchanged. Within these limits permeability significantly increased with porosity by almost three orders of magnitude, from approximately 7 × 10^-19 m^2 to 3 × 10^-16 m^2. We show that, when mechanical and hydraulic feedbacks can be excluded, permeability, reaction progress, and porosity are related unequivocally.
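As a rough consistency check on the reported endpoints, one can assume a simple power law k ∝ φ^n (an illustrative fit only, not the authors' model) and read the effective exponent off the two endpoint values:

```python
import math

# Endpoint values quoted in the abstract:
phi1, phi2 = 0.02, 0.30          # porosity: 2% -> 30%
k1, k2 = 7e-19, 3e-16            # permeability in m^2

# Assuming k = C * phi**n, the two endpoints fix the effective exponent:
n = math.log(k2 / k1) / math.log(phi2 / phi1)
```

The implied exponent is a little above 2, broadly in the range familiar from porosity-permeability relations such as Kozeny-Carman-type laws.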

  9. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...

  10. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in the literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper with each other, as well as against a quantitative criterion. For the aggregate claims two models are

  11. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
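The logarithmic upper bound matches the classic greedy set-cover guarantee, with each SNP viewed as the set of haplotype pairs it distinguishes. A generic greedy sketch on a hypothetical instance (not the paper's implementation):

```python
def greedy_cover(universe, subsets):
    """Greedy set cover: repeatedly take the subset covering the most
    still-uncovered elements; this achieves the classic (1 + ln n)-style
    approximation guarantee."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("universe is not coverable by the given subsets")
        chosen.append(best)
        uncovered -= best
    return chosen

# Hypothetical instance: each subset marks the haplotype pairs a SNP distinguishes.
cover = greedy_cover({1, 2, 3, 4, 5, 6},
                     [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}, {5, 6}])
```

On this instance greedy picks three sets, which is also optimal here; in general it is within the logarithmic factor of optimal.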

  12. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The purpose of this work is to determine an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; after these approximations, and taking into account the case of the narrow resonances, they were substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared with those generated by the reference method, demonstrating good and precise results for the adjoint neutron flux in the narrow resonances. (author)

  13. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    Science.gov (United States)

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete-source-based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as "Virtual Source" (VS) diffusion approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over the established ones is demonstrated in comparison with Monte Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.

  14. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  15. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
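The distinction at issue can be seen numerically: the two-term form ln n! ≈ n ln n − n, usually quoted in stat-mech derivations, omits a ½ ln(2πn) correction that is non-negligible for modest n. A small sketch comparing both against the exact log-gamma value:

```python
import math

def ln_factorial(n):
    """Exact ln(n!) via the log-gamma function."""
    return math.lgamma(n + 1)

def stirling_naive(n):
    """Two-term form: n ln n - n."""
    return n * math.log(n) - n

def stirling_full(n):
    """With the 0.5 * ln(2*pi*n) correction term included."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 60
err_naive = ln_factorial(n) - stirling_naive(n)   # a few units of ln-error
err_full = ln_factorial(n) - stirling_full(n)     # of order 1/(12n)
```

At n = 60 the naive form is off by roughly 3 in ln n!, while the corrected form is accurate to about 1/(12n); conclusions that hinge on such differences (rather than on the leading behavior) are exactly where the naive form misleads.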

  16. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    Science.gov (United States)

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from next-generation sequencing based studies. By extending the theories for the tail probability of the range of sums of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps) in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach by integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.

  17. Micro-scaled high-throughput digestion of plant tissue samples for multi-elemental analysis

    Directory of Open Access Journals (Sweden)

    Husted Søren

    2009-09-01

    Full Text Available Abstract Background Quantitative multi-elemental analysis by inductively coupled plasma (ICP spectrometry depends on a complete digestion of solid samples. However, fast and thorough sample digestion is a challenging analytical task which constitutes a bottleneck in modern multi-elemental analysis. Additional obstacles may be that sample quantities are limited and elemental concentrations low. In such cases, digestion in small volumes with minimum dilution and contamination is required in order to obtain high accuracy data. Results We have developed a micro-scaled microwave digestion procedure and optimized it for accurate elemental profiling of plant materials (1-20 mg dry weight. A commercially available 64-position rotor with 5 ml disposable glass vials, originally designed for microwave-based parallel organic synthesis, was used as a platform for the digestion. The novel micro-scaled method was successfully validated by the use of various certified reference materials (CRM with matrices rich in starch, lipid or protein. When the micro-scaled digestion procedure was applied on single rice grains or small batches of Arabidopsis seeds (1 mg, corresponding to approximately 50 seeds, the obtained elemental profiles closely matched those obtained by conventional analysis using digestion in large volume vessels. Accumulated elemental contents derived from separate analyses of rice grain fractions (aleurone, embryo and endosperm closely matched the total content obtained by analysis of the whole rice grain. Conclusion A high-throughput micro-scaled method has been developed which enables digestion of small quantities of plant samples for subsequent elemental profiling by ICP-spectrometry. The method constitutes a valuable tool for screening of mutants and transformants. 
In addition, the method facilitates studies of the distribution of essential trace elements between and within plant organs which is relevant for, e.g., breeding programmes aiming at

  18. A coupled-cluster study of photodetachment cross sections of closed-shell anions

    Energy Technology Data Exchange (ETDEWEB)

    Cukras, Janusz; Decleva, Piero; Coriani, Sonia, E-mail: coriani@units.it [Dipartimento di Scienze Chimiche e Farmaceutiche, Università degli Studi di Trieste, via L. Giorgieri 1, I-34127, Trieste (Italy)

    2014-11-07

    We investigate the performance of Stieltjes Imaging applied to Lanczos pseudo-spectra generated at the coupled cluster singles and doubles, coupled cluster singles and approximate iterative doubles and coupled cluster singles levels of theory in modeling the photodetachment cross sections of the closed shell anions H{sup −}, Li{sup −}, Na{sup −}, F{sup −}, Cl{sup −}, and OH{sup −}. The accurate description of double excitations is found to play a much more important role than in the case of photoionization of neutral species.

  19. Quantum adiabatic approximation and the geometric phase

    International Nuclear Information System (INIS)

    Mostafazadeh, A.

    1997-01-01

    A precise definition of an adiabaticity parameter ν of a time-dependent Hamiltonian is proposed. A variation of the time-dependent perturbation theory is presented which yields a series expansion of the evolution operator, U(τ) = Σ_ℓ U^(ℓ)(τ), with U^(ℓ)(τ) being at least of order ν^ℓ. In particular, U^(0)(τ) corresponds to the adiabatic approximation and yields Berry's adiabatic phase. It is shown that this series expansion has nothing to do with the 1/τ expansion of U(τ). It is also shown that the nonadiabatic part of the evolution operator is generated by a transformed Hamiltonian which is off-diagonal in the eigenbasis of the initial Hamiltonian. This suggests the introduction of an adiabatic product expansion for U(τ) which turns out to yield exact expressions for U(τ) for a large number of quantum systems. In particular, a simple application of the adiabatic product expansion is used to show that for the Hamiltonian describing the dynamics of a magnetic dipole in an arbitrarily changing magnetic field, there exists another Hamiltonian with the same eigenvectors for which the Schroedinger equation is exactly solvable. Some related issues concerning geometric phases and their physical significance are also discussed. copyright 1997 The American Physical Society

  20. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise such as Fano and quantization also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justifications given for these models are based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
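The signal dependence that distinguishes Poisson shot noise from constant-variance AWGN can be demonstrated with a small simulation (sampler and signal levels are illustrative, not from the paper):

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the modest means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def noise_stats(signal, n=20000):
    """Empirical mean and variance of Poisson shot noise at one signal level."""
    xs = [poisson(signal) for _ in range(n)]
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, v

# Shot noise is signal dependent: the variance tracks the mean,
# unlike AWGN whose variance is constant across signal levels.
m_lo, v_lo = noise_stats(5.0)
m_hi, v_hi = noise_stats(50.0)
```

Raising the signal tenfold raises the noise variance roughly tenfold, which is the behavior an SD-AWGN model has to mimic and a plain AWGN model cannot.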

  1. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  2. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
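For a GI/M/1 queue, most performance measures follow from the root σ of σ = A*(μ(1 − σ)), where A* is the Laplace-Stieltjes transform of the interarrival distribution: σ is the probability of delay, and the mean wait is σ/(μ(1 − σ)). A hedged sketch of the fixed-point computation, with Erlang arrivals chosen purely for illustration:

```python
def gim1_sigma(lst, mu, tol=1e-12, iters=100000):
    """Solve sigma = A*(mu*(1 - sigma)) by fixed-point iteration, where lst
    is the Laplace-Stieltjes transform A* of the interarrival distribution.
    Converges for stable queues (rho < 1)."""
    sigma = 0.5
    for _ in range(iters):
        new = lst(mu * (1.0 - sigma))
        if abs(new - sigma) < tol:
            return new
        sigma = new
    return sigma

def erlang_lst(k, lam):
    """LST of an Erlang-k interarrival time with overall arrival rate lam."""
    return lambda s: (k * lam / (k * lam + s)) ** k

mu = 1.0                                        # service rate
sigma_m = gim1_sigma(erlang_lst(1, 0.8), mu)    # M/M/1 check: sigma = rho = 0.8
sigma_e2 = gim1_sigma(erlang_lst(2, 0.8), mu)   # smoother arrivals -> less delay
```

The exponential-arrival case reduces to M/M/1, where σ = ρ exactly, which provides a built-in sanity check; the Erlang-2 case gives a strictly smaller σ, reflecting the more regular arrival stream.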

  3. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    KAUST Repository

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).

  4. Sampling study in milk storage tanks by INAA

    International Nuclear Information System (INIS)

    Santos, L.G.C.; Nadai Fernandes de, E.A.; Bacchi, M.A.; Tagliaferro, F.S.

    2008-01-01

    This study investigated the representativeness of samples for assessing chemical elements in milk bulk tanks. Milk samples were collected from a closed tank in a dairy plant and from an open top tank in a dairy farm. Samples were analyzed for chemical elements by instrumental neutron activation analysis (INAA). For both experiments, Br, Ca, Cs, K, Na, Rb and Zn did not present significant differences between samples thereby indicating the appropriateness of the sampling procedure adopted to evaluate the analytes of interest. (author)

  5. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    Science.gov (United States)

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.

  6. Improved radiative corrections for (e,e'p) experiments: Beyond the peaking approximation and implications of the soft-photon approximation

    International Nuclear Information System (INIS)

    Weissbach, F.; Hencken, K.; Rohe, D.; Sick, I.; Trautmann, D.

    2006-01-01

    Analyzing (e,e'p) experimental data involves corrections for radiative effects which change the interaction kinematics and which have to be carefully considered in order to obtain the desired accuracy. Missing momentum and energy due to bremsstrahlung have so far often been incorporated into the simulations and the experimental analyses using the peaking approximation. It assumes that all bremsstrahlung is emitted in the direction of the radiating particle. In this article we introduce a full angular Monte Carlo simulation method which overcomes this approximation. As a test, the angular distribution of the bremsstrahlung photons is reconstructed from H(e,e'p) data. Its width is found to be underestimated by the peaking approximation and described much better by the approach developed in this work. The impact of the soft-photon approximation on the photon angular distribution is found to be minor as compared to the impact of the peaking approximation. (orig.)

  7. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
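The trajectory averaging idea can be illustrated on a toy Robbins-Monro root-finding problem (purely illustrative, not the SAMCMC setting of the paper): run the noisy iterates with a slowly decaying gain and report the running average of the trajectory, which is what the averaging estimator asymptotically improves.

```python
import random

random.seed(2)

def robbins_monro(steps=20000):
    """Find the root of h(theta) = E[theta - X] with X ~ N(3, 1) from noisy
    observations; return both the last iterate and the trajectory average."""
    theta, avg = 0.0, 0.0
    for t in range(1, steps + 1):
        x = random.gauss(3.0, 1.0)
        grad = theta - x                   # unbiased estimate of h(theta)
        theta -= (1.0 / t ** 0.7) * grad   # slowly decaying gain, as averaging requires
        avg += (theta - avg) / t           # running average of the trajectory
    return theta, avg

last, averaged = robbins_monro()
```

The averaged estimator converges at the optimal 1/sqrt(n) rate even though the underlying iterates use a slower-than-1/t gain; the last iterate alone is noisier.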

  8. School Climate, Teacher-Child Closeness, and Low-Income Children’s Academic Skills in Kindergarten

    Science.gov (United States)

    Lowenstein, Amy E.; Friedman-Krauss, Allison H.; Raver, C. Cybele; Jones, Stephanie M.; Pess, Rachel A.

    2015-01-01

    In this study we used data on a sample of children in the Chicago Public Schools in areas of concentrated poverty-related disadvantage to examine associations between school climate and low-income children’s language/literacy and math skills during the transition to kindergarten. We also explored whether teacher-child closeness moderated these associations. Multilevel modeling analyses conducted using a sample of 242 children nested in 102 elementary schools revealed that low adult support in the school was significantly associated with children’s poorer language/literacy and math skills in kindergarten. Teacher-child closeness predicted children’s higher language/literacy and math scores and moderated the association between low adult support and children’s academic skills. Among children who were high on closeness with their teacher, those in schools with high levels of adult support showed stronger language/literacy and math skills. There were no significant associations between adult support and the academic skills of children with medium or low levels of teacher-child closeness. Results shed light on the importance of adult support at both school and classroom levels in promoting low-income children’s academic skills during the transition to kindergarten. PMID:26925186

  9. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2013-01-01

    Full Text Available In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification method (TFOC-MUSIC) algorithm is proposed through combining a fast MUSIC-like algorithm termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm and Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Moreover, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of the virtual array elements. That is, the effective array aperture of a physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error of the reduced-rank FOC matrix and thus the DOA estimation performance degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, like the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and the simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments.
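Toeplitz approximation of a matrix, in the Frobenius-norm sense, amounts to replacing each diagonal by its average value, since Toeplitz matrices are constant along diagonals. A minimal plain-Python sketch (illustrative; real implementations operate on complex covariance/cumulant matrices):

```python
def toeplitz_approximation(A):
    """Project a square matrix onto the subspace of Toeplitz matrices:
    the Frobenius-nearest Toeplitz matrix averages each diagonal."""
    n = len(A)
    # Mean of each diagonal, indexed by offset d = j - i in [-(n-1), n-1]:
    means = {}
    for d in range(-(n - 1), n):
        vals = [A[i][i + d] for i in range(n) if 0 <= i + d < n]
        means[d] = sum(vals) / len(vals)
    return [[means[j - i] for j in range(n)] for i in range(n)]

A = [[1.0, 2.0, 3.0],
     [0.0, 1.0, 2.0],
     [0.0, 2.0, 1.0]]
T = toeplitz_approximation(A)
```

Because averaging along diagonals is an orthogonal projection onto the Toeplitz subspace, the result is the closest Toeplitz matrix in the Frobenius norm, which is the repair step the TFOC-MUSIC algorithm applies to the sample FOC matrix.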

  10. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

    We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description of nuclear ground states. This model includes contributions from the negative-energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative-energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared with the standard infinite-cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data.

  11. A separable approximation of the NN-Paris-potential in the framework of the Bethe-Salpeter equation

    International Nuclear Information System (INIS)

    Schwarz, K.; Haidenbauer, J.; Froehlich, J.

    1985-09-01

    The Bethe-Salpeter equation is solved with a separable kernel for the most important nucleon-nucleon partial-wave states. We employ the Ernst-Shakin-Thaler method in the framework of minimal relativity (Blankenbecler-Sugar equation) to generate a separable representation of the meson-theoretical Paris potential. These separable interactions, which closely approximate the on-shell and half-off-shell behaviour of the Paris potential, are then cast into a covariant form for application in the Bethe-Salpeter equation. The role of relativistic effects is discussed with respect to on-shell and off-shell properties of the NN system. (Author)

  12. Computational methods and modeling. 1. Sampling a Position Uniformly in a Trilinear Hexahedral Volume

    International Nuclear Information System (INIS)

    Urbatsch, Todd J.; Evans, Thomas M.; Hughes, H. Grady

    2001-01-01

    Monte Carlo particle transport plays an important role in some multi-physics simulations. These simulations, which may additionally involve deterministic calculations, typically use a hexahedral or tetrahedral mesh. Trilinear hexahedrons are attractive for physics calculations because faces between cells are uniquely defined, distance-to-boundary calculations are deterministic, and hexahedral meshes tend to require fewer cells than tetrahedral meshes. We discuss one aspect of Monte Carlo transport: sampling a position in a trilinear hexahedron, which is made up of eight control points, or nodes, and six bilinear faces, where each face is defined by four non-coplanar nodes in three-dimensional Cartesian space. We derive, code, and verify the exact sampling method and propose an approximation to it. Our proposed approximate method uses about one-third the memory and can be twice as fast as the exact sampling method, but we find that its inaccuracy limits its use to well-behaved hexahedrons. Daunted by the expense of the exact method, we propose an alternate approximate sampling method. First, calculate beforehand an approximate volume for each corner of the hexahedron by taking one-eighth of the volume of an imaginary parallelepiped defined by the corner node and the three nodes to which it is directly connected. For the sampling, assume separability in the parameters, and sample each parameter, in turn, from a linear pdf defined by the sum of the four corner volumes at each limit (-1 and 1) of the parameter. This method ignores the quadratic portion of the pdf, but it requires less storage, has simpler sampling, and needs no extra on-the-fly calculations. We simplify verification by designing tests that consist of one or more cells that entirely fill a unit cube. Uniformly sampling complicated cells that fill a unit cube will result in uniformly sampling the unit cube. Unit cubes are easily analyzed.
The first problem has four wedges (or tents, or A-frames) whose
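The alternate approximate scheme described above can be sketched as follows. For clarity the sketch parameterizes the hexahedron on [0,1]³ rather than the abstract's [-1,1]³; the function names and input layout are ours, and each per-axis linear pdf is drawn with the standard two-triangle mixture trick.

```python
import random

def sample_linear01(a, b, rng=random):
    """Sample x in [0, 1] from a pdf proportional to a at x=0 and b at x=1:
    a mixture of the decreasing triangle 2(1-x) and the increasing triangle 2x."""
    if rng.random() < a / (a + b):
        return 1.0 - rng.random() ** 0.5
    return rng.random() ** 0.5

def sample_hex_approx(nodes, corner_vols, rng=random):
    """Approximately uniform point in a trilinear hexahedron.

    nodes:       dict mapping corner index (i, j, k) in {0,1}^3 -> (x, y, z)
    corner_vols: same keys -> approximate corner volume (one-eighth of the
                 parallelepiped spanned by the corner's three edge vectors)."""
    uvw = []
    for axis in range(3):
        lo = sum(v for key, v in corner_vols.items() if key[axis] == 0)
        hi = sum(v for key, v in corner_vols.items() if key[axis] == 1)
        uvw.append(sample_linear01(lo, hi, rng))   # separable per-axis pdfs
    u, v, w = uvw
    # map the parameter point through trilinear interpolation of the 8 nodes
    x = y = z = 0.0
    for (i, j, k), (px, py, pz) in nodes.items():
        wt = (u if i else 1 - u) * (v if j else 1 - v) * (w if k else 1 - w)
        x += wt * px; y += wt * py; z += wt * pz
    return (x, y, z)
```

For a unit cube with equal corner volumes the per-axis pdfs are uniform, so the scheme reduces to exact uniform sampling; error enters only for distorted cells, consistent with the abstract's remark that accuracy degrades for badly behaved hexahedrons.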

  13. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm, and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
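The idea can be illustrated on a toy one-dimensional phase-shift propagator. Here a truncated SVD stands in for the paper's selection of representative locations and wavenumbers (both produce a lowrank factorization); the grid, velocity model, and time step below are invented for the example.

```python
import numpy as np

# toy space-wavenumber propagator W[x, k] = exp(i * v(x) * |k| * dt)
nx = nk = 64
x = np.linspace(0.0, 1.0, nx)
v = 1.0 + 0.5 * x                           # smoothly varying velocity (made up)
k = np.fft.fftfreq(nk) * 2.0 * np.pi * nk   # wavenumber samples
dt = 1.0e-3
W = np.exp(1j * np.outer(v, np.abs(k)) * dt)

# rank-r factorization: 64*64 entries compressed to 2*64*r numbers
U, s, Vh = np.linalg.svd(W, full_matrices=False)
r = 4
W_r = (U[:, :r] * s[:r]) @ Vh[:r, :]
rel_err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
```

Because the phase varies smoothly in both x and k, a handful of singular vectors reproduce the full matrix; this is the property the lowrank extrapolator exploits, reducing each time step to roughly r FFTs instead of a dense matrix application.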

  14. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  15. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High-frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  16. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding edges of polygons is a version of the art gallery problem. The goal is to find the minimum number of guards that cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms only for simple polygons. In this paper we present two approximation algorithms for guarding ...

  17. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2010 Mathematics Subject Classification: 46L07. Introduction: Given a countable discrete group G, nice approximation properties of the reduced C∗-algebra C∗_r(G) can give us approximation properties of G. For example, Lance [7] proved that the nuclearity of C∗_r(G) is equivalent to the amenability of G; ...

  18. Closed-form plastic collapse loads of pipe bends under combined pressure and in-plane bending

    International Nuclear Information System (INIS)

    Oh, Chang Sik; Kim, Yun Jae

    2006-01-01

    Based on three-dimensional (3-D) FE limit analyses, this paper provides plastic limit, collapse and instability load solutions for pipe bends under combined pressure and in-plane bending. The plastic limit loads are determined from FE limit analyses based on elastic-perfectly plastic materials using the small geometry change option, and the FE limit analyses using the large geometry change option provide plastic collapse loads (using the twice-elastic-slope method) and instability loads. For the bending mode, both closing bending and opening bending are considered, and a wide range of parameters related to the bend geometry is considered. Based on the FE results, closed-form approximations of plastic limit and collapse load solutions for pipe bends under combined pressure and bending are proposed.
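The twice-elastic-slope (TES) estimate mentioned above can be sketched from a load-deflection curve: intersect the curve with a line through the origin whose displacement per unit load is twice the elastic compliance, i.e. whose slope is half the initial elastic slope. The helper below is our illustration of that construction, not the paper's code.

```python
import numpy as np

def twice_elastic_slope_load(load, disp, n_elastic=3):
    """Collapse load from a load-deflection curve via the TES construction:
    fit the initial elastic slope, draw the half-slope line through the
    origin, and interpolate the first crossing of curve and line."""
    load = np.asarray(load, dtype=float)
    disp = np.asarray(disp, dtype=float)
    k = np.polyfit(disp[:n_elastic], load[:n_elastic], 1)[0]  # elastic slope
    g = load - 0.5 * k * disp              # curve minus the TES line
    below = np.nonzero(g < 0)[0]
    if below.size == 0:
        return None                        # no intersection within the data
    i = max(below[0], 1)
    t = g[i - 1] / (g[i - 1] - g[i])       # linear interpolation of crossing
    return load[i - 1] + t * (load[i] - load[i - 1])
```

On an idealized elastic-perfectly-plastic curve the TES load coincides with the plateau load; on FE curves with hardening or large geometry change it falls between the limit and instability loads.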

  19. Device for sampling radioactive and aggressive liquid and vaporous media

    International Nuclear Information System (INIS)

    Przibram, E.; Halm, G.

    1974-01-01

    The equipment enables samples to be taken, even of radioactive media, from a main pipeline in through-flow in a closed system. A tap device attached to the main pipeline branches into two parts. One branch contains the actual tap, which is closed off at both ends by snap-closure couplings and is used only for taking samples. The other branch bypasses the tap position so that a representative sample is always available. Both branches rejoin and lead back to the main pipeline. The sampling device can be used in a nuclear power plant for the determination of O2, Cl, SiO2, and Cu. A millilitre collecting cylinder and a millipore filtration device can be connected to the tap for liquid sampling and solids analysis, respectively. The system can be extended to several tap positions. Permanent measuring equipment is attached to the bypass pipe to monitor the sample liquid. (DG) [de

  20. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models an analytical formula might be elusive, or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., population genetics, ecology, epidemiology, and systems biology).
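The rejection form of ABC sketched in the record fits in a few lines: draw parameters from the prior, simulate data, and keep the draws whose simulated summary lies within a tolerance of the observed one. The toy model (a normal mean with a uniform prior, summarized by the sample mean) and all names below are illustrative.

```python
import random
import statistics

def abc_rejection(observed, prior_sampler, simulate, distance, eps, n_draws):
    """Basic ABC rejection sampler: the accepted draws approximate the
    posterior without the likelihood ever being evaluated."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted

# toy example: infer the mean of N(theta, 1) from an observed sample mean
observed_mean = 2.0
prior = lambda: random.uniform(-5.0, 5.0)
simulate = lambda th: statistics.fmean(random.gauss(th, 1.0) for _ in range(50))
posterior = abc_rejection(observed_mean, prior, simulate,
                          lambda a, b: abs(a - b), eps=0.3, n_draws=5000)
```

Shrinking eps trades acceptance rate for accuracy: as eps approaches zero (with sufficient summary statistics) the accepted draws converge to the true posterior, and it is exactly this approximation whose impact the record says must be carefully assessed.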